The political legacy of the Martinican poet, novelist and philosopher Édouard Glissant (1928–2011) is the subject of an ongoing debate among postcolonial literary scholars. Responding to an influential view shaping this debate, namely that Glissant's work can be divided into an early political and a late apolitical phase, this dissertation claims that this division rests on a narrow conception of 'engaged political writing' that obstructs a more comprehensive view of the changing political strategies Glissant pursued throughout his life. Proceeding from this conceptual basis, the dissertation re-reads the dimensions of Glissant's work that have hitherto been dismissed as apolitical, literary or poetic, with the aim of conceptualising the politics of relation as an integral part of his overall poetic project. In methodological terms, the dissertation therefore proposes a relational reading of Glissant's life-work across literary genres and epochs, as well as across the conventional divisions between political thought, writing and activism. This perspective is informed by Glissant's philosophy of relation and draws on a conception of political practice that includes both explicit engagements with established political systems and institutions and literary and cultural interventions geared towards their transformation and the creation of alternatives to them. Theoretically, the work thus combines a poststructuralist lens on the conceptual difference between 'politics' and 'the political' with arguments for an inherent political quality of literature, and with perspectives from the Afro-Caribbean radical tradition, in which writers and intellectuals have historically sought to combine discursive interventions with organisational action.
Applying this theoretical angle to the analysis of Glissant's politics of relation results in an interdisciplinary research framework designed to explore the synergies between postcolonial political and literary studies.
In order to comprehensively describe Glissant's politics of relation without recourse to evolutionary or digressive models, the concept of an intellectual marronage is proposed as a framework to map the strategies making up Glissant's political archive. Drawing on a variety of historical, political-theoretical and literary sources, intellectual marronage is understood as a mode of radical resistance to the neocolonial subjugation for which the plantation system stands historically and metaphorically, as an inherently innovative political practice invested in the creation of communities marked by relational ontologies, and as a commitment to fostering an imagination of the world and the human that differs fundamentally from the Enlightenment paradigm. This specific conception of intellectual marronage forms the basis on which three key strategies that consistently shape Glissant's political practice are identified and mapped. They revolve around Glissant's engagement with history (chapter 2), his commitment to fostering an imagination of the Tout-Monde (whole-world) as a political point of reference (chapter 3), and the continuous exploration of alternative forms of community on the levels of the island, the archipelago and the Tout-Monde (chapter 4). Together these strategies constitute Glissant's personal politics of relation. Its abstract characteristics can be put in productive conversation with related theoretical traditions invested in exploring the political potentials of fugitivity (chapter 5), as well as with the work of other postcolonial actors whose holistic practice warrants description as a politics of relation (chapter 6).
With rising complexity of today's software and hardware systems and the hypothesized increase in autonomous, intelligent, and self-* systems, developing correct systems remains an important challenge. Testing, although an important part of the development and maintenance process, cannot usually establish the definite correctness of a software or hardware system - especially when systems have arbitrarily large or infinite state spaces or an infinite number of initial states. This is where formal verification comes in: given a representation of the system in question in a formal framework, verification approaches and tools can be used to establish the system's adherence to its similarly formalized specification, and to complement testing.
One such formal framework is the field of graphs and graph transformation systems. Both are powerful formalisms with well-established foundations and ongoing research that can be used to describe complex hardware or software systems with varying degrees of abstraction. Since their inception in the 1970s, graph transformation systems have continuously evolved; related research spans extensions of expressive power, graph algorithms and their implementation, application scenarios, and verification approaches, to name just a few topics.
This thesis focuses on a verification approach for graph transformation systems called k-inductive invariant checking, which is an extension of previous work on 1-inductive invariant checking. Instead of exhaustively computing a system's state space, which is a common approach in model checking, 1-inductive invariant checking symbolically analyzes graph transformation rules - i.e. system behavior - in order to draw conclusions with respect to the validity of graph constraints in the system's state space. The approach is based on an inductive argument: if a system's initial state satisfies a graph constraint and if all rules preserve that constraint's validity, we can conclude the constraint's validity in the system's entire state space - without having to compute it.
However, inductive invariant checking also comes with a specific drawback: the locality of graph transformation rules leads to a lack of context information during the symbolic analysis of potential rule applications. This thesis argues that this lack of context can be partly addressed by using k-induction instead of 1-induction. A k-inductive invariant is a graph constraint whose validity in a path of k-1 rule applications implies its validity after any subsequent rule application - as opposed to a 1-inductive invariant where only one rule application is taken into account. Considering a path of transformations then accumulates more context of the graph rules' applications.
As such, this thesis extends existing research and implementation on 1-inductive invariant checking for graph transformation systems to k-induction. In addition, it proposes a technique to perform the base case of the inductive argument in a symbolic fashion, which allows verification of systems with an infinite set of initial states. Both k-inductive invariant checking and its base case are described in formal terms. Based on that, this thesis formulates theorems and constructions to apply this general verification approach for typed graph transformation systems and nested graph constraints - and to formally prove the approach's correctness.
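The k-induction scheme described above can be sketched for an explicit-state transition system. This is a deliberately simplified stand-in for the thesis' symbolic analysis of graph transformation rules (the real approach never enumerates states); the system, the property, and all names below are illustrative:

```python
def k_inductive(states, step, init, P, k):
    """Check whether property P is a k-inductive invariant of an
    explicit transition system:
      base case: P holds on every path of fewer than k steps from
                 an initial state;
      step case: any path of k consecutive P-states is followed
                 only by P-states."""
    # Base case: breadth-first exploration up to depth k-1.
    frontier = list(init)
    for _ in range(k):
        if not all(P(s) for s in frontier):
            return False
        frontier = [t for s in frontier for t in step(s)]
    # Step case: build all paths of k states that each satisfy P
    # (starting anywhere, not only in reachable states) and check
    # that every successor of such a path still satisfies P.
    paths = [(s,) for s in states if P(s)]
    for _ in range(k - 1):
        paths = [p + (t,) for p in paths for t in step(p[-1]) if P(t)]
    return all(P(t) for p in paths for t in step(p[-1]))

# Tiny system where P = "state 3 is never reached" is 2-inductive but
# not 1-inductive: state 2 violates the 1-induction step (2 -> 3), yet
# no path of two P-states can end in state 2, so 2-induction succeeds.
states = [0, 1, 2, 3]
step = {0: [1], 1: [0], 2: [3], 3: [3]}.get
init = [0]
P = lambda s: s != 3

print(k_inductive(states, step, init, P, k=1))  # False
print(k_inductive(states, step, init, P, k=2))  # True
```

The example shows the context gain the thesis exploits: the longer path in the step case rules out unreachable contexts (here, state 2) that make plain 1-induction fail.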
Since unrestricted graph constraints may lead to non-termination or impracticably high execution times given a hypothetical implementation, this thesis also presents a restricted verification approach, which limits the form of graph transformation systems and graph constraints. It is formalized, proven correct, and its procedures terminate by construction. This restricted approach has been implemented in an automated tool and has been evaluated with respect to its applicability to test cases, its performance, and its degree of completeness.
Using individual-based modeling to understand grassland diversity and resilience in the Anthropocene
(2020)
The world's grassland systems are increasingly threatened by anthropogenic change. Because these systems are susceptible to a variety of stressors, from land-use intensification to climate change, understanding the mechanisms driving the maintenance of their biodiversity and stability, and how these mechanisms may shift under human-mediated disturbance, is critical for successfully navigating the next century. Within this dissertation, I use an individual-based and spatially explicit model of grassland community assembly (IBC-grass) to examine several processes thought to be key to understanding grassland biodiversity and stability and how they change under stress. In the first chapter of my thesis, I examine the conditions under which intraspecific trait variation influences the diversity of simulated grassland communities. In the second and third chapters of my thesis, I shift focus towards understanding how belowground herbivores influence the stability of these grassland systems in the face of either a disturbance that results in increased, stochastic plant mortality, or eutrophication.
Intraspecific trait variation (ITV), or variation in trait values between individuals of the same species, is fundamental to the structure of ecological communities. However, because it has historically been difficult to incorporate into theoretical and statistical models, it has remained largely overlooked in community-level analyses. This reality is quickly shifting, however, as a growing body of research suggests that it may compose a sizeable proportion of the total variation within an ecological community and that it may play a critical role in determining whether species coexist. Despite this increasing awareness that ITV matters, there is little consensus on the magnitude and direction of its influence. Therefore, to better understand how ITV changes the assembly of grassland communities, in the first chapter of my thesis I incorporate it into an established, individual-based grassland community model, simulating both pairwise invasion experiments and the assembly of communities with varying initial diversities. By varying the amount of ITV in these species' functional traits, I examine the magnitude and direction of ITV's influence on pairwise invasibility and community coexistence. During pairwise invasion, ITV enables the weakest species to more frequently invade the competitively superior species; however, this influence does not generally scale to the community level. Indeed, unless the community has low alpha- and beta-diversity, ITV has little effect in bolstering diversity. In these low-diversity situations, since the trait axis is sparsely filled, competitively inferior species may suffer less competition, and ITV may therefore buffer their persistence and abundance for some time.
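The intuition behind ITV aiding a weaker invader can be illustrated with a toy Monte Carlo sketch. This is not IBC-grass or any model from the thesis; the trait values, sample sizes, and the "success" criterion are all illustrative assumptions:

```python
import random

def invasion_prob(inv_mean, res_mean, sd, n_ind=50, trials=2000, seed=1):
    """Toy sketch: an invasion 'succeeds' if at least one invader
    individual draws a competitive-trait value above the resident's
    mean. With sd = 0 (no ITV) every individual is identical; with
    sd > 0 a weaker species occasionally produces a superior individual."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # n_ind invader individuals, traits drawn around the species mean
        if any(rng.gauss(inv_mean, sd) > res_mean for _ in range(n_ind)):
            wins += 1
    return wins / trials

# Invader species mean 0.9 vs resident mean 1.0 (competitively inferior):
print(invasion_prob(0.9, 1.0, sd=0.0))   # 0.0 - never wins without ITV
print(invasion_prob(0.9, 1.0, sd=0.1))   # close to 1 with 50 draws per trial
```

The point of the sketch matches the chapter's finding at the pairwise scale only: a tail of the trait distribution can top the resident, but nothing here implies the effect scales to diverse communities.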
In the second and third chapters of my thesis, I model how one of the most ubiquitous trophic interactions within grasslands, herbivory belowground, influences their diversity and stability. Until recently, the fundamental difficulty of studying a process within the soil has left belowground herbivory “out of sight, out of mind.” This dilemma presents an opportunity for simulation models to explore how this understudied process may alter community dynamics. In the second chapter of my thesis, I implement belowground herbivory – represented by the weekly removal of plant biomass – into IBC-grass. Then, by introducing a pulse disturbance, modelled as the stochastic mortality of some percentage of the plant community, I observe how the presence of belowground herbivores influences the resistance and recovery of Shannon diversity in these communities. I find that high-resource, low-diversity communities are significantly more destabilized by the presence of belowground herbivores after disturbance. Depending on the timing of the disturbance and whether the grassland's seed bank persists for more than one season, the impact of the disturbance – and subsequently the influence of the herbivores – can be greatly reduced. However, because human-mediated eutrophication increases the amount of resources in the soil, thus pressuring grassland systems, our results suggest that the influence of these herbivores may become more important over time.
In the third chapter of my thesis, I delve further into the mechanistic underpinnings of belowground herbivores' influence on grassland diversity by replicating an empirical mesocosm experiment that crosses the presence of herbivores above- and belowground with eutrophication. I show that while aboveground herbivory, as predicted by theory and frequently observed in experiments, mitigates the impact of eutrophication on species diversity, belowground herbivores counterintuitively reduce biodiversity. Indeed, this influence interacts positively with the eutrophication process, amplifying its negative impact on diversity. The mechanism underlying this surprising pattern is that, as the herbivores consume roots, they increase the ratio of root resources to root biomass. Because root competition is often symmetric, herbivory fails to mitigate any asymmetries in the plants' competitive dynamics. However, since the remaining roots have more abundant access to resources, the plants' competition shifts aboveground, towards asymmetric competition for light. This leads the community towards a low-diversity state composed of mostly high-performance, large plant species. We further argue that this pattern will emerge unless the plants' root competition is asymmetric, in which case, like its counterpart aboveground, belowground herbivory may buffer diversity by reducing this asymmetry between the competitively superior and inferior plants.
I conclude my dissertation by discussing the implications of my research for the state of the art in intraspecific trait variation and belowground herbivory, with emphasis on the necessity of more diverse theory development in the study of these fundamental interactions. My results suggest that the influence of these processes on the biodiversity and stability of grassland systems is underappreciated and multidimensional, and must be thoroughly explored if researchers wish to predict how the world's grasslands will respond to anthropogenic change. Further, should researchers myopically focus on understanding central ecological interactions through only mathematically tractable analyses, they may miss entire suites of potential coexistence mechanisms that increase the coviability of species, potentially leading to coexistence over ecologically significant timespans. Individual-based modelling, with its focus on individual interactions, will therefore prove a critical tool in the coming decades for understanding how local interactions scale to larger contexts, how these interactions shape ecological communities, and how these systems will change under human-mediated stress.
Addressing scholars of international law and political science as well as decision makers involved in cybersecurity policy, the book tackles the most important and intricate legal issues that a state faces when considering a reaction to a malicious cyber operation conducted by an adversarial state. While often invoked in political debates and widely analysed in international legal scholarship, self-defence and countermeasures will often remain unavailable to states in situations of cyber emergency due to the pervasive problem of reliable and timely attribution of cyber operations to state actors. Analysing the legal questions surrounding attribution in detail, the book presents the necessity defence as an evidently available alternative. However, the shortcomings of the doctrine, as grounded in customary international law, that render it problematic as a remedy for states are also examined in depth. In light of this, the book concludes by outlining a special emergency regime for cyberspace.
TrainTrap
(2020)
Towards seasonal prediction: stratosphere-troposphere coupling in the atmospheric model ICON-NWP
(2020)
Stratospheric variability is one of the main potential sources for sub-seasonal to seasonal predictability in mid-latitudes in winter. Stratospheric pathways play an important role for long-range teleconnections between tropical phenomena, such as the quasi-biennial oscillation (QBO) and El Niño-Southern Oscillation (ENSO), and the mid-latitudes on the one hand, and linkages between Arctic climate change and the mid-latitudes on the other hand. In order to move forward in the field of extratropical seasonal predictions, it is essential that an atmospheric model is able to realistically simulate the stratospheric circulation and variability. The numerical weather prediction (NWP) configuration of the ICOsahedral Non-hydrostatic atmosphere model ICON is currently being used by the German Meteorological Service for the regular weather forecast, and is intended to produce seasonal predictions in future. This thesis represents the first extensive evaluation of Northern Hemisphere stratospheric winter circulation in ICON-NWP by analysing a large set of seasonal ensemble experiments.
An ICON control climatology simulated with a default setup is able to reproduce the basic behaviour of the stratospheric polar vortex. However, stratospheric westerlies are significantly too weak and major stratospheric warmings too frequent, especially in January. The weak stratospheric polar vortex in ICON is furthermore connected to a mean sea level pressure (MSLP) bias pattern resembling the negative phase of the Arctic Oscillation (AO). Since a good representation of the drag exerted by gravity waves is crucial for a realistic simulation of the stratosphere, three sensitivity experiments with reduced gravity wave drag are performed. Reductions of the non-orographic and of the orographic gravity wave drag each lead to a strengthening of the stratospheric vortex and thus a bias reduction in winter, in particular in January; however, the effect of the non-orographic gravity wave drag on the stratosphere is stronger. A third experiment, combining a reduced orographic and non-orographic drag, exhibits the largest stratospheric bias reductions. The analysis of stratosphere-troposphere coupling based on an index of the Northern Annular Mode demonstrates that ICON realistically represents downward coupling. This coupling is intensified and more realistic in experiments with a reduced gravity wave drag, in particular with reduced non-orographic drag. Tropospheric circulation is also affected by the reduced gravity wave drag, especially in January, when the strongly improved stratospheric circulation reduces biases in the MSLP patterns. Moreover, a retuning of the subgrid-scale orography parameterisations leads to a significant error reduction in the MSLP in all months. In conclusion, the combination of these adjusted parameterisations is recommended as a current optimal setup for seasonal simulations with ICON.
Additionally, this thesis discusses further possible influences on the stratospheric polar vortex, including the influence of tropical phenomena, such as the QBO and ENSO, as well as the influence of a rapidly warming Arctic. ICON does not simulate the quasi-oscillatory behaviour of the QBO and favours weak easterlies in the tropical stratosphere. A comparison with a reanalysis composite of the easterly QBO phase reveals that the shift towards the easterly QBO in ICON further weakens the stratospheric polar vortex. On the other hand, the stratospheric reaction to ENSO events in ICON is realistic: ICON and the reanalysis both exhibit a weakened stratospheric vortex in warm ENSO years. Furthermore, in particular in winter, warm ENSO events favour the negative phase of the Arctic Oscillation, whereas cold events favour the positive phase. The ICON simulations also suggest a significant effect of ENSO on the Atlantic-European sector in late winter. To investigate the influence of Arctic climate change on mid-latitude circulation changes, two differing approaches with transient and fixed sea ice conditions are chosen. Neither ICON approach exhibits the mid-latitude tropospheric negative Arctic Oscillation circulation response to amplified Arctic warming that has been discussed on the basis of observational evidence. Nevertheless, adding a new model to the current and active discussion on Arctic-midlatitude linkages further contributes to the understanding of divergent conclusions between model and observational studies.
A distance-education or e-learning platform should be able to provide a virtual laboratory that lets participants gain hands-on exercise experience and practice their skills remotely. This is especially true in cybersecurity e-learning, where participants need to be able to attack or defend an IT system. For a hands-on exercise, the virtual laboratory environment must be similar to the real operational environment, where an attacker or a victim is represented by a node in the virtual laboratory; a node is usually represented by a virtual machine (VM). Scalability has become a primary issue in virtual laboratories for cybersecurity e-learning because a VM needs a significant and fixed allocation of resources, and the available resources limit the number of simultaneous users. Scalability - that is, the number of simultaneous users - can be increased by using the available resources more efficiently and by providing more resources.
In this thesis, we propose two approaches to increase the efficiency of using the available resources. The first is to replace virtual machines (VMs) with containers wherever possible. The second is to share the load with a user-on-premise machine, which represents one of the nodes in a virtual laboratory scenario. We also propose two approaches to providing more resources: using public cloud services, and gathering resources from the crowd, which we refer to as the Crowdresourcing Virtual Laboratory (CRVL).
In the CRVL, the crowd can contribute their unused resources in the form of a VM, a bare-metal system, an account in a public cloud, a private cloud, or an isolated group of VMs; in this thesis, we focus on the VM case. The contributor must give the credentials of the VM admin or root user to the CRVL system. We propose an architecture and methods to integrate VMs into, and remove them from, the CRVL system automatically. A team placement algorithm is also investigated to optimize the usage of resources while giving the best service to the users. Because the CRVL system does not manage the contributor's host machine, it must ensure that VM integration will not harm that system and that the training material is stored securely on the contributor's side, so that no one can take the training material away without permission. We investigate ways to handle these threats.
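The team placement problem mentioned above can be sketched as a small capacity-assignment heuristic. This is a hypothetical first-fit/best-fit sketch, not the thesis' algorithm; the host names, slot counts, and the rule that a team should land on a single host are illustrative assumptions:

```python
def place_teams(teams, hosts):
    """Hypothetical placement sketch: each team needs a number of VM
    slots and should ideally land on one contributed host to keep its
    internal traffic local. Larger teams are placed first (so they can
    still find a single host), each onto the smallest host that fits."""
    placement = {}
    free = dict(hosts)  # host -> free VM slots
    for team, slots in sorted(teams.items(), key=lambda t: -t[1]):
        # Best fit: smallest host whose free capacity covers the team.
        host = next((h for h, cap in sorted(free.items(), key=lambda x: x[1])
                     if cap >= slots), None)
        if host is None:
            return None  # no feasible placement without splitting a team
        placement[team] = host
        free[host] -= slots
    return placement

hosts = {"lab-a": 4, "crowd-vm-1": 2, "crowd-vm-2": 3}
teams = {"red": 3, "blue": 2, "observers": 1}
print(place_teams(teams, hosts))
# {'red': 'crowd-vm-2', 'blue': 'crowd-vm-1', 'observers': 'lab-a'}
```

A real placement algorithm would also weigh contributor trust and network latency, which is exactly why the thesis flags it as a problem to be investigated rather than a solved step.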
We propose three approaches to harden a VM against a malicious host admin. To verify the integrity of a VM before integrating it into the CRVL system, we propose a remote verification method that does not require additional hardware such as a Trusted Platform Module chip. As the owner of the host machine, the host admin could access the VM's data via random access memory (RAM) through live memory dumping or Spectre and Meltdown attacks. To make it harder for a malicious host admin to obtain sensitive data from RAM, we propose a method that continually moves sensitive data around in RAM. We also propose a method to monitor the host machine by installing an agent on it; the agent monitors the hypervisor configuration and the host admin's activities.
To evaluate our approaches, we conduct extensive experiments with different settings. The use case in our approach is Tele-Lab, a virtual laboratory platform for cybersecurity e-learning. We use this platform as a basis for designing and developing our approaches. The results show that our approaches are practical and provide enhanced security.
Ammonia is a chemical of fundamental importance for nature's vital nitrogen cycle. It is crucial for the growth of living organisms and serves as a food and energy source. Industrial ammonia production is traditionally dominated by the Haber-Bosch process (HBP), the direct conversion of N2 and H2 gas at high temperature and pressure (~500 °C, 150-300 bar). However, it is not an ideal route because of its thermodynamic and kinetic limitations and its need for energy-intensive hydrogen production by reforming processes. These drawbacks of the HBP motivate the search for alternative techniques for efficient ammonia synthesis via electrochemical catalytic processes, in particular via water electrolysis, using water as the hydrogen source and thereby avoiding gas reforming.
This study investigates the interface effects between imidazolium-based ionic liquids and the surface of porous carbon materials, with special interest in their nitrogen absorption capability. As a further step, the possibility of establishing this interface as the catalytically active area for the electrochemical reduction of N2 to NH3 (the nitrogen reduction reaction, NRR) is evaluated. This particular combination was chosen because porous carbon materials and ionic liquids (ILs) are of significant importance in many scientific fields, including catalysis and electrocatalysis, due to their special structural and physicochemical properties. Primarily, the effects of confining the ionic liquid EmimOAc (1-ethyl-3-methylimidazolium acetate) in carbon pores were investigated. Salt-templated porous carbons with different porosities (microporous and mesoporous) and nitrogen species were used as model structures to compare IL confinement at different loadings. The nitrogen uptake of EmimOAc can be increased by about 10 times through confinement in the pores of carbon materials compared to the bulk form. The most improved nitrogen absorption was observed for IL confinement in micropores and in nitrogen-doped carbon materials, as a consequence of the maximized structural changes of the IL. Furthermore, the possible use of such interfaces between EmimOAc and porous carbon for the catalytic activation of dinitrogen during the kinetically challenging NRR, which is limited by poor gas absorption in the electrolyte, was examined. An electrocatalytic NRR system converting water and nitrogen gas to ammonia at ambient operating conditions (1 bar, 25 °C) was set up under an applied electric potential with a single-chamber electrochemical cell, consisting of the EmimOAc electrolyte combined with a porous-carbon working electrode and no traditional electrocatalyst. Under a potential of -3 V vs.
SCE for 45 minutes, an NH3 production rate of 498.37 μg h⁻¹ cm⁻² and a faradaic efficiency (FE) of 12.14% were achieved. The experimental observations show that an electric double layer, which serves as the catalytically active area, forms between the microporous carbon material and the ions of the EmimOAc electrolyte when a sufficiently high electric potential is applied. Compared with typical NRR systems reported in the literature, the presented electrochemical ammonia synthesis approach provides a significantly higher ammonia production rate and a chance to avoid the possible kinetic limitations of the NRR. The operating conditions, ammonia production rate and faradaic efficiency achieved without any synthetic electrocatalyst can be attributed to the electrocatalytic activation of nitrogen in the double layer formed between the carbon and the IL ions.
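The reported figures can be cross-checked with standard faradaic-efficiency bookkeeping (three electrons per NH3 molecule). The implied current density below is a back-of-the-envelope estimate derived from the stated rate and efficiency, not a value taken from the thesis:

```python
F = 96485.332      # Faraday constant, C mol^-1
M_NH3 = 17.031     # molar mass of NH3, g mol^-1

def faradaic_efficiency(m_nh3_g, charge_c):
    """Fraction of the passed charge that ended up in NH3.
    NRR consumes 3 electrons per NH3: N2 + 6 H+ + 6 e- -> 2 NH3."""
    n = m_nh3_g / M_NH3          # mol NH3 produced
    return 3 * F * n / charge_c  # 3 e- per NH3 molecule

# Consistency check on the reported 498.37 ug h^-1 cm^-2 at FE = 12.14%:
q_nh3 = 3 * F * (498.37e-6 / M_NH3)   # charge bound in the produced NH3, C
q_total = q_nh3 / 0.1214              # total charge passed per h and cm^2, C
print(q_total / 3600 * 1000)          # implied current density, ~19.4 mA cm^-2
```

The ~19 mA cm⁻² this implies is merely an internal consistency check of the two reported numbers; the remaining ~88% of the charge would go into the competing hydrogen evolution reaction.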
Lately, the integration of upconverting nanoparticles (UCNP) in industrial, biomedical and scientific applications has been increasingly accelerating, owing to the exceptional photophysical properties that UCNP offer. Some of the most promising applications lie in the field of medicine and bioimaging due to such advantages as, among others, deeper tissue penetration, reduced optical background, possibility for multicolor imaging, and lower toxicity, compared to many known luminophores. However, some questions regarding not only the fundamental photophysical processes, but also the interaction of the UCNP with other luminescent reporters frequently used for bioimaging and the interaction with biological media remain unanswered. These issues were the primary motivation for the presented work.
This PhD thesis investigated several aspects of various properties and possibilities for bioapplications of Yb3+,Tm3+-doped NaYF4 upconverting nanoparticles. First, the effect of Gd3+ doping on the structure and upconverting behaviour of the nanocrystals was assessed. The ageing process of the UCNP in cyclohexane was studied over 24 months on the samples with different Gd3+ doping concentrations. Structural information was gathered by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), dynamic light scattering (DLS), and discussed in relation to spectroscopic results, obtained through multiparameter upconversion luminescence studies at various temperatures (from 4 K to 295 K). Time-resolved and steady-state emission spectra recorded over this ample temperature range allowed for a deeper understanding of photophysical processes and their dependence on structural changes of UCNP.
A new protocol using a commercially available high boiling solvent allowed for faster and more controlled production of very small and homogeneous UCNP with better photophysical properties, and the advantages of a passivating NaYF4 shell were shown.
Förster resonance energy transfer (FRET) between four different species of NaYF4: Yb3+, Tm3+ UCNP (synthesized using the improved protocol) and a small organic dye was studied. The influence of UCNP composition and the proximity of Tm3+ ions (donors in the process of FRET) to acceptor dye molecules have been assessed. The brightest upconversion luminescence was observed in the UCNP with a protective inert shell. UCNP with Tm3+ ions only in the shell were the least bright, but showed the most efficient energy transfer.
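The distance dependence behind the shell-placement result above follows the standard Förster relation E = 1 / (1 + (r/R0)^6). The sketch below uses illustrative distances and an assumed R0; it is not fitted to the thesis' data:

```python
def fret_efficiency(r_nm, r0_nm):
    """Förster resonance energy transfer efficiency for a single
    donor-acceptor pair at distance r, where R0 is the Förster radius
    (the distance at which E = 0.5)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Why donor placement matters: Tm3+ donors buried deep in the particle
# core (large r to a surface-bound dye) barely transfer, while donors
# in or near the shell (small r) transfer efficiently. R0 = 5 nm is an
# assumed, illustrative value.
for r in (2.0, 5.0, 10.0):          # donor-to-dye distance, nm
    print(r, round(fret_efficiency(r, r0_nm=5.0), 3))
```

The sixth-power falloff is why UCNP with Tm3+ only in the shell can be the least bright overall yet show the most efficient transfer: every donor sits close to the acceptors.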
In the final part, two surface modification strategies were applied to make the UCNP water-soluble, which simultaneously allowed linking them, via a non-toxic copper-free click reaction, to liposomes that served as models for further cell experiments. The results were assessed on a confocal microscope system, which was made possible by the lesser-known downshifting properties of Yb3+, Tm3+-doped UCNP. Preliminary antibody-staining tests using two primary and one dye-labelled secondary antibody were performed on MDCK-II cells.
‘The Territorialities of U.S. Imperialisms’ sets into relation U.S. imperial and Indigenous conceptions of territoriality as articulated in U.S. legal texts and Indigenous life writing in the 19th century. It analyzes the ways in which U.S. legal texts as “legal fictions” narratively press to affirm the United States’ territorial sovereignty and coherence in spite of its reliance on a variety of imperial practices that flexibly disconnect and (re)connect U.S. sovereignty, jurisdiction and territory.
At the same time, the book acknowledges Indigenous life writing as legal texts in their own right and with full juridical force, which aim to highlight the heterogeneity of U.S. national territory both from their individual perspectives and in conversation with these legal fictions. Through this, the book’s analysis contributes to a more nuanced understanding of the coloniality of U.S. legal fictions, while highlighting territoriality as a key concept in the fashioning of the narrative of U.S. imperialism.
The Southern Central Andes (33°–36°S) are an excellent natural laboratory to study orogenic deformation processes, where boundary conditions, such as the geometry of the subducted plate, impose an important control on the evolution of the orogen. At the same time, the South American plate presents a series of heterogeneities that additionally impart control on the mode of deformation. This thesis aims to test the control of the latter factor on the construction of the Cenozoic Andean orogenic system.
From the integration of surface and subsurface information in the southern area (34–36°S), the evolution of Andean deformation over the steeply dipping subduction segment was studied. A structural model was developed evaluating the stress state from the Miocene to the present day and its influence on the migration of magmatic fluids and hydrocarbons. Based on these data, together with the data generated by other researchers in the northern zone of the study area (33–34°S), geodynamic numerical modeling was performed to test the hypothesis of a decisive role of upper-plate heterogeneities in the Andean evolution. Geodynamic codes (LAPEX-2D and ASPECT), which simulate the behavior of materials with elasto-visco-plastic rheologies under deformation, were used. The model results suggest that upper-plate contractional deformation is significantly controlled by the strength of the lithosphere, which is defined by the composition of the upper and lower crust and by the proportion of lithospheric mantle, which in turn is determined by previous tectonic events. In addition, previous regional tectono-magmatic events also defined the composition of the crust and its geometry, another factor that controls the localization of deformation. Accordingly, with a more felsic lower-crustal composition the deformation follows a pure-shear mode, while more mafic compositions induce a simple-shear deformation mode. It was also observed that the initial lithospheric thickness may fundamentally control the location of deformation, with zones of thin lithosphere being prone to concentrate it. Finally, it was found that an asymmetric lithosphere–asthenosphere boundary resulting from corner flow in the mantle wedge of the eastward-directed subduction zone tends to generate east-vergent detachments.
Most reading theories assume that readers aim at word centers for optimal information processing. During reading, saccade targeting turns out to be imprecise: Saccades’ initial landing positions often miss the word centers and have high variance, with an additional systematic error that is modulated by the distance from the launch site to the center of the target word. The performance of the oculomotor system, as reflected in the statistics of within-word landing positions, turns out to be very robust and mostly affected by the spatial information during reading. Hence, it is assumed that the saccade generation is highly automated.
The main goal of this thesis is to explore the performance of the oculomotor system under various reading conditions where orthographic information and the reading direction were manipulated. Additionally, the challenges in understanding the eye movement data to represent the oculomotor process during reading are addressed.
Two experimental studies and one simulation study were conducted for this thesis, which resulted in the following main findings:
(i) Reading texts with orthographic manipulations leads to specific changes in the eye movement patterns, both in temporal and spatial measures. The findings indicate that the oculomotor control of eye movements during reading depends on the reading conditions (Chapters 2 & 3).
(ii) Saccades’ accuracy and precision can be simultaneously modulated under a reversed reading condition, supporting the assumption that the random and systematic oculomotor errors are not independent. By assuming that readers increase the precision of sensory observation while maintaining the learned prior knowledge when the reading direction is reversed, a process-oriented Bayesian model for saccade targeting can account for the simultaneous reduction of oculomotor errors (Chapter 2).
(iii) Plausible parameter values serving as proxies for the intended within-word landing positions can be estimated by using the maximum a posteriori estimator from Bayesian inference. Using the mean value of all observations as proxies is insufficient for studies focusing on the launch-site effect because the method exhibits the strongest bias when estimating the size of the effect. Mislocated fixations remain a challenge for the currently known estimation methods, especially when the systematic oculomotor error is large (Chapter 4).
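The contrast between a mean-based proxy and a Bayesian maximum a posteriori (MAP) proxy for an intended landing position can be sketched in a few lines. This is an illustrative toy model only: the Gaussian prior, the known launch-site bias, and all parameter values are assumptions made for the example, not the estimator evaluated in Chapter 4.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup (names and values are illustrative, not the thesis model):
# readers aim at an intended landing position mu_true; observed landing
# positions carry a systematic launch-site error plus random oculomotor noise.
mu_true = 0.0   # intended position, in letters relative to word center
bias = 0.8      # systematic (launch-site) error, assumed known here
sigma = 1.5     # SD of the random oculomotor error, in letters
obs = mu_true + bias + sigma * rng.standard_normal(500)

# Naive proxy: the sample mean of all observations absorbs the
# systematic error entirely and therefore overestimates the target.
naive = obs.mean()

# MAP proxy: Gaussian prior that intended positions cluster at the word
# center, Gaussian likelihood centered on (intended position + bias).
# For conjugate Gaussians the posterior mode has a closed form.
prior_mu, prior_sd = 0.0, 1.0
n = len(obs)
post_precision = 1.0 / prior_sd**2 + n / sigma**2
map_est = (prior_mu / prior_sd**2 + (obs - bias).sum() / sigma**2) / post_precision

print(f"naive mean proxy: {naive:.3f}, MAP proxy: {map_est:.3f}")
```

With a systematic error folded into the observations, the sample mean inherits the bias, while the bias-aware MAP proxy shrinks back toward the prior at the word center.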
The results reported in this thesis highlight the role of the oculomotor system, together with underlying cognitive processes, in eye movements during reading. The modulation of oculomotor control can be captured through a precise analysis of landing positions.
The two hallmark features of Brownian motion are the linear growth ⟨x²(t)⟩ = 2dDt of the mean squared displacement (MSD) with diffusion coefficient D in d spatial dimensions, and the Gaussian distribution of displacements. With the increasing complexity of the studied systems, deviations from these two central properties have been unveiled over the years. Recently, a large variety of systems have been reported in which the MSD exhibits the linear growth in time of Brownian (Fickian) transport while the distribution of displacements is pronouncedly non-Gaussian (Brownian yet non-Gaussian, BNG). A similar behaviour is also observed for viscoelastic-type motion, where an anomalous trend of the MSD, i.e., ⟨x²(t)⟩ ~ t^α with α ≠ 1, is combined with a priori unexpected non-Gaussian distributions (anomalous yet non-Gaussian, ANG). This kind of behaviour observed in BNG and ANG diffusion has been related to the presence of heterogeneities in the systems, and a common approach has been established to address it: the random diffusivity approach.
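The BNG phenomenology can be reproduced with a minimal superstatistical sketch: each trajectory receives a fixed diffusivity drawn from an exponential distribution, one of the simplest choices within the random diffusivity approach (the distribution and all parameter values here are illustrative). The ensemble MSD then stays linear in time while the displacement distribution becomes Laplace-like, with heavy exponential tails.

```python
import numpy as np

rng = np.random.default_rng(0)
n_traj, D0 = 200_000, 1.0

# Superstatistics: each trajectory gets a fixed diffusivity drawn from
# an exponential distribution (one simple choice among many).
D = rng.exponential(D0, size=n_traj)

def displacements(t):
    # 1D Brownian displacement at time t, one value per trajectory
    return np.sqrt(2.0 * D * t) * rng.standard_normal(n_traj)

x1, x2 = displacements(1.0), displacements(2.0)

msd1, msd2 = np.mean(x1**2), np.mean(x2**2)
# MSD is Fickian: <x^2(2t)> / <x^2(t)> ≈ 2
print(f"MSD ratio: {msd2 / msd1:.3f}")

# ...but the displacement distribution is Laplace, with excess
# kurtosis ≈ 3 instead of 0 for a Gaussian.
kurt = np.mean(x1**4) / np.mean(x1**2) ** 2 - 3.0
print(f"excess kurtosis: {kurt:.3f}")
```

The same recipe with a time-dependent (diffusing) diffusivity instead of a frozen one is the step from the superstatistical to the diffusing-diffusivity picture discussed below.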
This dissertation explores extensively the field of random diffusivity models. Starting from a chronological description of the main approaches used in attempts to describe BNG and ANG diffusion, different mathematical methodologies are defined for the resolution and study of these models. The processes reported in this work can be classified into three subcategories: i) randomly-scaled Gaussian processes, ii) superstatistical models and iii) diffusing diffusivity models, all belonging to the more general class of random diffusivity models. The study then focuses primarily on BNG diffusion, which is by now well established and relatively well understood. Nevertheless, many examples are discussed for the description of ANG diffusion, in order to highlight the scenarios known so far for the study of this class of processes.
The second part of the dissertation deals with the statistical analysis of random diffusivity processes. A general description based on the concept of moment-generating function is initially provided to obtain standard statistical properties of the models. Then, the discussion moves to the study of the power spectral analysis and the first passage statistics for some particular random diffusivity models. A comparison between the results coming from the random diffusivity approach and the ones for standard Brownian motion is discussed. In this way, a deeper physical understanding of the systems described by random diffusivity models is also outlined.
To conclude, a discussion based on the possible origins of the heterogeneity is sketched, with the main goal of inferring which kind of systems can actually be described by the random diffusivity approach.
Plants are an attractive platform for the production of medicinal compounds because of their potential to generate large amounts of biomass cheaply. The use of chloroplast transformation is an attractive way to achieve the recombinant production of proteins in plants, because of the chloroplasts’ high capacity to produce foreign proteins in comparison to nuclear transformed plants. In this thesis, the production of two different types of antimicrobial polypeptides in chloroplasts is explored.
The first example is the production of the potent HIV entry inhibitor griffithsin. Griffithsin has the potential to prevent HIV infections by blocking the entry of the virus into human cells. Here the use of transplastomic plants as an inexpensive production method for griffithsin was explored. Transplastomic plants grew healthily and were able to accumulate griffithsin to up to 5% of the total soluble protein. Griffithsin could easily be purified from tobacco leaf tissue and had a similarly high neutralization activity as griffithsin recombinantly produced in bacteria. Griffithsin could be purified from dried tobacco leaves, demonstrating that dried leaves could be used as a storable starting material for griffithsin purification, circumventing the need for immediate purification after harvest.
The second example is the production of antimicrobial peptides (AMPs) that have the capacity to kill bacteria and are an attractive alternative to currently used antibiotics that are increasingly becoming ineffective. The production of antimicrobial peptides was considerably more challenging than the production of griffithsin. Small AMPs are prone to degradation in plastids. This problem was overcome by fusing AMPs to generate larger polypeptides. In one approach, AMPs were fused to each other to increase size and combine the mode of action of multiple AMPs. This improved the accumulation of AMPs but also resulted in impaired plant growth. This was solved by the use of two different inducible systems, which could largely restore plant growth. Fusions of multiple AMPs were insoluble and could not be purified.
In addition to fusing AMPs to each other, the fusion of AMPs to small ubiquitin-like modifier (SUMO), was tested as an approach to improve the accumulation, facilitate purification, and reduce the toxicity of AMPs to chloroplasts. Fusion of AMPs to SUMO indeed increased accumulation while reducing the toxicity to the plants. SUMO fusions produced inside chloroplasts could be purified, and SUMO could be efficiently cleaved off with the SUMO protease. Such fusions therefore provide a promising strategy for the production of AMPs and other small polypeptides inside chloroplasts.
One of the tremendous discoveries of the Cassini spacecraft has been the detection of propeller structures in Saturn's A ring. Although the generating moonlet is too small to be resolved by the cameras aboard Cassini, the density structure its gravity produces within the rings can be well observed. The largest observed propeller, called Blériot, has an azimuthal extent of several thousand kilometers. Thanks to its large size, Blériot could be identified in different images over a time span of more than 10 years, allowing the reconstruction of its orbital evolution. It turns out that Blériot deviates considerably, by several thousand kilometers in the azimuthal direction, from its expected Keplerian orbit. This excess motion can be well reconstructed by a superposition of three harmonics and therefore resembles the typical fingerprint of a resonantly perturbed body. This PhD thesis addresses the excess motion of Blériot. Resonant perturbations are known for some of the outer satellites of Saturn. Thus, in the first part of this thesis, we search for suitable resonance candidates near the propeller that might explain the observed periods and amplitudes. In numerical simulations, we show that resonances with Prometheus, Pandora and Mimas can indeed explain the libration periods in good agreement, but not the amplitudes. The amplitude problem is solved by the introduction of a propeller–moonlet interaction model, in which we assume a broken symmetry of the propeller caused by a small displacement of the moonlet. This results in a librating motion of the moonlet around the propeller's symmetry center due to the non-vanishing accelerations. The retarded reaction of the propeller structure to the motion of the moonlet causes the propeller to become asymmetric. Hydrodynamic simulations performed to test our analytical model confirm our predictions.
In the second part of this thesis, we consider a stochastic migration of the moonlet, an alternative hypothesis to explain the observed excess motion of Blériot. The mean longitude is a time-integrated quantity and thus introduces a correlation between the independent kicks of a random walk, smoothing the noise and making the residual look similar to the one observed for Blériot. We apply a diagonalization test to decorrelate the observed residuals for the propellers Blériot and Earhart and the ring-moon Daphnis. It turns out that the decorrelated distributions do not strictly follow the expected Gaussian distribution. Since the decorrelation method fails to distinguish a correlated random walk from a noisy libration, we provide an alternative study. Assuming the three-harmonic fit to be a valid representation of the excess motion of Blériot, independently of its origin, we test the likelihood that this excess motion can be created by a random walk. It turns out that neither an uncorrelated nor a correlated random walk is likely to explain the observed excess motion.
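The smoothing effect of time integration that underlies the correlated random walk hypothesis can be illustrated with a toy sketch (dimensionless, illustrative quantities): independent kicks accumulate once into a random walk in mean motion, and once more into the mean longitude, whose increments are then strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sketch: independent migration kicks change the mean motion;
# the mean longitude integrates these kicks over time, correlating its
# increments and smoothing the residual. All values are arbitrary.
n_steps = 2000
kicks = rng.standard_normal(n_steps)         # independent kicks
mean_motion = np.cumsum(kicks)               # random walk in mean motion
longitude_residual = np.cumsum(mean_motion)  # time-integrated -> smooth

def lag1_autocorr(x):
    # lag-1 autocorrelation of the increments of a series
    dx = np.diff(x)
    return np.corrcoef(dx[:-1], dx[1:])[0, 1]

# Increments of the raw walk are uncorrelated; increments of the
# integrated quantity are strongly correlated.
print(lag1_autocorr(mean_motion), lag1_autocorr(longitude_residual))
```

The first value stays near zero (independent kicks), while the second approaches one: the integrated series looks smooth enough to mimic a libration, which is why a separate likelihood test is needed to tell the two apart.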
The field of gamma-ray astronomy opened a new window into the non-thermal universe that allows studying the acceleration sites of cosmic rays and the role of cosmic rays in evolutionary processes in galaxies. The detection of almost one hundred Galactic very-high-energy (VHE: 0.1–100 TeV) gamma-ray sources in the Milky Way demonstrates that particle acceleration up to tens of TeV is a common phenomenon. Furthermore, the detection of VHE gamma rays from other galaxies has confirmed that cosmic rays are not exclusively accelerated in the Milky Way. The rapid development of gamma-ray astronomy in the past two decades has led to a transition from the detection and study of individual sources to source population studies. To answer the question whether the VHE gamma-ray source population of the Milky Way is unique, observations of galaxies for which individual sources can be resolved are required. Such galaxies are the Magellanic Clouds, two satellite galaxies of the Milky Way, which have been surveyed by the H.E.S.S. experiment in the last decade. In this thesis, data from a total of 450 hours of H.E.S.S. observations towards the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC) are presented. During the analysis of the data sets, special emphasis is put on the evaluation of the systematic uncertainties of the experiment in order to ensure an unbiased flux estimation of the potential VHE gamma-ray sources of the Magellanic Clouds. A detailed analysis of the survey data revealed the detection of the gamma-ray binary LMC P3, the most powerful gamma-ray binary known so far. It is located in the LMC and thus increases the number of known VHE gamma-ray sources in the LMC to four. No other VHE gamma-ray source is detected in the Magellanic Clouds, and integral flux upper limits are estimated. These flux upper limits are used to perform a source population study based on known VHE source classes and existing multi-wavelength catalogues.
A comparison of the source populations of the Magellanic Clouds and the Milky Way revealed that no other source in the Magellanic Clouds is as bright as the most luminous VHE gamma-ray source in the LMC, the pulsar wind nebula N 157B, and that one-third of the source population of the Magellanic Clouds is less luminous than the other known VHE gamma-ray sources in the LMC. Only for a few sources do the limits constrain luminosity levels typical of Galactic VHE sources, which are more than one order of magnitude fainter than the detected sources in the LMC. Based on the flux upper limits, differences between the TeV source populations of the Magellanic Clouds and the Milky Way, as well as the importance of the source environments, are discussed.
Organizations incorporate the institutional demands from their environment in order to be deemed legitimate and survive. Yet, complexifying societies promulgate multiple and sometimes inconsistent institutional prescriptions. When these prescriptions collide, organizations are said to face “institutional complexity”. How does an organization then incorporate incompatible demands? What are the consequences of institutional complexity for an organization? The literature provides contradictory conceptual and empirical insights on the matter. A central assumption, however, remains that internal incompatibilities generate tensions that, under certain conditions, can escalate into intractable conflicts, resulting in dysfunctionality and loss of legitimacy. The present research is an inquiry into what happens inside an organization when it incorporates complex institutional demands.
To answer this question, I focus on how individuals inside an organization interpret a complex institutional prescription. I examine how members of the French Development Agency interpret ‘results-based management’, a central but complex concept of organizing in the field of development aid. I use an inductive mixed methods design to systematically explore how different interpretations of results-based management relate to one another and to the organizational context in which they are embedded.
The results reveal that results-based management is a contested concept in the French Development Agency. I find multiple interpretations of the concept, which are attached to partly incompatible rationales about “who we are” and “what we do as an organization”. These rationales nevertheless coexist as balanced forces, without escalating into open conflict. The analysis points to four reasons for this peaceful coexistence of diverging rationales inside one and the same organization: 1) individuals’ capacity to manipulate different interpretations of a complex institutional demand, 2) the nature of interpretations, which makes them more or less prone to conflict, 3) the balanced distribution of rationales across the organizational sub-contexts and 4) the shared rules of interpretation provided by the larger socio-cultural context.
This research shows that an organization that incorporates institutional complexity comes to represent different, partly incompatible things to its members without being at war with itself. In doing so, it contributes to our knowledge of institutional complexity and organizational hybridity. It also advances our understanding of internal organizational legitimacy and of the translation of managerial concepts in organizations.
This cumulative thesis is concerned with the evolution of geomagnetic activity since the beginning of the 20th century, that is, the time-dependent response of the geomagnetic field to solar forcing. The focus lies on the description of the magnetospheric response field at ground level, which is particularly sensitive to the ring current system, and an interpretation of its variability in terms of the solar wind driving. Thereby, this work contributes to a comprehensive understanding of long-term solar-terrestrial interactions.
The common basis of the presented publications is formed by a reanalysis of vector magnetic field measurements from geomagnetic observatories located at low and middle geomagnetic latitudes. In the first two studies, new geomagnetic activity indices targeting the ring current are derived: the Annual and Hourly Magnetospheric Currents indices (A/HMC). Compared to existing indices (e.g., the Dst index), they not only extend the covered period by at least three solar cycles but also constitute a qualitative improvement concerning the absolute index level and the ~11-year solar cycle variability. The analysis of A/HMC shows that (a) the annual geomagnetic activity experiences an interval-dependent trend, with an overall linear decline of ~5 % during 1900–2010; (b) the average trend-free activity level amounts to ~28 nT; (c) the solar-cycle-related variability shows amplitudes of ~15–45 nT; and (d) the activity level for geomagnetically quiet conditions (Kp < 2) lies slightly below 20 nT. The plausibility of the last three points is ensured by comparison to independent estimations based either on magnetic field measurements from LEO satellite missions (since the 1990s) or on the modeling of geomagnetic activity from solar wind input (since the 1960s). An independent validation of the long-term trend is problematic, mainly because the sensitivity of the locally measured geomagnetic activity depends on geomagnetic latitude. Consequently, A/HMC is neither directly comparable to global geomagnetic activity indices (e.g., the aa index) nor to the partly reconstructed open solar magnetic flux, which requires a homogeneous response of the ground-based measurements to the interplanetary magnetic field and the solar wind speed.
The last study combines a consistent, HMC-based identification of geomagnetic storms from 1930–2015 with an analysis of the corresponding spatial (magnetic-local-time-dependent) disturbance patterns. Among other findings, the disturbances at dawn and dusk, particularly their evolution during the storm recovery phases, are shown to be indicative of the driving solar wind structure (Interplanetary Coronal Mass Ejections vs. Stream or Co-rotating Interaction Regions), which enables a backward prediction of the storm driver classes. The results indicate that ICME-driven geomagnetic storms have decreased since 1930, which is consistent with the concurrent decrease of HMC. Out of the compiled follow-up studies, the inclusion of measurements from high-latitude geomagnetic observatories into the third study's framework seems most promising at this point.
It has frequently been observed that single emotional events are not only more efficiently processed, but also better remembered, forming longer-lasting memory traces than neutral material. However, when emotional information is perceived as part of a complex event, such as in the context of or in relation to other events and/or source details, the modulatory effects of emotion are less clear. The present work investigates how emotional contextual source information modulates the initial encoding and subsequent long-term retrieval of associated neutral material (item memory) and contextual source details (contextual source memory). To do so, a two-task experiment was used, consisting of an incidental encoding task in which neutral objects were displayed over different contextual background scenes that varied in emotional content (unpleasant, pleasant, and neutral), and a delayed retrieval task (1 week), in which previously encoded objects and new ones were presented. In a series of studies, behavioral indices (Studies 2, 3, and 5), event-related potentials (ERPs; Studies 1–4), and functional magnetic resonance imaging (Study 5) were used to investigate whether emotional contexts can rapidly tune the visual processing of associated neutral information (Study 1) and modulate long-term item memory (Study 2), how different recognition memory processes (familiarity vs. recollection) contribute to these emotion effects on item and contextual source memory (Study 3), whether the emotional effects on item memory can also be observed during spontaneous retrieval (Study 4), and which brain regions underpin the modulatory effects of emotional contexts on item and contextual source memory (Study 5). In Study 1, it was observed that emotional contexts, by means of emotional associative learning, can rapidly alter the processing of associated neutral information. Neutral items associated with emotional contexts (i.e.
emotional associates), compared to neutral ones, showed enhanced perceptual and more elaborate processing after a single pairing, as indexed by larger amplitudes of the P100 and LPP components, respectively. Study 2 showed that emotional contexts produce longer-lasting memory effects, as evidenced by better item memory performance and larger ERP Old/New differences for emotional associates. In Study 3, a mnemonic differentiation between item and contextual source memory was observed, which was modulated by emotion. Item memory was driven by familiarity, independently of the emotional contexts during encoding, whereas contextual source memory was driven by recollection and was better for emotional material. As in Study 2, enhancing effects of emotional contexts on item memory were observed in ERPs associated with recollection processes. Likewise, for contextual source memory, a pronounced recollection-related ERP enhancement was observed exclusively for emotional contexts. Study 4 showed that the long-term recollection enhancement of emotional contexts on item memory can be observed even when retrieval is not explicitly attempted, as measured with ERPs, suggesting that the emotion-enhancing effects on memory are not tied to the task employed during recognition, but to the motivational relevance of the triggering event. In Study 5, it was observed that the enhancing effects of emotional contexts on item and contextual source memory involve a stronger engagement of brain regions associated with memory recollection, including areas of the medial temporal lobe, posterior parietal cortex, and prefrontal cortex.
Taken together, these findings suggest that emotional contexts rapidly modulate the initial processing of associated neutral information as well as the subsequent long-term item and contextual source memories. The memory-enhancing effects of emotional contexts are supported by recollection rather than familiarity processes, and are triggered both when retrieval is explicitly attempted and when it occurs spontaneously. These results provide new insights into the modulatory role of emotional information on the visual processing and long-term recognition memory of complex events. The present findings are integrated into current theoretical models, and future research directions are discussed.
With populations growing worldwide and climate change threatening food production, there is an urgent need to find ways to ensure food security. Increasing the carbon fixation rate in plants is a promising approach to boost crop yields. The carbon-fixing enzyme Rubisco catalyzes, besides the carboxylation reaction, also an oxygenation reaction that generates glycolate-2P, which needs to be recycled via a metabolic route termed photorespiration. Photorespiration dissipates energy and, most importantly, releases previously fixed CO2, thus significantly lowering the carbon fixation rate and yield. Engineering plants to omit photorespiratory CO2 release is the goal of the FutureAgriculture consortium, and this thesis is part of this collaboration. The consortium aims to establish alternative glycolate-2P recycling routes that do not release CO2. Ultimately, these are expected to increase carbon fixation rates and crop yields. Natural and novel reactions, the latter requiring enzyme engineering, were considered in the pathway design process. Here I describe the engineering of two pathways, the arabinose-5P and the erythrulose shunt. They were designed to recycle glycolate-2P via glycolaldehyde into a sugar phosphate and thereby reassimilate glycolate-2P into the Calvin cycle. I used Escherichia coli gene deletion strains to validate and characterize the activity of both synthetic shunts. The strains' auxotrophies can be alleviated by the activity of the synthetic route, thus providing a direct way to select for pathway activity. I introduced all pathway components into these dedicated selection strains and discovered inhibitions, limitations and metabolic cross-talk interfering with pathway activity. After resolving these issues, I was able to show the in vivo activity of all pathway components and combine them into functional modules. Specifically, I demonstrate the activity of a new-to-nature module of glycolate reduction to glycolaldehyde.
Also, I successfully show a new glycolaldehyde assimilation route via arabinose-5P to ribulose-5P. In addition, all necessary enzymes for glycolaldehyde assimilation via L-erythrulose were shown to be active and an L-threitol assimilation route via L-erythrulose was established in E. coli. On their own, these findings demonstrate the power of using an easily engineerable microbe to test novel pathways; combined, they will form the basis for implementing photorespiration bypasses in plants.
"How Wenzel and Cassie were wrong" – this was the eye-catching title of an article published by Lichao Gao and Thomas McCarthy in 2007, in which fundamental interpretations of wetting behavior were put into question. The authors initiated a discussion on a subject which had been generally accepted a long time ago, and they showed that wetting phenomena were not as fully understood as imagined. Similarly, this thesis puts a focus on certain aspects of liquid wetting which have so far been widely neglected in terms of interpretation and experimental proof. While the effect of surface roughness on the macroscopically observed wetting behavior is commonly and reliably interpreted according to the well-known models of Wenzel and Cassie/Baxter, the size scale of the structures responsible for the surface's rough texture has not been of further interest. Analogously, the limits of these models have not been described and exploited. Thus, the question arises: what happens when the size of surface structures is reduced to the size of the contacting liquid molecules themselves? Are common methods still valid, or can deviations from macroscopic behavior be observed?
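For reference, the two classical models predict the apparent contact angle θ* from the intrinsic Young angle θ via cos θ* = r cos θ (Wenzel, with roughness ratio r) and cos θ* = f cos θ + f − 1 (Cassie/Baxter, with solid fraction f). A small numerical sketch, with arbitrary example values rather than measurements from this work:

```python
import numpy as np

# Textbook Wenzel and Cassie-Baxter predictions for the apparent
# contact angle on a rough hydrophobic surface (illustrative values).
theta = np.radians(110.0)  # intrinsic (Young) contact angle
r = 1.3                    # Wenzel roughness ratio (true/projected area)
f = 0.6                    # Cassie solid fraction (the rest is trapped air)

theta_wenzel = np.degrees(np.arccos(r * np.cos(theta)))
theta_cassie = np.degrees(np.arccos(f * np.cos(theta) + (f - 1.0)))

print(f"Wenzel: {theta_wenzel:.1f} deg, Cassie-Baxter: {theta_cassie:.1f} deg")
```

Both models amplify the hydrophobicity of a θ > 90° surface; the deviations reported in this thesis are measured against exactly such predictions when the roughness shrinks to sub-nm scales.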
This thesis aims to provide a starting point for answering these questions. In order to investigate the effect of smallest-scale surface structures on liquid wetting, a suitable model system is developed by means of self-assembled monolayer (SAM) formation from (fluoro)organic thiols with differing alkyl chain lengths. Surface topographies are created which rely on size differences of several Ångströms and exhibit surprising wetting behavior depending on the choice of the individual precursor system. Contact angles are detected experimentally which deviate considerably from theoretical calculations based on the Wenzel and Cassie/Baxter models and confirm that sub-nm surface topographies affect wetting. Moreover, the experimentally determined wetting properties are found to correlate well with an assumed scale-dependent surface tension of the contacting liquid. This behavior has already been described for scattering experiments taking into account thermally induced capillary waves on the liquid surface, and had been predicted earlier by theoretical calculations.
However, the investigation of model surfaces requires the provision of suitable precursor molecules, which are not commercially available, and thus opens the door to the exotic chemistry of fluoro-organic materials. During the course of this work, the synthesis of long-chain precursors is examined, with a particular focus on oligomerically pure semi-fluorinated n-alkyl thiols and n-alkyl trichlorosilanes. General protocols for the syntheses of the desired compounds are developed, and product mixtures are separated into fractions of individual chain lengths by fluorous-phase high-performance liquid chromatography (F-HPLC).
The transition from model systems to technically more relevant surfaces and applications is initiated through the deposition of SAMs from long-chain fluorinated n-alkyl trichlorosilanes. Depositions are accomplished by a vapor-phase deposition process conducted on a pilot-scale set-up, which enables exact control of the relevant process parameters. Thus, the influence of varying deposition conditions on the properties of the final coating is examined and analyzed for the most important parameters. The strongest effect is observed for the partial pressure of reactive water vapor, which directly controls the extent of precursor hydrolysis during the deposition process. The experimental results suggest that the formation of ordered monolayers relies on the amount of hydrolyzed silanol species present in the deposition system, irrespective of the exact degree of hydrolysis. However, at increased amounts of species that can cross-link via condensation reactions, the film quality deteriorates. This effect is assumed to be caused by the introduction of defects within the film and the adsorption of cross-linked agglomerates. Deposition conditions are also investigated for chain-extended precursor species and reveal distinct differences caused by chain elongation.
This thesis is focused on a better understanding of the formation mechanism of bulk birefringence gratings (BBG) and surface relief gratings (SRG) in photosensitive polymer films. A new set-up is developed that enables in situ investigation of how the polymer film is structured during irradiation with modulated light. The novelty of the equipment is that it combines several techniques: a diffraction efficiency (DE) set-up, an atomic force microscope (AFM), and an optical set-up for controlled illumination of the sample. This enables the simultaneous acquisition and differentiation of both gratings (BBG and SRG) while changing the irradiation conditions in a desired way.
The dissertation is based on five publications. The first publication (I) is focused on the description of the set-up and the interpretation of the measured data. A fine structure within the first-order diffraction spot is observed, which results from the inhomogeneity of the inscribed gratings.
In the second publication (II) the interplay of BBG and SRG in the DE is discussed. It is found that, depending on the polarization of a weak probe beam, the diffraction components of the SRG and BBG interfere either constructively or destructively in the DE, altering the intensity distribution within the diffracted spot.
The third (III) and fourth (IV) publications describe the light-induced reconfiguration of surface structures. Special attention is paid to the conditions influencing the erasure of topography and bulk gratings, which can be achieved via thermal treatment or illumination of the polymer film. By translating the interference pattern (IP) in a controlled way, the optical erasure speed is significantly increased. Additionally, a dynamically reconfigurable surface is generated, which can move surface-attached objects through continuous translation of the interference pattern during irradiation of the polymer film.
The fifth publication (V) deals with the polymer deformation under irradiation with the SP-IP, the only IP that generates a topography grating with half the period of the IP on the photosensitive polymer film. This mechanism can be used, e.g., to generate an SRG below the diffraction limit of light. It also offers an easy way to change the period of the surface grating by a small change in the polarization angle of the interfering beams, without adjusting the optical path of the two beams. Additionally, complex surface gratings formed in mixed polarization and intensity interference patterns are shown.
I J. Jelken, C. Henkel and S. Santer, Applied Physics B 125 (2019), 218
II J. Jelken, C. Henkel and S. Santer, Applied Physics Letters 116 (2020), 051601
III J. Jelken and S. Santer, RSC Advances 9 (2019), 20295
IV J. Jelken, M. Brinkjans, C. Henkel and S. Santer, SPIE Proceedings 11367 (2020), 1136710
V J. Jelken, C. Henkel and S. Santer, Formation of Half-Period Surface Relief Gratings in Azobenzene Containing Polymer Films (submitted to Applied Physics B)
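The grating geometry underlying these studies follows the standard two-beam interference formula, with the SRG period halved in the SP-SP configuration as described for publication (V). In the sketch below, the writing wavelength and half-angle are illustrative values, not the experimental settings of the publications.

```python
import math

def ip_period_nm(lambda_nm, half_angle_deg):
    """Period of a two-beam interference pattern (IP):
    Lambda = lambda / (2 * sin(theta)), theta = half-angle between beams."""
    return lambda_nm / (2.0 * math.sin(math.radians(half_angle_deg)))

def srg_period_nm(lambda_nm, half_angle_deg, sp_sp=False):
    """Period of the inscribed surface relief grating (SRG).
    For the SP-SP polarization configuration the topography grating
    has half the IP period; otherwise it matches the IP period."""
    p = ip_period_nm(lambda_nm, half_angle_deg)
    return p / 2.0 if sp_sp else p

# Illustrative numbers: a ~491 nm writing laser, 7.1 deg half-angle
print(ip_period_nm(491, 7.1))               # IP period in nm
print(srg_period_nm(491, 7.1, sp_sp=True))  # half-period SRG in nm
```

This also illustrates why a small change in the polarization configuration changes the surface-grating period without touching the optical path of the two beams.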
The ability of a company to innovate and to launch innovations is a critical competitive edge in the 21st century. Large organizations therefore increasingly recognize employees as a significant factor and critical source of innovation. Several studies assert that every employee has certain skills and knowledge to offer and can contribute to innovation. Hence, every employee has a certain 'entrepreneurial potential'. This potential can be expressed in the form of entrepreneurial behaviour and can occur in many ways, from mono-personal innovation championing to many small-scale contributions in which several individuals team up for innovation. To support the entrepreneurial behaviour of their employees, large organizations increasingly rely on Corporate Entrepreneurship: they set up organizational structures and venturing units and offer vehicles and tools to make their employees more entrepreneurial. The emergence of new tools and technologies thereby allows for new ways of employee involvement, including the collaborative development of more radical innovations. Yet many such offerings fail to achieve the desired outcome. While some employees immediately opt in for innovation, others do not, and their entrepreneurial potential remains untapped. This research explores how large organizations can better support their employees in expressing their entrepreneurial potential, thus moving from non-entrepreneurial behaviour, or unwillingness to be involved, to actually expressing entrepreneurial behaviour. The underlying research is therefore two-fold: while focusing on the individual level and the entrepreneurial behaviour of employees, it also takes the organizational perspective into account in order to identify how non-entrepreneurial behaviour can be stimulated towards entrepreneurial behaviour.
Using an empirical qualitative research design based on pragmatism and abduction, data are collected by means of qualitative interviews as well as a longitudinal use-case setting. Grounded theory is then applied for analysis and sense-making. The main outcome is a theoretical model of why employees do or do not express their entrepreneurial potential and how non-expression can potentially be turned into entrepreneurial behaviour. The results indicate that there is no one-size-fits-all model of Corporate Entrepreneurship. This research therefore argues that organizations can achieve higher levels of entrepreneurial behaviour when addressing employees differently. By developing a theoretical model as well as suggestions for how this model can be applied in practice, this research contributes to theory and practice alike. The document closes by suggesting future research areas around supporting employees to express their entrepreneurial potential.
The current thesis focuses on the properties of graphene supported by metallic substrates, and specifically on the behaviour of electrons in such systems. Methods of scanning tunneling microscopy, electron diffraction and photoemission spectroscopy were applied to study the structural and electronic properties of graphene. The first part of this work introduces the most relevant aspects of graphene physics and the methodical background of the experimental techniques used in this thesis.
The scientific part of this work starts with an extensive scanning tunneling microscopy study of the nanostructures that appear in Au-intercalated graphene on Ni(111). This study aimed to explore possible structural explanations for the Rashba-type spin splitting of ~100 meV experimentally observed in this system, which is much larger than predicted by theory. It was demonstrated that gold can be intercalated under graphene not only as a dense monolayer, but also in the form of well-ordered periodic arrays of nanoclusters, a structure not previously reported. Such nanocluster arrays are able to decouple graphene from the strongly interacting Ni substrate and render it quasi-free-standing, as demonstrated by our DFT study. At the same time, calculations confirm a strong enhancement of the proximity-induced spin-orbit interaction (SOI) in graphene supported by such nanoclusters in comparison to a gold monolayer. This effect, attributed to the reduced graphene-Au distance in the case of clusters, provides a large Rashba-type spin splitting of ~60 meV.
The obtained results not only provide a possible mechanism of SOI enhancement in this particular system, but can also be generalized to graphene on other strongly interacting substrates intercalated by nanostructures of heavy noble d-metals.
Even more intriguing is the proximity of graphene to heavy sp-metals, which were predicted to induce an intrinsic SOI and realize the spin Hall effect in graphene. Bismuth is the heaviest stable sp-metal, and its compounds demonstrate a plethora of exciting physical phenomena. This was the motivation behind the next part of the thesis, where the structural and electronic properties of a previously unreported phase of Bi-intercalated graphene on Ir(111) were studied by means of scanning tunneling microscopy, spin- and angle-resolved photoemission spectroscopy and electron diffraction. Photoemission experiments revealed a remarkable, nearly ideal graphene band structure with strongly suppressed signatures of interaction between graphene and the Ir(111) substrate; moreover, the characteristic moiré pattern observed in graphene on Ir(111) by electron diffraction and scanning tunneling microscopy was strongly suppressed after intercalation. The whole set of experimental data evidences that Bi forms a dense intercalated layer that efficiently decouples graphene from the substrate. The interaction manifests itself only in n-type charge doping (~0.4 eV) and a relatively small band gap at the Dirac point (~190 meV). The origin of this minor band gap is intriguing, and in this work it was possible to exclude a wide range of mechanisms that could be responsible for it, such as an induced intrinsic spin-orbit interaction, hybridization with substrate states and corrugation of the graphene lattice. The band gap was mainly attributed to A-B sublattice symmetry breaking, a conclusion supported by careful analysis of the interference effects in photoemission, which provided a band gap estimate of ~140 meV.
While the previous chapters focused on adjusting the properties of graphene by proximity to heavy metals, graphene on its own is a great object for studying various physical effects at crystal surfaces. The final part of this work is devoted to a study of surface scattering resonances by means of photoemission spectroscopy, where this effect manifests itself as a distinct modulation of the photoemission intensity. Although scattering resonances were widely studied in the past by means of electron diffraction, reports of their observation in photoemission experiments have appeared only recently and remain scarce.
For a comprehensive study of scattering resonances, graphene was selected as a versatile model system with adjustable properties. A theoretical and historical introduction to scattering resonances is followed by a detailed description of the unusual features observed in the photoemission spectra obtained in this work; finally, the equivalence between these features and scattering resonances is established. The photoemission results are in good qualitative agreement with existing theory, as verified by our calculations in the framework of the interference model. This simple model suitably explains the general experimental observations.
The possibilities of engineering the scattering resonances were also explored. A systematic study of graphene on a wide range of substrates revealed that the energy position of the resonances is directly related to the magnitude of charge transfer between graphene and the substrate. Moreover, it was demonstrated that the scattering resonances in graphene on Ir(111) can be suppressed by nanopatterning, either by a superlattice of Ir nanoclusters or by atomic hydrogen. These effects were attributed to strong local variations of the work function and/or destruction of the long-range order of the graphene lattice. The tunability of scattering resonances could be applied in graphene-based optoelectronic devices. Moreover, the results of this study expand the general understanding of the phenomenon of scattering resonances and are applicable to many other materials besides graphene.
Water quality in river systems is of growing concern due to rising anthropogenic pressures and climate change. Over recent decades, mitigation efforts have been guided by various governance frameworks (e.g., the Water Framework Directive in Europe). Despite significant improvement through relatively straightforward measures, the environmental status has likely reached a plateau. Higher spatiotemporal accuracy in catchment nitrate modeling is therefore needed to identify critical source areas of diffuse nutrient pollution (especially nitrate) and to further guide the implementation of spatially differentiated, cost-effective mitigation measures. At the same time, emerging high-frequency sensor monitoring brings the monitoring resolution down to the time scales of biogeochemical processes and enables more flexible monitoring deployments under varying conditions. The newly available information offers new prospects for understanding spatiotemporal nitrate dynamics. Formulating such advanced process understanding into catchment models is critical for further model development and environmental status evaluation. This dissertation targets a comprehensive analysis of catchment and in-stream nitrate dynamics, aiming to derive new insights into their spatial and temporal variability through the development of a new fully distributed model and the use of new high-frequency data.
Firstly, a new fully distributed, process-based catchment nitrate model (the mHM-Nitrate model) is developed on the basis of the mesoscale Hydrological Model (mHM) platform. Nitrate process descriptions are adopted from the Hydrological Predictions for the Environment (HYPE) model, with considerably improved implementations. With its multiscale grid-based discretization, mHM-Nitrate balances spatial representation and modeling complexity. The model has been thoroughly evaluated in the Selke catchment (456 km²), central Germany, which is characterized by heterogeneous physiographic conditions. Results show that the model captures the long-term discharge and nitrate dynamics well at three nested gauging stations. Using daily nitrate-N observations, the model is also validated in capturing short-term fluctuations due to changes in runoff partitioning and spatial contributions during flood events. Comparison of the model simulations with values reported in the literature shows that the model is capable of providing detailed and reliable spatial information on nitrate concentrations and fluxes. The model can therefore be regarded as a promising tool for environmental scientists in advancing environmental modeling research, as well as for stakeholders in supporting their decision-making, especially for spatially differentiated mitigation measures.
Secondly, a parsimonious approach for regionalizing in-stream autotrophic nitrate uptake is proposed using high-frequency data and integrated into the new mHM-Nitrate model. The regionalization approach considers a potential uptake rate (as a general parameter) and the effects of above-canopy light and riparian shading (represented by global radiation and leaf area index data, respectively). Multi-parameter sensors were continuously deployed in a forested upstream reach and an agricultural downstream reach of the Selke River. Using the continuous high-frequency data from both streams, daily autotrophic uptake rates (2011-2015) are calculated and used to validate the regionalization approach. The performance and spatial transferability of the approach are confirmed by its ability to capture the distinct seasonal patterns and value ranges in both the forested and the agricultural stream. Integrating the approach into the mHM-Nitrate model allows the spatiotemporal variability of in-stream nitrate transport and uptake to be investigated throughout the river network.
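The structure of such a regionalization, a potential uptake rate modulated by light and riparian shading, might be sketched as follows. The saturating light response, the Beer-Lambert shading term and all parameter values are illustrative assumptions, not the formulation actually used in mHM-Nitrate.

```python
import math

def autotrophic_uptake(u_pot, radiation, lai, rad_ref=200.0, k_ext=0.6):
    """Illustrative daily autotrophic nitrate uptake rate.

    u_pot     : potential uptake rate (general parameter, e.g. mg N m^-2 d^-1)
    radiation : daily global radiation (W m^-2), above-canopy light proxy
    lai       : riparian leaf area index, controlling canopy shading
    The saturating light response and exponential (Beer-Lambert) shading
    term are assumed functional forms chosen for illustration.
    """
    light_factor = radiation / (radiation + rad_ref)  # saturating light response
    shading_factor = math.exp(-k_ext * lai)           # riparian canopy shading
    return u_pot * light_factor * shading_factor

# Forested reach (high LAI) vs. agricultural reach (low LAI), same light:
print(autotrophic_uptake(100.0, 250.0, 4.0))  # strongly shaded forest stream
print(autotrophic_uptake(100.0, 250.0, 0.5))  # open agricultural stream
```

Even this toy form reproduces the qualitative contrast reported above: for the same radiation, the heavily shaded forest reach takes up far less nitrate than the open agricultural reach.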
Thirdly, to further assess the spatial variability of catchment nitrate dynamics, the fully distributed parameterization is for the first time investigated through sensitivity analysis. The results show that the parameters of soil denitrification, in-stream denitrification and in-stream uptake are the most sensitive throughout the Selke catchment, while all exhibit high spatial variability, with hot-spots of parameter sensitivity that can be explicitly identified. Spearman rank correlations between the sensitivity indices and multiple catchment factors show that the controlling factors vary spatially, reflecting heterogeneous catchment responses in the Selke catchment. These insights are therefore valuable for designing future parameter regionalization schemes for catchment water quality models. In addition, the spatial distributions of parameter sensitivity are also influenced by the gauging information used for the sensitivity evaluation; an appropriate monitoring scheme is therefore highly recommended to truly reflect the catchment responses.
In recent years, the development of renewable energy sources has attracted much attention due to the increasing environmental pollution caused by burning fossil fuels. Growing public interest in reducing greenhouse gases and in the use of pollution-free energies (biomass, geothermal, solar, water or wind energy) paved the way for scientific research on renewable energies. [1] Solar energy provides unlimited access and offers the high applicational flexibility needed for energy consumption in a modern society. Scientific interest in photovoltaics (PV) nowadays focuses on discovering new materials and improving material properties, aiming at the production of highly efficient solar cells. Lately, a new type of absorber material based on the perovskite-type structure has reached power conversion efficiencies of more than 24%. [2] By varying the chemical composition, electronic properties such as the band gap energy can be tuned to increase the absorption range of this absorber material. This makes these materials particularly attractive for use in tandem solar cells, where silicon and perovskite absorber layers are combined to absorb a large range of the visible light (28.0% efficiency). [2] However, perovskite-based solar cells not only suffer from fast degradation when exposed to humidity, but also from the use of toxic elements (e.g. lead), which can result in long-term environmental damage. Therefore, the aim of this study was to determine the fundamental structural and optoelectronic properties of highly interesting hybrid perovskite materials: the MAPbX3 solid solution (MA = CH3NH3; X = I, Br, Cl) and the triple-cation (FA1-xMAx)1-yCsyPbI3 solid solution (FA = HC(NH2)2). The study was performed on powder samples using X-ray diffraction, revealing the crystal structure and solubility behavior of all solid solutions.
Moreover, the temperature-dependent behavior was studied using in-situ high-resolution synchrotron X-ray diffraction and combined thermal analysis methods. The influence of compositional changes on the band gap energy was observed using spectroscopic methods such as photoluminescence and diffuse reflectance spectroscopy. The obtained results show that the MAPb(I1-xBrx)3 solid solution exhibits a large miscibility gap in the range 0.29 (±0.02) ≤ x ≤ 0.92 (±0.02). This miscibility gap limits the compositional range of mixed-halide compounds suitable for use in thin-film solar cells. From the temperature-dependent in-situ synchrotron X-ray diffraction studies, the complete T-X phase diagram was established. Studies on the MAPb(Cl1-xBrx)3 solid solution revealed that it forms a complete solid solution series. For the triple-cation (FA1-xMAx)1-yCsyPbI3 solid solution, the aim was to study the formation of the δ-modification of FAPbI3, which is undesirable for solar cell applications. This can be overcome by stabilizing the favored high-temperature cubic α-modification at ambient conditions. By partially substituting the formamidinium molecule with methylammonium and cesium, the cubic modification was successfully stabilized. The solubility limit of the FA1-xCsxPbI3 solid solution was determined to be x = 0.1, while full miscibility was observed for the FA1-xMAxPbI3 solid solution. For the triple-cation (FA1-xMAx)1-yCsyPbI3 solid solution, a cesium solubility limit of y = 0.1 was observed. The optoelectronic properties were investigated, revealing a linear change of the band gap energy with chemical composition. It is demonstrated that the stabilized triple-cation compound with cubic perovskite-type crystal structure shows enhanced stability of approximately six months. Furthermore, a short insight into lead-free perovskite-type materials is given, using germanium as a non-toxic alternative to lead.
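The reported linear change of band gap energy with composition can be expressed as a Vegard-type interpolation between end members, optionally with a bowing term. The end-member gaps below are rough literature-range values chosen purely for illustration, not results from this study.

```python
def band_gap(x, eg_a, eg_b, bowing=0.0):
    """Band gap (eV) of an A(1-x)B(x) solid solution by linear
    (Vegard-type) interpolation between the end members, with an
    optional quadratic bowing term b*x*(1-x)."""
    return (1.0 - x) * eg_a + x * eg_b - bowing * x * (1.0 - x)

# MAPb(I1-xBrx)3: interpolate between assumed end-member gaps (eV).
eg_mapbi3, eg_mapbbr3 = 1.6, 2.3  # illustrative literature-range values
for x in (0.0, 0.25, 0.5, 1.0):
    print(x, round(band_gap(x, eg_mapbi3, eg_mapbbr3), 3))
```

Within the miscibility gap discussed above, of course, such intermediate compositions are not accessible as single phases.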
For germanium-based perovskites, fast decomposition in air was observed, due to the preferred formation of GeI4 in an oxygen atmosphere. In-situ low-temperature synchrotron X-ray diffraction measurements revealed a previously unknown low-temperature modification of MAGeI3. [1] V. Wesselak, T. Schabbach, T. Link, J. Fischer: Handbuch Regenerative Energietechnik. Springer, 2017. [2] NREL: Best Research-Cell Efficiencies. https://www.nrel.gov/pv/assets/pdfs/best-research-cell-efficiencies-190416.pdf, accessed 25.04.2019.
Socializing Development
(2020)
Single-column data profiling
(2020)
The research area of data profiling comprises a large set of methods and processes to examine a given dataset and determine metadata about it. Typically, different data profiling tasks address different kinds of metadata, comprising either various statistics about individual columns (Single-column Analysis) or relationships among them (Dependency Discovery). The basic statistics about a column include its data type, header, number of distinct values (the column's cardinality), maximum and minimum values, number of null values, and value distribution. Dependencies involve, for instance, functional dependencies (FDs), inclusion dependencies (INDs), and their approximate versions.
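As a minimal illustration, the basic single-column statistics listed above can be computed in one pass over a column. The function and its output format are a toy sketch, not part of any profiling system discussed here.

```python
from collections import Counter

def profile_column(values):
    """Compute basic single-column profiling metadata: row count,
    null count, cardinality, min/max, and the value distribution.
    None marks a missing (null) value."""
    non_null = [v for v in values if v is not None]
    distribution = Counter(non_null)
    return {
        "num_rows": len(values),
        "num_nulls": values.count(None),
        "cardinality": len(distribution),  # number of distinct values
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
        "distribution": dict(distribution),
    }

print(profile_column([3, 1, 4, 1, None, 5, None, 3]))
```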
Data profiling has a wide range of conventional use cases, namely data exploration, cleansing, and integration. The produced metadata is also useful for database management and schema reverse engineering. Data profiling also has more novel use cases, such as big data analytics. The generated metadata describes the structure of the data at hand, how to import it, what it is about, and how much of it there is. Thus, data profiling can be considered an important preparatory task for many data analysis and mining scenarios, helping to assess which data might be useful and to reveal and understand a new dataset's characteristics.
In this thesis, the main focus is on the single-column analysis class of data profiling tasks. We study the impact and the extraction of three of the most important metadata about a column, namely the cardinality, the header, and the number of null values.
First, we present a detailed experimental study of twelve cardinality estimation algorithms. We classify the algorithms and analyze their efficiency, scaling far beyond the original experiments and testing theoretical guarantees. Our results highlight their trade-offs and point out the possibility to create a parallel or a distributed version of these algorithms to cope with the growing size of modern datasets.
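To give a flavor of this algorithm family, here is a minimal K-Minimum-Values (KMV) estimator: keep the k smallest normalized hash values and estimate the distinct count as (k − 1) divided by the k-th smallest. It is an illustrative sketch, not one of the twelve benchmarked implementations.

```python
import hashlib
import heapq

def kmv_estimate(values, k=64):
    """K-Minimum-Values cardinality estimator (illustrative sketch)."""
    heap = []       # min-heap of NEGATED hashes = tracks the k smallest values
    members = set() # hash values currently held in the heap, for deduplication
    for v in values:
        h = int.from_bytes(hashlib.sha1(str(v).encode()).digest()[:8], "big")
        u = h / 2**64                  # normalize hash into (0, 1)
        if u in members:
            continue                   # duplicate of a tracked value
        if len(heap) < k:
            heapq.heappush(heap, -u)
            members.add(u)
        elif u < -heap[0]:             # smaller than the current k-th smallest
            evicted = -heapq.heappushpop(heap, -u)
            members.discard(evicted)
            members.add(u)
    if len(heap) < k:                  # fewer than k distinct values: exact
        return len(heap)
    return int((k - 1) / -heap[0])

print(kmv_estimate(range(10_000), k=256))  # an estimate near the true 10000
```

Memory is bounded by k regardless of input size, which is exactly the trade-off (accuracy vs. space) that the benchmarked algorithms navigate.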
Then, we present a fully automated, multi-phase system to discover human-understandable, representative, and consistent headers for a target table in cases where headers are missing, meaningless, or unrepresentative for the column values. Our evaluation on Wikipedia tables shows that 60% of the automatically discovered schemata are exact and complete. Considering more schema candidates, top-5 for example, increases this percentage to 72%.
Finally, we formally and experimentally demonstrate the phenomenon of ghost and fake FDs caused by FD discovery over datasets with missing values. We propose two efficient scores, probabilistic and likelihood-based, for estimating the genuineness of a discovered FD. Our extensive set of experiments on real-world and semi-synthetic datasets shows the effectiveness and efficiency of these scores.
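Whether a candidate FD holds over data with missing values can flip with the chosen NULL semantics, which is the root of the ghost/fake phenomenon. The function below is a deliberately simple binary check for illustration; the probabilistic and likelihood-based scores above go well beyond this view.

```python
def fd_holds(rows, lhs, rhs, null_eq=True):
    """Does the FD lhs -> rhs hold on rows (dicts, None = missing value)?
    null_eq=True uses NULL = NULL semantics; null_eq=False treats a NULL
    as matching no other value. A toy check, not the thesis's scoring."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        if not null_eq and any(v is None for v in key):
            continue  # under NULL != NULL, a NULL key never matches another
        val = tuple(row[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

r = [{"zip": None, "city": "Berlin"}, {"zip": None, "city": "Potsdam"}]
print(fd_holds(r, ["zip"], ["city"], null_eq=True))   # violated under NULL = NULL
print(fd_holds(r, ["zip"], ["city"], null_eq=False))  # holds under NULL != NULL
```

The same candidate zip -> city is rejected under one semantics and accepted under the other, so a discovery algorithm's verdict depends on an essentially arbitrary modeling choice.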
Seismological and seismotectonic analysis of the northwestern Argentine Central Andean foreland
(2020)
After a severe Mw 5.7 earthquake on October 17, 2015 in El Galpón in the province of Salta, NW Argentina, I installed a local seismological network around the estimated epicenter. The network covered an area characterized by inherited Cretaceous normal faults and neotectonic faults with unknown recurrence intervals, some of which may be reactivated normal faults. The 13 three-component seismic stations recorded data continuously for 15 months.
The 2015 earthquake took place in the Santa Bárbara System of the Andean foreland, at about 17 km depth. This region is the easternmost morphostructural region of the central Andes. As part of the broken foreland, it is bounded by the Subandean fold-and-thrust belt to the north and the Sierras Pampeanas to the south; to the east lies the Chaco-Paraná basin.
A multi-stage morphotectonic evolution with thick-skinned basement uplift and coeval thin-skinned deformation in the intermontane basins is suggested for the study area. The release of stresses associated with the foreland deformation can result in strong earthquakes, and the study area is known for recurrent historical, destructive earthquakes. The continuous record reaches back to 1692, when the strongest known event (magnitude 7, or intensity IX) destroyed the city of Esteco. Destructive earthquakes and surface deformation are thus a hallmark of this part of the Andean foreland.
With state-of-the-art Python packages (e.g. pyrocko, ObsPy), a semi-automatic approach is followed to analyze the continuous data collected by the seismological network. The resulting 1435 hypocenter locations fall into three groups: 1) local crustal earthquakes (nearly half of the events), 2) intraslab activity at regional distances within the subducting Nazca plate, and 3) very deep earthquakes at about 600 km depth. My main interest focused on the first event class. These crustal events are partly aftershocks of the El Galpón earthquake and of a second earthquake to the south on the same fault; further events can be considered background seismicity of other faults within the study area. Strikingly, the seismogenic zone encompasses the whole crust, with brittle deformation extending down close to the Moho.
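The semi-automatic detection step can be illustrated with a classic STA/LTA trigger, computed here in plain Python; pyrocko and ObsPy provide optimized implementations of this and related triggers. The trace, window lengths and threshold below are synthetic and illustrative.

```python
def sta_lta(trace, n_sta, n_lta):
    """Classic STA/LTA ratio over a list of samples: short-term average
    of the squared signal divided by the long-term average. The ratio
    spikes when a transient (an earthquake) enters the short window."""
    ratios = []
    for i in range(len(trace)):
        if i + 1 < n_lta:
            ratios.append(0.0)  # not enough samples for the long window yet
            continue
        sta = sum(x * x for x in trace[i + 1 - n_sta:i + 1]) / n_sta
        lta = sum(x * x for x in trace[i + 1 - n_lta:i + 1]) / n_lta
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Synthetic trace: low-amplitude noise with a burst (the 'event') at sample 100
trace = [0.1] * 100 + [1.0] * 10 + [0.1] * 40
r = sta_lta(trace, n_sta=5, n_lta=50)
print(max(r) > 3.0)  # a typical trigger threshold is exceeded during the burst
```

In practice such triggers run per station; coincident triggers across the network are then associated and located, which is the starting point of the hypocenter catalog described above.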
From the collected seismological data, a local seismic velocity model is estimated using VELEST. After various stability tests, the robust minimum 1D velocity model provides guiding values for the composition of the local subsurface structure of the crust. A subsequent hypocenter relocation enables the assignment of individual earthquakes to aftershock clusters or extended seismotectonic structures, allowing the mapping of previously unknown seismogenic faults.
Finally, focal mechanisms are modeled for events with accurately located hypocenters, using the newly derived local velocity model. The majority of focal mechanisms attest to a compressive regime, while the strike directions of the individual seismogenic structures are in agreement with the overall north-south orientation of the Central Andes, its mountain front, and individual mountain ranges in the southern Santa Bárbara System.
Galaxies are gravitationally bound systems of stars, gas, dust and - probably - dark matter. They are the building blocks of the Universe. The morphology of galaxies is diverse: some galaxies have structures such as spirals, bulges, bars, rings, lenses or inner disks, among others. The main processes that characterise galaxy evolution can be separated into fast violent events that dominated evolution at earlier times and slower processes, which constitute a phase called secular evolution, that became dominant at later times. Internal processes of secular evolution include the gradual rearrangement of matter and angular momentum, the build-up and dissolution of substructures or the feeding of supermassive black holes and their feedback. Galaxy bulges – bright central components in disc galaxies –, on one hand, are relics of galaxy formation and evolution. For instance, the presence of a classical bulge suggests a relatively violent history. In contrast, the presence of a disc-like bulge instead indicates the occurrence of secular evolution processes in the main disc. Galaxy bars – elongated central stellar structures –, on the other hand, are the engines of secular evolution. Studying internal properties of both bars and bulges is key to comprehending some of the processes through which secular evolution takes place. The main objectives of this thesis are (1) to improve the classification of bulges by combining photometric and spectroscopic approaches for a large sample of galaxies, (2) to quantify star formation in bars and verify dependencies on galaxy properties and (3) to analyse stellar populations in bars to aid in understanding the formation and evolution of bars. Integral field spectroscopy is fundamental to the work presented in this thesis, which consists of three different projects as part of three different galaxy surveys: the CALIFA survey, the CARS survey and the TIMER project.
The first part of this thesis constitutes an investigation of the nature of bulges in disc galaxies. We analyse 45 galaxies from the integral-field spectroscopic survey CALIFA by performing 2D image decompositions, growth curve measurements and spectral template fitting to derive stellar kinematics from CALIFA data cubes. From the obtained results, we present a recipe to classify bulges that combines four different parameters from photometry and kinematics: the bulge Sérsic index nb, the concentration index C20,50, the Kormendy relation and the inner slope of the radial velocity dispersion profile ∇σ. The results of the different approaches are in good agreement and allow a safe classification for approximately 95% of the galaxies. We also find that our new 'inner' concentration index performs considerably better than the traditionally used C50,90 and, in combination with the Kormendy relation, provides a very robust indication of the physical nature of the bulge. In the second part, we study star formation within bars using VLT/MUSE observations of 16 nearby (0.01 < z < 0.06) barred active-galactic-nuclei (AGN)-host galaxies from the CARS survey. We derive spatially resolved star formation rates (SFR) from Hα emission line fluxes and perform a detailed multi-component photometric decomposition on images derived from the data cubes. We find a clear separation into eight star-forming (SF) and eight non-SF bars, which we interpret as an indication of a fast quenching process. We further report a correlation between the SFR in the bar and the shape of the bar surface brightness profile: only the flattest bars (nbar < 0.4) are SF. Both parameters are found to be uncorrelated with Hubble type. Additionally, owing to the high spatial resolution of the MUSE data cubes, we are able, for the first time, to dissect the SFR within the bar and analyse trends parallel and perpendicular to the bar major axis.
Star formation is 1.75 times stronger on the leading edge of a rotating bar than on the trailing edge and decreases radially. Moreover, from testing an AGN feeding scenario, we report that the SFR of the bar is uncorrelated with AGN luminosity. Lastly, we present a detailed analysis of the star formation histories and chemical enrichment of stellar populations (SP) in galaxy bars. We use MUSE observations of nine very nearby barred galaxies from the TIMER project to derive spatially resolved maps of stellar ages and metallicities, [α/Fe] abundances and star formation histories, as well as Hα as a tracer of star formation. Using these maps, we explore in detail the variations of SP perpendicular to the bar major axes. We find observational evidence for a separation of SP, supposedly caused by an evolving bar: intermediate-age stars (∼2-6 Gyr) get trapped on more elongated orbits forming a thinner bar, while old stars (>8 Gyr) form a rounder and thicker bar. This evidence is further strengthened by very similar results obtained from barred galaxies in the cosmological zoom-in simulations of the Auriga project. In addition, we find imprints of typical star formation patterns in barred galaxies on the youngest populations (<2 Gyr), which continuously become more dominant from the major axis towards the sides of the bar; the effect is slightly stronger on the leading side. Furthermore, we find that bars are on average more metal-rich and less α-enhanced than the inner parts of the discs that surround them. We interpret this result as an indication of a more prolonged or continuous formation of the stars that shape the bar, as compared to shorter formation episodes in the disc within the bar region.
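For the star formation rates mentioned above, a common route from an Hα flux to an SFR is a Kennicutt-type calibration applied to the line luminosity. The sketch below uses the widely quoted calibration constant together with an assumed distance and flux for illustration; the survey's exact calibration and any dust correction are omitted.

```python
import math

def sfr_from_halpha(flux_halpha, distance_mpc, calib=7.9e-42):
    """Star formation rate (Msun/yr) from an integrated Halpha flux
    (erg s^-1 cm^-2): L = 4*pi*d^2*F, then SFR = calib * L with a
    Kennicutt-type calibration constant. Dust correction omitted."""
    d_cm = distance_mpc * 3.0857e24  # Mpc -> cm
    luminosity = 4.0 * math.pi * d_cm**2 * flux_halpha  # erg/s
    return calib * luminosity

# Illustrative nearby galaxy (~85 Mpc, roughly z ~ 0.02) with an
# assumed integrated Halpha flux of 1e-13 erg s^-1 cm^-2:
print(sfr_from_halpha(1e-13, 85.0))
```

Applied pixel by pixel to an emission-line map, the same relation yields the spatially resolved SFR maps used to compare the leading and trailing edges of the bar.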
Geomorphology seeks to characterize the forms, rates, and magnitudes of the sediment and water transport that sculpt landscapes. These are generally referred to as earth surface processes, which incorporate the influence of biologic (e.g., vegetation), climatic (e.g., rainfall), and tectonic (e.g., mountain uplift) factors in dictating the transport of water and eroded material. In mountains, high relief and steep slopes combine with strong gradients in rainfall and vegetation to create dynamic expressions of earth surface processes. This same rugged topography presents challenges in data collection and process measurement, where traditional techniques involving detailed observations or physical sampling are difficult to apply at the scale of entire catchments. Herein lies the utility of remote sensing. Remote sensing is defined as any measurement that does not disturb the natural environment, typically via acquisition of images in the visible- to radio-wavelength range of the electromagnetic spectrum. Remote sensing is an especially attractive option for measuring earth surface processes, because large areal measurements can be acquired at much lower cost and effort than with traditional methods. These measurements cover not only topographic form, but also climatic and environmental metrics, which are all intertwined in the study of earth surface processes. This dissertation uses remote sensing data ranging from handheld camera-based photo surveying to spaceborne satellite observations to measure the expressions, rates, and magnitudes of earth surface processes in high-mountain catchments of the Eastern Central Andes in Northwest Argentina. This work probes the limits and caveats of remote sensing data and techniques applied to geomorphic research questions, and presents important progress at this disciplinary intersection.
Remembering the dismembered
(2020)
This thesis – written in co-authorship with Tanzanian activist Mnyaka Sururu Mboro – examines different cases of repatriation of ancestral remains to African countries and communities through the prism of postcolonial memory studies. It follows the theft and displacement of prominent ancestors from East and Southern Africa (Sarah Baartman, Dawid Stuurman, Mtwa Mkwawa, Songea Mbano, King Hintsa and the victims of the Ovaherero and Nama genocides) and argues that efforts made for the repatriation of their remains have contributed to a transnational remembrance of colonial violence.
Drawing on cultural studies theories such as "multidirectional memory", "rehumanisation" and "necropolitics", the thesis argues for a new conceptualisation or "re-membrance" in repatriation, through processes of reunion, empowerment, story-telling and belonging. Moreover, the afterlives of the dead ancestors, who stand at the centre of political debates on justice and reparations, remind us of their past struggles against colonial oppression. They are therefore "memento vita", fostering counter-discourses that recognize them as people and stories.
This manuscript is accompanied by a “(web)site of memory” where some of the research findings are made available to a wider audience. This blog also hosts important sound material which appears in the thesis as interventions by external contributors. Through QR codes, both the written and the digital version are linked with each other to problematize the idea of a written monograph and bring a polyphonic perspective to those diverse, yet connected, histories.
The hepatokine FGF21 and the adipokine chemerin have been implicated as metabolic regulators and mediators of inter-tissue crosstalk. While FGF21 is associated with beneficial metabolic effects and is currently being tested as an emerging therapeutic for obesity and diabetes, chemerin is linked to inflammation-mediated insulin resistance. However, the dietary regulation of both organokines and their role in tissue interaction need further investigation.
The LEMBAS nutritional intervention study investigated the effects of two diets differing in their protein content in obese human subjects with non-alcoholic fatty liver disease (NAFLD). The study participants consumed hypocaloric diets containing either low (LP: 10 EN%, n = 10) or high (HP: 30 EN%, n = 9) dietary protein 3 weeks prior to bariatric surgery. Before and after the intervention the participants were anthropometrically assessed, blood samples were drawn, and hepatic fat content was determined by MRS. During bariatric surgery, paired subcutaneous and visceral adipose tissue biopsies as well as liver biopsies were collected. The aim of this thesis was to investigate circulating levels and tissue-specific regulation of (1) FGF21 and (2) chemerin in the LEMBAS cohort. The results were compared to data obtained in 92 metabolically healthy subjects with normal glucose tolerance and normal liver fat content.
(1) Serum FGF21 concentrations were elevated in the obese subjects and strongly associated with intrahepatic lipids (IHL). In accordance, FGF21 serum concentrations increased with the severity of NAFLD as determined histologically in the liver biopsies. Though both diets were successful in reducing IHL, the effect was more pronounced in the HP group. FGF21 serum concentrations and mRNA expression were bi-directionally regulated by dietary protein, independently of metabolic improvements. In accordance, in the healthy study subjects, serum FGF21 concentrations dropped by more than 60% in response to the HP diet. A short-term HP intervention confirmed the acute downregulation of FGF21 within 24 hours. Lastly, experiments in HepG2 cell cultures and primary murine hepatocytes showed that nitrogen metabolites (NH4Cl and glutamine) dose-dependently suppress FGF21 expression.
(2) Circulating chemerin concentrations were considerably elevated in the obese versus the lean study participants and differentially associated with markers of obesity and NAFLD in the two cohorts. The adipokine decreased in response to the hypocaloric interventions, while an unhealthy high-fat diet induced a rise in chemerin serum levels. In the lean subjects, mRNA expression of RARRES2, encoding chemerin, was strongly and positively correlated with the expression of several cytokines, including MCP1, TNFα, and IL6, as well as markers of macrophage infiltration in the subcutaneous fat depot. However, RARRES2 was not associated with any cytokine assessed in the obese subjects, and the data indicated an involvement of chemerin not only in the onset but also in the resolution of inflammation. Analyses of the tissue biopsies and experiments in human primary adipocytes point towards a role of chemerin in adipogenesis, although discrepancies between the in vivo and in vitro data were detected.
Taken together, the results of this thesis demonstrate that circulating FGF21 and chemerin levels are considerably elevated in obesity and responsive to dietary interventions. FGF21 was acutely and bi-directionally regulated by dietary protein in a hepatocyte-autonomous manner. Given that both a lack of essential amino acids and an excessive nitrogen intake exert metabolic stress, FGF21 may serve as an endocrine signal of dietary protein balance. Lastly, the data revealed that chemerin is dysregulated in obesity and associated with obesity-related inflammation. However, future studies on chemerin should consider functional and regulatory differences between secreted and tissue-specific isoforms.
To anticipate how modern reef ecosystems will respond to environmental stresses such as global warming and ocean acidification, analogue studies from the geologic past are needed. As a critical time of reef ecosystem innovation, the Permian-Triassic transition witnessed the most severe demise of Phanerozoic reef builders and the establishment of modern-style symbiotic relationships within the reef-building organisms. As the initial stage of this transition, the Middle Permian (Capitanian) mass extinction caused a reef eclipse in the early Late Permian, which led to a gap in our understanding of the post-extinction Wuchiapingian reef ecosystem, shortly before the radiation of Changhsingian reefs. Here, this thesis presents detailed biostratigraphic, sedimentological, and palaeoecological studies of the Wuchiapingian reef recovery following the Middle Permian (Capitanian) mass extinction in the only recorded Wuchiapingian reef setting, outcropping in South China at the Tieqiao section.
Conodont biostratigraphic zonations were revised from the Early Permian Artinskian to the Late Permian Wuchiapingian in the Tieqiao section. Twenty main and seven subordinate conodont zones are determined at the Tieqiao section, including two conodont zones below and above the Tieqiao reef complex. The age of the Tieqiao reef was constrained to the early to middle Wuchiapingian.
After constraining the reef age, detailed two-dimensional outcrop mapping combined with a lithofacies study was carried out on the Wuchiapingian Tieqiao section to investigate the reef growth pattern stratigraphically as well as the lateral changes of reef geometry at the outcrop scale. Semi-quantitative studies of the reef-building organisms were used to determine their evolutionary pattern within the reef recovery. Six reef growth cycles were determined within six transgressive-regressive cycles in the Tieqiao section. The reefs developed within the upper part of each regressive phase and were dominated by different biotas. The timing of initial reef recovery after the Middle Permian (Capitanian) mass extinction was updated to the Clarkina leveni conodont zone, which is earlier than previously thought. Metazoans such as sponges were not major components of the Wuchiapingian reefs until the 5th and 6th cycles. Thus, the recovery of the metazoan reef ecosystem after the Middle Permian (Capitanian) mass extinction was clearly delayed. In addition, although the importance of metazoan reef builders such as sponges did increase during the recovery process, encrusting organisms such as Archaeolithoporella and Tubiphytes, combined with microbial carbonate precipitation, still played significant roles in the reef-building process and reef recovery after the mass extinction.
Based on the results of the outcrop mapping and sedimentological studies, a quantitative composition analysis of the Tieqiao reef complex was applied to selected thin sections to further investigate the functioning of reef-building components and the reef evolution after the Middle Permian (Capitanian) mass extinction. Data sets of skeletal grains and whole-rock components were analyzed. The results show eleven biocommunity clusters and eight rock-composition clusters, dominated by different skeletal grains and rock components, respectively. Sponges, Archaeolithoporella and Tubiphytes were the most ecologically important components of the Wuchiapingian Tieqiao reef, while clotted micrites and syndepositional cements are additional important rock components of the reef cores. Sponges were important throughout the reef recovery. Tubiphytes were broadly distributed across different environments and played a key role in the initial reef communities. Archaeolithoporella concentrated in the shallower parts of the reef cycles (i.e., the upper part of the reef core) and was functionally significant for the enlargement of the reef volume.
In general, the reef recovery after the Middle Permian (Capitanian) mass extinction has some similarities with the reef recovery following the end-Permian mass extinction: it shows a delayed recovery of metazoan reefs and a stepwise recovery pattern that was controlled by both ecological and environmental factors. The importance of encrusting organisms and microbial carbonates is also similar to that in most other post-extinction reef ecosystems. These findings help extend our understanding of reef ecosystem evolution under environmental perturbations or stresses.
Redox signalling in plants
(2020)
Once proteins are synthesized, they can additionally be modified by post-translational modifications (PTMs). Proteins containing reactive cysteine thiols, stabilized in their deprotonated form as thiolates (RS-) due to their local environment, serve as redox sensors by undergoing a multitude of oxidative PTMs (Ox-PTMs). Ox-PTMs such as S-nitrosylation or the formation of inter- or intramolecular disulfide bridges induce functional changes in these proteins. Proteins containing cysteines whose thiol oxidation state regulates their functions belong to the so-called redoxome. Such Ox-PTMs are controlled by site-specific cellular events that play a crucial role in protein regulation, affecting enzyme catalytic sites, ligand binding affinity, protein-protein interactions or protein stability. Reversible protein thiol oxidation is an essential regulatory mechanism of photosynthesis, metabolism, and gene expression in all photosynthetic organisms. Therefore, studying PTMs will remain crucial for understanding plant adaptation to external stimuli like fluctuating light conditions. Optimizing methods suitable for studying plant Ox-PTMs is of high importance for the elucidation of the redoxome in plants. This study focuses on thiol modifications occurring in plants and provides novel insight into the in vivo redoxome of Arabidopsis thaliana in response to light vs. dark. This was achieved by utilizing a resin-assisted thiol enrichment approach. Furthermore, confirmation of candidates at the single-protein level was carried out by a differential labelling approach: the thiols and disulfides were differentially labelled, and the protein levels were detected using immunoblot analysis. Further analysis focused on light-reduced proteins. Many well-studied redox-regulated proteins were identified by the enrichment approach.
Amongst those were fructose 1,6-bisphosphatase (FBPase) and sedoheptulose-1,7-bisphosphatase (SBPase), which have previously been described as enzymes targeted by the thioredoxin system. The redox-regulated proteins identified in the current study were compared to several published, independent results on redox-regulated proteins in Arabidopsis leaves, roots and mitochondria, and specifically on S-nitrosylated proteins. These proteins were excluded as potential new candidates but serve as a proof of concept that the enrichment experiments are effective. Additionally, the CSP41A and CSP41B proteins, which emerged from this study as potential targets of redox regulation, were analyzed by Ribo-Seq. The active translatome study of the csp41a mutant vs. wild type showed most of the significant changes at the end of the night, similar to csp41b. Yet in both mutants only a few chloroplast-encoded genes were altered. Further studies of the CSP41A and CSP41B proteins are needed to reveal their functions and elucidate the role of redox regulation of these proteins.
The idea of critical childhood studies is a relatively young disciplinary undertaking in eastern Africa, and many lines of inquiry have consequently not yet been pursued. The field is a potentially important socio-political marker of narratives that have emerged from eastern Africa. Towards this end, my research seeks out an archaeology of childhood in eastern Africa. A monochromatic hue has often painted the eastern African childhood: this broad stroke portrays childhood as characterized by want. The image of the eastern African childhood is composed in terms of the war child, poverty, disease, and aid-begging. The pitfall of this consciousness is that it erases the differentiated and pluralist nature of the eastern African childhood. I therefore hypothesise that childhood is a discourse in which institutional vectors become conduits of certain statement-making, both process-wise and content-wise. As such, a critical childhood study is a theatre for staging and unearthing its joys, tribulations, cultural constructions, and even political interventions. To this end, childhood and its literatures not only reflect but also contribute to meaning-making and the worldliness thereof. In an attempt to move beyond an un-nuanced, often monodirectional depiction, I present a chronologically synchronic and diachronic analysis of childhood in eastern Africa. Accordingly, I excavate a chronological construction of childhood within this geopolitical region. The main conceptual anchorage is Francis Nyamnjoh, who describes the African as occupying a life on convivial frontiers. He theorises an Africa that is involved in technologies of self-definition that privilege conversations, fluidity of being and relational connections on a globalised scale. I also appropriate the notion of Bula Matadi from the Congo as a decolonialist epistemological exercise to break apart polarising representations and practices of childhood in eastern Africa.
This opens a space for an unbounded reconfiguration of childhood in eastern Africa. This book works on and with archival matter, in a cross-disciplinary manner and ranges from pre-colonial to post-colonial eastern Africa. It is an exploration of the trajectory of the discourse of childhood in eastern Africa, in order to eclectically investigate childhood in eastern Africa, in fictional and non-fictional representations.
Comment sections of online news platforms are an essential space to express opinions and discuss political topics. However, the misuse by spammers, haters, and trolls raises doubts about whether the benefits justify the costs of the time-consuming content moderation. As a consequence, many platforms limited or even shut down comment sections completely. In this thesis, we present deep learning approaches for comment classification, recommendation, and prediction to foster respectful and engaging online discussions. The main focus is on two kinds of comments: toxic comments, which make readers leave a discussion, and engaging comments, which make readers join a discussion. First, we discourage and remove toxic comments, e.g., insults or threats. To this end, we present a semi-automatic comment moderation process, which is based on fine-grained text classification models and supports moderators. Our experiments demonstrate that data augmentation, transfer learning, and ensemble learning allow training robust classifiers even on small datasets. To establish trust in the machine-learned models, we reveal which input features are decisive for their output with attribution-based explanation methods. Second, we encourage and highlight engaging comments, e.g., serious questions or factual statements. We automatically identify the most engaging comments, so that readers need not scroll through thousands of comments to find them. The model training process builds on upvotes and replies as a measure of reader engagement. We also identify comments that address the article authors or are otherwise relevant to them to support interactions between journalists and their readership. Taking into account the readers' interests, we further provide personalized recommendations of discussions that align with their favored topics or involve frequent co-commenters. Our models outperform multiple baselines and recent related work in experiments on comment datasets from different platforms.
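The ensemble idea used for robust comment classification can be sketched in miniature: several independently trained classifiers each predict a label, and the majority vote wins. The keyword "classifiers" below are toy stand-ins for the fine-tuned neural models described in the thesis; the lexicons, labels, and example comments are purely illustrative.

```python
from collections import Counter

# Toy keyword-based classifiers standing in for fine-tuned neural models.
# Each lexicon represents one ensemble member's learned "toxic" vocabulary.
TOXIC_LEXICONS = [
    {"idiot", "stupid", "hate"},
    {"idiot", "shut", "hate"},
    {"stupid", "troll", "hate"},
]

def classify(comment, lexicon):
    """A single weak classifier: flag the comment if any keyword matches."""
    tokens = set(comment.lower().split())
    return "toxic" if tokens & lexicon else "ok"

def ensemble_predict(comment):
    """Majority vote over the individual classifiers."""
    votes = [classify(comment, lexicon) for lexicon in TOXIC_LEXICONS]
    label, _count = Counter(votes).most_common(1)[0]
    return label

print(ensemble_predict("you are an idiot"))
print(ensemble_predict("thanks for the detailed article"))
```

The vote aggregation is the point of the sketch: even if one member misfires on a borderline comment, the ensemble's decision is determined by the majority, which is one way such combinations reduce variance on small training sets.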
Over the last decades, the Arctic regions of the earth have warmed at a rate 2–3 times faster than the global average – a phenomenon called Arctic Amplification. A complex, non-linear interplay of physical processes and unique peculiarities in the Arctic climate system is responsible for this, but the relative role of individual processes remains debated. This thesis focuses on climate change and related processes on Svalbard, an archipelago in the North Atlantic sector of the Arctic, which is shown to be a "hotspot" of the amplified recent warming during winter. In this highly dynamical region, both oceanic and atmospheric large-scale transports of heat and moisture interfere with spatially inhomogeneous surface conditions, and the corresponding energy exchange strongly shapes the atmospheric boundary layer. In the first part, Pan-Svalbard gradients in the surface air temperature (SAT) and sea ice extent (SIE) in the fjords are quantified and characterized. This analysis is based on observational data from meteorological stations, operational sea ice charts, and hydrographic observations from the adjacent ocean, which cover the 1980–2016 period. It is revealed that typical estimates of SIE during late winter range from 40–50% (80–90%) in the western (eastern) parts of Svalbard. However, strong SAT warming during winter of the order of 2–3 K per decade dictates excessive ice loss, leaving fjords in the western parts essentially ice-free in recent winters. It is further demonstrated that warm water currents on the west coast of Svalbard, as well as meridional winds, contribute to regional differences in the SIE evolution. In particular, the proximity to warm water masses of the West Spitsbergen Current can explain 20–37% of the SIE variability in fjords on west Svalbard, while meridional winds and the associated ice drift may regionally explain 20–50% of the SIE variability in the north and northeast. Strong SAT warming has overruled these impacts in recent years, though.
In the next part of the analysis, the contribution of large-scale atmospheric circulation changes to the Svalbard temperature development over the last 20 years is investigated. A study employing kinematic back-trajectories for Ny-Ålesund reveals a shift in the source regions of lower-tropospheric air over time for both the winter and the summer season. In winter, air in the recent decade is more often of lower-latitude Atlantic origin and less frequently of Arctic origin. This affects heat and moisture advection towards Svalbard, potentially modulating clouds and longwave downward radiation in that region. A closer investigation indicates that this shift during winter is associated with a strengthened Ural blocking high and Icelandic low, and contributes about 25% to the observed winter warming on Svalbard over the last 20 years. Conversely, circulation changes during summer include a strengthened Greenland blocking high, which leads to more frequent cold air advection from the central Arctic towards Svalbard and less frequent air mass origins in the lower latitudes of the North Atlantic. Hence, circulation changes during winter are shown to have an amplifying effect on the recent warming on Svalbard, while summer circulation changes tend to mask warming.
An observational case study using upper air soundings from the AWIPEV research station in Ny-Ålesund during May–June 2017 underlines that such circulation changes during summer are associated with tropospheric anomalies in temperature, humidity and boundary layer height.
In the last part of the analysis, the regional representativeness of the above-described changes around Svalbard for the broader Arctic is investigated. To this end, the terms of the diagnostic temperature equation in the Arctic-wide lower troposphere are examined in the ERA-Interim atmospheric reanalysis product. Significant positive trends in diabatic heating rates, consistent with latent heat transfer to the atmosphere over regions of increasing ice melt, are found for all seasons over the Barents/Kara Seas, and in individual months in the vicinity of Svalbard. The warm (cold) advection trends during winter (summer) on Svalbard introduced above are successfully reproduced. Regarding winter, they are regionally confined to the Barents Sea and Fram Strait, between 70°–80°N, a unique feature in the whole Arctic. Summer cold advection trends are confined to the area between eastern Greenland and Franz Josef Land, enclosing Svalbard.
Percolation, which is intrinsically a phase transition process near the critical point, is ubiquitous in nature. Its applications embrace a wide spectrum of natural phenomena, ranging from forest fires, the spread of contagious diseases and social behaviour dynamics to mathematical finance, the formation of bedrocks and biological systems. The topology generated by the percolation process near the critical point is a random (stochastic) fractal. It is fundamental to percolation theory that near the critical point a unique infinite fractal structure, namely the infinite cluster, emerges. As de Gennes suggested, the properties of the infinite cluster can be deduced by studying the dynamical behaviour of a random walk process taking place on it; he coined the term "the ant in the labyrinth". The random walk process on such an infinite fractal cluster exhibits subdiffusive dynamics in the sense that the mean squared displacement grows as ~ t^(2/dw), where dw, called the fractal dimension of the random walk path, is greater than 2. Thus, the random walk process on the infinite cluster is classified as a process exhibiting the properties of anomalous diffusion. Yet near the critical point the infinite cluster is not the sole emergent topology; it coexists with other clusters whose size is finite. Though finite, on specific length scales these finite clusters exhibit fractal properties as well. In this work, it is assumed that the random walk process can take place on these finite-size objects as well. This assumption requires one to address the non-equilibrium initial condition. Due to the lack of knowledge on the propagator of the random walk process in stochastic random environments, a phenomenological correspondence between the renowned Ornstein-Uhlenbeck process and the random walk process on finite-size clusters is established.
It is elucidated that when an ensemble of these finite-size clusters and the infinite cluster is considered, the anisotropy and size of the finite clusters cause the mean squared displacement and its time-averaged counterpart to grow in time as ~ t^((d + df(τ-2))/dw), where d is the embedding Euclidean dimension, df is the fractal dimension of the infinite cluster, and τ, called the Fisher exponent, is a critical exponent governing the power-law distribution of the finite cluster sizes. Moreover, it is demonstrated that, even though the random walk process on a specific finite-size cluster is ergodic, it exhibits a persistent non-ergodic behaviour when an ensemble of the finite-size clusters and the infinite cluster is considered.
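A minimal simulation illustrates the baseline against which subdiffusion is measured: for unbiased walkers on an unobstructed 2D lattice, dw = 2 and the MSD grows linearly, MSD(t) ≈ t for unit steps; the same experiment restricted to the sites of a critical percolation cluster would bend below this line (dw > 2). The walker and step counts below are arbitrary choices, not values from the thesis.

```python
import random

def msd_lattice_walk(n_walkers, n_steps, seed=1):
    """Ensemble-averaged mean squared displacement of unbiased walkers
    on a free 2D square lattice (normal diffusion, d_w = 2)."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    msd = [0.0] * (n_steps + 1)
    for _ in range(n_walkers):
        x = y = 0
        for t in range(1, n_steps + 1):
            dx, dy = rng.choice(moves)
            x += dx
            y += dy
            msd[t] += x * x + y * y  # squared distance from the origin
    return [m / n_walkers for m in msd]

msd = msd_lattice_walk(n_walkers=2000, n_steps=100)
print(msd[100])
```

With these settings msd[100] comes out close to the step number (≈ 100), consistent with normal diffusion; reproducing the subdiffusive exponent 2/dw would additionally require generating a percolation cluster and rejecting moves onto unoccupied sites.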
Hydrological models are important tools for the simulation and quantification of the water cycle.
They therefore aid in the understanding of hydrological processes, prediction of river discharge, assessment of the impacts of land use and climate changes, or the management of water resources.
However, uncertainties associated with hydrological modelling are still large.
While significant research has been done on the quantification and reduction of uncertainties, there are still fields which have gained little attention so far, such as model structural uncertainties that are related to the process implementations in the models.
This holds especially true for complex process-based models in contrast to simpler conceptual models.
Consequently, the aim of this thesis is to improve the understanding of structural uncertainties with focus on process-based hydrological modelling, including methods for their quantification.
To identify common deficits of frequently used hydrological models and develop further strategies on how to reduce them, a survey among modellers was conducted.
It was found that there is a certain degree of subjectivity in the perception of modellers, for instance with respect to the distinction of hydrological models into conceptual groups.
It was further found that there are ambiguities on how to apply a certain hydrological model, for instance how many parameters should be calibrated, together with a large diversity of opinion regarding the deficits of models.
Nevertheless, evapotranspiration processes are often represented in a more physically based manner, while processes of groundwater and soil water movement are often simplified, which many survey participants saw as a drawback.
A large flexibility, for instance with respect to different alternative process implementations or a small number of parameters that needs to be calibrated, was generally seen as strength of a model.
Flexible and efficient software, which is straightforward to apply, has been increasingly acknowledged by the hydrological community.
This work further elaborated on this topic in a twofold way.
First, a software package for semi-automated landscape discretisation has been developed, which serves as a tool for model initialisation.
This was complemented by a sensitivity analysis of important and commonly used discretisation parameters, of which the size of hydrological sub-catchments as well as the size and number of hydrologically uniform computational units appeared to be more influential than information considered for the characterisation of hillslope profiles.
Second, a process-based hydrological model has been implemented into a flexible simulation environment with several alternative process representations and a number of numerical solvers.
It turned out that, even though computation times were still long, enhanced computational capabilities nowadays in combination with innovative methods for statistical analysis allow for the exploration of structural uncertainties of even complex process-based models, which up to now was often neglected by the modelling community.
In a further study it could be shown that process-based models may even be employed as tools for seasonal operational forecasting.
In contrast to statistical models, which are faster to initialise and to apply, process-based models produce more information in addition to the target variable, even at finer spatial and temporal scales, and provide more insights into process behaviour and catchment functioning.
However, the process-based model was much more dependent on reliable rainfall forecasts.
It seems unlikely that there exists a single best formulation for hydrological processes, even for a specific catchment.
This supports the use of flexible model environments with alternative process representations instead of a single model structure.
However, correlation and compensation effects between process formulations, their parametrisation, and other aspects such as numerical solver and model resolution, may lead to surprising results and potentially misleading conclusions.
In future studies, such effects should be more explicitly addressed and quantified.
Moreover, model functioning appeared to be highly dependent on the meteorological conditions and rainfall input generally was the most important source of uncertainty.
It is still unclear, how this could be addressed, especially in the light of the aforementioned correlations.
The use of innovative data products, e.g. remote sensing data in combination with station measurements, and efficient processing methods for the improvement of rainfall input and explicit consideration of associated uncertainties is advisable to bring more insights and make hydrological simulations and predictions more reliable.
This dissertation aims to deliver a transcendental interpretation of Immanuel Kant's Kritik der Urteilskraft, considering both its coherence with other critical works as well as the internal coherence of the work itself. This interpretation is called transcendental insofar as special emphasis is placed on the newly introduced cognitive power, namely the reflective power of judgement, guided by the a priori principle of purposiveness. In this way the seeming manifold of themes, varying from judgements of taste through culture to teleological judgements about natural purposes, are discussed exclusively in regard of their dependence on this faculty and its transcendental principle. In contrast, in contemporary scholarship the book is often treated as a fragmented work, consisting of different independent parts, while my focus lies on the continuity comprised primarily of the activity of the power of judgement.
Going back to certain central yet silently presupposed concepts, adopted from previous critical works, the main contribution of this study is to integrate the KU within the overarching critical project. More specifically, I argue that the reflective power of judgement's need for this presupposition follows from the peculiar character of our sense-dependent discursive mind. Because we are sense-dependent discursive minds, we do not and cannot have immediate insight into all of nature's features. The particular constitution of our mind rather demands conceptually informed representations which refer to objects only mediately.
Accordingly, the principle of purposiveness, namely the presupposition that nature is organized in concert with the particular constitution of our mind, is a necessary condition for the possibility of reflection on nature's empirical features. On my account, reflection refers to a process of selecting features in order to allow a classification, including reflection on the method, means and selection criteria. Rather than directly contributing to cognition, like the categories, reflective judgements thus express our ignorance of the motivation behind nature's design, and this is most forcefully expressed by judgements of taste and teleological judgements about organized matter. In this way, reflection, regardless of whether it is manifested in concept acquisition, scientific systematization, judgements of taste or judgements about organized matter, relies on a principle of the power of judgement which is revealed and justified in this transcendental inquiry.
The electronic charge distributions of transition metal complexes fundamentally determine their chemical reactivity. Experimental access to the local valence electronic structure is therefore crucial in order to determine how frontier orbitals are delocalized between different atomic sites and how electronic charge is spread throughout the transition metal complex. To that end, X-ray spectroscopies are employed in this thesis to study a series of solution-phase iron complexes with respect to the response of their local electronic charge distributions to different external influences. Using resonant inelastic X-ray scattering (RIXS) and X-ray absorption spectroscopy (XAS) at the iron L-edge, changes in local charge densities at the iron center are investigated as a function of different ligand cages as well as solvent environments. A varying degree of charge delocalization from the metal center onto the ligands is observed, which is governed by the capabilities of the ligands to accept charge density into their unoccupied orbitals. Specific solvents are furthermore shown to amplify this process: strongly Lewis-acidic solvent molecules withdraw charge from the ligands, in turn allowing more metal charge to be delocalized onto the ligands. The resulting local charge deficiencies at the metal center are, however, counteracted by competing electron-donation channels from the ligands towards the iron, which are additionally revealed. This is interpreted as a compensating effect which strives to maintain local charge densities at the iron center. This mechanism of charge density preservation is found to be of a general nature. Using time-resolved RIXS and XAS at the iron L-edge, an analogous interplay of electron donation and back-donation channels is also revealed for the case of charge-transfer excited states. In such transient configurations, the electronic occupation of iron-centered frontier orbitals has been altered by an optical excitation.
Changes in local charge densities that are expected to follow an increased or decreased population of iron-centered orbitals are, however, again counteracted. By scaling the degree of electron donation from the ligand onto the metal, local charge densities at the iron center can be efficiently maintained. Since charge-transfer excitations often constitute the initial step in electron transfer processes, these findings challenge common notions of charge separation in transition metal dyes.
In recent years, a substantial number of psycholinguistic studies and studies on acquired language impairments have investigated morphologically complex words. These have provided evidence for what is known as ‘morphological decomposition’, i.e. a mechanism that decomposes complex words into their constituent morphemes during online processing. This is believed to be a fundamental, possibly universal mechanism of morphological processing, operating irrespective of a word’s specific properties.
However, current accounts of morphological decomposition are mostly based on evidence from suffixed words and compound words, while prefixed words have been comparably neglected. At the same time, it has been consistently observed that, across languages, prefixed words are less widespread than suffixed words. This cross-linguistic preference for suffixing morphology has been claimed to be grounded in language processing and language learning mechanisms. This would predict differences in how prefixed words are processed and therefore also affected in language impairments, challenging the predictions of the major accounts of morphological decomposition.
Against this background, the present thesis aims at reducing the gap between the accounts of morphological decomposition and the accounts of the suffixing preference by providing a thorough empirical investigation of prefixed words. Prefixed words are examined in three different domains: (i) visual word processing in native speakers; (ii) visual word processing in non-native speakers; (iii) acquired morphological impairments. The processing studies employ the masked priming paradigm, tapping into early stages of visual word recognition. The studies on morphological impairments, in contrast, investigate the errors produced in reading-aloud tasks.
As for native processing, the present work first focuses on derivation (Publication I), specifically investigating whether German prefixed derived words, both lexically restricted (e.g. inaktiv ‘inactive’) and unrestricted (e.g. unsauber ‘unclean’) can be efficiently decomposed. I then present a second study (Publication II) on a Bantu language, Setswana, which offers the unique opportunity of testing inflectional prefixes, and directly comparing priming with prefixed inflected primes (e.g. dikgeleke ‘experts’) to priming with prefixed derived primes (e.g. bokgeleke ‘talent’). With regard to non-native processing (Publication I), the priming effects obtained from the lexically restricted and unrestricted prefixed derivations in native speakers are additionally compared to the priming effects obtained in a group of non-native speakers of German. Finally, in the two studies on acquired morphological impairments, the thesis investigates whether prefixed derived words yield different error patterns than suffixed derived words (Publication III and IV).
For native speakers, the results show evidence for morphological decomposition of both types of prefixed words, i.e. lexically unrestricted and restricted derivations, as well as of prefixed inflected words. Furthermore, non-native speakers are also found to efficiently decompose prefixed derived words, with results parallel to those of the native speaker group. I therefore conclude that, for the early stages of visual word recognition, the relative position of stem and affix in prefixed versus suffixed words does not affect how efficiently complex words are decomposed, either in native or in non-native processing. In the studies on acquired language impairments, in contrast, prefixes are consistently found to be more impaired than suffixes. This is explained in terms of a learnability disadvantage for prefixed words, which may cause weaker representations of the information encoded in affixes when these precede the stem (prefixes) as compared to when they follow it (suffixes). Based on the impairment profiles of the individual participants and on the nature of the task, this dissociation is assumed to emerge from later processing stages than those tapped into by masked priming. I therefore conclude that the different characteristics of prefixed and suffixed words do come into play at later processing stages, during which the lexical-semantic information contained in the different constituent morphemes is processed.
The findings presented in the four manuscripts significantly contribute to our current understanding of the mechanisms involved in processing prefixed words. Crucially, the thesis constrains the processing disadvantage for prefixed words to later processing stages, thereby suggesting that theories trying to establish links between language universals and processing mechanisms should more carefully consider the different stages involved in language processing and what factors are relevant for each specific stage.
The metabolic state of an organism reflects the entire phenotype that is jointly affected by genetic and environmental changes. Due to the complexity of metabolism, system-level modelling approaches have become indispensable tools to obtain new insights into biological functions. In particular, simulation and analysis of metabolic networks using constraint-based modelling approaches have helped the analysis of metabolic fluxes. However, despite ongoing improvements in predicting reaction flux through a system, approaches to directly predict metabolite concentrations from large-scale metabolic networks remain elusive. In this thesis, we present a computational approach for inferring concentration ranges from genome-scale metabolic models endowed with mass action kinetics. The findings specify a molecular mechanism underlying facile control of concentration ranges for components in large-scale metabolic networks. Most importantly, an extended version of the approach can be used to predict concentration ranges without knowledge of kinetic parameters, provided concentration measurements in a reference state are available. We show that the approach is applicable with large-scale kinetic and stoichiometric metabolic models of organisms from different kingdoms of life. By challenging the predictions of concentration ranges in the genome-scale metabolic network of Escherichia coli with real-world data sets, we further demonstrate the predictive power and limitations of the approach. To predict concentration ranges in other species, e.g. the model plant species Arabidopsis thaliana, we would rely on estimates of kinetic parameters (i.e. enzyme catalytic rates), since plant-specific enzyme catalytic rates are poorly documented. Using the constraint-based approach of Davidi et al. for estimation of enzyme catalytic rates, we obtain values for 168 plant enzymes.
The approach depends on quantitative proteomics data and flux estimates obtained from a constraint-based model of plant leaf metabolism integrating maximal rates of selected enzymes, plant-specific constraints on fluxes through canonical pathways, and growth measurements from Arabidopsis thaliana rosettes under ten conditions. We demonstrate a low degree of plant enzyme saturation, supported by the agreement between the concentrations of nicotinamide adenine dinucleotide, adenosine triphosphate, and glyceraldehyde 3-phosphate derived from our maximal in vivo catalytic rates and available quantitative metabolomics data. Hence, our results show that genome-wide estimation of plant-specific enzyme catalytic rates is feasible. These estimates can now be readily employed to study resource allocation and to predict enzyme and metabolite concentrations using recent constraint-based modelling approaches. Constraint-based methods do not directly account for kinetic mechanisms and corresponding parameters. Therefore, a number of workflows have already been proposed to approximate reaction kinetics and to parameterize genome-scale kinetic models. We present a systems biology strategy to build a fully parameterized large-scale model of Chlamydomonas reinhardtii accounting for microcompartmentalization in the chloroplast stroma. Eukaryotic algae contain a microcompartment, the pyrenoid, that is essential for the carbon concentrating mechanism (CCM) which improves their photosynthetic performance. Since the experimental study of the effects of microcompartmentation on metabolic pathways is challenging, we employ our model to investigate compartmentation of fluxes through the Calvin-Benson cycle between pyrenoid and stroma. Our model predicts that ribulose-1,5-bisphosphate, the substrate of Rubisco, and 3-phosphoglycerate, its product, diffuse in and out of the pyrenoid. We also find that there is no major diffusional barrier to metabolic flux between the pyrenoid and stroma.
Our computational approach therefore represents a stepping stone towards understanding microcompartmentalized CCMs in other organisms. This thesis provides novel strategies to use genome-scale metabolic networks to predict and integrate metabolite concentrations. The presented approaches thus represent an important step in broadening the applicability of large-scale metabolic models to a range of biotechnological and medical applications.
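The constraint-based modelling mentioned in this abstract rests on flux balance analysis (FBA): at steady state the stoichiometric matrix S constrains fluxes via S·v = 0, and an objective flux is optimized by linear programming. The following minimal sketch uses a hypothetical three-reaction toy network (not any model from the thesis) to show the basic mechanics:

```python
# Minimal flux balance analysis (FBA) sketch on a toy network.
# Reactions: R1: -> A, R2: A -> B, R3: B -> (exported; objective flux).
# The network and bounds are illustrative, not from the thesis.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: R1, R2, R3)
S = np.array([
    [1.0, -1.0,  0.0],   # A: produced by R1, consumed by R2
    [0.0,  1.0, -1.0],   # B: produced by R2, consumed by R3
])
bounds = [(0, 10), (0, 10), (0, 10)]   # flux capacity for each reaction

# Maximize v3 (linprog minimizes, so negate the objective coefficient)
c = np.array([0.0, 0.0, -1.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)

fluxes = res.x          # steady-state flux distribution satisfying S @ v = 0
print(fluxes)           # at the optimum all three fluxes hit the capacity of 10
```

At steady state the mass-balance constraint forces v1 = v2 = v3 here, so the optimum saturates the shared capacity. Genome-scale models follow the same scheme with thousands of reactions; the thesis's contribution goes beyond plain FBA by additionally inferring metabolite concentration ranges from mass action kinetics.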