The aim of this thesis is the quantum dynamical study of two examples of scanning tunneling microscope (STM)-controllable, Si(100)(2x1) surface-mounted switches of atomic and molecular scale. The first example considers the switching of single H atoms between two dangling-bond chemisorption sites on a Si dimer of the Si(100) surface (Grey et al., 1996). The second system examines the conformational switching of single 1,5-cyclooctadiene molecules chemisorbed on the Si(100) surface (Nacci et al., 2008). The temporal dynamics are obtained by propagating the density matrix in time via a corresponding set of equations of motion (EQM). The latter are based on open-system density matrix theory in Lindblad form. First-order perturbation theory is used to evaluate the transition rates between vibrational levels of the system part. To account for interactions with the surface phonons, two different dissipative models are used, namely the bilinear, harmonic bath model and the Ohmic bath model. IET-induced vibrational transitions in the system are due to the dipole and the resonance mechanism. A single-surface approach is used to study the influence of dipole scattering and resonance scattering in the below-threshold regime. Further, a second electronic surface is included to study the resonance-induced switching in the above-threshold regime. Static properties of the adsorbate, e.g., potentials and dipole functions, are obtained from quantum chemistry and used within the established quantum dynamical models.
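For reference, the Lindblad form mentioned above is the standard open-system master equation for the reduced density operator (a generic sketch; the dissipative operators C_k stand for the system-bath coupling in general and are not the specific operators derived in the thesis):

\begin{equation}
  \dot{\hat{\rho}}(t) \;=\; -\frac{i}{\hbar}\left[\hat{H}_{\mathrm{s}},\hat{\rho}(t)\right]
  \;+\; \sum_{k}\left(\hat{C}_{k}\,\hat{\rho}(t)\,\hat{C}_{k}^{\dagger}
  \;-\; \tfrac{1}{2}\left\{\hat{C}_{k}^{\dagger}\hat{C}_{k},\hat{\rho}(t)\right\}\right)
\end{equation}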
Taking advantage of ATRP and using functionalized initiators, different functionalities were introduced at both the α and ω chain ends of synthetic polymers. These functionalized polymers could then undergo modular synthetic pathways such as click cycloaddition (copper-catalyzed or copper-free) or amidation to couple synthetic polymers to other synthetic polymers, biomolecules or silica monoliths. Using this general strategy and designing these co/polymers so that they are thermoresponsive, yet bioinert and biocompatible, with adjustable cloud point values (as is the case in the present thesis), the whole generated system becomes "smart" and potentially applicable in different fields. The applications considered in the present thesis were polymer post-functionalization (in situ functionalization of micellar aggregates with low and high molecular weight molecules), hydrophilic/hydrophobic tuning, chromatography and bioconjugation (enzyme thermoprecipitation and recovery, improvement of enzyme activity). Different α-functionalized co/polymers containing a cholesterol moiety, aldehyde, t-Boc-protected amine, TMS-protected alkyne and NHS-activated ester were designed and synthesized in this work.
The underlying motivation for the work carried out for this thesis was the growing need for more sustainable technologies. The aim was to synthesize a “palette” of functional nanomaterials using the established technique of hydrothermal carbonization (HTC). The remarkable versatility of HTC was demonstrated, together with small but steady advances in how HTC can be manipulated to tailor material properties for specific applications. Two main strategies were used to modify the materials obtained by HTC of glucose, a model precursor representing biomass. The first approach was the introduction of heteroatoms, or “doping”, of the carbon framework. Sulfur was introduced for the first time as a dopant in hydrothermal carbon. The synthesis of sulfur- and sulfur/nitrogen-doped microspheres was presented, whereby it was shown that the binding state of sulfur could be influenced by varying the type of sulfur source. Pyrolysis may additionally be used to tune the heteroatom binding states, which move to more stable motifs with increasing pyrolysis temperature. Importantly, the presence of aromatic binding states in the as-synthesized hydrothermal carbon allows for higher heteroatom retention levels after pyrolysis and hence more efficient use of dopant sources. In this regard, HTC may be considered an “intermediate” step in the formation of conductive heteroatom-doped carbon. To assess the novel hydrothermal carbons in terms of their potential for electrochemical applications, materials with defined nano-architectures and high surface areas were synthesized via templated as well as template-free routes. Sulfur- and/or nitrogen-doped carbon hollow spheres (CHS) were synthesized using a polystyrene hard-templating approach, and doped carbon aerogels (CA) were synthesized using either the albumin-directed or the borax-mediated hydrothermal carbonization of glucose. Electrochemical testing showed that S/N dual-doped CHS and aerogels derived via the albumin approach exhibited superior catalytic performance compared to solely nitrogen- or sulfur-doped counterparts in the oxygen reduction reaction (ORR) relevant to fuel cells. Using the borax-mediated aerogel formation, nitrogen content and surface area could be tuned, and a carbon aerogel was engineered to maximize electrochemical performance. The obtained sample exhibited drastically improved current densities compared to a platinum catalyst (but a lower onset potential), as well as excellent long-term stability. In the second approach, HTC was carried out at elevated temperature (550 °C) and pressure (50 bar), corresponding to the superheated vapor (SHV) regime (htHTC). It was demonstrated that the carbon materials obtained via htHTC are distinct from those obtained via low-temperature HTC (ltHTC) and subsequent pyrolysis at 550 °C. No difference in htHTC-derived material properties could be observed between pentoses and hexoses. The material obtained from a polysaccharide exhibited a slightly lower degree of carbonization but was otherwise similar to the monosaccharide-derived samples. It was shown that, in addition to thermally induced carbonization at 550 °C, the SHV environment exerts a catalytic effect on the carbonization process. The resulting materials are chemically inert (i.e. they contain a negligible amount of reactive functional groups) and possess low surface area and electronic conductivity, which distinguishes them from carbon obtained from pyrolysis.
Compared to the chemically modified hydrothermal carbons presented in the previous chapters, this makes them ill-suited candidates for electronic applications such as lithium-ion batteries or electrocatalysts. However, htHTC-derived materials could be interesting for applications that require chemical inertness but do not require specific electronic properties. The final section of this thesis therefore revisited the latex hard-templating approach to synthesize carbon hollow spheres using htHTC. Using htHTC, it was possible to carry out template removal in situ because the second heating step at 550 °C was above the polystyrene latex decomposition temperature. Preliminary tests showed that the CHS could be dispersed in an aqueous polystyrene latex without monomer penetrating into the hollow sphere voids. This leaves the stagnant air inside the CHS intact, which in turn is promising for their application in heat- and sound-insulating coatings. Overall, the work carried out in this thesis represents a noteworthy development in demonstrating the great potential of sustainable carbon materials.
Agriculture is one of the most important human activities, providing food and other agricultural goods for seven billion people around the world, and it is of special importance in sub-Saharan Africa. The majority of people there depend on the agricultural sector for their livelihoods and will suffer from negative climate change impacts on agriculture towards the middle and end of the 21st century, even more so where weak governments, economic crises or violent conflicts endanger the countries’ food security. The impact of temperature increases and changing precipitation patterns on agricultural vegetation motivated this thesis in the first place. Analyzing the potential for reducing negative climate change impacts by adapting crop management to changing climate is a second objective of the thesis. As a precondition for simulating climate change impacts on agricultural crops with a global crop model, the timing of sowing in the tropics was first improved and validated, as this is an important factor determining the length and timing of the crops’ development phases, the occurrence of water stress and final crop yield. Crop yields are projected to decline in most regions, which is evident from the results of this thesis, but the uncertainties that exist in climate projections and in the efficiency of adaptation options because of political, economic or institutional obstacles have to be considered. The effects of temperature increases and changing precipitation patterns on crop yields can be analyzed separately and vary in space across the continent. Southern Africa is clearly the region most susceptible to climate change, especially to precipitation changes. The Sahel north of 13° N and parts of Eastern Africa with short growing seasons below 120 days and limited wet-season precipitation of less than 500 mm are also vulnerable to precipitation changes, while in most other parts of East and Central Africa, in contrast, the effect of temperature increase on crops outweighs the precipitation effect and is most pronounced in a band stretching from Angola to Ethiopia in the 2060s. The results of this thesis confirm the findings of previous studies on the magnitude of climate change impacts on crops in sub-Saharan Africa but, beyond that, help to understand the drivers of these changes and the potential of certain management strategies for adaptation in more detail. Crop yield changes depend on the initial growing conditions, on the magnitude of climate change, and on the crop, cropping system and adaptive capacity of African farmers, which only now becomes evident from this comprehensive study for sub-Saharan Africa. Furthermore, this study improves the representation of tropical cropping systems in a global crop model and considers the major food crops cultivated in sub-Saharan Africa and climate change impacts throughout the continent.
Bad governance causes economic, social, developmental and environmental problems in many developing countries. Developing countries have adopted a number of reforms that have assisted in achieving good governance. The success of governance reform depends on the starting point of each country: what institutional arrangements exist at the outset and who the people implementing reforms within the existing institutional framework are. This dissertation focuses on how formal institutions (laws and regulations) and informal institutions (culture, habit and conception) affect good governance. Three characteristics central to good governance are studied in the research: transparency, participation and accountability.
The key findings were as follows: Governance in Hanoi and Berlin represents the two extremes of the scale; while governance in Berlin is almost at the top of the scale, governance in Hanoi is at the bottom. Good governance in Hanoi is still far from achieved. In Berlin, information about public policies, administrative services and public finance is available, reliable and understandable. People do not encounter any problems accessing public information. In Hanoi, however, public information is not easy to access. There are big differences between Hanoi and Berlin in the three forms of participation. While voting in Hanoi to elect local deputies is a formality and forced, elections in Berlin are free and fair. The candidates in local elections in Berlin come from different parties, whereas the candidacy of local deputies in Hanoi is thoroughly controlled by the Fatherland Front. Even though the turnout of voters in local deputy elections is close to 90 percent in Hanoi, the legitimacy of both the elections and the process of representation is non-existent because the local deputy candidates are decided by the Communist Party.
The involvement of people in solving local problems is encouraged by the government in Berlin. The different initiatives include the citizens' budget, citizen activities, citizen initiatives, etc. Individual citizens are free to participate either individually or through an association.
Owing to the lack of transparency and participation, the quality of public services in Hanoi is poor. Citizens seldom receive their services within the time required by the regulations. Citizens who want to receive public services can bribe officials directly, use the power of relationships, or pay a third person, the mediator ("Cò" in Vietnamese).
In contrast, public service delivery in Berlin follows the customer-orientated principle. The quality of service is high in relation to time and cost. Speed money, bribery and the use of relationships to gain preferential public service do not exist in Berlin.
Using the examples of Berlin and Hanoi, it is clear how transparency, participation and accountability are interconnected and influence each other. Without free and fair elections, as well as the participation of non-governmental organisations, civil organisations and the media in political decision-making and public actions, it is hard to hold the Hanoi local government accountable.
The key differences in formal institutions (regulative and cognitive) between Berlin and Hanoi reflect three main principles: rule of law vs. rule by law, pluralism vs. a one-party monopoly in politics, and social market economy vs. market economy with socialist orientation.
In Berlin, the logic of appropriateness and the codes of conduct are respect for laws, respect for individual freedom and ideas, and awareness of community development. People in Berlin take it for granted that public services are delivered to them fairly. Ideas such as using money or relationships to shorten public administrative procedures do not exist in the minds of either public officials or citizens.
In Hanoi, under a weak formal framework of good governance, new values and norms (prosperity, achievement) generated in the economic transition interact with the habits of the centrally planned economy (lying, dependence, passivity) and traditional values (hierarchy, harmony, family, collectivism) to influence the behaviour of those involved.
In Hanoi, “doing the right thing”, such as compliance with the law, has not become “the way it is”.
The unintended consequence of the deliberate reform actions of the Party is the prevalence of corruption. The socialist orientation seems not to have been achieved as the gap between the rich and the poor has widened.
Good governance is not achievable if citizens and officials are concerned only with their self-interest. State and society depend on each other. Theoretically, to achieve good governance in Hanoi, institutions (formal and informal) able to create good citizens, officials and deputies should be generated. Good citizens are good by habit rather than by nature.
The rule of law principle is necessary for the professional performance of local administrations and People’s Councils. When the rule of law is applied consistently, the room for informal institutions to function will be reduced.
Promoting good governance in Hanoi depends on the need and desire of the government and the people themselves to change. Good governance in Berlin can be seen as the result of the efforts of the local government and citizens after a long period of development and continuous adjustment.
Institutional transformation is always a long and complicated process because the change in formal regulations as well as in the way they are implemented may meet strong resistance from the established practice. This study has attempted to point out the weaknesses of the institutions of Hanoi and has identified factors affecting future development towards good governance. But it is not easy to determine how long it will take to change the institutional setting of Hanoi in order to achieve good governance.
Eye movements are a powerful tool for examining cognitive processes. However, in most paradigms little is known about the dynamics present in sequences of saccades and fixations. In particular, the control of fixation durations has been widely neglected in most tasks. As a notable exception, both spatial and temporal aspects of eye-movement control have been thoroughly investigated during reading. There, the scientific discourse has been dominated by three controversies: (i) the role of oculomotor vs. cognitive processing in eye-movement control, (ii) the serial vs. parallel processing of words, and (iii) the control of fixation durations. The main purpose of this thesis was to investigate eye movements in tasks that require sequences of fixations and saccades. While reading phenomena served as a starting point, we examined eye guidance in non-reading tasks with the aim of identifying general principles of eye-movement control. In addition, the investigation of eye movements in non-reading tasks helped refine our knowledge about eye-movement control during reading. Our approach included the investigation of eye movements in non-reading experiments as well as the evaluation and development of computational models. I present three main results: First, oculomotor phenomena during reading can also be observed in non-reading tasks (Chapters 2 & 4). Oculomotor processes determine the fixation position within an object. The fixation position, in turn, modulates both the next saccade target and the current fixation duration. Second, predictions of eye-movement models based on sequential attention shifts were falsified (Chapter 3). In fact, our results suggest that distributed processing of multiple objects forms the basis of eye-movement control. Third, fixation durations are under asymmetric control (Chapter 4). While increasing processing demands immediately prolong fixation durations, decreasing processing demands reduce fixation durations only with a temporal delay. We propose a computational model, ICAT, to account for this asymmetric control. In this model, an autonomous timer initiates saccades after random time intervals independent of ongoing processing. However, processing demands that are higher than expected inhibit the execution of the next saccade and thereby prolong the current fixation. On the other hand, lower processing demands will not affect the duration before the next saccade is executed. Since the autonomous timer adjusts to expected processing demands from fixation to fixation, a decrease in processing demands may lead to a temporally delayed reduction of fixation durations. In an extended version of ICAT, we evaluated its performance while simulating both temporal and spatial aspects of eye-movement control. The eye-movement phenomena investigated in this thesis have now been observed in a number of different tasks, which suggests that they represent general principles of eye guidance. I propose that distributed processing of the visual input forms the basis of eye-movement control, while fixation durations are controlled by the principles outlined in ICAT. In addition, oculomotor control contributes considerably to the variability observed in eye movements. Interpretations of the relation between eye movements and cognition strongly benefit from a precise understanding of this interplay.
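To make the asymmetric-control idea concrete, here is a minimal, hypothetical simulation in the spirit of the timer mechanism described above (it is not the published ICAT implementation; the parameter names, units and the adaptation rule are illustrative assumptions):

import random

def simulate_fixations(demands, adapt=0.3, noise=20.0, seed=1):
    """Illustrative random-timer model of fixation durations (ms).
    The autonomous timer is tuned to the *expected* processing demand; demand
    above the expectation inhibits the next saccade and prolongs the current
    fixation immediately, while demand below the expectation has no immediate
    effect and only lowers the expectation used for later fixations."""
    rng = random.Random(seed)
    expected = demands[0]                            # assumed initial expectation
    durations = []
    for demand in demands:
        timer = expected + rng.uniform(-noise, noise)  # autonomous timer interval
        surplus = max(0.0, demand - expected)          # only surplus demand inhibits
        durations.append(timer + surplus)              # immediate prolongation if demand rises
        expected += adapt * (demand - expected)        # slow fixation-to-fixation adaptation
    return durations

# Processing demand drops halfway through; fixation durations shorten only after
# the expectation has caught up, i.e. with a temporal delay.
print([round(d) for d in simulate_fixations([150.0] * 5 + [50.0] * 5)])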
The potential increase in frequency and magnitude of extreme floods is currently discussed in terms of global warming and the intensification of the hydrological cycle. Profound knowledge of the past natural variability of floods is of utmost importance in order to assess flood risk for the future. Since instrumental flood series cover only the last ~150 years, other approaches to reconstruct historical and pre-historical flood events are needed. Annually laminated (varved) lake sediments are meaningful natural geoarchives because they provide continuous records of environmental change over more than 10,000 years, down to seasonal resolution. Since lake basins additionally act as natural sediment traps, the riverine sediment supply, which is preserved as detrital event layers in the lake sediments, can be used as a proxy for extreme discharge events. Within my thesis I examined a ~8.50 m long sedimentary record from the pre-Alpine Lake Mondsee (Northeast European Alps), which covers the last 7000 years. This sediment record consists of calcite varves and intercalated detrital layers, which range in thickness from 0.05 to 32 mm. Detrital layer deposition was analysed by a combined method of microfacies analysis via thin sections, scanning electron microscopy (SEM), μX-ray fluorescence (μXRF) scanning and magnetic susceptibility. This approach allows individual detrital event layers to be characterized and assigned to a corresponding input mechanism and catchment. Based on a chronology from varve counting, verified by 14C age dates, the main goals of this thesis are (i) to identify seasonal runoff processes that lead to significant sediment supply from the catchment into the lake basin and (ii) to investigate flood frequency under changing climate boundary conditions. The thesis proceeds through different time slices, presenting an integrative approach linking instrumental and historical flood data from Lake Mondsee in order to evaluate the flood record inferred from the Lake Mondsee sediments. The investigation of eleven short cores covering the last 100 years reveals 12 detrital layers. Two types of detrital layers are distinguished by grain size, geochemical composition and distribution pattern within the lake basin. Detrital layers that are enriched in siliciclastic and dolomitic material reveal sediment supply from the Flysch sediments and the Northern Calcareous Alps into the lake basin. These layers are thicker in the northern lake basin (0.1-3.9 mm) and thinner in the southern lake basin (0.05-1.6 mm). Detrital layers that are enriched in dolomitic components, forming graded detrital layers (turbidites), indicate provenance from the Northern Calcareous Alps. These layers are generally thicker (0.65-32 mm) and are recorded solely within the southern lake basin. In comparison with instrumental data, thicker graded layers result from local debris flow events in summer, whereas thin layers are deposited during regional flood events in spring/summer. Extreme summer floods recorded by flood layer deposition are principally caused by cyclonic activity from the Mediterranean Sea, e.g. in July 1954, July 1997 and August 2002. During the last two millennia, the Lake Mondsee sediments reveal two significant flood intervals with decadal-scale flood episodes, during the Dark Ages Cold Period (DACP) and the transition from the Medieval Climate Anomaly (MCA) into the Little Ice Age (LIA), suggesting a link between transitions to climate cooling and summer flood recurrence in the Northeastern Alps.
In contrast, intermediate or decreased flood activity occurred during the MCA and the LIA. This indicates a non-straightforward relationship between temperature and flood recurrence, suggesting higher cyclonic activity during climate transitions in the Northeast Alps. The 7000-year flood chronology reveals 47 debris flows and 269 floods, with shifts towards increased flood activity around 3500 and 1500 varve yr BP (varve yr BP = varve years before present, before present = AD 1950). This significant increase in flood activity coincides with millennial-scale climate cooling that is reported from major Alpine glacier advances and lower tree lines in the European Alps since about 3300 cal. yr BP (calibrated years before present). Despite relatively low flood occurrence prior to 1500 varve yr BP, floods at Lake Mondsee could also have influenced human life in early Neolithic lake dwellings (5750-4750 cal. yr BP). While the first lake dwellings were constructed on wetlands, the later lake dwellings were built on piles in the water, suggesting an early flood-risk adaptation of humans and/or a general change of the Late Neolithic culture of lake-dwellers for socio-economic reasons. However, a direct relationship between the final abandonment of the lake dwellings and higher flood frequencies is not evidenced.
Nowadays, model-driven engineering (MDE) promises to ease software development by decreasing the inherent complexity of classical software development. In order to deliver on this promise, MDE increases the level of abstraction and automation through the use of domain-specific models (DSMs) and model operations (e.g. model transformation or code generation). DSMs conform to domain-specific modeling languages (DSMLs), which increase the level of abstraction, and model operations are first-class entities of software development because they increase the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, which are caused by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity comprising the implementation or selection of DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity is concerned with applying MDE to actual software development. Applying MDE is challenging because a collection of DSMs, which conform to potentially heterogeneous DSMLs, is required to completely specify a complex software system. A single DSML can only be used to describe a specific aspect of a software system at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but have inherent interdependencies, reflecting (partially) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies are applications of various model operations, which are necessary to keep the degree of automation high. This becomes even worse when addressing the first dimension of complexity: due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is considered a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs. It is considered a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. In addition, it is considered a comprehensive model management approach: since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis is concerned with providing a method for the specification of decoupled yet still highly cohesive complex compositions of heterogeneous model operations. The approach supports two different kinds of composition: data-flow composition and context composition. Data-flow composition is used to define a network of heterogeneous model operations coupled solely by shared input and output DSMs. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail.
In this thesis, context composition provides the ability to use a collection of dependencies as context for the composition of other dependencies, including model operations. In addition, the actual implementations of the model operations that are to be composed do not need to address any composition concerns. The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
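As an illustration only (a hypothetical toy sketch, not the thesis' megamodel formalism or its tooling), data-flow composition can be pictured as model operations coupled solely through the models they read and write, with operations re-applied whenever one of their input models changes:

from typing import Callable, Dict, List, Tuple

# Models are identified by name; operations are plain functions from input models
# to output models. The "megamodel" here is simply a list of (operation, inputs,
# outputs) dependencies.
Operation = Callable[[Dict[str, object]], Dict[str, object]]

def reapply(models: Dict[str, object],
            dependencies: List[Tuple[Operation, List[str], List[str]]],
            changed: str) -> None:
    """Re-apply every operation whose inputs include the changed model and
    propagate the change transitively along the (acyclic) data-flow network."""
    dirty = {changed}
    while dirty:
        model = dirty.pop()
        for op, inputs, outputs in dependencies:
            if model in inputs:
                result = op({name: models[name] for name in inputs})
                for name in outputs:
                    models[name] = result[name]
                    dirty.add(name)

# Toy example: a "class model" is transformed into a "db schema", which is then
# transformed into "ddl" text; changing the class model re-runs both operations.
def classes_to_schema(ins): return {"schema": [c.lower() for c in ins["classes"]]}
def schema_to_ddl(ins): return {"ddl": "; ".join(f"CREATE TABLE {t}" for t in ins["schema"])}

models = {"classes": ["Order", "Customer"], "schema": [], "ddl": ""}
deps = [(classes_to_schema, ["classes"], ["schema"]), (schema_to_ddl, ["schema"], ["ddl"])]
models["classes"].append("Invoice")
reapply(models, deps, "classes")
print(models["ddl"])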
This thesis focuses on the electronic properties of the new class of materials known as topological insulators. Spin- and angle-resolved photoelectron spectroscopy has been applied to reveal several unique properties of the surface state of these materials. The first part of this thesis introduces the methodological background of these well-established experimental techniques.
In the following chapter, the theoretical concept of topological insulators is introduced. Starting from the prominent example of the quantum Hall effect, the use of topological invariants to classify material systems is illuminated. It is explained how, in the presence of time-reversal symmetry, which is broken in the quantum Hall phase, strong spin-orbit coupling can drive a system into a topologically non-trivial phase. The prediction of the quantum spin Hall effect in two-dimensional insulators and its generalization to the three-dimensional case of topological insulators is reviewed, together with the first experimental realization of a three-dimensional topological insulator in the Bi1-xSbx alloys reported in the literature.
The experimental part starts with the introduction of the Bi2X3 (X=Se, Te) family of materials. Recent theoretical predictions and experimental findings on the bulk and surface electronic structure of these materials are introduced in close connection with our own experimental results. Furthermore, it is revealed that the topological surface state of Bi2Te3 shares its orbital symmetry with the bulk valence band, and the observed temperature-induced shift of the chemical potential is, with high probability, unmasked as a doping effect due to residual gas adsorption.
The surface state of Bi2Te3 is found to be highly spin polarized, with a polarization value of about 70% in a macroscopic area, while in Bi2Se3 the polarization appears reduced, not exceeding 50%. We argue, however, that the polarization is most likely only extrinsically limited, in terms of the finite angular resolution and the inability to detect the out-of-plane component of the electron spin. A further argument is based on the reduced surface quality of the single crystals after cleavage and, for Bi2Se3, on a sensitivity of the electronic structure to photon exposure.
We probe the robustness of the topological surface state in Bi2X3 against surface impurities in Chapter 5. This robustness is provided through the protection by time-reversal symmetry. Silver deposited on the (111) surface of Bi2Se3 leads to strong electron doping, but the surface state is observed up to a deposited Ag mass equivalent of one atomic monolayer. The opposite sign of doping, i.e., hole-like, is observed when Bi2Te3 is exposed to oxygen. While the n-type shift induced by Ag on Bi2Se3 appears to be more or less rigid, O2 lifts the Dirac point of the topological surface state in Bi2Te3 out of the valence band minimum at $\Gamma$. By increasing the oxygen dose further, it is possible to shift the Dirac point to the Fermi level, while the valence band stays well below. The effect is found to be reversible upon warming up the samples, which is interpreted in terms of physisorption of O2.
For magnetic impurities, i.e., Fe, we find a behavior similar to the case of Ag on both Bi2Se3 and Bi2Te3. However, in this case the robustness is unexpected, since magnetic impurities are capable of breaking time-reversal symmetry, which should open a gap in the surface state at the Dirac point and thereby remove the protection. We argue that the absence of a gap in the surface state must be attributed to a missing magnetization of the Fe overlayer. In Bi2Te3 we are able to observe the surface state for deposited iron mass equivalents in the monolayer regime. Furthermore, we gain control over the sign of doping through the sample temperature during deposition.
Chapter 6 is devoted to the lifetime broadening of the photoemission signal from the topological surface states of Bi2Se3 and Bi2Te3. It is revealed that the hexagonal warping of the surface state in Bi2Te3 introduces an anisotropy for electrons traveling along the two distinct high-symmetry directions of the surface Brillouin zone, i.e., $\Gamma$K and $\Gamma$M. We show that the phonon coupling strength to the surface electrons in Bi2Te3 is in good agreement with the theoretical prediction but nevertheless higher than one might expect. We argue that electron-phonon coupling is one of the main contributions to the decay of photoholes, but the relatively small size of the Fermi surface limits the number of phonon modes off which electrons may scatter. This effect is manifested in the energy dependence of the imaginary part of the electron self-energy of the surface state, which decays towards higher binding energies, in contrast to the monotonic increase proportional to E$^2$ expected from electron-electron interaction in Fermi-liquid theory.
Furthermore, the effect of the surface impurities of Chapter 5 on the quasiparticle lifetimes is investigated. We find that Fe impurities have a much stronger influence on the lifetimes than Ag, and that this stronger influence is independent of the sign of the doping. We argue that this observation suggests a minor contribution of the warping to the increased scattering rates, in contrast to current belief. This is additionally supported by the observation that the scattering rates increase further with increasing silver amount while the doping stays constant, and by the fact that clean Bi2Se3 and Bi2Te3 show very similar scattering rates despite the much stronger warping in Bi2Te3.
In the last chapter we report on a strong circular dichroism in the angular distribution of the photoemission signal of the surface state of Bi2Te3. We show that the pattern obtained by calculating the difference between photoemission intensities measured with opposite photon helicity reflects the pattern expected for the spin polarization. However, we find a strong influence of the photon energy on the strength and even the sign of the effect. The sign change is qualitatively confirmed by means of one-step photoemission calculations conducted by our collaborators from the LMU München, while the calculated spin polarization is found to be independent of the excitation energy. Experiment and theory together unambiguously identify the dichroism in these systems as a final-state effect, and the question in the title of the chapter has to be answered in the negative: circular dichroism in the angular distribution is not a new spin-sensitive technique.
Immune genes of the major histocompatibility complex (MHC) constitute a central component of the adaptive immune system and play an essential role in parasite resistance and associated life-history strategies. In addition to pathogen-mediated selection, sexual selection mechanisms have also been identified as main drivers of the typically observed high levels of polymorphism in functionally important parts of the MHC. Recognition of the individual MHC constitution is presumed to be mediated through olfactory cues. Indeed, MHC genes are in physical linkage with olfactory receptor genes and alter the individual body odour. Moreover, they are expressed on sperm and trophoblast cells. Thus, MHC-mediated sexual selection processes might act not only in direct mate choice decisions, but also through cryptic processes during reproduction. Bats (Chiroptera) represent the second largest mammalian order and have been identified as important vectors of newly emerging infectious diseases affecting humans and wildlife. In addition, they are interesting study subjects in evolutionary ecology in the context of olfactory communication, mate choice and associated fitness benefits. Thus, it is surprising that the Chiroptera belong to the least studied mammalian taxa in terms of their MHC evolution. In my doctoral thesis I aimed to gain insights into the evolution and diversity pattern of functional MHC genes in some of the major New World bat families by establishing species-specific primers through genome-walking into unknown flanking parts of familiar sites. Further, I took a free-ranging population of the lesser bulldog bat (Noctilio albiventris) in Panama as an example to understand the functional importance of the individual MHC constitution in parasite resistance and reproduction, as well as the possible underlying selective forces shaping the observed diversity. My studies indicated that the typical MHC characteristics observed in other mammalian orders, such as evidence for balancing and positive selection as well as recombination and gene conversion events, are also present in bats, shaping their MHC diversity. I found a wide range of copy number variation of expressed DRB loci in the investigated species. In Saccopteryx bilineata, a species with a highly developed olfactory communication system, I found an exceptionally high number of MHC locus duplications generating high levels of variability at the individual level, which has never been described for any other mammalian species so far. My studies included, for the first time, phylogenetic relationships of MHC genes in bats, and I found signs of a family-specific, independent mode of evolution of duplicated genes, regardless of whether the highly variable exon 2 (coding for the antigen binding region of the molecule) or the more conserved exons (3, 4; encoding protein-stabilizing parts) were considered, indicating a monophyletic origin of duplicated loci within families. This result questions the generally assumed pattern of MHC evolution in mammals, where duplicated genes of different families usually cluster together, suggesting that duplication occurred before speciation took place, which implies a trans-species mode of evolution. However, I found a trans-species mode of evolution within genera (Noctilio, Myotis) based on exon 2, signified by an intermingled clustering of DRB alleles. The gained knowledge on MHC sequence evolution in major New World bat families will facilitate future MHC investigations in this order.
In the N. albiventris study population, the single expressed MHC class II DRB gene showed high sequence polymorphism, moderate allelic variability and high levels of population-wide heterozygosity. Whereas demographic processes were of minor relevance in shaping the diversity pattern, I found clear evidence for parasite-mediated selection. This was evident from historical positive Darwinian selection maintaining diversity in the functionally important antigen binding sites, and from specific MHC alleles that were associated with low and high ectoparasite burden, in accordance with predictions of the ‘frequency-dependent selection hypothesis’. Parasite resistance has been suggested to play an important role in mediating costly life-history trade-offs, leading, for example, to MHC-mediated benefits in sexual selection. The ‘good genes model’ predicts that males with a genetically well-adapted immune system for defending against harmful parasites are able to allocate more resources to reproductive effort. I found support for this prediction, since non-reproductive adult N. albiventris males more often carried an allele associated with high parasite loads, which differentiated them genetically from reproductively active males as well as from subadults, indicating a reduced transmission of this allele to subsequent generations. In addition, they suffered from an increased ectoparasite burden, which presumably reduced the resources available for reproduction. Another sign of sexual selection was the observation of a sex-specific difference in heterozygosity, with females showing lower levels of heterozygosity than males. This signifies that the sexes differ in their selection pressures, presumably through MHC-mediated molecular processes during reproduction resulting in a male-specific heterozygosity advantage. My data make clear that parasite-mediated selection and sexual selection are interactive and operate together to shape diversity at the MHC. Furthermore, my thesis is one of the rare studies contributing to filling the gap between MHC-mediated effects on co-evolutionary processes in parasite-host interactions and aspects of life-history evolution.
It sometimes happens that we finish reading a passage of text only to realize that we have no idea what we just read. During these episodes of mindless reading our mind is elsewhere, yet the eyes still move across the text. The phenomenon of mindless reading is common and seems to be widely recognized in lay psychology. However, the scientific investigation of mindless reading has long been underdeveloped. Recent progress in research on mindless reading has been based on self-report measures and on treating it as an all-or-none phenomenon (dichotomy hypothesis). Here, we introduce the levels-of-inattention hypothesis, proposing that mindless reading is graded and occurs at different levels of cognitive processing. Moreover, we introduce two new behavioral paradigms to study mindless reading at different levels in the eye-tracking laboratory. First (Chapter 2), we introduce shuffled text reading as a paradigm to approximate states of weak mindless reading experimentally and compare it to reading of normal text. Results from statistical analyses of eye movements that subjects perform in this task qualitatively support the ‘mindless’ hypothesis that cognitive influences on eye movements are reduced and the ‘foveal load’ hypothesis that the response of the zoom lens of attention to local text difficulty is enhanced when reading shuffled text. We introduce and validate an advanced version of the SWIFT model (SWIFT 3) incorporating the zoom lens of attention (Chapter 3) and use it to explain eye movements during shuffled text reading. Simulations of the SWIFT 3 model provide fully quantitative support for the ‘mindless’ and the ‘foveal load’ hypotheses. They moreover demonstrate that the zoom lens is an important concept for explaining eye movements across reading and mindless reading tasks. Second (Chapter 4), we introduce the sustained attention to stimulus task (SAST) to catch episodes in which external attention spontaneously lapses (i.e., attentional decoupling or mind wandering) via the overlooking of errors in the text and via signal detection analyses of error detection. Analyses of eye movements in the SAST revealed reduced influences from cognitive text processing during mindless reading. Based on these findings, we demonstrate that it is possible to predict states of mindless reading from eye-movement recordings online. That cognition is not always needed to move the eyes supports autonomous mechanisms for saccade initiation. Results from analyses of error detection and eye movements support our levels-of-inattention hypothesis that errors at different levels of the text assess different levels of decoupling. Analyses of pupil size in the SAST (Chapter 5) provide further support for the levels-of-inattention hypothesis and for the decoupling hypothesis that off-line thought is a distinct mode of cognitive functioning that demands cognitive resources and is associated with deep levels of decoupling. The present work demonstrates that the elusive phenomenon of mindless reading can be rigorously investigated in the cognitive laboratory and further incorporated into the theoretical framework of cognitive science.
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
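As a rough illustration of the underlying idea (a hypothetical sketch of importance-weighted risk estimation, not the optimal sampling distributions derived in the thesis), an active evaluation step can be written as: draw instances from an instrumental distribution q, query their labels, and reweight the observed losses by p/q:

import numpy as np

def active_error_estimate(predict, pool, p, q, n_labels, query_label, seed=0):
    """Estimate the error rate of `predict` under the test distribution p while
    sampling the instances to be labeled from an instrumental distribution q.
    `pool` holds unlabeled instances; p and q are probability vectors over the
    pool; `query_label` plays the role of the (costly) labeling oracle."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pool), size=n_labels, replace=True, p=q)
    weights = p[idx] / q[idx]                    # importance weights p(x)/q(x)
    losses = np.array([predict(pool[i]) != query_label(pool[i]) for i in idx], float)
    return float(np.sum(weights * losses) / np.sum(weights))   # self-normalized estimate

# Toy usage with a hypothetical 1-D threshold classifier; q over-samples the
# region near the decision boundary, where errors are most informative.
pool = np.linspace(-3, 3, 601)
p = np.full(len(pool), 1.0 / len(pool))
q = np.exp(-np.abs(pool)); q /= q.sum()
predict = lambda x: x > 0.0
true_label = lambda x: x > 0.3                   # stand-in for the human labeler
print(active_error_estimate(predict, pool, p, q, n_labels=200, query_label=true_label))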
Tensorial spacetime geometries carrying predictive, interpretable and quantizable matter dynamics
(2012)
Which tensor fields G on a smooth manifold M can serve as a spacetime structure? In the first part of this thesis, it is found that only a severely restricted class of tensor fields can provide classical spacetime geometries, namely those that can carry predictive, interpretable and quantizable matter dynamics. The obvious dependence of this characterization of admissible tensorial spacetime geometries on specific matter is not a weakness, but rather presents an insight: it was Maxwell theory that justified Einstein in promoting Lorentzian manifolds to the status of a spacetime geometry. Any matter that does not mimic the structure of Maxwell theory will force us to choose another geometry on which the matter dynamics of interest are predictive, interpretable and quantizable. These three physical conditions on matter impose three corresponding algebraic conditions on the totally symmetric contravariant coefficient tensor field P that determines the principal symbol of the matter field equations in terms of the geometric tensor G: the tensor field P must be hyperbolic, time-orientable and energy-distinguishing. Remarkably, these physically necessary conditions on the geometry are mathematically already sufficient to realize all kinematical constructions familiar from Lorentzian geometry, for precisely the same structural reasons. This we were able to show by employing a subtle interplay of convex analysis, the theory of partial differential equations and real algebraic geometry. In the second part of this thesis, we then explore general properties of any hyperbolic, time-orientable and energy-distinguishing tensorial geometry. Physically most important are the construction of freely falling non-rotating laboratories, the appearance of admissible modified dispersion relations to particular observers, and the identification of a mechanism that explains why massive particles that are faster than some massless particles can radiate off energy until they are slower than all massless particles in any hyperbolic, time-orientable and energy-distinguishing geometry. In the third part of the thesis, we explore how tensorial spacetime geometries fare when one wants to quantize particles and fields on them. This study is motivated, in part, by the need to provide the tools to calculate the rate at which superluminal particles radiate off energy to become infraluminal, as explained above. Remarkably, it is again the three geometric conditions of hyperbolicity, time-orientability and energy-distinguishability that allow the quantization of general linear electrodynamics on an area metric spacetime and the quantization of massive point particles obeying any admissible dispersion relation. We explore the issue of field equations of all possible derivative orders in a rather systematic fashion, and prove a practically most useful theorem that determines Dirac algebras allowing the reduction of derivative orders. The final part of the thesis presents the sketch of a truly remarkable result that was obtained building on the work of the present thesis. Based particularly on the subtle duality maps between momenta and velocities in general tensorial spacetimes, it could be shown that gravitational dynamics for hyperbolic, time-orientable and energy-distinguishing geometries need not be postulated; rather, the formidable physical problem of their construction can be reduced to a mere mathematical task: the solution of a system of homogeneous linear partial differential equations.
This far-reaching physical result on modified gravity theories is a direct, but difficult to derive, outcome of the findings in the present thesis. Throughout the thesis, the abstract theory is illustrated through instructive examples.
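As a point of orientation only (an illustrative special case, not a result specific to the thesis): for Maxwell theory on a Lorentzian manifold the principal polynomial reduces to the familiar quadratic light-cone condition, whereas general matter dynamics define a higher-degree, totally symmetric P that must satisfy the three algebraic conditions named above:

\begin{align}
  P_{\mathrm{Lorentzian}}(x,k) &= g^{ab}(x)\,k_a k_b = 0, \\
  P_{\mathrm{general}}(x,k) &= P^{a_1 \cdots a_{\deg P}}(x)\,k_{a_1}\cdots k_{a_{\deg P}} = 0 ,
\end{align}

with P required to be hyperbolic, time-orientable and energy-distinguishing.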
Structuring process models
(2012)
One can fairly adopt the ideas of Donald E. Knuth to conclude that process modeling is both a science and an art. Process modeling does have an aesthetic sense. Similar to composing an opera or writing a novel, process modeling is carried out by humans who undergo creative practices when engineering a process model. Therefore, the very same process can be modeled in a myriad of ways. Once modeled, processes can be analyzed by employing scientific methods. Usually, process models are formalized as directed graphs, with nodes representing tasks and decisions, and directed arcs describing temporal constraints between the nodes. Common process definition languages, such as the Business Process Model and Notation (BPMN) and Event-driven Process Chains (EPC), allow process analysts to define models with arbitrarily complex topologies. The absence of structural constraints supports creativity and productivity, as there is no need to force ideas into a limited set of available structural patterns. Nevertheless, it is often preferable that models follow certain structural rules. A well-known structural property of process models is (well-)structuredness. A process model is (well-)structured if and only if every node with multiple outgoing arcs (a split) has a corresponding node with multiple incoming arcs (a join), and vice versa, such that the set of nodes between the split and the join induces a single-entry-single-exit (SESE) region; otherwise the process model is unstructured. The motivations for well-structured process models are manifold: (i) Well-structured process models are easier to lay out for visual representation, as their formalizations are planar graphs. (ii) Well-structured process models are easier for humans to comprehend. (iii) Well-structured process models tend to have fewer errors than unstructured ones, and it is less probable to introduce new errors when modifying a well-structured process model. (iv) Well-structured process models are better suited for analysis, as many existing formal techniques are applicable only to well-structured process models. (v) Well-structured process models are better suited for efficient execution and optimization, e.g., when discovering independent regions of a process model that can be executed concurrently. Consequently, there are process modeling languages that encourage well-structured modeling, e.g., the Business Process Execution Language (BPEL) and ADEPT. However, well-structured process modeling implies some limitations: (i) There exist processes that cannot be formalized as well-structured process models. (ii) There exist processes that, when formalized as well-structured process models, require a considerable duplication of modeling constructs. Rather than expecting well-structured modeling from the start, we advocate the absence of structural constraints when modeling. Afterwards, automated methods can suggest, upon request and whenever possible, alternative formalizations that are "better" structured, preferably well-structured. In this thesis, we study the problem of automatically transforming process models into equivalent well-structured models. The developed transformations are performed under a strong notion of behavioral equivalence which preserves concurrency. The findings are implemented in a tool, which is publicly available.
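To make the structural distinction tangible, here is a small, purely illustrative sketch (hypothetical node names, not the thesis' notation or algorithm): in the first model the split is matched by a single join bounding a SESE region, while in the second one split has no matching join, so the model is unstructured.

# Process models as edge lists (nodes: tasks/gateways, arcs: control flow).

# Well-structured: the split 's' is matched by the join 'j'; the nodes between
# them ({a, b}) form a single-entry-single-exit (SESE) region.
structured = [
    ("start", "s"), ("s", "a"), ("s", "b"),
    ("a", "j"), ("b", "j"), ("j", "end"),
]

# Unstructured: 's1' splits into 'a' and 'b', and 'b' splits again. The outer
# split 's1' is matched by 'j2', but the inner split 'b' has no matching join
# bounding a SESE region (the arc a->j1 enters "between" b and either join), so
# the model is unstructured and needs transformation to be structured.
unstructured = [
    ("start", "s1"), ("s1", "a"), ("s1", "b"),
    ("a", "j1"), ("b", "j1"), ("b", "j2"),
    ("j1", "j2"), ("j2", "end"),
]

for name, edges in (("structured", structured), ("unstructured", unstructured)):
    splits = {u for u, _ in edges if sum(1 for x, _ in edges if x == u) > 1}
    joins = {v for _, v in edges if sum(1 for _, y in edges if y == v) > 1}
    print(name, "splits:", sorted(splits), "joins:", sorted(joins))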
In this work, the synthesis of biopolymer-based hydrogel networks with defined architecture is presented. In order to obtain materials with defined properties, the chemoselective copper-catalyzed azide-alkyne cycloaddition (click chemistry) was used for the synthesis of gelatin-based hydrogels. Alkyne-functionalized gelatin was reacted with four different diazide crosslinkers above its sol-gel transition to suppress the formation of triple helices. By variation of the crosslinking density and the crosslinker flexibility, the swelling (Q: 150-470 vol.-%) could be adjusted and the Young’s and shear moduli (E: 50 kPa - 635 kPa, G’: 0.1 kPa - 16 kPa) could be tuned in the kPa range. In order to understand the network structure, a method based on the labelling of free functional groups within the hydrogel was developed. Gelatin-based hydrogels were incubated with alkyne-functionalized fluorescein to detect the free azide groups resulting from the formation of dangling chains. Gelatin hydrogels were also incubated with azido-functionalized fluorescein to check for the presence of alkyne groups available for the attachment of bioactive molecules. By using confocal laser scanning microscopy and fluorescence spectroscopy, the amounts of crosslinking, grafting and free alkyne groups could be determined. Dangling chains were observed in samples prepared with an excess of crosslinker and also when using equimolar amounts of alkyne:azide. In the latter case the amount of dangling chains was affected by the crosslinker structure. Specifically, 0.1% of dangling chains were found using 4,4’-diazido-2,2’-stilbene-disulfonic acid as crosslinker, 0.06% with 1,8-diazidooctane, 0.05% with 1,12-diazidododecane and 0.022% with PEG-diazide. This observation could be explained by considering the structure of the crosslinkers. During network formation, the movements of the gelatin chains are restricted due to the formation of covalent netpoints. Further crosslinking is possible only for crosslinkers that are flexible and long enough to reach another chain. The method used to obtain defined gelatin-based hydrogels also enabled the synthesis of hyaluronic acid-based hydrogels with tailorable properties. Alkyne-functionalized hyaluronic acid was crosslinked with three different linkers having two terminal azide functionalities. By variation of the crosslinking density and crosslinker type, hydrogels with elastic moduli in the range of 0.5-3 kPa were prepared. The variation of the crosslinking density and crosslinker type furthermore influenced the hydrolytic and enzymatic degradation of the gelatin-based hydrogels. Hydrogels with a low crosslinker amount experienced a faster mass loss and decrease in elastic modulus compared to hydrogels with a higher crosslinker content. Moreover, the structure of the crosslinker had a strong influence on the enzymatic degradation. Hydrogels containing a crosslinker with a rigid structure were much more resistant to enzymatic degradation than hydrogels containing a flexible crosslinker. During hydrolytic degradation, the hydrogels became softer while maintaining the same outer dimensions. These observations are in agreement with a bulk degradation mechanism, while the decrease in size of the hydrogels during enzymatic degradation suggests a surface erosion mechanism. Because of the small amount of crosslinker used (0.002 mol% to 0.02 mol%), the networks synthesized can still be regarded as biopolymer-based hydrogels.
However, they contain a small percentage of synthetic residues. Alternatively, a possible method to obtain biopolymer-based telechelics, which could be used as crosslinkers, was investigated. Gelatin-based fragments with defined molecular weight were obtained by controlled degradation of gelatin with hydroxylamine, owing to its specific action on asparaginyl-glycine bonds. The reaction of gelatin with hydroxylamine resulted in fragments with molecular weights of 15, 25, 37, and 50 kDa (determined by SDS-PAGE), independent of the reaction time and conditions. Each of these fragments could potentially be used for the synthesis of hydrogels in which all components are biopolymer-based materials.
Gravitational waves are among the most exciting predictions of Einstein's theory of gravitation that have not yet been confirmed experimentally by a direct detection. They are tiny distortions of spacetime itself, and a world-wide effort to measure them directly for the first time with a network of large-scale laser interferometers is currently ongoing and expected to provide positive results within this decade. One potential source of measurable gravitational waves is the inspiral and merger of two compact objects, such as binary black holes. Successfully finding their signature in the noise-dominated data of the detectors crucially relies on accurate predictions of what we are looking for. In this thesis, we present a detailed study of how the most complete waveform templates can be constructed by combining the results from (A) analytical expansions within the post-Newtonian framework and (B) numerical simulations of the full relativistic dynamics. We analyze various strategies to construct complete hybrid waveforms that consist of a post-Newtonian inspiral part matched to numerical-relativity data. We elaborate on existing approaches for nonspinning systems by extending the accessible parameter space and introducing an alternative matching scheme formulated in the Fourier domain. Our methods can now be readily applied to multiple spherical-harmonic modes and precessing systems. In addition, we analyze in detail the accuracy of hybrid waveforms with the goal of quantifying how the numerous sources of error in the approximation techniques affect the application of such templates in real gravitational-wave searches. This is of major importance for the future construction of improved models, but also for the correct interpretation of gravitational-wave observations that are made utilizing any complete waveform family. In particular, we comprehensively discuss how long the numerical-relativity contribution to the signal has to be in order to make the resulting hybrids accurate enough, and, for currently feasible simulation lengths, we assess the physics one can potentially do with template-based searches.
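As a toy illustration of the matching idea described above (not the hybridization procedure developed in the thesis), the following sketch aligns a placeholder "numerical-relativity" segment to a placeholder "post-Newtonian" inspiral by a constant phase shift estimated on a matching window and then blends the two with a linear ramp; the waveforms, window and numbers are synthetic assumptions made only for this example.

import numpy as np

def blend(t, h_pn, h_nr, t1, t2):
    # Ramp linearly from the PN waveform (before t1) to the NR waveform (after t2).
    w = np.clip((t - t1) / (t2 - t1), 0.0, 1.0)
    return (1.0 - w) * h_pn + w * h_nr

def align_phase(h_pn, h_nr, window):
    # Estimate a constant phase offset on the matching window and remove it.
    dphi = np.angle(np.vdot(h_pn[window], h_nr[window]))
    return h_nr * np.exp(-1j * dphi)

# Synthetic chirp-like placeholders on a common time grid (arbitrary units).
t = np.linspace(-1000.0, 0.0, 4000)
phase = 0.1 * (-t) ** 0.625            # crude stand-in for an inspiral phase evolution
h_pn = np.exp(1j * phase)              # "post-Newtonian" part
h_nr = np.exp(1j * (phase + 0.3))      # "numerical-relativity" part with a phase offset

window = (t > -600.0) & (t < -400.0)   # interval where both descriptions are trusted
h_hybrid = blend(t, h_pn, align_phase(h_pn, h_nr, window), -600.0, -400.0)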
We investigate properties of quantum mechanical systems in the light of quantum information theory. We put an emphasis on systems with infinite-dimensional Hilbert spaces, so-called "continuous-variable systems", which are needed to describe quantum optics beyond the single-photon regime and other bosonic quantum systems. We present methods to obtain a description of such systems from a series of measurements in an efficient manner and demonstrate their performance in realistic situations by means of numerical simulations. We consider both unconditional quantum state tomography, which is applicable to arbitrary systems, and tomography of matrix product states. The latter allows for the tomography of many-body systems, because the necessary number of measurements scales merely polynomially with the particle number, compared to an exponential scaling in the generic case. We also present a method to realize such a tomography scheme for a system of ultra-cold atoms in optical lattices. Furthermore, we discuss in detail the possibilities and limitations of using continuous-variable systems for measurement-based quantum computing. We will see that the distinction between Gaussian and non-Gaussian quantum states and measurements plays a crucial role. We also provide an algorithm to efficiently solve a large and interesting class of naturally occurring Hamiltonians, namely frustration-free ones, and use this insight to obtain a simple approximation method for slightly frustrated systems. To achieve these goals, we make use of, among various other techniques, the well-developed theory of matrix product states, tensor networks, semidefinite programming, and matrix analysis.
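The scaling argument behind the feasibility of matrix product state tomography can be illustrated with a few lines of counting (a back-of-the-envelope sketch, not taken from the thesis; the local dimension d and bond dimension D are assumed fixed):

def generic_parameters(n, d=2):
    # Complex amplitudes of a generic n-particle state: exponential in n.
    return d ** n

def mps_parameters(n, d=2, D=8):
    # One d x D x D tensor per site (boundary effects ignored): linear in n.
    return n * d * D * D

for n in (10, 20, 40):
    print(n, generic_parameters(n), mps_parameters(n))
# n = 40 already gives about 1.1e12 generic amplitudes versus 5120 MPS parameters.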
Actin is one of the most abundant and highly conserved proteins in eukaryotic cells. The globular protein assembles into long filaments, which form a variety of different networks within the cytoskeleton. The dynamic reorganization of these networks - which is pivotal for cell motility, cell adhesion, and cell division - is based on cycles of polymerization (assembly) and depolymerization (disassembly) of actin filaments. Actin binds ATP and, within the filament, actin-bound ATP is hydrolyzed into ADP on a time scale of a few minutes. As ADP-actin dissociates faster from the filament ends than ATP-actin, the filament becomes less stable as it grows older. Recent single-filament experiments, in which abrupt dynamical changes during filament depolymerization have been observed, suggest the opposite behavior, however, namely that actin filaments become increasingly stable with time. Several mechanisms for this stabilization have been proposed, ranging from structural transitions of the whole filament to surface attachment of the filament ends. The key issue of this thesis is to elucidate these unexpected interruptions of depolymerization by a combination of experimental and theoretical studies. In new depolymerization experiments on single filaments, we confirm that filaments cease to shrink in an abrupt manner and determine the time from the initiation of depolymerization until the occurrence of the first interruption. This duration differs from filament to filament and represents a stochastic variable. We consider various hypothetical mechanisms that may cause the observed interruptions. These mechanisms cannot be distinguished directly, but they give rise to distinct distributions of the time until the first interruption, which we compute by modeling the underlying stochastic processes. A comparison with the measured distribution reveals that the sudden truncation of the shrinkage process neither arises from blocking of the ends nor from a collective transition of the whole filament. Instead, we predict a local transition process occurring at random sites within the filament. The combination of additional experimental findings and our theoretical approach confirms the notion of a local transition mechanism and identifies the transition as the photo-induced formation of an actin dimer within the filaments. Unlabeled actin filaments do not exhibit pauses, which implies that, in vivo, older filaments become destabilized by ATP hydrolysis. This destabilization can be identified with an acceleration of the depolymerization prior to the interruption. In the final part of this thesis, we theoretically analyze this acceleration to infer the mechanism of ATP hydrolysis. We show that the rate of ATP hydrolysis is constant within the filament, corresponding to a random, as opposed to a vectorial, hydrolysis mechanism.
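A minimal Monte Carlo sketch of the local-transition hypothesis reads as follows; it is an illustrative toy model with arbitrary placeholder rates, not the stochastic model analyzed in the thesis. Subunits are assumed to convert independently at a rate c from the start of depolymerization, the terminal subunit dissociates at rate k_off, and shrinkage is interrupted once the shrinking end reaches a converted subunit.

import math
import random

def time_to_first_interruption(n_subunits=1000, k_off=5.0, c=1e-3, rng=random):
    # Returns the time at which depolymerization of one filament first stalls.
    t = 0.0
    for _ in range(n_subunits):
        t += rng.expovariate(k_off)            # removal of the current terminal subunit
        p_converted = 1.0 - math.exp(-c * t)   # has the newly exposed subunit converted by now?
        if rng.random() < p_converted:
            return t
    return math.inf                            # fully depolymerized without interruption

samples = [time_to_first_interruption() for _ in range(10000)]
# The empirical distribution of `samples` is the kind of quantity one would
# compare against the measured distribution of interruption times.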
A point process is a mechanism that randomly realizes locally finite point measures. One of the main results of this thesis is an existence theorem for a new class of point processes with a so-called signed Lévy pseudo measure L, which is an extension of the class of infinitely divisible point processes. The construction approach is a combination of the classical point process theory, as developed by Kerstan, Matthes and Mecke, with the method of cluster expansions from statistical mechanics. Here the starting point is a family of signed Radon measures, which defines, on the one hand, the Lévy pseudo measure L and, on the other hand, the point process locally. The relation between L and the process is the following: the point process solves the integral cluster equation determined by L. We show that the results from the classical theory of infinitely divisible point processes carry over in a natural way to the larger class of point processes with a signed Lévy pseudo measure. In this way we obtain, e.g., a criterion for simplicity and a characterization through the cluster equation, interpreted as an integration by parts formula, for such point processes. Our main result in chapter 3 is a representation theorem for the factorial moment measures of the above point processes. With its help we identify the permanental and determinantal point processes, which belong to the classes of Boson and Fermion processes, respectively. As a by-product we obtain a representation of the (reduced) Palm kernels of infinitely divisible point processes. In chapter 4 we see how the existence theorem enables us to construct (infinitely extended) Gibbs, quantum-Bose and polymer processes. The so-called polymer processes seem to be constructed here for the first time. In the last part of this thesis we prove that the family of cluster equations has certain stability properties with respect to the transformation of its solutions. This is used, first, to show how large the class of solutions of such equations is, and, second, to establish the cluster theorem of Kerstan, Matthes and Mecke in our setting. With its help we are able to enlarge the class of Pólya processes to the so-called branching Pólya processes. The last sections of this work are about thinning and splitting of point processes. One main result is that the classes of Boson and Fermion processes remain closed under thinning. We use the results on thinning to identify a subclass of point processes with a signed Lévy pseudo measure as doubly stochastic Poisson processes. We also pose the following question: assume you observe a realization of a thinned point process; what is the distribution of the deleted points? Surprisingly, up to a constant factor, the Papangelou kernel of the thinning is given by the intensity measure of this conditional distribution, called the splitting kernel.
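For orientation, the prototype of such an integration by parts characterization is the classical Mecke equation for the Poisson case (standard background, quoted here only as a reference point, not the cluster equation of this thesis): a point process $\mu$ with $\sigma$-finite intensity measure $\lambda$ is a Poisson process if and only if
\[
\mathbb{E}\left[\int f(x,\mu)\,\mu(\mathrm{d}x)\right] \;=\; \int \mathbb{E}\left[f(x,\mu+\delta_x)\right]\lambda(\mathrm{d}x)
\]
holds for all measurable $f \ge 0$; the cluster equation determined by L plays the analogous role for the larger class of point processes studied in this thesis.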
This work is concerned with the characterization of certain classes of stochastic processes via duality formulae. In particular we consider reciprocal processes with jumps, a subject up to now neglected in the literature. In the first part we introduce a new formulation of a characterization of processes with independent increments. This characterization is based on a duality formula satisfied by processes with infinitely divisible increments, in particular Lévy processes, which is well known in Malliavin calculus. We obtain two new methods to prove this duality formula, which are not based on the chaos decomposition of the space of square-integrable functionals. One of these methods uses a formula of partial integration that characterizes infinitely divisible random vectors. In this context, our characterization is a generalization of Stein's lemma for Gaussian random variables and Chen's lemma for Poisson random variables. The generality of our approach permits us to derive a characterization of infinitely divisible random measures. The second part of this work focuses on the study of the reciprocal classes of Markov processes with and without jumps and their characterization. We start with a summary of already existing results concerning the reciprocal classes of Brownian diffusions as solutions of duality formulae. As a new contribution, we show that the duality formula satisfied by elements of the reciprocal class of a Brownian diffusion has a physical interpretation as a stochastic Newton equation of motion. Thus we are able to connect the results of characterizations via duality formulae with the theory of stochastic mechanics by our interpretation, and with stochastic optimal control theory by the mathematical approach. As an application we are able to prove an invariance property of the reciprocal class of a Brownian diffusion under time reversal. In the context of pure jump processes we derive the following new results. We describe the reciprocal classes of Markov counting processes, also called unit jump processes, and obtain a characterization of the associated reciprocal class via a duality formula. This formula contains as key terms a stochastic derivative, a compensated stochastic integral and an invariant of the reciprocal class. Moreover we present an interpretation of the characterization of a reciprocal class in the context of stochastic optimal control of unit jump processes. As a further application we show that the reciprocal class of a Markov counting process has an invariance property under time reversal. Some of these results are extendable to the setting of pure jump processes, that is, we admit different jump-sizes. In particular, we show that the reciprocal classes of Markov jump processes can be compared using reciprocal invariants. A characterization of the reciprocal class of compound Poisson processes via a duality formula is possible under the assumption that the jump-sizes of the process are incommensurable.
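For concreteness, the two classical lemmas referred to above read as follows (standard statements, included only as orientation for the duality formulae studied here): a real random variable $Z$ is $N(0,\sigma^2)$-distributed if and only if
\[
\mathbb{E}[Z f(Z)] \;=\; \sigma^2\,\mathbb{E}[f'(Z)]
\]
for all bounded, continuously differentiable $f$ with bounded derivative (Stein), and an integer-valued random variable $N$ is Poisson($\lambda$)-distributed if and only if
\[
\mathbb{E}[N f(N)] \;=\; \lambda\,\mathbb{E}[f(N+1)]
\]
for all bounded $f$ (Chen). The partial integration formula mentioned above generalizes both identities to infinitely divisible random vectors.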