The aim of this thesis is the quantum dynamical study of two examples of scanning tunneling microscope (STM)-controllable, Si(100)(2x1) surface-mounted switches of atomic and molecular scale. The first example considers the switching of single H atoms between two dangling-bond chemisorption sites on a Si dimer of the Si(100) surface (Grey et al., 1996). The second system examines the conformational switching of single 1,5-cyclooctadiene molecules chemisorbed on the Si(100) surface (Nacci et al., 2008). The temporal dynamics are obtained by propagating the density matrix in time via a corresponding set of equations of motion (EOM). The latter are based on open-system density matrix theory in Lindblad form. First-order perturbation theory is used to evaluate the transition rates between vibrational levels of the system part. In order to account for interactions with the surface phonons, two different dissipative models are used, namely the bilinear harmonic bath model and the Ohmic bath model. Vibrational transitions in the system induced by inelastic electron tunneling (IET) are due to the dipole mechanism and the resonance mechanism. A single-surface approach is used to study the influence of dipole scattering and resonance scattering in the below-threshold regime. Furthermore, a second electronic surface was included to study the resonance-induced switching in the above-threshold regime. Static properties of the adsorbate, e.g., potentials and dipole functions, are obtained from quantum chemistry and used within the established quantum dynamical models.
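For orientation, the equations of motion referred to above follow the generic Lindblad form of open-system density matrix theory (the standard textbook expression; the thesis's specific dissipative operators are not reproduced here):

    \frac{d\hat{\rho}}{dt} = -\frac{i}{\hbar}\left[\hat{H}_{\mathrm{S}},\hat{\rho}\right]
      + \sum_k \left( \hat{L}_k\,\hat{\rho}\,\hat{L}_k^{\dagger}
      - \tfrac{1}{2}\left\{\hat{L}_k^{\dagger}\hat{L}_k,\hat{\rho}\right\} \right)

Here \hat{H}_{\mathrm{S}} is the system Hamiltonian and the Lindblad operators \hat{L}_k encode the couplings to the phonon bath and to the tunneling electrons, with the corresponding rates evaluated by first-order perturbation theory as described above.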
Taking advantage of ATRP and using functionalized initiators, different functionalities were introduced at both the α and ω chain ends of synthetic polymers. These functionalized polymers could then undergo modular synthetic pathways such as click cycloaddition (copper-catalyzed or copper-free) or amidation to couple synthetic polymers to other synthetic polymers, biomolecules or silica monoliths. Using this general strategy and designing the co/polymers so that they are thermoresponsive, yet bioinert and biocompatible, with adjustable cloud point values (as is the case in the present thesis), the whole generated system becomes "smart" and potentially applicable in different fields. The applications considered in the present thesis were in polymer post-functionalization (in situ functionalization of micellar aggregates with low and high molecular weight molecules), hydrophilic/hydrophobic tuning, chromatography and bioconjugation (enzyme thermoprecipitation and recovery, improvement of enzyme activity). Different α-functionalized co/polymers containing a cholesterol moiety, an aldehyde, a t-Boc-protected amine, a TMS-protected alkyne and an NHS-activated ester were designed and synthesized in this work.
Background
To determine the general appearance of normal axillary lymph nodes (LNs) in real-time tissue sonoelastography and to explore the method's potential value in the prediction of LN metastases.
Methods
Axillary LNs in healthy probands (n=165) and metastatic LNs in breast cancer patients (n=15) were examined with palpation, B-mode ultrasound, Doppler and sonoelastography (assessment of the elasticity of the cortex and the medulla). The elasticity distributions were compared and sensitivity (SE) and specificity (SP) were calculated. In an exploratory analysis, positive and negative predictive values (PPV, NPV) were calculated based upon the estimated prevalence of LN metastases in different risk groups.
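For reference, predictive values follow from sensitivity, specificity and an assumed prevalence p in the usual way (the study's actual risk-group prevalence estimates are not reproduced here):

    \mathrm{PPV} = \frac{\mathrm{SE}\cdot p}{\mathrm{SE}\cdot p + (1-\mathrm{SP})(1-p)},
    \qquad
    \mathrm{NPV} = \frac{\mathrm{SP}\,(1-p)}{\mathrm{SP}\,(1-p) + (1-\mathrm{SE})\,p}

so that, for a fixed test, the PPV rises and the NPV falls as the assumed prevalence of metastases in a risk group increases.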
Results
In the elastogram, the LN cortex was significantly harder than the medulla in both healthy (p=0.004) and metastatic LNs (p=0.005). Comparing healthy and metastatic LNs, there was no difference in the elasticity distribution of the medulla (p=0.281), but we found a significantly harder cortex in metastatic LNs (p=0.006). The SE of clinical examination, B-mode ultrasound, Doppler ultrasound and sonoelastography was revealed to be 13.3%, 40.0%, 14.3% and 60.0%, respectively, and SP was 88.4%, 96.8%, 95.6% and 79.6%, respectively. The highest SE was achieved by the disjunctive combination of B-mode and elastographic features (cortex >3mm in B-mode or blue cortex in the elastogram, SE=73.3%). The highest SP was achieved by the conjunctive combination of B-mode ultrasound and elastography (cortex >3mm in B-mode and blue cortex in the elastogram, SP=99.3%).
Conclusions
Sonoelastography is a feasible method to visualize the elasticity distribution of LNs. Moreover, sonoelastography is capable of detecting elasticity differences between the cortex and medulla, and between metastatic and healthy LNs. Therefore, sonoelastography yields additional information about axillary LN status and can improve the PPV, although this method is still experimental.
The underlying motivation for the work carried out for this thesis was the growing need for more sustainable technologies. The aim was to synthesize a "palette" of functional nanomaterials using the established technique of hydrothermal carbonization (HTC). The remarkable diversity of HTC was demonstrated together with small but steady advances in how HTC can be manipulated to tailor material properties for specific applications. Two main strategies were used to modify the materials obtained by HTC of glucose, a model precursor representing biomass. The first approach was the introduction of heteroatoms, or "doping" of the carbon framework. Sulfur was introduced for the first time as a dopant in hydrothermal carbon. The synthesis of sulfur and sulfur/nitrogen doped microspheres was presented, whereby it was shown that the binding state of sulfur could be influenced by varying the type of sulfur source. Pyrolysis may additionally be used to tune the heteroatom binding states, which move to more stable motifs with increasing pyrolysis temperature. Importantly, the presence of aromatic binding states in the as-synthesized hydrothermal carbon allows for higher heteroatom retention levels after pyrolysis and hence more efficient use of dopant sources. In this regard, HTC may be considered an "intermediate" step in the formation of conductive heteroatom-doped carbon. To assess the novel hydrothermal carbons in terms of their potential for electrochemical applications, materials with defined nano-architectures and high surface areas were synthesized via templated as well as template-free routes. Sulfur and/or nitrogen doped carbon hollow spheres (CHS) were synthesized using a polystyrene hard-templating approach, and doped carbon aerogels (CA) were synthesized using either the albumin-directed or the borax-mediated hydrothermal carbonization of glucose. Electrochemical testing showed that S/N dual-doped CHS and aerogels derived via the albumin approach exhibited superior catalytic performance compared to solely nitrogen- or sulfur-doped counterparts in the oxygen reduction reaction (ORR) relevant to fuel cells. Using the borax-mediated aerogel formation, nitrogen content and surface area could be tuned, and a carbon aerogel was engineered to maximize electrochemical performance. The obtained sample exhibited drastically improved current densities compared to a platinum catalyst (but a lower onset potential), as well as excellent long-term stability. In the second approach, HTC was carried out at an elevated temperature (550 °C) and pressure (50 bar), corresponding to the superheated vapor (SHV) regime (high-temperature HTC, htHTC). It was demonstrated that the carbon materials obtained via htHTC are distinct from those obtained via low-temperature HTC (ltHTC) and subsequent pyrolysis at 550 °C. No difference in htHTC-derived material properties could be observed between pentoses and hexoses. The material obtained from a polysaccharide exhibited a slightly lower degree of carbonization but was otherwise similar to the monosaccharide-derived samples. It was shown that, in addition to thermally induced carbonization at 550 °C, the SHV environment exerts a catalytic effect on the carbonization process. The resulting materials are chemically inert (i.e. they contain a negligible amount of reactive functional groups) and possess low surface area and electronic conductivity, which distinguishes them from carbon obtained from pyrolysis.
Compared to the materials presented in the previous chapters on chemical modifications of hydrothermal carbon, this makes them ill-suited candidates for electronic applications like lithium ion batteries or electrocatalysts. However, htHTC derived materials could be interesting for applications that require chemical inertness but do not require specific electronic properties. The final section of this thesis therefore revisited the latex hard templating approach to synthesize carbon hollow spheres using htHTC. However, by using htHTC it was possible to carry out template removal in situ because the second heating step at 550 °C was above the polystyrene latex decomposition temperature. Preliminary tests showed that the CHS could be dispersed in an aqueous polystyrene latex without monomer penetrating into the hollow sphere voids. This leaves the stagnant air inside the CHS intact which in turn is promising for their application in heat and sound insulating coatings. Overall the work carried out in this thesis represents a noteworthy development in demonstrating the great potential of sustainable carbon materials.
In the late Palaeozoic fore-arc system of north-central Chile at latitudes 31-32 degrees S, three lithotectonic units (from west to east) are telescoped within a short distance by a Mesozoic strike-slip event (derived peak P-T conditions in brackets): (1) the basally accreted Choapa Metamorphic Complex (CMC; 350-430 degrees C, 6-9 kbar), (2) the frontally accreted Arrayan Formation (AF; 280-320 degrees C, 4-6 kbar) and (3) the retrowedge basin of the Huentelauquen Formation (HF; 280-320 degrees C, 3-4 kbar). In the CMC, Ar-Ar spot ages locally date white-mica formation at peak P-T conditions and during early exhumation at 279-242 Ma. In a local garnet mica-schist intercalation (570-585 degrees C, 11-13 kbar), Ar-Ar spot ages record the ascent from the subduction channel at 307-274 Ma. Portions of the CMC were isobarically heated to 510-580 degrees C at 6.6-8.5 kbar. The age of peak P-T conditions in the AF can only vaguely be approximated at >= 310 Ma by relict fission-track ages, consistent with the observation that frontal accretion occurred prior to basal accretion. Zircon fission-track dating indicates cooling below ~280 degrees C at ~248 Ma in the CMC and the AF, when a regional unconformity also formed. Ar-Ar white-mica spot ages in parts of the CMC and within the entire AF and HF point to heterogeneous resetting during Mesozoic extensional and shortening events at ~245-240 Ma, ~210-200 Ma, ~174-159 Ma and ~142-127 Ma. The zircon fission-track ages are locally reset at 109-96 Ma. All resetting of Ar-Ar white-mica ages is proposed to have occurred by in situ dissolution/precipitation at low temperature in the presence of locally penetrating hydrous fluids. Hence syn- and post-accretionary events in the fore-arc system can still be distinguished and dated in spite of its complex, heterogeneous post-accretional overprint.
Early acquisition of a second language influences the development of language abilities and cognitive functions. In the present study, we used functional Magnetic Resonance Imaging (fMRI) to investigate the impact of early bilingualism on the organization of the cortical language network during sentence production. Two groups of adult multilinguals, proficient in three languages, were tested on a narrative task; early multilinguals acquired the second language before the age of three years, late multilinguals after the age of nine. All participants learned a third language after nine years of age. Comparison of the two groups revealed substantial differences in language-related brain activity for early as well as late acquired languages. Most importantly, early multilinguals preferentially activated a fronto-striatal network in the left hemisphere, whereas the left posterior superior temporal gyrus (pSTG) was activated to a lesser degree than in late multilinguals. The same brain regions were highlighted in previous studies when a non-target language had to be controlled. Hence the engagement of language control in adult early multilinguals appears to be influenced by the specific learning and acquisition conditions during early childhood. Remarkably, our results reveal that the functional control of early and subsequently later acquired languages is similarly affected, suggesting that language experience has a pervasive influence into adulthood. As such, our findings extend the current understanding of control functions in multilinguals.
Bad governance causes economic, social, developmental and environmental problems in many developing countries. Developing countries have adopted a number of reforms that have assisted in achieving good governance. The success of governance reform depends on the starting point of each country – what institutional arrangements exist at the outset and who the people implementing reforms within the existing institutional framework are. This dissertation focuses on how formal institutions (laws and regulations) and informal institutions (culture, habit and conception) affect good governance. Three characteristics central to good governance – transparency, participation and accountability – are studied in the research.
A number of key findings emerged. Governance in Hanoi and Berlin represents the two extremes of the scale: while governance in Berlin is almost at the top of the scale, governance in Hanoi is at the bottom. Good governance in Hanoi is still far from achieved. In Berlin, information about public policies, administrative services and public finance is available, reliable and understandable. People do not encounter any problems accessing public information. In Hanoi, however, public information is not easy to access. There are big differences between Hanoi and Berlin in the three forms of participation. While voting in Hanoi to elect local deputies is a mere formality and compulsory, elections in Berlin are free and fair. The candidates in local elections in Berlin come from different parties, whereas the candidacy of local deputies in Hanoi is thoroughly controlled by the Fatherland Front. Even though the turnout of voters in local deputy elections is close to 90 percent in Hanoi, the legitimacy of both the elections and the process of representation is non-existent because the local deputy candidates are decided by the Communist Party.
The involvement of people in solving local problems is encouraged by the government in Berlin. The different initiatives include citizenry budget, citizen activity, citizen initiatives, etc. Individual citizens are free to participate either individually or through an association.
Lacking transparency and participation, the quality of public service in Hanoi is poor. Citizens seldom get their services on time as required by the regulations. Citizens who want to receive public services can bribe officials directly, use the power of relationships, or pay a third person – the mediator ("Cò" - in Vietnamese).
In contrast, public service delivery in Berlin follows the customer-orientated principle. The quality of service is high in relation to time and cost. Paying speed money, bribery and using relationships to gain preferential public service do not exist in Berlin.
Using the examples of Berlin and Hanoi, it is clear to see how transparency, participation and accountability are interconnected and influence each other. Without a free and fair election as well as participation of non-governmental organisations, civil organisations, and the media in political decision-making and public actions, it is hard to hold the Hanoi local government accountable.
The key differences in formal institutions (regulative and cognitive) between Berlin and Hanoi reflect three main principles: rule of law vs. rule by law, pluralism vs. a one-party monopoly in politics, and a social market economy vs. a market economy with socialist orientation.
In Berlin the logic of appropriateness and codes of conduct are respect for laws, respect of individual freedom and ideas and awareness of community development. People in Berlin take for granted that public services are delivered to them fairly. Ideas such as using money or relationships to shorten public administrative procedures do not exist in the mind of either public officials or citizens.
In Hanoi, under a weak formal framework of good governance, new values and norms (prosperity, achievement) generated in the economic transition interact with the habits of the centrally-planned economy (lying, dependence, passivity) and with traditional values (hierarchy, harmony, family, collectivism) to shape the behaviour of those involved.
In Hanoi, “doing the right thing”, such as complying with the law, has not become “the way it is”.
The unintended consequence of the deliberate reform actions of the Party is the prevalence of corruption. The socialist orientation seems not to have been achieved as the gap between the rich and the poor has widened.
Good governance is not achievable if citizens and officials are concerned only with their self-interest. State and society depend on each other. Theoretically to achieve good governance in Hanoi, institutions (formal and informal) able to create good citizens, officials and deputies should be generated. Good citizens are good by habit rather than by nature.
The rule of law principle is necessary for the professional performance of local administrations and People’s Councils. When the rule of law is applied consistently, the room for informal institutions to function will be reduced.
Promoting good governance in Hanoi is dependent on the need and desire to change the government and people themselves. Good governance in Berlin can be seen to be the result of the efforts of the local government and citizens after a long period of development and continuous adjustment.
Institutional transformation is always a long and complicated process because the change in formal regulations as well as in the way they are implemented may meet strong resistance from the established practice. This study has attempted to point out the weaknesses of the institutions of Hanoi and has identified factors affecting future development towards good governance. But it is not easy to determine how long it will take to change the institutional setting of Hanoi in order to achieve good governance.
The Sun is surrounded by a 10^6 K hot atmosphere, the corona. The corona and the solar wind are fully ionized, and therefore in the plasma state. Magnetic fields play an important role in a plasma, since they bind electrically charged particles to their field lines. EUV spectroscopes, like the SUMER instrument on-board the SOHO spacecraft, reveal a preferred heating of coronal ions and strong temperature anisotropies. Velocity distributions of electrons can be measured directly in the solar wind, e.g. with the 3DPlasma instrument on-board the WIND satellite. They show a thermal core, an anisotropic suprathermal halo, and an anti-solar, magnetic-field-aligned, beam or "strahl". For an understanding of the physical processes in the corona, an adequate description of the plasma is needed. Magnetohydrodynamics (MHD) treats the plasma simply as an electrically conductive fluid. Multi-fluid models consider e.g. protons and electrons as separate fluids. They enable a description of many macroscopic plasma processes. However, fluid models are based on the assumption of a plasma near thermodynamic equilibrium. But the solar corona is far away from this. Furthermore, fluid models cannot describe processes like the interaction with electromagnetic waves on a microscopic scale. Kinetic models, which are based on particle velocity distributions, do not show these limitations, and are therefore well-suited for an explanation of the observations listed above. For the simplest kinetic models, the mirror force in the interplanetary magnetic field focuses solar wind electrons into an extremely narrow beam, which is contradicted by observations. Therefore, a scattering mechanism must exist that counteracts the mirror force. In this thesis, a kinetic model for electrons in the solar corona and wind is presented that provides electron scattering by resonant interaction with whistler waves. The kinetic model reproduces the observed components of solar wind electron distributions, i.e. core, halo, and a "strahl" with finite width. But the model is not only applicable on the quiet Sun. The propagation of energetic electrons from a solar flare is studied, and it is found that scattering in the direction of propagation and energy diffusion influence the arrival times of flare electrons at Earth approximately to the same degree. In the corona, the interaction of electrons with whistler waves does not only lead to scattering, but also to the formation of a suprathermal halo, as it is observed in interplanetary space. This effect is studied both for the solar wind as well as the closed volume of a coronal magnetic loop. The result is of fundamental importance for solar-stellar relations. The quiet solar corona always produces suprathermal electrons. This process is closely related to coronal heating, and can therefore be expected in any hot stellar corona. In the second part of this thesis it is detailed how to calculate growth or damping rates of plasma waves from electron velocity distributions. The emission and propagation of electron cyclotron waves in the quiet solar corona, and that of whistler waves during solar flares, is studied. The latter can be observed as so-called fiber bursts in dynamic radio spectra, and the results are in good agreement with observed bursts.
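The wave-particle coupling invoked here rests on the standard cyclotron resonance condition between electrons and whistler waves (given for orientation; the thesis's detailed kinetic treatment is not reproduced):

    \omega - k_{\parallel} v_{\parallel} = n\,\Omega_e, \qquad n = 0, \pm 1, \pm 2, \ldots

where \omega and k_{\parallel} are the wave frequency and field-aligned wavenumber, v_{\parallel} is the electron velocity along the magnetic field, and \Omega_e is the electron gyrofrequency. Electrons satisfying this condition exchange energy and pitch angle with the waves, which is the kind of scattering that counteracts the mirror-force focusing described above.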
Recent studies have claimed the existence of very massive stars (VMS) up to 300 M⊙ in the local Universe. As this finding may represent a paradigm shift for the canonical stellar upper-mass limit of 150 M⊙, it is timely to discuss the status of the data, as well as the far-reaching implications of such objects. We held a Joint Discussion at the General Assembly in Beijing to discuss (i) the determination of the current masses of the most massive stars, (ii) the formation of VMS, (iii) their mass loss, and (iv) their evolution and final fate. The prime aim was to reach broad consensus between observers and theorists on how to identify and quantify the dominant physical processes.
The European Values Education (EVE) project is a large-scale, cross-national, and longitudinal survey research programme on basic human values. The main topic of its second stage was religion in Europe. Student teachers of several universities in Europe worked together in multicultural exchange groups. Their results are presented in this issue.
Eye movements are a powerful tool to examine cognitive processes. However, in most paradigms little is known about the dynamics present in sequences of saccades and fixations. In particular, the control of fixation durations has been widely neglected in most tasks. As a notable exception, both spatial and temporal aspects of eye-movement control have been thoroughly investigated during reading. There, the scientific discourse was dominated by three controversies: (i) the role of oculomotor vs. cognitive processing in eye-movement control, (ii) the serial vs. parallel processing of words, and (iii) the control of fixation durations. The main purpose of this thesis was to investigate eye movements in tasks that require sequences of fixations and saccades. While reading phenomena served as a starting point, we examined eye guidance in non-reading tasks with the aim of identifying general principles of eye-movement control. In addition, the investigation of eye movements in non-reading tasks helped refine our knowledge about eye-movement control during reading. Our approach included the investigation of eye movements in non-reading experiments as well as the evaluation and development of computational models. I present three main results: First, oculomotor phenomena during reading can also be observed in non-reading tasks (Chapters 2 & 4). Oculomotor processes determine the fixation position within an object. The fixation position, in turn, modulates both the next saccade target and the current fixation duration. Second, predictions of eye-movement models based on sequential attention shifts were falsified (Chapter 3). In fact, our results suggest that distributed processing of multiple objects forms the basis of eye-movement control. Third, fixation durations are under asymmetric control (Chapter 4). While increasing processing demands immediately prolong fixation durations, decreasing processing demands reduce fixation durations only with a temporal delay. We propose a computational model, ICAT, to account for this asymmetric control. In this model, an autonomous timer initiates saccades after random time intervals independent of ongoing processing. However, processing demands that are higher than expected inhibit the execution of the next saccade and thereby prolong the current fixation. On the other hand, lower processing demands do not affect the duration before the next saccade is executed. Since the autonomous timer adjusts to expected processing demands from fixation to fixation, a decrease in processing demands may lead to a temporally delayed reduction of fixation durations. In an extended version of ICAT, we evaluated its performance while simulating both temporal and spatial aspects of eye-movement control. The eye-movement phenomena investigated in this thesis have now been observed in a number of different tasks, which suggests that they represent general principles of eye guidance. I propose that distributed processing of the visual input forms the basis of eye-movement control, while fixation durations are controlled by the principles outlined in ICAT. In addition, oculomotor control contributes considerably to the variability observed in eye movements. Interpretations of the relation between eye movements and cognition strongly benefit from a precise understanding of this interplay.
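To make the asymmetric-control idea concrete, the following is a minimal illustrative sketch in Python (not the published ICAT implementation; the exponential timer, the parameter names and the update rule are assumptions made purely for illustration):

    import random

    def simulate_fixations(demands, mean_interval=0.25, adapt=0.1):
        """Toy illustration of asymmetric control of fixation durations.

        demands: sequence of processing demands (arbitrary units), one per fixation.
        An autonomous timer proposes the next saccade after a random interval;
        demands above the current expectation inhibit (prolong) the fixation now,
        demands below it only lower the expectation for later fixations.
        """
        expected = demands[0] if demands else 1.0
        durations = []
        for demand in demands:
            interval = random.expovariate(1.0 / mean_interval)   # autonomous random timer
            surplus = max(0.0, demand - expected)                # only above-expectation demand acts immediately
            durations.append(interval + surplus * mean_interval) # inhibition prolongs the current fixation
            expected += adapt * (demand - expected)              # expectation adapts from fixation to fixation
        return durations

    # Example: a sudden rise in demand prolongs fixations immediately,
    # whereas a sudden drop shortens them only gradually.
    print(simulate_fixations([1.0, 1.0, 2.0, 0.2, 0.2, 0.2]))

The asymmetry arises because surplus demand feeds into the current fixation directly, while reduced demand only influences subsequent fixations through the slowly adapting expectation.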
For a sequence of Hilbert spaces and continuous linear operators the curvature is defined to be the composition of any two consecutive operators. This is modeled on the de Rham resolution of a connection on a module over an algebra. Of particular interest are those sequences for which the curvature is "small" at each step, e.g., belongs to a fixed operator ideal. In this context we elaborate the theory of Fredholm sequences and show how to introduce the Lefschetz number.
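Schematically (with notation chosen here only for illustration), the setting is a sequence of Hilbert spaces H_i and continuous linear operators d_i together with the curvature

    \cdots \xrightarrow{\;d_{i-1}\;} H_i \xrightarrow{\;d_i\;} H_{i+1} \xrightarrow{\;d_{i+1}\;} H_{i+2} \xrightarrow{\;d_{i+2}\;} \cdots,
    \qquad \kappa_i := d_{i+1}\circ d_i,

so that a complex in the usual sense corresponds to vanishing curvature \kappa_i = 0, while here \kappa_i is only required to be "small", e.g. to belong to a fixed operator ideal.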
The Riemann hypothesis is equivalent to the fact that the reciprocal function 1/zeta(s) extends from the interval (1/2,1) to an analytic function in the quarter-strip 1/2 < Re s < 1, Im s > 0. Function theory allows one to rewrite the condition of analytic continuability in an elegant form amenable to numerical experiments.
The potential increase in frequency and magnitude of extreme floods is currently discussed in terms of global warming and the intensification of the hydrological cycle. The profound knowledge of past natural variability of floods is of utmost importance in order to assess flood risk for the future. Since instrumental flood series cover only the last ~150 years, other approaches to reconstruct historical and pre-historical flood events are needed. Annually laminated (varved) lake sediments are meaningful natural geoarchives because they provide continuous records of environmental changes > 10000 years down to a seasonal resolution. Since lake basins additionally act as natural sediment traps, the riverine sediment supply, which is preserved as detrital event layers in the lake sediments, can be used as a proxy for extreme discharge events. Within my thesis I examined a ~ 8.50 m long sedimentary record from the pre-Alpine Lake Mondsee (Northeast European Alps), which covered the last 7000 years. This sediment record consists of calcite varves and intercalated detrital layers, which range in thickness from 0.05 to 32 mm. Detrital layer deposition was analysed by a combined method of microfacies analysis via thin sections, Scanning Electron Microscopy (SEM), μX-ray fluorescence (μXRF) scanning and magnetic susceptibility. This approach allows characterizing individual detrital event layers and assigning a corresponding input mechanism and catchment. Based on varve counting and controlled by 14C age dates, the main goals of this thesis are (i) to identify seasonal runoff processes, which lead to significant sediment supply from the catchment into the lake basin and (ii) to investigate flood frequency under changing climate boundary conditions. This thesis follows a line of different time slices, presenting an integrative approach linking instrumental and historical flood data from Lake Mondsee in order to evaluate the flood record inferred from Lake Mondsee sediments. The investigation of eleven short cores covering the last 100 years reveals the abundance of 12 detrital layers. Therein, two types of detrital layers are distinguished by grain size, geochemical composition and distribution pattern within the lake basin. Detrital layers, which are enriched in siliciclastic and dolomitic material, reveal sediment supply from the Flysch sediments and Northern Calcareous Alps into the lake basin. These layers are thicker in the northern lake basin (0.1-3.9 mm) and thinner in the southern lake basin (0.05-1.6 mm). Detrital layers, which are enriched in dolomitic components forming graded detrital layers (turbidites), indicate the provenance from the Northern Calcareous Alps. These layers are generally thicker (0.65-32 mm) and are solely recorded within the southern lake basin. In comparison with instrumental data, thicker graded layers result from local debris flow events in summer, whereas thin layers are deposited during regional flood events in spring/summer. Extreme summer floods as reported from flood layer deposition are principally caused by cyclonic activity from the Mediterranean Sea, e.g. July 1954, July 1997 and August 2002. During the last two millennia, Lake Mondsee sediments reveal two significant flood intervals with decadal-scale flood episodes, during the Dark Ages Cold Period (DACP) and the transition from the Medieval Climate Anomaly (MCA) into the Little Ice Age (LIA) suggesting a linkage of transition to climate cooling and summer flood recurrences in the Northeastern Alps. 
In contrast, episodes of intermediate or decreased flood activity occurred during the MCA and the LIA. This indicates a non-straightforward relationship between temperature and flood recurrence, suggesting higher cyclonic activity during climate transitions in the Northeastern Alps. The 7000-year flood chronology reveals 47 debris flows and 269 floods, with shifts towards increased flood activity around 3500 and 1500 varve yr BP (varve yr BP = varve years before present, before present = AD 1950). This significant increase in flood activity coincides with millennial-scale climate cooling that is reported from major Alpine glacier advances and lower tree lines in the European Alps since about 3300 cal. yr BP (calibrated years before present). Despite relatively low flood occurrence prior to 1500 varve yr BP, floods at Lake Mondsee could also have influenced human life in early Neolithic lake dwellings (5750-4750 cal. yr BP). While the first lake dwellings were constructed on wetlands, later lake dwellings were built on piles in the water, suggesting an early flood-risk adaptation of humans and/or a general change of the Late Neolithic culture of lake-dwellers for socio-economic reasons. However, a direct relationship between the final abandonment of the lake dwellings and higher flood frequencies is not evidenced.
We present 3D zero-beta ideal MHD simulations of the solar flare/CME event that occurred in Active Region 11060 on 2010 April 8. The initial magnetic configurations of the two simulations are stable nonlinear force-free field and unstable magnetic field models constructed by Su et al. (2011) using the flux rope insertion method. The MHD simulations confirm that the stable model relaxes to a stable equilibrium, while the unstable model erupts as a CME. Comparisons between observations and MHD simulations of the CME are also presented.
On completeness of root functions of Sturm-Liouville problems with discontinuous boundary operators
(2012)
We consider a Sturm-Liouville boundary value problem in a bounded domain D of R^n. By this is meant that the differential equation is given by a second-order elliptic operator of divergence form in D and the boundary conditions are of Robin type on the boundary bD. The first-order term of the boundary operator is an oblique derivative whose coefficients bear discontinuities of the first kind. Applying the method of weak perturbation of compact self-adjoint operators and the method of rays of minimal growth, we prove the completeness of the root functions related to the boundary value problem in Lebesgue and Sobolev spaces of various types.
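A schematic form of such a problem (coefficients and notation are generic placeholders, not the precise operators of the paper) is

    -\sum_{i,j=1}^{n}\partial_i\big(a_{ij}(x)\,\partial_j u\big) = \lambda\,u \quad \text{in } D,
    \qquad
    \frac{\partial u}{\partial \ell} + b(x)\,u = 0 \quad \text{on } bD,

where the oblique direction field \ell and the coefficient b may carry discontinuities of the first kind along the boundary.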
Nowadays, model-driven engineering (MDE) promises to ease software development by decreasing the inherent complexity of classical software development. In order to deliver on this promise, MDE increases the level of abstraction and automation through the use of domain-specific models (DSMs) and model operations (e.g. model transformations or code generators). DSMs conform to domain-specific modeling languages (DSMLs), which increase the level of abstraction, and model operations are first-class entities of software development because they increase the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, which are essentially caused by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity comprised of the implementation or selection of DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity is concerned with applying MDE to actual software development. Applying MDE is challenging because a collection of DSMs, which conform to potentially heterogeneous DSMLs, is required to completely specify a complex software system. A single DSML can only be used to describe a specific aspect of a software system at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but instead have inherent interdependencies, reflecting (partially) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies are applications of various model operations, which are necessary to keep the degree of automation high. This becomes even worse when addressing the first dimension of complexity: due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is considered a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs. It is considered a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. In addition, the approach is considered a comprehensive model management approach. Since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis is concerned with providing a method for the specification of decoupled yet still highly cohesive complex compositions of heterogeneous model operations. The approach supports two different kinds of compositions – data-flow compositions and context compositions. Data-flow composition is used to define a network of heterogeneous model operations coupled by shared input and output DSMs alone. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail.
In this thesis, context composition provides the ability to use a collection of dependencies as context for the composition of other dependencies, including model operations. In addition, the actual implementations of the model operations that are to be composed do not need to address any composition concerns. The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, which is based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
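As a purely hypothetical illustration of the data-flow style of composition described above (this is not the formalism or tooling of the thesis; the class, operation and model names are invented), model operations can be coupled solely through the models they consume and produce:

    class ModelOperation:
        """A model operation coupled to others only through shared input/output models."""
        def __init__(self, name, inputs, outputs, run):
            self.name, self.inputs, self.outputs, self.run = name, inputs, outputs, run

    def execute_dataflow(operations, models):
        """Repeatedly apply operations whose inputs are available until no new models appear."""
        done = set()
        progressed = True
        while progressed:
            progressed = False
            for op in operations:
                if op.name not in done and all(i in models for i in op.inputs):
                    results = op.run({i: models[i] for i in op.inputs})
                    models.update(results)          # outputs become inputs of downstream operations
                    done.add(op.name)
                    progressed = True
        return models

    # Hypothetical network: a DSM is first transformed, then code is generated from the result.
    ops = [
        ModelOperation("transform", ["design.dsm"], ["platform.dsm"],
                       lambda m: {"platform.dsm": m["design.dsm"] + " -> platform-specific"}),
        ModelOperation("generate",  ["platform.dsm"], ["code"],
                       lambda m: {"code": "// generated from " + m["platform.dsm"]}),
    ]
    print(execute_dataflow(ops, {"design.dsm": "design model"}))

The point of the sketch is only that the operations never reference each other directly; re-running the network after a change to "design.dsm" would re-apply the dependent operations, which is the kind of continuous (re-)application the approach automates.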
Irrwege der Klimapolitik
(2012)
Contents: I. Introduction, II. There Is No Normal Climate, III. Consequences of Climate Change, IV. Consequences of Climate Policy, V. Conclusions
This thesis deals with two theories of international trade: the theory of comparative advantage, which is associated with the name of David Ricardo and dominates current trade theory, and Adam Smith's theory of absolute advantage. Both theories are compared and their assumptions are scrutinised. The former theory is rejected on theoretical and empirical grounds in favour of the latter. On the basis of the theory of absolute advantage, developments under free international trade are examined, whereby the focus is on trade between industrial and underdeveloped countries. The main conclusions are that trade patterns are determined by absolute production cost advantages and that the gap between developed and poor countries is not reduced but rather increased by free trade.
This thesis is focussed on the electronic properties of the new material class of topological insulators. Spin- and angle-resolved photoelectron spectroscopy has been applied to reveal several unique properties of the surface state of these materials. The first part of this thesis introduces the methodical background of these well-established experimental techniques.
In the following chapter, the theoretical concept of topological insulators is introduced. Starting from the prominent example of the quantum Hall effect, the application of topological invariants to classify material systems is illuminated. It is explained how, in the presence of time reversal symmetry, which is broken in the quantum Hall phase, strong spin-orbit coupling can drive a system into a topologically non-trivial phase. The prediction of the quantum spin Hall effect in two-dimensional insulators and its generalization to the three-dimensional case of topological insulators are reviewed together with the first experimental realization of a three-dimensional topological insulator in the Bi1-xSbx alloys reported in the literature.
The experimental part starts with the introduction of the Bi2X3 (X=Se, Te) family of materials. Recent theoretical predictions and experimental findings on the bulk and surface electronic structure of these materials are introduced in close discussion with our own experimental results. Furthermore, it is revealed that the topological surface state of Bi2Te3 shares its orbital symmetry with the bulk valence band, and the observed temperature-induced shift of the chemical potential is most likely unmasked as a doping effect due to residual gas adsorption.
The surface state of Bi2Te3 is found to be highly spin polarized, with a polarization value of about 70% in a macroscopic area, while in Bi2Se3 the polarization appears reduced, not exceeding 50%. We argue, however, that the polarization is most likely only extrinsically limited by the finite angular resolution and the lacking detectability of the out-of-plane component of the electron spin. A further argument is based on the reduced surface quality of the single crystals after cleavage and, for Bi2Se3, a sensitivity of the electronic structure to photon exposure.
We probe the robustness of the topological surface state in Bi2X3 against surface impurities in Chapter 5. This robustness is provided through the protection by time reversal symmetry. Silver deposited on the (111) surface of Bi2Se3 leads to a strong electron doping, but the surface state is observed up to a deposited Ag mass equivalent to one atomic monolayer. The opposite sign of doping, i.e., hole-like, is observed by exposing Bi2Te3 to oxygen. But while the n-type shift of Ag on Bi2Se3 appears to be more or less rigid, O2 lifts the Dirac point of the topological surface state in Bi2Te3 out of the valence band minimum at $\Gamma$. By increasing the oxygen dose further, it is possible to shift the Dirac point to the Fermi level, while the valence band stays well below. The effect is found to be reversible by warming up the samples, which is interpreted in terms of physisorption of O2.
For magnetic impurities, i.e., Fe, we find a similar behavior as in the case of Ag on both Bi2Se3 and Bi2Te3. However, in this case the robustness is unexpected, since magnetic impurities are capable of breaking time reversal symmetry, which should introduce a gap in the surface state at the Dirac point and in turn remove the protection. We argue that the fact that the surface state shows no gap must be attributed to a missing magnetization of the Fe overlayer. In Bi2Te3 we are able to observe the surface state for deposited iron mass equivalents in the monolayer regime. Furthermore, we gain control over the sign of doping through the sample temperature during deposition.
Chapter 6 is devoted to the lifetime broadening of the photoemission signal from the topological surface states of Bi2Se3 and Bi2Te3. It is revealed that the hexagonal warping of the surface state in Bi2Te3 introduces an anisotropy for electrons traveling along the two distinct high-symmetry directions of the surface Brillouin zone, i.e., $\Gamma$K and $\Gamma$M. We show that the phonon coupling strength to the surface electrons in Bi2Te3 is in good agreement with the theoretical prediction but, nevertheless, higher than one may expect. We argue that electron-phonon coupling is one of the main contributions to the decay of photoholes, but the relatively small size of the Fermi surface limits the number of phonon modes that may scatter off electrons. This effect is manifested in the energy dependence of the imaginary part of the electron self-energy of the surface state, which shows a decay towards higher binding energies, in contrast to the monotonic increase proportional to E$^2$ expected in Fermi liquid theory due to electron-electron interaction.
Furthermore, the effect of the surface impurities of Chapter 5 on the quasiparticle lifetimes is investigated. We find that Fe impurities have a much stronger influence on the lifetimes than Ag. Moreover, we find that this influence is stronger independent of the sign of the doping. We argue that this observation suggests a minor contribution of the warping to the increased scattering rates, in contrast to current belief. This is additionally confirmed by the observation that the scattering rates increase further with increasing silver amount while the doping stays constant, and by the fact that clean Bi2Se3 and Bi2Te3 show very similar scattering rates regardless of the much stronger warping in Bi2Te3.
In the last chapter we report on a strong circular dichroism in the angle distribution of the photoemission signal of the surface state of Bi2Te3. We show that the color pattern obtained by calculating the difference between photoemission intensities measured with opposite photon helicity reflects the pattern expected for the spin polarization. However, we find a strong influence on strength and even sign of the effect when varying the photon energy. The sign change is qualitatively confirmed by means of one-step photoemission calculations conducted by our collaborators from the LMU München, while the calculated spin polarization is found to be independent of the excitation energy. Experiment and theory together unambiguously uncover the dichroism in these systems as a final state effect and the question in the title of the chapter has to be negated: Circular dichroism in the angle distribution is not a new spin sensitive technique.
Immune genes of the major histocompatibility complex (MHC) constitute a central component of the adaptive immune system and play an essential role in parasite resistance and associated life-history strategies. In addition to pathogen-mediated selection, sexual selection mechanisms have also been identified as main drivers of the typically observed high levels of polymorphism in functionally important parts of the MHC. The recognition of the individual MHC constitution is presumed to be mediated through olfactory cues. Indeed, MHC genes are in physical linkage with olfactory receptor genes and alter the individual body odour. Moreover, they are expressed on sperm and trophoblast cells. Thus, MHC-mediated sexual selection processes might not only act in direct mate choice decisions, but also through cryptic processes during reproduction. Bats (Chiroptera) represent the second largest mammalian order and have been identified as important vectors of newly emerging infectious diseases affecting humans and wildlife. In addition, they are interesting study subjects in evolutionary ecology in the context of olfactory communication, mate choice and associated fitness benefits. Thus, it is surprising that Chiroptera belong to the least studied mammalian taxa in terms of their MHC evolution. In my doctoral thesis I aimed to gain insights into the evolution and diversity pattern of functional MHC genes in some of the major New World bat families by establishing species-specific primers through genome-walking into unknown flanking parts of familiar sites. Further, I took a free-ranging population of the lesser bulldog bat (Noctilio albiventris) in Panama as an example to understand the functional importance of the individual MHC constitution in parasite resistance and reproduction, as well as the possible underlying selective forces shaping the observed diversity. My studies indicated that the typical MHC characteristics observed in other mammalian orders, such as evidence for balancing and positive selection as well as recombination and gene conversion events, are also present in bats and shape their MHC diversity. I found a wide range of copy number variation of expressed DRB loci in the investigated species. In Saccopteryx bilineata, a species with a highly developed olfactory communication system, I found an exceptionally high number of MHC locus duplications generating high levels of variability at the individual level, which has never been described for any other mammalian species so far. My studies included for the first time phylogenetic relationships of MHC genes in bats, and I found signs of a family-specific, independent mode of evolution of duplicated genes, regardless of whether the highly variable exon 2 (coding for the antigen binding region of the molecule) or the more conserved exons (3, 4; encoding protein-stabilizing parts) were considered, indicating a monophyletic origin of duplicated loci within families. This result questions the generally assumed pattern of MHC evolution in mammals, where duplicated genes of different families usually cluster together, suggesting that duplication occurred before speciation took place, which implies a trans-species mode of evolution. However, I did find a trans-species mode of evolution within genera (Noctilio, Myotis) based on exon 2, signified by an intermingled clustering of DRB alleles. The gained knowledge on MHC sequence evolution in major New World bat families will facilitate future MHC investigations in this order.
In the N. albiventris study population, the single expressed MHC class II DRB gene showed high sequence polymorphism, moderate allelic variability and high levels of population-wide heterozygosity. Whereas demographic processes had minor relevance in shaping the diversity pattern, I found clear evidence for parasite-mediated selection. This was evident from historical positive Darwinian selection maintaining diversity in the functionally important antigen binding sites, and from specific MHC alleles that were associated with low and high ectoparasite burden according to predictions of the ‘frequency-dependent selection hypothesis’. Parasite resistance has been suggested to play an important role in mediating costly life-history trade-offs, leading, for example, to MHC-mediated benefits in sexual selection. The ‘good genes model’ predicts that males with a genetically well-adapted immune system for defending against harmful parasites have the ability to allocate more resources to reproductive effort. I found support for this prediction since non-reproductive adult N. albiventris males more often carried an allele associated with high parasite loads, which differentiated them genetically from reproductively active males as well as from subadults, indicating a reduced transmission of this allele in subsequent generations. In addition, they suffered from increased ectoparasite burden, which presumably reduced the resources they could invest in reproduction. Another sign of sexual selection was the observation of sex-specific differences in heterozygosity, with females showing lower levels of heterozygosity than males. This signifies that the sexes differ in their selection pressures, presumably through MHC-mediated molecular processes during reproduction resulting in a male-specific heterozygosity advantage. My data make clear that parasite-mediated selection and sexual selection are interactive and operate together to shape diversity at the MHC. Furthermore, my thesis is one of the rare studies contributing to filling the gap between MHC-mediated effects on co-evolutionary processes in parasite-host interactions and on aspects of life-history evolution.
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
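The core idea of active evaluation — sampling test instances from an instrumental distribution and re-weighting them so that the risk estimate remains consistent — can be sketched as follows (a simplified illustration for the error rate only; the thesis derives the optimal instrumental distribution, which is here simply passed in, and all names are illustrative):

    import numpy as np

    def active_error_estimate(predictions, test_weights, instrumental_weights, oracle, n_queries, seed=0):
        """Importance-weighted error-rate estimate from actively selected test instances.

        predictions:          model predictions for the pool of unlabeled instances
        test_weights:         (unnormalized) probabilities p of each instance under the test distribution
        instrumental_weights: (unnormalized) probabilities q of the instrumental sampling distribution
        oracle:               function idx -> true label (the costly labeling step)
        """
        rng = np.random.default_rng(seed)
        p = np.asarray(test_weights, dtype=float);  p /= p.sum()
        q = np.asarray(instrumental_weights, dtype=float);  q /= q.sum()
        idx = rng.choice(len(p), size=n_queries, replace=True, p=q)   # query labels where q puts mass
        losses = np.array([float(predictions[i] != oracle(i)) for i in idx])
        weights = p[idx] / q[idx]                                      # importance weights p/q
        return np.sum(weights * losses) / np.sum(weights)              # self-normalized estimator

    # Toy usage with a hypothetical pool of five instances and a uniform test distribution:
    true_labels = [0, 1, 1, 0, 1]
    preds = [0, 1, 0, 0, 0]
    print(active_error_estimate(preds, [1]*5, [1, 1, 3, 1, 3], lambda i: true_labels[i], n_queries=4))

Choosing q to concentrate the label queries on informative instances is precisely what reduces the estimation error for a fixed labeling budget compared with sampling directly from the test distribution.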
Background
Outcome quality management requires the consecutive registration of defined variables. The aim was to identify relevant parameters in order to objectively assess the in-patient rehabilitation outcome.
Methods
From February 2009 to June 2010, 1253 patients (70.9 ± 7.0 years, 78.1% men) at 12 rehabilitation clinics were enrolled. Items concerning sociodemographic data, the impairment group (surgery, conservative/interventional treatment), cardiovascular risk factors, structural and functional parameters and subjective health were tested with respect to their measurability, sensitivity to change and propensity to be influenced by rehabilitation.
Results
The majority of patients (61.1%) were referred for rehabilitation after cardiac surgery, 38.9% after conservative or interventional treatment for an acute coronary syndrome. Functionally relevant comorbidities were seen in 49.2% (diabetes mellitus, stroke, peripheral artery disease, chronic obstructive lung disease). In three key areas 13 parameters were identified as being sensitive to change and subject to modification by rehabilitation: cardiovascular risk factors (blood pressure, low-density lipoprotein cholesterol, triglycerides), exercise capacity (resting heart rate, maximal exercise capacity, maximal walking distance, heart failure, angina pectoris) and subjective health (IRES-24 (indicators of rehabilitation status): pain, somatic health, psychological well-being and depression as well as anxiety on the Hospital Anxiety and Depression Scale).
Conclusion
The outcome of in-patient rehabilitation in elderly patients can be comprehensively assessed by the identification of appropriate key areas, that is, cardiovascular risk factors, exercise capacity and subjective health. This may well serve as a benchmark for internal and external quality management.
Mineral chemistry and thermobarometry of the staurolite-chloritoid schists from Poshtuk, NW Iran
(2012)
The Poshtuk metapelitic rocks in northwestern Iran underwent two main phases of regional and contact metamorphism. Microstructures, textural features and field relations indicate that these rocks underwent a polymetamorphic history. The dominant metamorphic assemblage of the metapelites is garnet, staurolite, chloritoid, chlorite, muscovite and quartz, which grew mainly syntectonically during the later contact metamorphic event. Peak metamorphic conditions of this event were reached at about 580 °C and ~3–4 kbar, indicating that the event occurred under high-temperature and low-pressure conditions (HT/LP metamorphism), which reflects the high heat flow in this part of the crust. This event was mainly controlled by advective heat input through magmatic intrusions into all levels of the crust. These extensive Eocene metamorphic and magmatic activities can be associated with the early Alpine Orogeny, which in this area resulted from the convergence between the Arabian and Eurasian plates, and with the Cenozoic closure of the Tethys oceanic tract(s).
This study investigates phenomena that have been claimed to be indicative of Specific Language Impairment (SLI) in German, focusing on subject-verb agreement marking. Longitudinal data from fourteen German-speaking children with SLI, seven monolingual and seven Turkish-German successive bilingual children, were examined. We found similar patterns of impairment in the two participant groups. Both the monolingual and the bilingual children with SLI had correct (present vs. preterit) tense marking and produced syntactically complex sentences such as embedded clauses and wh-questions, but were limited in reliably producing correct agreement-marked verb forms. These contrasts indicate that agreement marking is impaired in German-speaking children with SLI, without any necessary concurrent deficits in either the CP-domain or in tense marking. Our results also show that it is possible to identify SLI from an early successive bilingual child's performance in one of her two languages.
Tensorial spacetime geometries carrying predictive, interpretable and quantizable matter dynamics
(2012)
Which tensor fields G on a smooth manifold M can serve as a spacetime structure? In the first part of this thesis, it is found that only a severely restricted class of tensor fields can provide classical spacetime geometries, namely those that can carry predictive, interpretable and quantizable matter dynamics. The obvious dependence of this characterization of admissible tensorial spacetime geometries on specific matter is not a weakness, but rather presents an insight: it was Maxwell theory that justified Einstein in promoting Lorentzian manifolds to the status of a spacetime geometry. Any matter that does not mimic the structure of Maxwell theory will force us to choose another geometry on which the matter dynamics of interest are predictive, interpretable and quantizable. These three physical conditions on matter impose three corresponding algebraic conditions on the totally symmetric contravariant coefficient tensor field P that determines the principal symbol of the matter field equations in terms of the geometric tensor G: the tensor field P must be hyperbolic, time-orientable and energy-distinguishing. Remarkably, these physically necessary conditions on the geometry are mathematically already sufficient to realize all kinematical constructions familiar from Lorentzian geometry, for precisely the same structural reasons. We were able to show this by employing a subtle interplay of convex analysis, the theory of partial differential equations and real algebraic geometry. In the second part of this thesis, we then explore general properties of any hyperbolic, time-orientable and energy-distinguishing tensorial geometry. Physically most important are the construction of freely falling non-rotating laboratories, the appearance of admissible modified dispersion relations to particular observers, and the identification of a mechanism that explains why massive particles that are faster than some massless particles can radiate off energy until they are slower than all massless particles in any hyperbolic, time-orientable and energy-distinguishing geometry. In the third part of the thesis, we explore how tensorial spacetime geometries fare when one wants to quantize particles and fields on them. This study is motivated, in part, by the need to provide the tools to calculate the rate at which superluminal particles radiate off energy to become infraluminal, as explained above. Remarkably, it is again the three geometric conditions of hyperbolicity, time-orientability and energy-distinguishability that allow the quantization of general linear electrodynamics on an area metric spacetime and the quantization of massive point particles obeying any admissible dispersion relation. We explore the issue of field equations of all possible derivative order in a rather systematic fashion, and prove a practically very useful theorem that determines Dirac algebras allowing the reduction of derivative orders. The final part of the thesis presents the sketch of a truly remarkable result that was obtained building on the work of the present thesis. Particularly based on the subtle duality maps between momenta and velocities in general tensorial spacetimes, it could be shown that gravitational dynamics for hyperbolic, time-orientable and energy-distinguishing geometries need not be postulated, but the formidable physical problem of their construction can be reduced to a mere mathematical task: the solution of a system of homogeneous linear partial differential equations.
This far-reaching physical result on modified gravity theories is a direct, but difficult to derive, outcome of the findings in the present thesis. Throughout the thesis, the abstract theory is illustrated through instructive examples.
This paper examines and develops matrix methods to approximate the eigenvalues of a fourth-order Sturm-Liouville problem subject to a kind of fixed boundary conditions; furthermore, it extends the matrix methods to a kind of general boundary conditions. The idea of the methods comes from finite differences and Numerov's method as well as from boundary value methods for second-order regular Sturm-Liouville problems. Moreover, the determination of correction-term formulas for the matrix methods is investigated in order to obtain better approximations of the problem with fixed boundary conditions, since the exact eigenvalues for q = 0 are known in this case. Finally, some numerical examples are presented.
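As a simple illustration of the matrix-method idea (not the scheme developed in the paper), the sketch below discretizes the fourth-order problem y'''' = λy with q = 0 and simply supported ("hinged") boundary conditions, for which the exact eigenvalues (kπ)^4 are known, and compares them with the eigenvalues of the resulting finite-difference matrix. The grid size and boundary-condition choice are illustrative assumptions.

```python
import numpy as np

def fourth_order_eigs(n=200, num=4):
    """Approximate eigenvalues of y'''' = lambda * y on (0, 1) with
    y(0) = y''(0) = y(1) = y''(1) = 0, using a finite-difference matrix.
    Exact eigenvalues for this case are (k*pi)**4, k = 1, 2, ..."""
    h = 1.0 / (n + 1)
    # Standard second-difference matrix with Dirichlet boundary conditions.
    D2 = (np.diag(-2.0 * np.ones(n)) +
          np.diag(np.ones(n - 1), 1) +
          np.diag(np.ones(n - 1), -1)) / h**2
    # For hinged ends the discrete biharmonic operator is (D2)^2.
    A = D2 @ D2
    return np.sort(np.linalg.eigvalsh(A))[:num]

approx = fourth_order_eigs()
exact = np.array([(k * np.pi)**4 for k in range(1, 5)])
print(np.c_[approx, exact])   # the smallest eigenvalues agree to several digits
```

Correction terms of the kind mentioned in the paper improve on the plain second-order accuracy that such a basic discretization delivers.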
This article investigates the nature of preposition copying and preposition pruning structures in present-day English. We begin by illustrating the two phenomena and consider how they might be accounted for in syntactic terms, and go on to explore the possibility that preposition copying and pruning arise for processing reasons. We then report on two acceptability judgement experiments examining the extent to which native speakers of English are sensitive to these types of 'error' in language comprehension. Our results indicate that preposition copying creates redundancy rather than ungrammaticality, whereas preposition pruning creates processing problems for comprehenders that may render it unacceptable in timed (but not necessarily in untimed) judgement tasks. Our findings furthermore illustrate the usefulness of combining corpus studies and experimentally elicited data for gaining a clearer picture of usage and acceptability, and the potential benefits of examining syntactic phenomena from both a theoretical and a processing perspective.
The size of plant organs, such as leaves and flowers, is determined by an interaction of genotype and environmental influences. Organ growth occurs through the two successive processes of cell proliferation followed by cell expansion. A number of genes influencing either or both of these processes and thus contributing to the control of final organ size have been identified in the last decade. Although the overall picture of the genetic regulation of organ size remains fragmentary, two transcription factor/microRNA-based genetic pathways are emerging in the control of cell proliferation. However, despite this progress, fundamental questions remain unanswered, such as the problem of how the size of a growing organ could be monitored to determine the appropriate time for terminating growth. While genetic analysis will undoubtedly continue to advance our knowledge about size control in plants, a deeper understanding of this and other basic questions will require including advanced live-imaging and mathematical modeling, as impressively demonstrated by some recent examples. This should ultimately allow the comparison of the mechanisms underlying size control in plants and in animals to extract common principles and lineage-specific solutions.
Structuring process models
(2012)
One can fairly adopt the ideas of Donald E. Knuth to conclude that process modeling is both a science and an art. Process modeling does have an aesthetic sense. Similar to composing an opera or writing a novel, process modeling is carried out by humans who undergo creative practices when engineering a process model. Therefore, the very same process can be modeled in a myriad of ways. Once modeled, processes can be analyzed by employing scientific methods. Usually, process models are formalized as directed graphs, with nodes representing tasks and decisions, and directed arcs describing temporal constraints between the nodes. Common process definition languages, such as Business Process Model and Notation (BPMN) and Event-driven Process Chain (EPC), allow process analysts to define models with arbitrarily complex topologies. The absence of structural constraints supports creativity and productivity, as there is no need to force ideas into a limited set of available structural patterns. Nevertheless, it is often preferable that models follow certain structural rules. A well-known structural property of process models is (well-)structuredness. A process model is (well-)structured if and only if every node with multiple outgoing arcs (a split) has a corresponding node with multiple incoming arcs (a join), and vice versa, such that the set of nodes between the split and the join induces a single-entry-single-exit (SESE) region; otherwise the process model is unstructured. The motivations for well-structured process models are manifold: (i) Well-structured process models are easier to lay out for visual representation as their formalizations are planar graphs. (ii) Well-structured process models are easier for humans to comprehend. (iii) Well-structured process models tend to have fewer errors than unstructured ones and it is less likely that new errors are introduced when modifying a well-structured process model. (iv) Well-structured process models are better suited for analysis, as many existing formal techniques are applicable only to well-structured process models. (v) Well-structured process models are better suited for efficient execution and optimization, e.g., when discovering independent regions of a process model that can be executed concurrently. Consequently, there are process modeling languages that encourage well-structured modeling, e.g., Business Process Execution Language (BPEL) and ADEPT. However, well-structured process modeling implies some limitations: (i) There exist processes that cannot be formalized as well-structured process models. (ii) There exist processes that, when formalized as well-structured process models, require a considerable duplication of modeling constructs. Rather than expecting well-structured modeling from the start, we advocate for the absence of structural constraints when modeling. Afterwards, automated methods can suggest, upon request and whenever possible, alternative formalizations that are "better" structured, preferably well-structured. In this thesis, we study the problem of automatically transforming process models into equivalent well-structured models. The developed transformations are performed under a strong notion of behavioral equivalence which preserves concurrency. The findings are implemented in a tool, which is publicly available.
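To make the notion of (well-)structuredness concrete, the following sketch checks a small acyclic process graph by repeatedly applying two reduction rules: contracting sequence nodes and collapsing parallel edges between a split and its matching join. This is a naive illustration of the definition, not the behavior-preserving transformation algorithm developed in the thesis; node names and the graph encoding are hypothetical, and loops are not handled.

```python
from collections import Counter

def is_well_structured(edges, start, end):
    """Naive check for acyclic process graphs: reduce sequences and split/join
    blocks; the model is well-structured if only the edge (start, end) remains."""
    edges = Counter(edges)            # multiset of directed edges (u, v)
    changed = True
    while changed:
        changed = False
        # Rule 1: collapse parallel edges between a split u and a join v.
        for (u, v), mult in list(edges.items()):
            if mult > 1:
                edges[(u, v)] = 1
                changed = True
        # Rule 2: contract a node with exactly one incoming and one outgoing edge.
        nodes = {n for e in edges for n in e} - {start, end}
        for n in nodes:
            ins = [(u, v) for (u, v) in edges if v == n]
            outs = [(u, v) for (u, v) in edges if u == n]
            if len(ins) == 1 and len(outs) == 1 and edges[ins[0]] == 1 and edges[outs[0]] == 1:
                del edges[ins[0]], edges[outs[0]]
                edges[(ins[0][0], outs[0][1])] += 1
                changed = True
                break
    return dict(edges) == {(start, end): 1}

# A properly nested split/join block ...
print(is_well_structured([("s", "x"), ("x", "a"), ("x", "b"),
                          ("a", "j"), ("b", "j"), ("j", "e")], "s", "e"))   # True
# ... and an unstructured "overlapping" split/join pattern.
print(is_well_structured([("s", "x"), ("x", "a"), ("x", "b"),
                          ("a", "y"), ("a", "z"), ("b", "y"), ("b", "z"),
                          ("y", "j"), ("z", "j"), ("j", "e")], "s", "e"))   # False
```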
In this work, the synthesis of biopolymer-based hydrogel networks with defined architecture is presented. In order to obtain materials with defined properties, the chemoselective copper-catalyzed azide-alkyne cycloaddition (or Click Chemistry) was used for the synthesis of gelatin-based hydrogels. Alkyne-functionalized gelatin was reacted with four different diazide crosslinkers above its sol-gel transition to suppress the formation of triple helices. By variation of the crosslinking density and the crosslinker flexibility, the swelling (Q: 150-470 vol.-%) and the Young’s and shear moduli (E: 50 kPa - 635 kPa, G’: 0.1 kPa - 16 kPa) could be tuned in the kPa range. In order to understand the network structure, a method based on the labelling of free functional groups within the hydrogel was developed. Gelatin-based hydrogels were incubated with alkyne-functionalized fluorescein to detect the free azide groups resulting from the formation of dangling chains. Gelatin hydrogels were also incubated with azido-functionalized fluorescein to check the presence of alkyne groups available for the attachment of bioactive molecules. By using confocal laser scanning microscopy and fluorescence spectroscopy, the amount of crosslinking, grafting and free alkyne groups could be determined. Dangling chains were observed in samples prepared by using an excess of crosslinker and also when using equimolar amounts of alkyne:azide. In the latter case the amount of dangling chains was affected by the crosslinker structure. Specifically, 0.1% of dangling chains were found using 4,4’-diazido-2,2’-stilbene-disulfonic acid as crosslinker, 0.06% with 1,8-diazidooctane, 0.05% with 1,12-diazidododecane and 0.022% with PEG-diazide. This observation could be explained by considering the structure of the crosslinkers. During network formation, the movements of the gelatin chains are restricted due to the formation of covalent netpoints. Further crosslinking is possible only in the case of crosslinkers that are flexible and long enough to reach another chain. The method used to obtain defined gelatin-based hydrogels also enabled the synthesis of hyaluronic acid-based hydrogels with tailorable properties. Alkyne-functionalized hyaluronic acid was crosslinked with three different linkers having two terminal azide functionalities. By variation of the crosslinking density and crosslinker type, hydrogels with elastic moduli in the range of 0.5-3 kPa have been prepared. The variation of the crosslinking density and crosslinker type furthermore also influenced the hydrolytic and enzymatic degradation of gelatin-based hydrogels. Hydrogels with a low crosslinker amount experienced a faster mass loss and decrease in elastic modulus compared to hydrogels with higher crosslinker content. Moreover, the structure of the crosslinker had a strong influence on the enzymatic degradation. Hydrogels containing a crosslinker with a rigid structure were much more resistant to enzymatic degradation than hydrogels containing a flexible crosslinker. During hydrolytic degradation, the hydrogel became softer while maintaining the same outer dimensions. These observations are in agreement with a bulk degradation mechanism, while the decrease in size of the hydrogels during enzymatic degradation suggested a surface erosion mechanism. Because of the use of small amounts of crosslinker (0.002 mol.% - 0.02 mol.%), the networks synthesized can still be defined as biopolymer-based hydrogels.
However, they contain a small percentage of synthetic residues. Alternatively, a possible method to obtain biopolymer-based telechelics, which could be used as crosslinkers, was investigated. Gelatin-based fragments with defined molecular weight were obtained by controlled degradation of gelatin with hydroxylamine, due to its specific action on asparaginyl-glycine bonds. The reaction of gelatin with hydroxylamine resulted in fragments with molecular weights of 15, 25, 37, and 50 kDa (determined by SDS-PAGE) independently of the reaction time and conditions. Each of these fragments could be potentially used for the synthesis of hydrogels in which all components are biopolymer-based materials.
We consider compact Riemannian spin manifolds without boundary equipped with orthogonal connections. We investigate the induced Dirac operators and the associated commutative spectral triples. In case of dimension four and totally anti-symmetric torsion we compute the Chamseddine-Connes spectral action, deduce the equations of motions and discuss critical points.
We consider orthogonal connections with arbitrary torsion on compact Riemannian manifolds. For the induced Dirac operators, twisted Dirac operators and Dirac operators of Chamseddine-Connes type we compute the spectral action. In addition to the Einstein-Hilbert action and the bosonic part of the Standard Model Lagrangian we find the Holst term from Loop Quantum Gravity, a coupling of the Holst term to the scalar curvature and a prediction for the value of the Barbero-Immirzi parameter.
ASP modulo CSP
(2012)
We present the hybrid ASP solver clingcon, combining the simple modeling language and the high-performance Boolean solving capacities of Answer Set Programming (ASP) with techniques for using non-Boolean constraints from the area of Constraint Programming (CP). The new clingcon system features an extended syntax supporting global constraints and optimize statements for constraint variables. The major technical innovation improves the interaction between the ASP and CP solvers through elaborate learning techniques based on irreducible inconsistent sets. A broad empirical evaluation shows that these techniques yield a performance improvement of an order of magnitude.
SXP 1062 is an exceptional case of a young neutron star in a wind-fed high-mass X-ray binary associated with a supernova remnant. The unique combination of its measured spin period, spin-period derivative, luminosity and young age makes this source a key probe for the physics of accretion and neutron star evolution. Theoretical models proposed to explain the properties of SXP 1062 will be tested with new data.
The clumping of massive star winds is an established paradigm, which is confirmed by multiple lines of evidence and is supported by stellar wind theory. We use the results from time-dependent hydrodynamical models of the instability in the line-driven wind of a massive supergiant star to derive the time-dependent accretion rate onto a compact object in the Bondi-Hoyle-Lyttleton approximation. The strong density and velocity fluctuations in the wind result in strong variability of the synthetic X-ray light curves. Photoionization of inhomogeneous winds is different from the photoionization of smooth winds: the degree of ionization is affected by the wind clumping. The wind clumping must also be taken into account when comparing the observed and model spectra of the photoionized stellar wind.
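A minimal sketch of the Bondi-Hoyle-Lyttleton estimate used in such studies: the accretion rate scales as Mdot ≈ 4π(GM)²ρ / (v_rel² + c_s²)^(3/2), so density and velocity fluctuations of the clumped wind translate directly into variability of the accretion luminosity. The clump statistics and all numerical values below are purely illustrative, not taken from the hydrodynamical models.

```python
import numpy as np

G = 6.674e-8            # gravitational constant, cm^3 g^-1 s^-2
M_NS = 1.4 * 1.989e33   # neutron-star mass, g
C_S = 1.0e6             # wind sound speed, cm/s (illustrative)

def bhl_accretion_rate(rho, v_rel, m=M_NS, c_s=C_S):
    """Bondi-Hoyle-Lyttleton mass accretion rate in g/s."""
    return 4.0 * np.pi * (G * m)**2 * rho / (v_rel**2 + c_s**2)**1.5

# Illustrative clumped wind: log-normal density and mildly fluctuating velocity.
rng = np.random.default_rng(0)
n_steps = 5000
rho = 1e-14 * rng.lognormal(mean=0.0, sigma=1.0, size=n_steps)        # g/cm^3
v = 1.5e8 * (1.0 + 0.2 * rng.standard_normal(n_steps))                # cm/s

mdot = bhl_accretion_rate(rho, np.abs(v))
lx = 0.1 * mdot * (3e10)**2      # crude X-ray luminosity, L = eta * Mdot * c^2
print(f"median L_X = {np.median(lx):.2e} erg/s, max/median = {lx.max()/np.median(lx):.1f}")
```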
Gravitational waves are among the most exciting predictions of Einstein's theory of gravitation that have not yet been proven experimentally by a direct detection. These are tiny distortions of the spacetime itself, and a world-wide effort to directly measure them for the first time with a network of large-scale laser interferometers is currently ongoing and expected to provide positive results within this decade. One potential source of measurable gravitational waves is the inspiral and merger of two compact objects, such as binary black holes. Successfully finding their signature in the noise-dominated data of the detectors crucially relies on accurate predictions of what we are looking for. In this thesis, we present a detailed study of how the most complete waveform templates can be constructed by combining the results from (A) analytical expansions within the post-Newtonian framework and (B) numerical simulations of the full relativistic dynamics. We analyze various strategies to construct complete hybrid waveforms that consist of a post-Newtonian inspiral part matched to numerical-relativity data. We elaborate on existing approaches for nonspinning systems by extending the accessible parameter space and introducing an alternative scheme formulated in the Fourier domain. Our methods can now be readily applied to multiple spherical-harmonic modes and precessing systems. In addition, we analyze in detail the accuracy of hybrid waveforms with the goal of quantifying how the numerous sources of error in the approximation techniques affect the application of such templates in real gravitational-wave searches. This is of major importance for the future construction of improved models, but also for the correct interpretation of gravitational-wave observations that are made utilizing any complete waveform family. In particular, we comprehensively discuss how long the numerical-relativity contribution to the signal has to be in order to make the resulting hybrids accurate enough, and for currently feasible simulation lengths we assess the physics one can potentially do with template-based searches.
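Accuracy statements about hybrid waveforms are typically phrased in terms of the noise-weighted overlap (match) between two waveforms. The sketch below computes a simple match, maximized over cyclic time shifts and overall sign, under the simplifying assumption of white noise; real analyses use a detector power spectral density and also maximize over phase and other parameters. The chirp-like test signal is purely illustrative.

```python
import numpy as np

def match(h1, h2):
    """Overlap of two real, equal-length waveforms, maximized over cyclic time
    shifts and overall sign, assuming white noise (flat power spectral density)."""
    # Cross-correlation over all time shifts via the FFT correlation theorem.
    corr = np.fft.ifft(np.fft.fft(h1) * np.conj(np.fft.fft(h2))).real
    return np.max(np.abs(corr)) / (np.linalg.norm(h1) * np.linalg.norm(h2))

# Illustrative check: a chirp-like signal against a time-shifted copy of itself.
dt = 1.0 / 4096
t = np.arange(0, 4, dt)
h1 = np.sin(2 * np.pi * (30 * t + 10 * t**2)) * np.exp(-((t - 2) / 1.0)**2)
h2 = np.roll(h1, 40)
print(match(h1, h2))   # close to 1, since only a time shift separates the signals
```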
We investigate properties of quantum mechanical systems in the light of quantum information theory. We put an emphasis on systems with infinite-dimensional Hilbert spaces, so-called "continuous-variable systems", which are needed to describe quantum optics beyond the single-photon regime and other bosonic quantum systems. We present methods to obtain a description of such systems from a series of measurements in an efficient manner and demonstrate the performance in realistic situations by means of numerical simulations. We consider both unconditional quantum state tomography, which is applicable to arbitrary systems, and tomography of matrix product states. The latter allows for the tomography of many-body systems because the necessary number of measurements scales merely polynomially with the particle number, compared to an exponential scaling in the generic case. We also present a method to realize such a tomography scheme for a system of ultra-cold atoms in optical lattices. Furthermore, we discuss in detail the possibilities and limitations of using continuous-variable systems for measurement-based quantum computing. We will see that the distinction between Gaussian and non-Gaussian quantum states and measurements plays a crucial role. We also provide an algorithm to efficiently solve a large and interesting class of naturally occurring Hamiltonians, namely frustration-free ones, and use this insight to obtain a simple approximation method for slightly frustrated systems. To achieve these goals, we make use of, among various other techniques, the well-developed theory of matrix product states, tensor networks, semidefinite programming, and matrix analysis.
Recent PIC simulations of relativistic electron-positron (electron-ion) jets injected into a stationary medium show that particle acceleration occurs in the shocked regions. Simulations show that the Weibel instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields and for particle acceleration. These magnetic fields contribute to the electrons’ transverse deflection behind the shock. The “jitter” radiation from deflected electrons in turbulent magnetic fields has properties different from synchrotron radiation calculated in a uniform magnetic field. This jitter radiation may be important for understanding the complex time evolution and/or spectral structure of gamma-ray bursts, relativistic jets in general, and supernova remnants. In order to calculate radiation from first principles and go beyond the standard synchrotron model, we have used PIC simulations. We present synthetic spectra to compare with the spectra obtained from Fermi observations.
Actin is one of the most abundant and highly conserved proteins in eukaryotic cells. The globular protein assembles into long filaments, which form a variety of different networks within the cytoskeleton. The dynamic reorganization of these networks - which is pivotal for cell motility, cell adhesion, and cell division - is based on cycles of polymerization (assembly) and depolymerization (disassembly) of actin filaments. Actin binds ATP and within the filament, actin-bound ATP is hydrolyzed into ADP on a time scale of a few minutes. As ADP-actin dissociates faster from the filament ends than ATP-actin, the filament becomes less stable as it grows older. Recent single filament experiments, where abrupt dynamical changes during filament depolymerization have been observed, suggest the opposite behavior, however, namely that the actin filaments become increasingly stable with time. Several mechanisms for this stabilization have been proposed, ranging from structural transitions of the whole filament to surface attachment of the filament ends. The key issue of this thesis is to elucidate the unexpected interruptions of depolymerization by a combination of experimental and theoretical studies. In new depolymerization experiments on single filaments, we confirm that filaments cease to shrink in an abrupt manner and determine the time from the initiation of depolymerization until the occurrence of the first interruption. This duration differs from filament to filament and represents a stochastic variable. We consider various hypothetical mechanisms that may cause the observed interruptions. These mechanisms cannot be distinguished directly, but they give rise to distinct distributions of the time until the first interruption, which we compute by modeling the underlying stochastic processes. A comparison with the measured distribution reveals that the sudden truncation of the shrinkage process neither arises from blocking of the ends nor from a collective transition of the whole filament. Instead, we predict a local transition process occurring at random sites within the filament. The combination of additional experimental findings and our theoretical approach confirms the notion of a local transition mechanism and identifies the transition as the photo-induced formation of an actin dimer within the filaments. Unlabeled actin filaments do not exhibit pauses, which implies that, in vivo, older filaments become destabilized by ATP hydrolysis. This destabilization can be identified with an acceleration of the depolymerization prior to the interruption. In the final part of this thesis, we theoretically analyze this acceleration to infer the mechanism of ATP hydrolysis. We show that the rate of ATP hydrolysis is constant within the filament, corresponding to a random as opposed to a vectorial hydrolysis mechanism.
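One simple way to explore the "local transition at random sites" hypothesis numerically is a Monte Carlo sketch: while the filament shrinks at a constant rate, each remaining subunit can independently undergo a transition, and the first time the depolymerizing end reaches a transformed site defines the interruption time. The rates and filament length below are arbitrary illustrative values, not the measured ones, and the model is a toy version of the mechanisms compared in the thesis.

```python
import numpy as np

def first_interruption_time(n_subunits=2000, v_depol=30.0, k_switch=1e-4,
                            rng=np.random.default_rng(0)):
    """Time until the shrinking end first hits a locally transformed subunit.

    n_subunits : initial filament length (subunits)
    v_depol    : depolymerization speed (subunits per second)
    k_switch   : per-subunit transition rate (1/s), e.g. photo-induced dimer formation
    """
    # Each subunit i (counted from the depolymerizing end) switches at an
    # exponentially distributed time; the end reaches it at time i / v_depol.
    switch_times = rng.exponential(1.0 / k_switch, size=n_subunits)
    arrival_times = np.arange(1, n_subunits + 1) / v_depol
    blocked = switch_times < arrival_times
    if not blocked.any():
        return arrival_times[-1]          # fully depolymerized without interruption
    return arrival_times[np.argmax(blocked)]

samples = np.array([first_interruption_time(rng=np.random.default_rng(s))
                    for s in range(2000)])
print(samples.mean(), np.median(samples))   # the shape of this distribution is what
                                             # discriminates between candidate mechanisms
```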
A point process is a mechanism which randomly realizes locally finite point measures. One of the main results of this thesis is an existence theorem for a new class of point processes with a so-called signed Levy pseudo measure L, which is an extension of the class of infinitely divisible point processes. The construction approach is a combination of the classical point process theory, as developed by Kerstan, Matthes and Mecke, with the method of cluster expansions from statistical mechanics. Here the starting point is a family of signed Radon measures, which defines on the one hand the Levy pseudo measure L, and on the other hand locally the point process. The relation between L and the process is the following: this point process solves the integral cluster equation determined by L. We show that the results from the classical theory of infinitely divisible point processes carry over in a natural way to the larger class of point processes with a signed Levy pseudo measure. In this way we obtain, e.g., a criterion for simplicity and a characterization through the cluster equation, interpreted as an integration by parts formula, for such point processes. Our main result in chapter 3 is a representation theorem for the factorial moment measures of the above point processes. With its help we identify the permanental and determinantal point processes, which belong to the classes of Boson and Fermion processes, respectively. As a by-product we obtain a representation of the (reduced) Palm kernels of infinitely divisible point processes. In chapter 4 we see how the existence theorem enables us to construct (infinitely extended) Gibbs, quantum-Bose and polymer processes. The so-called polymer processes seem to be constructed here for the first time. In the last part of this thesis we prove that the family of cluster equations has certain stability properties with respect to the transformation of its solutions. This is used, first, to show how large the class of solutions of such equations is, and second, to establish the cluster theorem of Kerstan, Matthes and Mecke in our setting. With its help we are able to enlarge the class of Polya processes to the so-called branching Polya processes. The last sections of this work are about thinning and splitting of point processes. One main result is that the classes of Boson and Fermion processes remain closed under thinning. We use the results on thinning to identify a subclass of point processes with a signed Levy pseudo measure as doubly stochastic Poisson processes. We also pose the following question: Assume you observe a realization of a thinned point process. What is the distribution of the deleted points? Surprisingly, the Papangelou kernel of the thinning, up to a constant factor, is given by the intensity measure of this conditional probability, called the splitting kernel.
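The thinning results can be illustrated numerically in the simplest case: independently thinning a Poisson process with retention probability p yields two independent Poisson processes (kept and deleted points) with intensities pλ and (1-p)λ. The sketch below checks the count statistics on the unit interval; it is a toy illustration of thinning, not the signed-Levy construction of the thesis, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p, n_runs = 50.0, 0.3, 20_000

kept_counts, deleted_counts = [], []
for _ in range(n_runs):
    n = rng.poisson(lam)                       # Poisson process on [0, 1]
    points = rng.uniform(0.0, 1.0, size=n)
    keep = rng.random(n) < p                   # independent thinning
    kept_counts.append(int(keep.sum()))
    deleted_counts.append(n - int(keep.sum()))

kept, deleted = np.array(kept_counts), np.array(deleted_counts)
# For a Poisson-distributed count, mean and variance coincide.
print("kept:    mean %.2f  var %.2f  (expected %.1f)" % (kept.mean(), kept.var(), p * lam))
print("deleted: mean %.2f  var %.2f  (expected %.1f)" % (deleted.mean(), deleted.var(), (1 - p) * lam))
print("covariance of kept/deleted counts: %.3f (independence => ~0)" % np.cov(kept, deleted)[0, 1])
```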
This work is concerned with the characterization of certain classes of stochastic processes via duality formulae. In particular we consider reciprocal processes with jumps, a subject up to now neglected in the literature. In the first part we introduce a new formulation of a characterization of processes with independent increments. This characterization is based on a duality formula satisfied by processes with infinitely divisible increments, in particular Lévy processes, which is well known in Malliavin calculus. We obtain two new methods to prove this duality formula, which are not based on the chaos decomposition of the space of square-integrable functionals. One of these methods uses a formula of partial integration that characterizes infinitely divisible random vectors. In this context, our characterization is a generalization of Stein’s lemma for Gaussian random variables and Chen’s lemma for Poisson random variables. The generality of our approach permits us to derive a characterization of infinitely divisible random measures. The second part of this work focuses on the study of the reciprocal classes of Markov processes with and without jumps and their characterization. We start with a summary of already existing results concerning the reciprocal classes of Brownian diffusions as solutions of duality formulae. As a new contribution, we show that the duality formula satisfied by elements of the reciprocal class of a Brownian diffusion has a physical interpretation as a stochastic Newton equation of motion. Thus we are able to connect the results of characterizations via duality formulae with the theory of stochastic mechanics by our interpretation, and with stochastic optimal control theory by the mathematical approach. As an application we are able to prove an invariance property of the reciprocal class of a Brownian diffusion under time reversal. In the context of pure jump processes we derive the following new results. We describe the reciprocal classes of Markov counting processes, also called unit jump processes, and obtain a characterization of the associated reciprocal class via a duality formula. This formula contains as key terms a stochastic derivative, a compensated stochastic integral and an invariant of the reciprocal class. Moreover we present an interpretation of the characterization of a reciprocal class in the context of stochastic optimal control of unit jump processes. As a further application we show that the reciprocal class of a Markov counting process has an invariance property under time reversal. Some of these results are extendable to the setting of pure jump processes, that is, we admit different jump-sizes. In particular, we show that the reciprocal classes of Markov jump processes can be compared using reciprocal invariants. A characterization of the reciprocal class of compound Poisson processes via a duality formula is possible under the assumption that the jump-sizes of the process are incommensurable.
In this work we are concerned with the characterization of certain classes of stochastic processes via duality formulae. First, we introduce a new formulation of a characterization of processes with independent increments, which is based on an integration by parts formula satisfied by infinitely divisible random vectors. Then we focus on the study of the reciprocal classes of Markov processes. These classes contain all stochastic processes having the same bridges, and thus similar dynamics, as a reference Markov process. We start with a summary of some existing results concerning the reciprocal classes of Brownian diffusions as solutions of duality formulae. As a new contribution, we show that the duality formula satisfied by elements of the reciprocal class of a Brownian diffusion has a physical interpretation as a stochastic Newton equation of motion. In the context of pure jump processes we derive the following new results. We analyze the reciprocal classes of Markov counting processes and characterize them as a class of stochastic processes satisfying a duality formula. This result is applied to the time reversal of counting processes. We are able to extend some of these results to pure jump processes with different jump-sizes; in particular, we are able to compare the reciprocal classes of Markov pure jump processes through a functional equation between the jump-intensities.
Governments at central and sub-national levels are increasingly pursuing participatory mechanisms in a bid to improve governance and service delivery. This has been largely in the context of decentralization reforms in which central governments transfer (share) political, administrative, fiscal and economic powers and functions to sub-national units. Despite the great international support and advocacy for participatory governance, where citizens' voice plays a key role in decision making for decentralized service delivery, there is a notable dearth of empirical evidence as to the effect of such participation. This is the question this study sought to answer based on a case study of direct citizen participation in Local Authorities (LAs) in Kenya. Such participation is formally provided for by the Local Authority Service Delivery Action Plan (LASDAP) framework, which was established to ensure citizens play a central role in planning and budgeting, implementation and monitoring of locally identified services towards improving livelihoods and reducing poverty. Influence of participation was assessed in terms of how it affected five key determinants of effective service delivery, namely: efficient allocation of resources; equity in service delivery; accountability and reduction of corruption; quality of services; and cost recovery. The study finds that the participation of citizens is minimal and that its influence on decentralized service delivery is negligible. It concludes that, despite the dismal performance of citizen participation, LASDAP has played a key role in institutionalizing citizen participation, on which future structures will build. It recommends that an effective framework of citizen participation should be one that is not directly linked to politicians; one that is founded on a legal framework and where citizens have a legal recourse opportunity; and one that obliges LA officials both to implement citizens' proposals that meet the set criteria and to account for their actions in the management of public resources.
This work investigates diffusion in nonlinear Hamiltonian systems. The diffusion, more precisely subdiffusion, in such systems is induced by the intrinsic chaotic behavior of trajectories and is thus called "chaotic diffusion". Its properties are studied using the example of one- or two-dimensional lattices of harmonic or nonlinear oscillators with nearest-neighbor couplings. The fundamental observation is the spreading of energy for localized initial conditions. Methods of quantifying this spreading behavior are presented, including a new quantity called excitation time. This new quantity allows for a more precise analysis of the spreading than traditional methods. Furthermore, the nonlinear diffusion equation (NDE) is introduced as a phenomenological description of the spreading process and a number of predictions on the density dependence of the spreading are drawn from this equation. Two mathematical techniques for analyzing nonlinear Hamiltonian systems are introduced. The first one is based on a scaling analysis of the Hamiltonian equations and the results are related to similar scaling properties of the NDE. From this relation, exact spreading predictions are deduced. Secondly, the microscopic dynamics at the edge of spreading states are thoroughly analyzed, which again suggests a scaling behavior that can be related to the NDE. Such a microscopic treatment of chaotically spreading states in nonlinear Hamiltonian systems has not been done before and the results present a new technique of connecting microscopic dynamics with macroscopic descriptions like the nonlinear diffusion equation. All theoretical results are supported by extensive numerical simulations, partly obtained on one of Europe's fastest supercomputers located in Bologna, Italy. In the end, the highly interesting case of harmonic oscillators with random frequencies and nonlinear coupling is studied, which resembles to some extent the famous discrete Anderson nonlinear Schroedinger equation. For this model, a deviation from the widely believed power-law spreading is observed in numerical experiments. Some ideas on a theoretical explanation for this deviation are presented, but a conclusive theory could not be found due to the complicated phase space structure in this case. Nevertheless, it is hoped that the techniques and results presented in this work will help to eventually understand this controversially discussed case as well.
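A minimal sketch of the kind of spreading experiment described here: a one-dimensional lattice of harmonically coupled oscillators with a quartic on-site nonlinearity is integrated with a leapfrog scheme from a localized excitation, and the second moment of the normalized energy distribution is tracked over time. The model, parameters and lattice size are illustrative, not those used in the thesis.

```python
import numpy as np

N, dt, steps = 501, 0.05, 100_000
beta, coupling = 1.0, 0.4               # quartic nonlinearity and NN coupling (illustrative)

x = np.zeros(N)
p = np.zeros(N)
p[N // 2] = 1.0                         # localized initial excitation in the center

def force(x):
    lap = np.roll(x, 1) - 2 * x + np.roll(x, -1)
    return -x - beta * x**3 + coupling * lap

def local_energy(x, p):
    grad = x - np.roll(x, 1)
    return 0.5 * p**2 + 0.5 * x**2 + 0.25 * beta * x**4 + 0.5 * coupling * grad**2

# Leapfrog (velocity Verlet) integration with periodic output of the second moment.
sites = np.arange(N)
f = force(x)
for step in range(1, steps + 1):
    p += 0.5 * dt * f
    x += dt * p
    f = force(x)
    p += 0.5 * dt * f
    if step % 25_000 == 0:
        e = local_energy(x, p)
        rho = e / e.sum()                               # normalized energy density
        m2 = np.sum(rho * (sites - np.sum(rho * sites))**2)
        print(f"t = {step * dt:8.1f}   second moment m2 = {m2:8.2f}")
```

Fitting the growth of m2 against time on a log-log scale is the standard way to extract the subdiffusive spreading exponent that the nonlinear diffusion equation predicts.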
The safe upper limit for inclusion of vitamin A in complete diets for growing dogs is uncertain, with the result that current recommendations range from 5.24 to 104.80 μmol retinol (5000 to 100 000 IU vitamin A)/4184 kJ (1000 kcal) metabolisable energy (ME). The aim of the present study was to determine the effect of feeding four concentrations of vitamin A to puppies from weaning until 1 year of age. A total of forty-nine puppies, of two breeds, Labrador Retriever and Miniature Schnauzer, were randomly assigned to one of four treatment groups. Following weaning at 8 weeks of age, puppies were fed a complete food supplemented with retinyl acetate diluted in vegetable oil and fed at 1 ml oil/100 g diet to achieve an intake of 5.24, 13.10, 78.60 and 104.80 μmol retinol (5000, 12 500, 75 000 and 100 000 IU vitamin A)/4184 kJ (1000 kcal) ME. Fasted blood and urine samples were collected at 8, 10, 12, 14, 16, 20, 26, 36 and 52 weeks of age and analysed for markers of vitamin A metabolism and markers of safety including haematological and biochemical variables, bone-specific alkaline phosphatase, cross-linked carboxyterminal telopeptides of type I collagen and dual-energy X-ray absorptiometry. Clinical examinations were conducted every 4 weeks. Data were analysed by means of a mixed model analysis with Bonferroni corrections for multiple endpoints. There was no effect of vitamin A concentration on any of the parameters, with the exception of total serum retinyl esters, and no effect of dose on the number, type and duration of adverse events. We therefore propose that 104.80 μmol retinol (100 000 IU vitamin A)/4184 kJ (1000 kcal) is a suitable safe upper limit for use in the formulation of diets designed for puppy growth.
Current climate warming is affecting arctic regions at a faster rate than the rest of the world. This has profound effects on permafrost that underlies most of the arctic land area. Permafrost thawing can lead to the liberation of considerable amounts of greenhouse gases as well as to significant changes in the geomorphology, hydrology, and ecology of the corresponding landscapes, which may in turn act as a positive feedback to the climate system. Vast areas of the east Siberian lowlands, which are underlain by permafrost of the Yedoma-type Ice Complex, are particularly sensitive to climate warming because of the high ice content of these permafrost deposits. Thermokarst and thermal erosion are two major types of permafrost degradation in periglacial landscapes. The associated landforms are prominent indicators of climate-induced environmental variations on the regional scale. Thermokarst lakes and basins (alasses) as well as thermo-erosional valleys are widely distributed in the coastal lowlands adjacent to the Laptev Sea. This thesis investigates the spatial distribution and morphometric properties of these degradational features to reconstruct their evolutionary conditions during the Holocene and to deduce information on the potential impact of future permafrost degradation under the projected climate warming. The methodological approach is a combination of remote sensing, geoinformation, and field investigations, which integrates analyses on local to regional spatial scales. Thermokarst and thermal erosion have affected the study region to a great extent. In the Ice Complex area of the Lena River Delta, thermokarst basins cover a much larger area than do present thermokarst lakes on Yedoma uplands (20.0 and 2.2 %, respectively), which indicates that the conditions for large-area thermokarst development were more suitable in the past. This is supported by the reconstruction of the development of an individual alas in the Lena River Delta, which reveals a prolonged phase of high thermokarst activity since the Pleistocene/Holocene transition that created a large and deep basin. After the drainage of the primary thermokarst lake during the mid-Holocene, permafrost aggradation and degradation have occurred in parallel and in shorter alternating stages within the alas, resulting in a complex thermokarst landscape. Though more dynamic than during the first phase, late Holocene thermokarst activity in the alas was not capable of degrading large portions of Pleistocene Ice Complex deposits and substantially altering the Yedoma relief. Further thermokarst development in existing alasses is restricted to thin layers of Holocene ice-rich alas sediments, because the Ice Complex deposits underneath the large primary thermokarst lakes have thawed completely and the underlying deposits are ice-poor fluvial sands. Thermokarst processes on undisturbed Yedoma uplands have the highest impact on the alteration of Ice Complex deposits, but will be limited to smaller areal extents in the future because of the reduced availability of large undisturbed upland surfaces with poor drainage. On Kurungnakh Island in the central Lena River Delta, the area of Yedoma uplands available for future thermokarst development amounts to only 33.7 %. The increasing proximity of newly developing thermokarst lakes on Yedoma uplands to existing degradational features and other topographic lows decreases the possibility for thermokarst lakes to reach large sizes before drainage occurs. 
Drainage of thermokarst lakes due to thermal erosion is common in the study region, but thermo-erosional valleys also provide water to thermokarst lakes and alasses. Besides these direct hydrological interactions between thermokarst and thermal erosion on the local scale, an interdependence between both processes exists on the regional scale. A regional analysis of extensive networks of thermo-erosional valleys in three lowland regions of the Laptev Sea with a total study area of 5,800 km² found that these features are more common in areas with higher slopes and relief gradients, whereas thermokarst development is more pronounced in flat lowlands with lower relief gradients. The combined results of this thesis highlight the need for comprehensive analyses of both thermokarst and thermal erosion in order to assess past and future impacts and feedbacks of the degradation of ice-rich permafrost on the hydrology and climate of a given region.
This work describes the synthesis and characterization of stimuli-responsive polymers made by reversible addition-fragmentation chain transfer (RAFT) polymerization and the investigation of their self-assembly into “smart” hydrogels. In particular the hydrogels were designed to swell at low temperature and could be reversibly switched to a collapsed hydrophobic state by raising the temperature. Starting from two constituents, a short permanently hydrophobic polystyrene (PS) block and a thermo-responsive poly(methoxy diethylene glycol acrylate) (PMDEGA) block, various gelation behaviors and switching temperatures were achieved. New RAFT agents bearing tert-butyl benzoate or benzoic acid groups were developed for the synthesis of diblock, symmetrical triblock and 3-arm star block copolymers. Thus, specific end groups were attached to the polymers that facilitate efficient macromolecular characterization, e.g. by routine 1H-NMR spectroscopy. Further, the carboxyl end-groups allowed functionalization of the various polymers with a fluorophore. Because reports on PMDEGA have been extremely rare, the thermo-responsive behavior of the polymer was investigated first, and the influence of factors such as molar mass, nature of the end-groups, and architecture was studied. The use of special RAFT agents enabled the design of polymers with specific hydrophobic and hydrophilic end-groups. Cloud points (CP) of the polymers proved to be sensitive to all molecular variables studied, namely molar mass, nature and number of the end-groups, up to relatively high molar masses. Thus, by changing molecular parameters, CPs of the PMDEGA could be easily adjusted within the physiologically interesting range of 20 to 40 °C. A second responsivity, namely to light, was added to the PMDEGA system via random copolymerization of MDEGA with a specifically designed photo-switchable azobenzene acrylate. The composition of the copolymers was varied in order to determine the optimal conditions for an isothermal cloud point variation triggered by light. Though reversible light-induced solubility changes were achieved, the differences between the cloud points before and after the irradiation were small. Remarkably, the response to light differed from common observations for azobenzene-based systems, as CPs decreased after UV-irradiation, i.e. with increasing content of cis-azobenzene units. The viscosifying and gelling abilities of the various block copolymers made from PS and PMDEGA blocks were studied by rheology. Important differences were observed between diblock copolymers, containing one hydrophobic PS block only, the telechelic symmetrical triblock copolymers made of two associating PS termini, and the star block copolymers having three associating end blocks. Regardless of their hydrophilic block length, diblock copolymers PS11-PMDEGAn were freely flowing even at concentrations as high as 40 wt. %. In contrast, all studied symmetrical triblock copolymers PS8-PMDEGAn-PS8 formed gels at low temperatures and at concentrations as low as 3.5 wt. % at best. When heated, these gels underwent a gel-sol transition at intermediate temperatures, well below the cloud point where phase separation occurs. The gel-sol transition shifted to markedly higher temperatures with increasing length of the hydrophilic inner block. This effect also increased with the number of arms, and with the length of the hydrophobic end blocks.
The mechanical properties of the gels were significantly altered at the cloud point, where liquid-like dispersions were formed. These could be reversibly transformed into hydrogels by cooling. This thesis demonstrates that high-molar-mass PMDEGA is an easily accessible, presumably also biocompatible, non-ionic thermo-responsive polymer that is well water-soluble at ambient temperature. PMDEGA can be easily molecularly engineered via the RAFT method, implementing defined end-groups and producing different, even complex, architectures, such as amphiphilic triblock and star block copolymers with a structure analogous to associative telechelics. With appropriate design, such amphiphilic copolymers give access to efficient, “smart” viscosifiers and gelators displaying tunable gelling and mechanical properties.
Leaf senescence is an active process required for plant survival, and it is flexibly controlled, allowing plant adaptation to environmental conditions. Although senescence is largely an age-dependent process, it can be triggered by environmental signals and stresses. Leaf senescence coordinates the breakdown and turnover of many cellular components, allowing a massive remobilization and recycling of nutrients from senescing tissues to other organs (e.g., young leaves, roots, and seeds), thus enhancing the fitness of the plant. Such metabolic coordination requires a tight regulation of gene expression. One important mechanism for the regulation of gene expression is at the transcriptional level via transcription factors (TFs). The NAC TF family (NAM, ATAF, CUC) includes various members that show elevated expression during senescence, including ORE1 (ANAC092/AtNAC2) among others. ORE1 was first reported in a screen for mutants with delayed senescence (oresara1, 2, 3, and 11). It was named after the Korean word “oresara,” meaning “long-living,” and abbreviated to ORE1, 2, 3, and 11, respectively. Although the pivotal role of ORE1 in controlling leaf senescence has recently been demonstrated, the underlying molecular mechanisms and the pathways it regulates are still poorly understood. To unravel the signaling cascade through which ORE1 exerts its function, we analyzed particular features of regulatory pathways up-stream and down-stream of ORE1. We identified characteristic spatial and temporal expression patterns of ORE1 that are conserved in Arabidopsis thaliana and Nicotiana tabacum and that link ORE1 expression to senescence as well as to salt stress. We proved that ORE1 positively regulates natural and dark-induced senescence. Molecular characterization of the ORE1 promoter in silico and experimentally suggested a role of the 5’UTR in mediating ORE1 expression. ORE1 is a putative substrate of a calcium-dependent protein kinase named CKOR (unpublished data). Promising data revealed a positive regulation of putative ORE1 targets by CKOR, suggesting the phosphorylation of ORE1 as a requirement for its regulation. Additionally, as part of the ORE1 up-stream regulatory pathway, we identified the NAC TF ATAF1 which was able to transactivate the ORE1 promoter in vivo. Expression studies using chemically inducible ORE1 overexpression lines and transactivation assays employing leaf mesophyll cell protoplasts provided information on target genes whose expression was rapidly induced upon ORE1 induction. First, a set of target genes was established and referred to as early responding in the ORE1 regulatory network. The consensus binding site (BS) of ORE1 was characterized. Analysis of some putative targets revealed the presence of ORE1 BSs in their promoters and the in vitro and in vivo binding of ORE1 to their promoters. Among these putative target genes, BIFUNCTIONAL NUCLEASE I (BFN1) and VND-Interacting2 (VNI2) were further characterized. The expression of BFN1 was found to be dependent on the presence of ORE1. Our results provide convincing data which support a role for BFN1 as a direct target of ORE1. Characterization of VNI2 in age-dependent and stress-induced senescence revealed ORE1 as a key up-stream regulator since it can bind and activate VNI2 expression in vivo and in vitro. Furthermore, VNI2 was able to promote or delay senescence depending on the presence of an activation domain located in its C-terminal region. 
The plasticity of this gene might include alternative splicing (AS) to regulate its function in different organs and at different developmental stages, particularly during senescence. A model is proposed on the molecular mechanism governing the dual role of VNI2 during senescence.
The development of infrared observational facilities has revealed a number of massive stars in obscured environments throughout the Milky Way and beyond. The determination of their stellar and wind properties from infrared diagnostics is thus required to take full advantage of the wealth of observations available in the near and mid infrared. However, the task is challenging. This session addressed some of the problems encountered and showed the limitations and successes of infrared studies of massive stars.
This thesis contains several theoretical studies on optomechanical systems, i.e. physical devices where mechanical degrees of freedom are coupled with optical cavity modes. This optomechanical interaction, mediated by radiation pressure, can be exploited for cooling and controlling mechanical resonators in a quantum regime. The goal of this thesis is to propose several new ideas for preparing mesoscopic mechanical systems (of the order of 10^15 atoms) in highly non-classical states. In particular we have shown new methods for preparing optomechanical pure states, squeezed states and entangled states. At the same time, procedures for experimentally detecting these quantum effects have been proposed. In particular, a quantitative measure of non-classicality has been defined in terms of the negativity of phase-space quasi-distributions. An operational algorithm for experimentally estimating the non-classicality of quantum states has been proposed and successfully applied in a quantum optics experiment. The research has been performed with relatively advanced mathematical tools related to differential equations with periodic coefficients, classical and quantum Bochner's theorems and semidefinite programming. Nevertheless the physics of the problems and the experimental feasibility of the results have been the main priorities.
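The negativity-based non-classicality measure mentioned here can be illustrated on a state whose Wigner function is known in closed form. The sketch below evaluates the Wigner function of a single-photon Fock state on a grid and integrates its modulus; the excess over 1 is the negativity (about 0.43 for this state). This is a textbook illustration, not the operational estimation algorithm developed in the thesis.

```python
import numpy as np

# Wigner function of the n = 1 Fock state (hbar = 1; vacuum: W = exp(-x^2 - p^2) / pi):
#   W_1(x, p) = (2*(x^2 + p^2) - 1) * exp(-(x^2 + p^2)) / pi
x = np.linspace(-6, 6, 1201)
p = np.linspace(-6, 6, 1201)
X, P = np.meshgrid(x, p)
R2 = X**2 + P**2
W = (2 * R2 - 1) * np.exp(-R2) / np.pi

dxdp = (x[1] - x[0]) * (p[1] - p[0])
print("integral of W    :", np.sum(W) * dxdp)            # ~1 (normalization)
print("negativity |W|-1 :", np.sum(np.abs(W)) * dxdp - 1) # ~0.43 for the one-photon state
```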
Estimation of the self-similarity exponent has attracted growing interest in recent decades and has become a research subject in various fields and disciplines. Real-world data exhibiting self-similar behavior and/or parametrized by a self-similarity exponent (in particular the Hurst exponent) have been collected in fields ranging from finance and the human sciences to hydrologic and traffic networks. Such a rich class of possible applications obliges researchers to investigate qualitatively new methods for estimating the self-similarity exponent as well as for identifying long-range dependencies (or long memory). In this thesis I present a Bayesian estimation of the Hurst exponent. In contrast to previous methods, the Bayesian approach allows the point estimator and confidence intervals to be calculated at the same time, bringing significant advantages in data analysis, as discussed in this thesis. Moreover, it is also applicable to short and unevenly sampled data, thus broadening the range of systems where the estimation of the Hurst exponent is possible. Taking into account that Gaussian self-similar processes form one of the classes of greatest interest in modeling, this thesis considers realizations of fractional Brownian motion and fractional Gaussian noise. Additionally, applications to real-world data, such as water-level data of the Nile River and fixational eye movements, are also discussed.
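A classical non-Bayesian baseline helps make the estimation problem concrete: for fractional Gaussian noise the variance of block means scales as m^(2H-2), so a log-log regression of the aggregated variance against the block size m yields a point estimate of H. The sketch below applies this estimator to white noise (which is fractional Gaussian noise with H = 0.5); unlike the Bayesian approach of the thesis, it delivers a point estimate only, without confidence intervals.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(4, 8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent of a stationary increment series x via the
    aggregated-variance method: Var(block mean) ~ m**(2H - 2)."""
    ms, variances = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 10:
            continue
        block_means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        ms.append(m)
        variances.append(block_means.var())
    slope, _ = np.polyfit(np.log(ms), np.log(variances), 1)
    return 1.0 + slope / 2.0

# White Gaussian noise corresponds to H = 0.5.
rng = np.random.default_rng(0)
print(hurst_aggregated_variance(rng.standard_normal(100_000)))   # ~0.5
```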
The constantly growing capacity of reconfigurable devices allows simultaneous execution of complex applications on those devices. The sheer diversity of applications makes it impossible to design an interconnection network matching the requirements of every possible application perfectly, leading to suboptimal performance in many cases. However, the architecture of the interconnection network is not the only aspect affecting communication performance. The resource manager places applications on the device and therefore influences the latency between communicating partners and the overall network load. Communication protocols affect performance by introducing data and processing overhead, putting a higher load on the network and increasing resource demand. Approaching communication holistically considers not only the architecture of the interconnect, but also communication-aware resource management, communication protocols and resource usage. Incorporating the different parts of a reconfigurable system at design time and runtime and optimizing them with respect to communication demand results in more resource-efficient communication. Extensive evaluation shows enhanced performance and flexibility if communication on reconfigurable devices is regarded in a holistic fashion.
Sustainable management of semi-arid African savannas under environmental and political change
(2012)
Drylands cover about 40% of the earth's land surface and provide the basis for the livelihoods of 38% of the global human population. Worldwide, these ecosystems are prone to heavy degradation. Increasing levels of dryland degradation result in a strong decline of ecosystem services. In addition, in highly variable semi-arid environments, changing future environmental conditions will potentially have severe consequences for productivity and ecosystem dynamics. Hence, global efforts have to be made to understand the particular causes and consequences of dryland degradation and to promote sustainable management options for semi-arid and arid ecosystems in a changing world. Here I particularly address the problem of semi-arid savanna degradation, which mostly occurs in the form of woody plant encroachment. In doing so, I aim at finding viable sustainable management strategies and at improving the general understanding of semi-arid savanna vegetation dynamics under conditions of extensive livestock production. Moreover, the influence of external forces, i.e. environmental change and land reform, on the use of savanna vegetation and on the ecosystem response to this land use is assessed. Based on this, I identify conditions and strategies that facilitate a sustainable use of semi-arid savanna rangelands in a changing world. I extended an eco-hydrological model to simulate rangeland vegetation dynamics for a typical semi-arid savanna in eastern Namibia. In particular, I identified the response of semi-arid savanna vegetation to different land use strategies (including fire management), also with regard to different predicted precipitation, temperature and CO2 regimes. Not only environmental but also economic and political constraints, such as land reform programmes, shape rangeland management strategies. Hence, I aimed at understanding the effects of the ongoing process of land reform in southern Africa on land use and on the semi-arid savanna vegetation. To this end, I developed and implemented an agent-based ecological-economic modelling tool for interactive role plays with land users. This tool was applied in an interdisciplinary empirical study to identify general patterns of management decisions and of between-farm cooperation among land reform beneficiaries in eastern Namibia. The eco-hydrological simulations revealed that the future dynamics of semi-arid savanna vegetation strongly depend on the respective climate change scenario. In particular, I found that the capacity of the system to sustain domestic livestock production will strongly depend on changes in the amount and temporal distribution of precipitation. In addition, my simulations revealed that shrub encroachment will become less likely under future climatic conditions, even though positive effects of CO2 on woody plant growth and transpiration were taken into account. While earlier studies predicted a further increase in shrub encroachment due to increased levels of atmospheric CO2, my contrary finding is based on the negative impacts of temperature increase on the drought-sensitive seedling germination and establishment of woody plant species. Further simulation experiments revealed that prescribed fires are an efficient tool for semi-arid rangeland management, since they suppress woody plant seedling establishment. The tested strategies increased the long-term productivity of the savanna in terms of livestock production and decreased the risk of shrub encroachment (i.e. savanna degradation).
This finding refutes the views promoted by existing studies, which state that fires are of minor importance for the vegetation dynamics of semi-arid and arid savannas. Again, the difference in predictions is related to the bottleneck at the seedling establishment stage of woody plants, which has not been sufficiently considered in earlier studies. The ecological-economic role plays with Namibian land reform beneficiaries showed that the farmers made their decisions with regard to herd size adjustments according to economic, not environmental, variables. Hence, they do not manage opportunistically by tracking grass biomass availability but rather apply conservative management strategies with low stocking rates. This implies that, under the given circumstances, the management of these farmers will not per se cause (or further worsen) the problem of savanna degradation and shrub encroachment due to overgrazing. However, my results indicate that this management strategy is driven mainly by high financial pressure and is therefore not an indicator of successful rangeland management. Rather, farmers struggle hard to make any positive revenue from their farming business, and the success of the Namibian land reform is currently disputable. The role plays also revealed that cooperation between farmers is difficult, even though it is often obligatory due to the small farm sizes. I thus propose that cooperation needs to be facilitated to improve the success of land reform beneficiaries.
Background/Purpose
Muscular reflex responses of the lower extremities to sudden gait disturbances are related to postural stability and injury risk. Chronic ankle instability (CAI) has been shown to affect the activity of the distal leg muscles during walking. Its effects on the proximal muscle activities of the leg, both on the injured (IN) and the uninjured (NON) side, remain unclear. Therefore, the aim was to compare motor control strategies at the ipsilateral and contralateral proximal joints during unperturbed and perturbed walking between individuals with CAI and matched controls.
Materials and methods
In a cross-sectional study, 13 participants with unilateral CAI and 13 controls (CON) walked on a split-belt treadmill with and without random left- and right-sided perturbations. EMG amplitudes of lower-extremity muscles were analyzed 200 ms after perturbations, as well as 200 ms before and 100 ms after (Post100) heel contact during walking. Onset latencies were analyzed at heel contacts and after perturbations. Statistical significance was set at alpha ≤ 0.05, and 95% confidence intervals were applied to determine group differences. Cohen's d effect sizes were calculated to evaluate the extent of differences.
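For reference, the between-group Cohen's d reported in such analyses is commonly computed from the group means and the pooled standard deviation (the exact variant used in this study may differ):

d = \frac{\bar{x}_{\mathrm{CAI}} - \bar{x}_{\mathrm{CON}}}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^{2} + (n_2 - 1)\,s_2^{2}}{n_1 + n_2 - 2}},

where \bar{x} and s denote the group means and standard deviations and n_1, n_2 the group sizes.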
Results
Participants with CAI showed increased EMG amplitudes for the NON-rectus abdominis at Post100 and shorter latencies for the IN-gluteus maximus after heel contact compared to CON (p<0.05). Overall, leg muscles (rectus femoris, biceps femoris, and gluteus medius) were activated earlier and less bilaterally (d = 0.30–0.88), and trunk muscles (bilateral rectus abdominis and NON-erector spinae) were activated earlier and more in the CAI group than in the CON group (d = 0.33–1.09).
Conclusion
Unilateral CAI bilaterally alters the pattern of the motor control strategy around the proximal joints. Neuromuscular training for the muscles whose motor control strategy is altered by CAI could be taken into consideration when planning rehabilitation for CAI.
All's well that ends well
(2012)
The transition from cell proliferation to cell expansion is critical for determining leaf size. Andriankaja et al. (2012) demonstrate that in leaves of dicotyledonous plants, a basal proliferation zone is maintained for several days before abruptly disappearing, and that chloroplast differentiation is required to trigger the onset of cell expansion.
The present work is devoted to establishing a new generation of self-healing anti-corrosion coatings for the protection of metals. The concept of self-healing anticorrosion coatings is based on the combination of a passive part, represented by the matrix of a conventional coating, and an active part, represented by micron-sized capsules loaded with a corrosion inhibitor. Polymers were chosen as the class of compounds most suitable for the capsule preparation. The morphology of capsules made of crosslinked polymers, however, was found to depend on the nature of the encapsulated liquid. Therefore, a systematic analysis of the morphology of capsules consisting of a crosslinked polymer and a solvent was performed. Three classes of polymers, namely polyurethane, polyurea and polyamide, were chosen. Capsules made of these polymers and eight solvents of different polarity were synthesized via interfacial polymerization. It was shown that the morphology of the resulting capsules is specific for every polymer-solvent pair. The formation of capsules with three general types of morphology, namely core-shell, compact and multicompartment, was demonstrated by means of Scanning Electron Microscopy. Compact morphology was assumed to be a result of specific polymer-solvent interactions and to be analogous to the process of swelling. In order to verify this hypothesis, pure polyurethane, polyurea and polyamide were synthesized, and their swelling behavior in the solvents used as the encapsulated material was investigated. It was shown that the swelling behavior of the polymers in most cases correlates with the capsule morphology. The different morphologies (compact, core-shell and multicompartment) were therefore attributed to the specific polymer-solvent interactions and discussed in terms of "good" and "poor" solvents. Capsules with core-shell morphology are formed when the encapsulated liquid is a "poor" solvent for the chosen polymer, while compact morphologies are formed when the solvent is "good". The multicompartment morphology is explained by the formation of infinite networks or gelation of crosslinked polymers. If gelation occurs after phase separation in the system has been achieved, a core-shell morphology results. If gelation of the polymer occurs far before crosslinking is accomplished, further condensation of the polymer due to crosslinking may lead to the formation of porous or multicompartment morphologies. It was concluded that, in general, the morphology of capsules consisting of certain polymer-solvent pairs can be predicted on the basis of the polymer-solvent behavior. In some cases, the swelling behavior and the morphology may not match; the reasons for this are discussed in detail in the thesis. The discussed approach is only capable of predicting the capsule morphology for specific polymer-solvent pairs. In practice, the design of capsules requires trying a great number of polymer-solvent combinations, and more complex systems consisting of three, four or even more components are often used. Evaluating the swelling behavior of each component pair of such systems becomes impractical. Therefore, exploiting the solubility parameter approach was found to be more useful, since it allows the properties of each single component to be considered instead of pairs of components. In this manner, the Hansen Solubility Parameter (HSP) approach was used for further analysis. Solubility spheres were constructed for polyurethane, polyurea and polyamide.
For this, a three-dimensional graph is plotted with the dispersion, polar and hydrogen-bonding components of the solubility parameter, obtained from the literature, as the orthogonal axes. The HSP of the solvents are used as the coordinates of the points on this graph. Then a sphere with a certain radius is placed on the graph such that the "good" solvents are located inside the sphere while the "poor" ones are located outside. Both the location of the sphere center and the sphere radius are fitted according to the information on the polymer's swelling behavior in a number of solvents. According to the established correlation between capsule morphology and the swelling behavior of the polymers, solvents located inside the solubility sphere of a polymer give capsules with compact morphologies. Solvents located outside the solubility sphere of the polymer give either core-shell or multicompartment capsules in combination with the chosen polymer. Once the solubility sphere of a polymer is found, its solubility/swelling behavior can be extrapolated to all possible substances. HSP theory therefore allows the prediction of the polymer's solubility/swelling behavior, and consequently of the capsule morphology, for any given substance with known HSP values on the basis of limited data. This makes the theory attractive for applications in chemistry and technology, since the choice of system components is usually made on the basis of a large number of different parameters that have to match one another. Even a slight change in the technology sometimes makes it necessary to find an analogue of a given solvent with similar solvency but different chemistry. The HSP approach is indispensable in such cases. In the second part of the work, examples of the application of HSP for the fabrication of capsules with on-demand morphology are presented. Capsules with compact or core-shell morphology containing corrosion inhibitors were synthesized. Thus, alkoxysilanes possessing a long hydrophobic tail, combining passivating and water-repelling properties, were encapsulated in a polyurethane shell; the mechanism of action of the active material required a core-shell morphology of the capsules. The new hybrid corrosion inhibitor cerium diethylhexyl phosphate was encapsulated in polyamide shells in order to facilitate the dispersion of the substance and improve its adhesion to the coating matrix. Commercially available antifouling agents were encapsulated in polyurethane shells in order to control their release behavior and colloidal stability. Capsules with compact morphology made of polyurea and containing the liquid corrosion inhibitor 2-methyl benzothiazole were synthesized in order to improve the colloidal stability of the substance; capsules with compact morphology allow a slower release of the encapsulated liquid than core-shell ones. If "in-situ" encapsulation is not possible due to a reaction of the oil-soluble monomer with the material to be encapsulated, a solution was proposed: the capsules should be loaded only after the monomer has been deactivated by completion of the polymerization reaction, i.e. capsules of the desired morphology are preformed and subsequently loaded. In this way, compact polyurea capsules containing the highly effective but chemically reactive corrosion inhibitors 8-hydroxyquinoline and benzotriazole were fabricated. All the resulting capsules were successfully introduced into model coatings.
The efficiency of the resulting "smart" self-healing anticorrosion coatings on steel and on an aluminium alloy of the AA-2024 series was evaluated using characterization techniques such as the Scanning Vibrating Electrode Technique, Electrochemical Impedance Spectroscopy and salt-spray chamber tests.
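A small sketch may clarify how a Hansen solubility sphere of the kind described above can be fitted in practice. This is not the thesis's exact procedure; the solvent HSP values, good/poor labels, initial guess and fitting criterion below are placeholders.

```python
# Illustrative fit of a Hansen solubility sphere (center dD, dP, dH and radius R0)
# from solvents labelled as "good" (swelling/dissolving the polymer) or "poor".
# Solvent values are placeholders; real HSP tables should be used in practice.
import numpy as np
from scipy.optimize import minimize

# columns: deltaD, deltaP, deltaH (MPa^0.5); label 1 = good solvent, 0 = poor
solvents = np.array([
    [18.0,  1.4,  2.0],   # placeholder apolar solvent
    [15.8,  8.8, 19.4],   # placeholder hydrogen-bonding solvent
    [17.8, 12.5,  7.1],
    [16.8,  5.7,  8.0],
])
labels = np.array([0, 1, 1, 0])

def ra(center, pts):
    """Conventional Hansen distance: Ra^2 = 4(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2."""
    d = pts - center
    return np.sqrt(4 * d[:, 0] ** 2 + d[:, 1] ** 2 + d[:, 2] ** 2)

def misfit(params):
    center, r0 = params[:3], max(params[3], 1e-6)
    red = ra(center, solvents) / r0            # relative energy difference (RED)
    # penalize good solvents outside the sphere (RED > 1) and poor ones inside (RED < 1)
    penalty = np.where(labels == 1, np.maximum(red - 1, 0), np.maximum(1 - red, 0))
    return np.sum(penalty ** 2) + 1e-3 * r0    # small term keeps the radius minimal

x0 = np.array([17.0, 7.0, 9.0, 5.0])           # initial guess (assumption)
res = minimize(misfit, x0, method="Nelder-Mead")
print("fitted center (dD, dP, dH):", res.x[:3], " radius R0:", max(res.x[3], 1e-6))
```

Once center and radius are fixed, any substance with known HSP values can be classified by its RED value, which is exactly the extrapolation step described above.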
F2C2
(2012)
Background: Flux coupling analysis (FCA) has become a useful tool in the constraint-based analysis of genome-scale metabolic networks. FCA allows detecting dependencies between reaction fluxes of metabolic networks at steady-state. On the one hand, this can help in the curation of reconstructed metabolic networks by verifying whether the coupling between reactions is in agreement with the experimental findings. On the other hand, FCA can aid in defining intervention strategies to knock out target reactions.
Results: We present F2C2, a new method for FCA, which is orders of magnitude faster than previous approaches. As a consequence, FCA of genome-scale metabolic networks can now be performed in a routine manner.
Conclusions: We propose F2C2 as a fast tool for the computation of flux coupling in genome-scale metabolic networks. F2C2 is freely available for non-commercial use at https://sourceforge.net/projects/f2c2/files/.
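To illustrate the underlying notion of flux coupling (not the F2C2 algorithm itself, which is designed precisely to avoid exhaustive pairwise linear programs), a naive LP-based check of directional coupling on a toy network could look as follows; the stoichiometric matrix and bounds are assumptions.

```python
# Naive illustration of flux coupling (not the F2C2 algorithm): reaction i is
# directionally coupled to reaction j if v_j = 0 forces v_i = 0 for every
# steady-state flux distribution S v = 0, lb <= v <= ub.
import numpy as np
from scipy.optimize import linprog

# toy stoichiometric matrix: uptake -> A -> B -> C -> export
S = np.array([
    [ 1, -1,  0,  0],   # metabolite A
    [ 0,  1, -1,  0],   # metabolite B
    [ 0,  0,  1, -1],   # metabolite C
])
n = S.shape[1]
lb, ub = np.zeros(n), np.full(n, 10.0)     # irreversible reactions (assumption)

def max_flux(i, blocked=None):
    """Maximal v_i at steady state, optionally with reaction `blocked` forced to zero."""
    bounds = [(lb[k], 0.0 if k == blocked else ub[k]) for k in range(n)]
    c = np.zeros(n); c[i] = -1.0           # linprog minimizes, so maximize v_i via -v_i
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
    return -res.fun

for i in range(n):
    for j in range(n):
        if i != j and max_flux(i) > 1e-9 and max_flux(i, blocked=j) < 1e-9:
            print(f"R{i} is directionally coupled to R{j}")
```

In this linear chain every reaction is coupled to every other one; F2C2 obtains the same kind of coupling information for genome-scale networks far more efficiently.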
In industrialized economies such as the European countries, unemployment rates are very responsive to the business cycle, and a significant share of the unemployed stay unemployed for more than one year. To fight cyclical and long-term unemployment, countries spend significant shares of their budgets on Active Labor Market Policies (ALMP). To improve the allocation and design of ALMP, it is essential for policy makers to have reliable evidence on the effectiveness of such programs. Although the number of studies has increased during the last decades, policy makers still lack evidence on innovative programs and on specific subgroups of the labor market. Using Germany as a case study, the dissertation contributes by providing new evidence on start-up subsidies, marginal employment and programs for unemployed youth. The idea behind start-up subsidies is to encourage unemployed individuals to exit unemployment by starting their own business. Compared to traditional ALMP programs, these programs have the advantage that the participant not only escapes unemployment but may also generate additional jobs for other individuals. Considering two distinct start-up subsidy programs, the dissertation adds three substantial aspects to the literature: First, the programs are effective in improving the employment and income situation of participants compared to non-participants in the long run. Second, the analysis of effect heterogeneity reveals that the programs are particularly effective for disadvantaged groups in the labor market, such as low-educated or low-qualified individuals, and in regions with unfavorable economic conditions. Third, the analysis considers the effectiveness of start-up programs for women. Due to stronger preferences for flexible working hours and limited part-time jobs, unemployed women often face more difficulties integrating into dependent employment. It can be shown that start-up subsidy programs are very promising, as unemployed women become self-employed, which gives them more flexibility to reconcile work and family. Overall, the results suggest that the promotion of self-employment among the unemployed is a sensible strategy to fight unemployment by abolishing labor market barriers for disadvantaged groups and sustainably integrating them into the labor market. The next chapter of the dissertation considers the impact of marginal employment on the labor market outcomes of the unemployed. Unemployed individuals in Germany are allowed to earn additional income during unemployment without suffering a reduction in their unemployment benefits. This additional income is usually earned by taking up so-called marginal employment, that is, employment below a certain income level subject to reduced payroll taxes (also known as a "mini-job"). The dissertation provides an empirical evaluation of the impact of marginal employment on unemployment duration and subsequent job quality. The results suggest that being marginally employed during unemployment has no significant effect on unemployment duration but extends subsequent employment duration. Moreover, it can be shown that taking up marginal employment is particularly effective for the long-term unemployed, leading to higher job-finding probabilities and stronger job stability. It seems that mini-jobs can be an effective instrument to help long-term unemployed individuals find (stable) jobs, which is particularly interesting given the persistently high shares of long-term unemployed in European countries.
Finally, the dissertation provides an empirical evaluation of the effectiveness of ALMP programs in improving the labor market prospects of unemployed youth. Youth are generally considered a population at risk, as they have lower search skills and less work experience than adults. This results in above-average turnover rates between jobs and unemployment for youth, which are particularly sensitive to economic fluctuations. Therefore, countries spend significant resources on ALMP programs to fight youth unemployment. However, so far little is known about the effectiveness of ALMP for unemployed youth, and for Germany no comprehensive quantitative analysis exists at all. Considering seven different ALMP programs, the results show an overall positive picture with respect to post-treatment employment probabilities for all measures under scrutiny except job creation schemes. With respect to effect heterogeneity, it can be shown that almost all programs particularly improve the labor market prospects of youths with high levels of pre-treatment schooling. Furthermore, youths who are assigned to the most successful employment measures have much better characteristics in terms of their pre-treatment employment chances than non-participants. The program assignment process therefore seems to favor individuals for whom the measures are most beneficial, indicating a lack of ALMP alternatives that could benefit low-educated youths.
Growing populations, continued economic development, and limited natural resources are critical factors affecting sustainable development. These factors are particularly pertinent in developing countries, in which large parts of the population live at a subsistence level and options for sustainable development are limited. Therefore, addressing sustainable land use strategies in such contexts requires that decision makers have access to evidence-based impact assessment tools that can help in policy design and implementation. Ex-ante impact assessment is an emerging field positioned at the science-policy interface and is used to assess the potential impacts of policy while also exploring trade-offs between economic, social and environmental sustainability targets. The objective of this study was to operationalise the impact assessment of land use scenarios in the context of developing countries, which are characterised by limited data availability and quality. The Framework for Participatory Impact Assessment (FoPIA) was selected for this study because it allows for the integration of various sustainability dimensions, the handling of complexity, and the incorporation of local stakeholder perceptions. FoPIA, which was originally developed for the European context, was adapted to the conditions of developing countries, and its implementation was demonstrated in five selected case studies. In each case study, different land use options were assessed, including (i) alternative spatial planning policies aimed at the controlled expansion of rural-urban development in the Yogyakarta region (Indonesia), (ii) the expansion of soil and water conservation measures in the Oum Zessar watershed (Tunisia), (iii) the use of land conversion and the afforestation of agricultural areas to reduce soil erosion in Guyuan district (China), (iv) agricultural intensification and the potential for organic agriculture in Bijapur district (India), and (v) land division and privatisation in Narok district (Kenya). The FoPIA method was effectively adapted by dividing the assessment into three conceptual steps: (i) scenario development; (ii) specification of the sustainability context; and (iii) scenario impact assessment. A new methodological approach was developed for communicating alternative land use scenarios to local stakeholders and experts and for identifying recommendations for future land use strategies. Stakeholder and expert knowledge was used as the main source of information for the impact assessment and was complemented by available quantitative data. Based on the findings from the five case studies, FoPIA was found to be suitable for implementing impact assessments at the case study level while ensuring a high level of transparency. FoPIA supports the identification of causal relationships underlying regional land use problems, facilitates communication among stakeholders and illustrates the effects of alternative decision options with respect to all three dimensions of sustainable development. Overall, FoPIA is an appropriate tool for performing preliminary assessments but cannot replace a comprehensive quantitative impact assessment; whenever possible, it should be accompanied by evidence from monitoring data or analytical tools. When using FoPIA for a policy-oriented impact assessment, it is recommended that the process follow an integrated, complementary approach that combines quantitative models, scenario techniques, and participatory methods.
During reading, saccadic eye movements are generated to shift words into the center of the visual field for lexical processing. Recently, Krugel and Engbert (Vision Research 50:1532-1539, 2010) demonstrated that within-word fixation positions are largely shifted to the left after skipped words. However, explanations of the origin of this effect cannot be drawn from normal reading data alone. Here we show that the large effect of skipped words on the distribution of within-word fixation positions is primarily based on rather subtle differences in the low-level visual information acquired before saccades. Using arrangements of "x" letter strings, we reproduced the effect of skipped character strings in a highly controlled single-saccade task. Our results demonstrate that the effect of skipped words in reading is the signature of a general visuomotor phenomenon. Moreover, our findings extend beyond the scope of the widely accepted range-error model, which posits that within-word fixation positions in reading depend solely on the distances of target words. We expect that our results will provide critical boundary conditions for the development of visuomotor models of saccade planning during reading.
The dissertation examines the use of performance information by public managers. "Use" is conceptualized as purposeful utilization in order to steer, learn, and improve public services. The main research question is: Why do public managers use performance information? To answer this question, I systematically review the existing literature, identify research gaps and introduce the approach of my dissertation. The first part deals with manager-related variables that might affect performance information use but that have thus far been disregarded. The second part models the use of performance data by applying a theory from social psychology which is based on the assumption that this management behavior is conscious and reasoned. The third part examines the extent to which explanations of performance information use vary if other sources of "unsystematic" feedback are included in the analysis. The empirical results are based on survey data from 2011. I surveyed middle managers from eight selected divisions of all German cities with county status (n=954). To analyze the data, I used factor analysis, multiple regression analysis, and structural equation modeling. My research resulted in four major findings: 1) The use of performance information can be modeled as a reasoned behavior which is determined by the attitudes of the managers and of their immediate peers. 2) Regular users of performance data are, surprisingly, not generally inclined to analyze abstract data but rather prefer gathering information through personal interaction. 3) Managers who take ownership of performance information at an early stage in the measurement process are also more likely to use these data when they are reported to them. 4) Performance reports are only one source of information among many; public managers prefer verbal feedback from insiders and feedback from external stakeholders over systematic performance reports. The dissertation explains these findings using a deductive approach and discusses their implications for theory and practice.
Cell-level kinetic models for therapeutically relevant processes increasingly benefit the early stages of drug development. The later stages of the drug development process, however, rely on pharmacokinetic compartment models, while cell-level dynamics are typically neglected. Here we present a systematic approach to integrating cell-level kinetic models and pharmacokinetic compartment models. Incorporating target dynamics into pharmacokinetic models is especially useful for the development of therapeutic antibodies, because their effect and their pharmacokinetics are inherently interdependent. The approach is illustrated by analysing the F(ab)-mediated inhibitory effect of therapeutic antibodies targeting the epidermal growth factor receptor (EGFR). We build a multi-level model for anti-EGFR antibodies by combining a systems biology model with in vitro determined parameters and a pharmacokinetic model based on in vivo pharmacokinetic data. Using this model, we investigated in silico the impact of the biochemical properties of anti-EGFR antibodies on their F(ab)-mediated inhibitory effect. The multi-level model suggests that the F(ab)-mediated inhibitory effect saturates with increasing drug-receptor affinity, thereby limiting the impact of increasing antibody affinity on improving the effect. This indicates that observed differences in the therapeutic effects of high-affinity antibodies on the market and in clinical development may result mainly from Fc-mediated indirect mechanisms such as antibody-dependent cellular cytotoxicity.
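As a rough, hedged sketch of the kind of coupling described above (not the published model), a one-compartment antibody pharmacokinetic model can be tied to simple receptor binding and complex internalization, i.e. a crude target-mediated drug disposition scheme; all parameters below are illustrative assumptions.

```python
# Minimal sketch with assumed, illustrative parameters (not the study's model):
# one-compartment antibody PK coupled to cell-level receptor binding.
import numpy as np
from scipy.integrate import solve_ivp

k_el  = 0.05   # 1/h, linear antibody elimination          (assumption)
k_on  = 0.1    # 1/(nM*h), association rate                 (assumption)
k_off = 0.01   # 1/h, dissociation rate                     (assumption)
k_syn = 1.0    # nM/h, receptor synthesis                   (assumption)
k_deg = 0.1    # 1/h, free-receptor degradation             (assumption)
k_int = 0.2    # 1/h, internalization of the complex        (assumption)

def rhs(t, y):
    drug, receptor, cmplx = y
    bind = k_on * drug * receptor - k_off * cmplx
    return [-k_el * drug - bind,               # free antibody
            k_syn - k_deg * receptor - bind,   # free receptor
            bind - k_int * cmplx]              # drug-receptor complex

y0 = [100.0, k_syn / k_deg, 0.0]               # IV bolus of 100 nM, receptor at baseline
sol = solve_ivp(rhs, (0, 240), y0, dense_output=True)

t = np.linspace(0, 240, 5)
print("free receptor fraction:", sol.sol(t)[1] / (k_syn / k_deg))
```

The free-receptor fraction over time plays the role of the F(ab)-mediated inhibitory effect in such a scheme, and sweeping the affinity (k_on/k_off) shows the kind of saturation behaviour discussed above.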
One of the key challenges in service-oriented systems engineering is the prediction and assurance of non-functional properties, such as the reliability and the availability of composite interorganizational services. Such systems are often characterized by a variety of inherent uncertainties, which must be addressed in the modeling and the analysis approach. The relevant types of uncertainty can be categorized into (1) epistemic uncertainties due to incomplete knowledge and (2) randomization as explicitly used in protocols or as a result of physical processes. In this report, we study a probabilistic timed model which allows us to quantitatively reason about non-functional properties for a restricted class of service-oriented real-time systems using formal methods. To properly motivate the choice of the approach, we devise a requirements catalogue for the modeling and analysis of probabilistic real-time systems with uncertainties and provide evidence that uncertainties of types (1) and (2) in the targeted systems have a major impact on the models used and require distinct analysis approaches. The formal model we use in this report is that of Interval Probabilistic Timed Automata (IPTA). Based on the outlined requirements, we give evidence that this model provides enough expressiveness for a realistic and modular specification of the targeted class of systems, as well as suitable formal methods for analyzing properties, such as safety and reliability, in a quantitative manner. As the technical means for the quantitative analysis, we build on probabilistic model checking, specifically on probabilistic time-bounded reachability analysis and the computation of expected reachability rewards and costs. To carry out the quantitative analysis using probabilistic model checking, we developed an extension of the Prism tool for modeling and analyzing IPTA. Our extension of Prism introduces a means for modeling probabilistic uncertainty in the form of probability intervals, as required for IPTA. For analyzing IPTA, our Prism extension moreover adds support for probabilistic reachability checking and the computation of expected rewards and costs. We discuss the performance of our extended version of Prism and compare the interval-based IPTA approach to models with fixed probabilities.
This study is based on an editorial report which was presented at the 2009 working conference »Alexander von Humboldt and the Hemisphere« at Vanderbilt University (Nashville, TN). It demonstrates the textual genesis of Humboldt's writings on Cuba through examples obtained from a detailed comparison of the three existing »original« versions of Humboldt's Essai politique sur l'île de Cuba. The collation was part of a larger strategy to regain philological ground for the »Humboldt in English« (HiE) project. Since 2007, and funded by grants from the National Endowment for the Humanities, the Alexander von Humboldt Foundation, and the Gerda Henkel Foundation, the US-German research team behind HiE has been working on new and unabridged translations and critical editions of three of Humboldt's most significant texts from his American oeuvre. The following observations outline the most important results of this collation effort as a complementary contribution to the recent release of the HiE project's first volume, The Political Essay on the Island of Cuba (University of Chicago Press, 2011), edited by Vera M. Kutzinski and Ottmar Ette.
We study maximal subsemigroups of the monoid T(X) of all full transformations on the set X = N of natural numbers containing a given subsemigroup W of T(X), where each element of a given set U is a generator of T(X) modulo W. This note continues the study of maximal subsemigroups of the monoid of all full transformations on an infinite set.
We analyze a general class of difference operators containing a multi-well potential and a small parameter. We decouple the wells by introducing certain Dirichlet operators on regions containing only one potential well, and we treat the eigenvalue problem as a small perturbation of these comparison problems. We describe tunneling by a certain interaction matrix similar to the analysis for the Schrödinger operator, and estimate the remainder, which is exponentially small and roughly quadratic compared with the interaction matrix.
In the limit we analyze the generators of families of reversible jump processes in n-dimensional space associated with a class of symmetric non-local Dirichlet forms and show exponential decay of the eigenfunctions. The exponential rate function is a Finsler distance, given as the solution of a certain eikonal equation. The precise results are sensitive to whether the rate function is twice differentiable or merely Lipschitz. Our estimates are similar to the semiclassical Agmon estimates for differential operators of second order. They generalize and strengthen previous results on the lattice.
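For orientation, the classical semiclassical Agmon estimate for the Schrödinger operator -h^2\Delta + V to which the comparison refers has the schematic form (standard background, not a result of this work):

|u_h(x)| \le C_\varepsilon \, e^{-(1-\varepsilon)\, d_E(x)/h}, \qquad d_E(x) = \inf_{\gamma:\, x_0 \to x} \int_0^1 \sqrt{\bigl(V(\gamma(t)) - E\bigr)_+}\; |\dot\gamma(t)|\, dt,

where u_h is an eigenfunction with eigenvalue near E, x_0 lies in the potential well, and d_E is the Agmon distance in the degenerate metric (V - E)_+ dx^2. In the non-local setting described above, this Euclidean Agmon distance is replaced by the Finsler distance obtained from the eikonal equation.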
Dynamic regulatory on/off minimization for biological systems under internal temporal perturbations
(2012)
Background: Flux balance analysis (FBA), together with its extension, dynamic FBA, has proven instrumental for analyzing the robustness and dynamics of metabolic networks, employing only the stoichiometry of the included reactions coupled with an adequately chosen objective function. In addition, under the assumption of minimization of metabolic adjustment, dynamic FBA has recently been employed to analyze the transition between metabolic states.
Results: Here, we propose a suite of novel methods for analyzing the dynamics of (internally perturbed) metabolic networks and for quantifying their robustness with limited knowledge of kinetic parameters. Following the biochemically meaningful premise that metabolite concentrations exhibit smooth temporal changes, the proposed methods rely on minimizing the significant fluctuations of metabolic profiles to predict the time-resolved metabolic state, characterized by both fluxes and concentrations. By conducting a comparative analysis with a kinetic model of the Calvin-Benson cycle and a model of plant carbohydrate metabolism, we demonstrate that the principle of regulatory on/off minimization coupled with dynamic FBA can accurately predict the changes in metabolic states.
Conclusions: Our methods outperform the existing dynamic FBA-based modeling alternatives, and could help in revealing the mechanisms for maintaining robustness of dynamic processes in metabolic networks over time.
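For readers unfamiliar with the underlying machinery, plain FBA is a linear program; the dynamic variants discussed above add time discretization and smoothness or on/off constraints on top of it. A minimal, self-contained sketch follows; the toy network, bounds and objective are assumptions, not the models analyzed in the study.

```python
# Building block only (not the regulatory on/off minimization proposed in the study):
# plain flux balance analysis as a linear program, max c^T v s.t. S v = 0, lb <= v <= ub.
import numpy as np
from scipy.optimize import linprog

S = np.array([
    [ 1, -1, -1,  0],   # metabolite A: produced by uptake, consumed by R1 and R2
    [ 0,  1,  0, -1],   # metabolite B: produced by R1, consumed by export (biomass proxy)
])
lb = np.zeros(4)
ub = np.full(4, 10.0)
c = np.array([0, 0, 0, 1.0])                 # maximize export of B

res = linprog(-c, A_eq=S, b_eq=np.zeros(2),   # linprog minimizes, so negate the objective
              bounds=list(zip(lb, ub)), method="highs")
print("optimal flux distribution:", res.x, " objective:", -res.fun)
```

Dynamic FBA repeats such an optimization over successive time steps, updating concentrations from the fluxes, which is the setting in which the smoothness-based methods above operate.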
In the context of cosmological structure formation, sheets, filaments and eventually halos form due to gravitational instabilities. It is noteworthy that, at all times, the majority of the baryons in the universe does not reside in the dense halos but in the filaments and sheets of the intergalactic medium. While at higher redshifts of z > 2 these baryons can be detected via the absorption of light (originating from more distant sources) by neutral hydrogen at temperatures of T ~ 10^4 K (the Lyman-alpha forest), at lower redshifts only about 20% can be found in this state. The remainder (about 50 to 70% of the total baryon mass) is unaccounted for by observational means. Numerical simulations predict that these missing baryons could reside in the filaments and sheets of the cosmic web at high temperatures of T = 10^4.5 - 10^7 K, but only at low to intermediate densities, constituting the warm-hot intergalactic medium (WHIM). The high temperatures of the WHIM are caused by the formation of shocks and the subsequent shock-heating of the gas. This results in a high degree of ionization and renders the reliable detection of the WHIM a challenging task. Recent high-resolution hydrodynamical simulations indicate that, at redshifts of z ~ 2, filaments are able to provide very massive galaxies with a significant amount of cool gas at temperatures of T ~ 10^4 K. This could have an important impact on star formation in those galaxies. It is therefore of principal importance to investigate the particular hydro- and thermodynamical conditions of these large filament structures. Density and temperature profiles, as well as velocity fields, are expected to leave their specific imprint on spectroscopic observations. A potential multiphase structure may act as a tracer in observational studies of the WHIM. In the context of cold streams, it is important to explore the processes which regulate the amount of gas transported by the streams. This includes the time evolution of filaments, as well as possible quenching mechanisms. In this context, the halo mass range in which cold stream accretion occurs is of particular interest. In order to address these questions, we perform dedicated hydrodynamical simulations of very high resolution and investigate the formation and evolution of prototype structures representing the typical filaments and sheets of the WHIM. We start with a comprehensive study of the one-dimensional collapse of a sinusoidal density perturbation (pancake formation) and examine the influence of radiative cooling, heating due to a UV background, thermal conduction, and the effect of small-scale perturbations given by the cosmological power spectrum. We use a set of simulations parametrized by the wavelength of the initial perturbation L. For L ~ 2 Mpc/h the collapse leads to shock-confined structures. As a result of radiative cooling and of heating due to a UV background, a relatively cold and dense core forms. With increasing L the core becomes denser and more concentrated. Thermal conduction enhances this trend and may lead to an evaporation of the core at very large L ~ 30 Mpc/h. When extending our simulations to three dimensions, we obtain, instead of a pancake structure, a configuration consisting of well-defined sheets, filaments, and a gaseous halo. For L > 4 Mpc/h filaments form which are fully confined by an accretion shock. As with the one-dimensional pancakes, they exhibit an isothermal core.
Thus, our results confirm a multiphase structure, which may generate particular spectral tracers. We find that, after its formation, the core becomes shielded against further infall of gas onto the filament, and its mass content decreases with time. In the vicinity of the halo, the filament's core can be attributed to the cold streams found in other studies. We show that the basic structure of these cold streams exists from the very beginning of the collapse process. Furthermore, the cross section of the streams is constricted by the outward-moving accretion shock of the halo. Thermal conduction leads to a complete evaporation of the cold stream for L > 6 Mpc/h. This corresponds to halos with a total mass higher than M_halo = 10^13 M_sun and predicts that in more massive halos star formation cannot be sustained by cold streams. Far away from the gaseous halo, the temperature gradients in the filament are not strong enough for thermal conduction to be effective.
Much previous experimental research on morphological processing has focused on surface and meaning-level properties of morphologically complex words, without paying much attention to the morphological differences between inflectional and derivational processes. Realization-based theories of morphology, for example, assume specific morpholexical representations for derived words that distinguish them from the products of inflectional or paradigmatic processes. The present study reports results from a series of masked priming experiments investigating the processing of inflectional and derivational phenomena in native (L1) and non-native (L2) speakers in a non-Indo-European language, Turkish. We specifically compared regular (Aorist) verb inflection with deadjectival nominalization, both of which are highly frequent, productive and transparent in Turkish. The experiments demonstrated different priming patterns for inflection and derivation, specifically within the L2 group. Implications of these findings are discussed both for accounts of L2 morphological processing and for the controversial linguistic distinction between inflection and derivation.
In the course of this thesis, gold nanoparticle/polyelectrolyte multilayer structures were prepared, characterized, and investigated with respect to their static and ultrafast optical properties. Using the dip-coating or spin-coating layer-by-layer deposition method, gold-nanoparticle layers were embedded in a polyelectrolyte environment with high structural perfection. Typical structures exhibit four repetition units, each consisting of one gold-particle layer and ten double layers of polyelectrolyte (cationic + anionic polyelectrolyte). The structures were characterized by X-ray reflectivity measurements, which reveal Bragg peaks up to the seventh order, evidencing the high stratification of the particle layers. In the same measurements, pronounced Kiessig fringes were observed, which indicate a low global roughness of the samples. Atomic force microscopy (AFM) images verified this low roughness, which results from the high smoothing capability of polyelectrolyte layers. This smoothing effect facilitates the fabrication of stratified nanoparticle/polyelectrolyte multilayer structures, which is nicely illustrated by a transmission electron microscopy image. The samples' optical properties were investigated by static spectroscopic measurements in the visible and UV range. The measurements revealed a frequency shift of the reflectance and of the plasmon absorption band, depending on the thickness of the polyelectrolyte layers that cover a nanoparticle layer. When the covering layer becomes thicker than the particle interaction range, the absorption spectrum becomes independent of the polymer thickness; however, the reflectance spectrum continues shifting to lower frequencies even for large thicknesses. The range of plasmon interaction was determined to be on the order of the particle diameter for 10 nm, 20 nm, and 150 nm particles. The transient broadband complex dielectric function of a multilayer structure was determined experimentally by ultrafast pump-probe spectroscopy. This was achieved by simultaneous measurements of the changes in the reflectance and transmittance of the excited sample over a broad spectral range. The changes in the real and imaginary parts of the dielectric function were deduced directly from the measured data by using a recursive formalism based on the Fresnel equations. This method can be applied to a broad range of nanoparticle systems for which experimental data on the transient dielectric response are rare. This complete experimental approach serves as a test ground for modeling the dielectric function of a nanoparticle compound structure upon laser excitation.
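The forward problem behind such a Fresnel-based recursive inversion is the standard transfer-matrix calculation of the reflectance and transmittance of a layered stack. The generic textbook sketch below illustrates this at normal incidence; the layer indices and thicknesses are placeholders, not the sample's actual values, and the thesis's exact recursive formalism may differ in detail.

```python
# Standard thin-film transfer-matrix calculation of reflectance/transmittance at
# normal incidence (generic forward model, not the thesis's exact formalism).
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of a homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / wavelength
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance_transmittance(n_layers, d_layers, n_in, n_out, wavelength):
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        M = M @ layer_matrix(n, d, wavelength)
    (m11, m12), (m21, m22) = M
    denom = n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22
    r = (n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) / denom
    t = 2 * n_in / denom
    R = np.abs(r) ** 2
    T = (n_out.real / n_in.real) * np.abs(t) ** 2
    return R, T

# placeholder stack: polyelectrolyte spacer / effective nanoparticle layer / spacer, on glass
n_layers = [1.5 + 0j, 1.6 + 0.4j, 1.5 + 0j]   # refractive indices (assumptions)
d_layers = [20e-9, 12e-9, 20e-9]              # thicknesses in metres (assumptions)
print(reflectance_transmittance(n_layers, d_layers, 1.0 + 0j, 1.52 + 0j, 550e-9))
```

Inverting measured changes in R and T for the complex index of the excited nanoparticle layer, wavelength by wavelength, is the kind of step that the recursive Fresnel formalism mentioned above performs.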
We say that (weak/strong) time duality holds for continuous time quasi-birth-and-death-processes if, starting from a fixed level, the first hitting time of the next upper level and the first hitting time of the next lower level have the same distribution. We present here a criterion for time duality in the case where transitions from one level to another have to pass through a given single state, the so-called bottleneck property. We also prove that a weaker form of reversibility called balanced under permutation is sufficient for the time duality to hold. We then discuss the general case.
In the western hemisphere, the piano is one of the most important instruments. Although its evolution spans more than three centuries and the most important physical aspects have already been investigated, some aspects of the characterization of the piano remain poorly understood. Regarding the pivotal piano soundboard, the effect that ribs mounted on the board exert on sound radiation and propagation in particular is mostly neglected in the literature. The present investigation deals exactly with the sound wave propagation effects that emerge in the presence of an array of equally spaced ribs mounted on a soundboard. Solid-state theory proposes particular eigenmodes and eigenfrequencies for such arrangements, which are comparable to single units in a crystal. Following this 'linear chain model' (LCM), differences in the frequency spectrum are observable as a distinct band structure. Also, the amplitudes of the modes are changed due to differences in the damping factor. These scattering effects were investigated not only for a well-understood conceptual rectangular soundboard (multichord), but also for a genuine piano resonance board manufactured by the piano manufacturer 'C. Bechstein Pianofortefabrik'. To make it possible to distinguish the characteristic spectra with and without mounted ribs, the typical assembly plan for the Bechstein instrument was specially customized. Spectral similarities and differences between both boards are found in terms of damping and tone. Furthermore, specially prepared minimally invasive piezoelectric polymer sensors made from polyvinylidene fluoride (PVDF) were used to record solid-state vibrations of the investigated system. The essential calibration and characterization of these polymer sensors was performed by determining the electromechanical conversion, which is represented by the piezoelectric coefficient. For this purpose, the robust 'sinusoidally varying external force' method was applied, in which a dynamic force perpendicular to the sensor's surface generates movable charge carriers. Crucial parameters were monitored, with the frequency response function as the most important one for acousticians. Along with conventional condenser microphones, the sound was measured both as solid-state vibration and as airborne waves. On this basis, statements can be made about the emergence, propagation, and overall radiation of the generated modes of the vibrating system. Ultimately, these results acoustically characterize the entire system.
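For orientation, the band structure invoked by the linear chain model follows from the textbook dispersion relation of a monatomic chain of masses m coupled by springs of stiffness K at spacing a (standard solid-state background, not the soundboard model itself):

\omega(k) = 2\sqrt{\frac{K}{m}}\,\left|\sin\!\left(\frac{k a}{2}\right)\right|,

where the finite number of units and the boundary conditions restrict the wavenumber k to a discrete set, which is why the spectrum splits into the band-like structure described above.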
The distinctness of, and overlap between, pea genotypes held in several Pisum germplasm collections have been used to determine their relatedness and to test previous ideas about the genetic diversity of Pisum. Our characterisation of genetic diversity among 4,538 Pisum accessions held in 7 European genebanks has identified sources of novel genetic variation, and both reinforces and refines previous interpretations of the overall structure of genetic diversity in Pisum. Molecular marker analysis was based upon the presence/absence polymorphism of retrotransposon insertions scored by a high-throughput microarray and by SSAP approaches. We conclude that the diversity of Pisum constitutes a broad continuum, with graded differentiation into sub-populations which display various degrees of distinctness. The most distinct genetic groups correspond to the named taxa, while the cultivars and landraces of Pisum sativum can be divided into two broad types, one of which is strongly enriched for modern cultivars. The addition of germplasm sets from six European genebanks, chosen to represent high diversity, to a single collection previously studied with these markers resulted in modest additions to the overall diversity observed, suggesting that the great majority of the total genetic diversity collected for the Pisum genus has now been described. Two interesting sources of novel genetic variation have been identified. Finally, we propose reference sets of core accessions with a range of sample sizes to represent Pisum diversity for future study and exploitation by researchers and breeders.
Porous materials (e.g. zeolites, activated carbon, etc.) have found various applications in industry, such as sorbents, catalyst supports and membranes for separation processes. Recently, much attention has been focused on synthesizing porous polymer materials, and a vast number of tailor-made polymeric systems with tunable properties have been investigated. Very often, however, the starting substances for these polymers are of petrochemical origin, and the processes are, all in all, not sustainable. Moreover, the new polymers have challenged existing characterization methodologies, which have to be further developed to address the demands of the novel materials. Some standard techniques for the analysis of porous substances, such as nitrogen sorption at 77 K, do not seem to be sufficient to answer all arising questions about the microstructure of such materials. In this thesis, microporous polymers from an abundant natural resource, betulin, are presented. Betulin is a large-scale byproduct of the wood industry, and its content in birch bark can reach 30 wt.%. Based on its rigid structure, polymer networks with intrinsic microporosity could be synthesized and characterized. Apart from standard nitrogen and carbon dioxide sorption at 77 K and 273 K, respectively, gas sorption was examined not only with various gases (hydrogen and argon) but also at various temperatures. Additional techniques such as X-ray scattering and xenon NMR were utilized to gain insight into the microporous structure of the material. Starting from insoluble polymer networks with promising gas selectivities, soluble polyesters were synthesized and processed into cast films. Such materials are suitable for membrane applications in gas separation. Betulin as a starting compound for polyester synthesis has made it possible to prepare and, for the first time, to thoroughly analyse a microporous polyester with respect to its pores and microstructure. It was established that nitrogen adsorption at 87 K can be a better method for resolving the microstructure of the material. In addition, other betulin-based polymers such as polyurethanes and polyethylene glycol bioconjugates are presented. Altogether, it has been shown that, as an abundant natural resource, betulin is a suitable and cheap starting compound for polymers with various potential applications.
Soil conditions under vegetation cover and their spatial and temporal variations from the point to the catchment scale are crucial for understanding hydrological processes within the vadose zone, for managing irrigation and, consequently, for maximizing yield by precision farming. Soil moisture and soil roughness are the key parameters that characterize the soil status. In order to monitor their spatial and temporal variability on large scales, remote sensing techniques are required. Therefore, the determination of soil parameters under vegetation cover was approached in this thesis by means of (multi-angular) polarimetric SAR acquisitions at a longer wavelength (L-band, lambda = 23 cm). In this thesis, the penetration capabilities of L-band are combined with newly developed (multi-angular) polarimetric decomposition techniques to separate the different scattering contributions occurring in the vegetation and on the ground. Subsequently, the ground components are inverted to estimate the soil characteristics. The novel (multi-angular) polarimetric decomposition techniques for soil parameter retrieval are physically based, computationally inexpensive and can be solved analytically without any a priori knowledge. Therefore, they can be applied directly to agricultural areas without test-site calibration. The developed algorithms are validated with fully polarimetric SAR data acquired by the airborne E-SAR sensor of the German Aerospace Center (DLR) for three different study areas in Germany. The achieved results reveal inversion rates of up to 99% for the soil moisture and soil roughness retrieval in agricultural areas. However, in forested areas the inversion rate drops significantly for most of the algorithms, because the applied scattering models are not valid for forests at L-band. The validation against simultaneously acquired field measurements indicates an estimation accuracy (root mean square error) of 5-10 vol.% for the soil moisture (range of in situ values: 1-46 vol.%) and of 0.37-0.45 cm for the soil roughness (range of in situ values: 0.5-4.0 cm) within the catchment. Hence, a continuous monitoring of soil parameters with the obtained precision, excluding frozen and snow-covered conditions, is possible. Especially future fully polarimetric, space-borne, long-wavelength SAR missions can profit distinctly from the developed polarimetric decomposition techniques for the separation of ground and volume contributions as well as for soil parameter retrieval on large spatial scales.
Modelling of environmental change impacts on water resources and hydrological extremes in Germany
(2012)
Water resources, in terms of both quantity and quality, are significantly influenced by environmental changes, especially by climate and land use changes. The main objective of the present study is to project climate change impacts on the seasonal dynamics of water fluxes, on spatial changes in water balance components, and on future flood and low flow conditions in Germany. This study is based, on the one hand, on the modeling results of the process-based eco-hydrological model SWIM (Soil and Water Integrated Model) driven by various regional climate scenarios. On the other hand, it is supported by statistical analysis of long-term trends of observed and simulated time series. In addition, this study evaluates the impacts of potential land use changes on water quality in terms of NO3-N load in selected sub-regions of the Elbe basin. In the context of climate change, the actual evapotranspiration is likely to increase in most parts of Germany, while total runoff generation may decrease in the southern and eastern regions in the scenario period 2051-2060. Water discharge in all six studied large rivers (Ems, Weser, Saale, Danube, Main and Neckar) would be 8-30% lower in summer and autumn compared to the reference period (1961-1990), and the strongest decline is expected for the Saale, Danube and Neckar. The 50-year low flow is likely to occur more frequently in western, southern and central Germany after 2061, as suggested by more than 80% of the model runs. The current low flow period (from August to September) may be extended until late autumn by the end of this century. Higher winter flow is expected in all of these rivers, and the increase is most significant for the Ems (about 18%). No general pattern in the direction of changes in floods can be concluded from the results driven by different RCMs, emission scenarios and multi-realizations. Optimal agricultural land use and management are essential for the reduction of nutrient loads and the improvement of water quality. In the Weiße Elster and Unstrut sub-basins (Elbe), an increase of 10% in the winter rape area can result in a 12-19% higher NO3-N load in rivers. In contrast, another energy crop, maize, has a moderate effect on the water environment. Mineral fertilizers have a much stronger effect on the NO3-N load than organic fertilizers. Cover crops, which play an important role in the reduction of nitrate losses from fields, should be maintained on cropland. The uncertainty in estimating future high flows and, in particular, extreme floods remains high due to different RCM structures, emission scenarios and multi-realizations. In contrast, the projection of low flows under warmer climate conditions appears to be more pronounced and consistent. The largest source of uncertainty related to the NO3-N modelling originates from the input data on agricultural management.
Background: The detection of immunogenic proteins remains an important task in the life sciences, as it nourishes the understanding of pathogenicity, illuminates new potential vaccine candidates and broadens the spectrum of biomarkers applicable in diagnostic tools. Traditionally, immunoscreenings of expression libraries with polyclonal sera on nitrocellulose membranes or screenings of whole proteome lysates by 2-D gel electrophoresis are performed. However, these methods suffer from some rather inconvenient disadvantages. Screening of expression libraries to expose novel antigens from bacteria often leads to an abundance of false positive signals owing to the high cross-reactivity of polyclonal antibodies towards the proteins of the expression host. A method is presented that overcomes many disadvantages of the old procedures.
Results: Four proteins that had previously been described as immunogenic were successfully confirmed as immunogenic with our method. One protein with no previously known immunogenic behaviour showed potential immunogenicity. We fused a tag to our genes of interest and attached the expressed fusion proteins covalently to microarrays. This enhances the specific binding of the proteins compared to nitrocellulose and thus helps to reduce the number of false positives significantly. It enables us to screen for immunogenic proteins in a shorter time, with more samples and with statistical reliability. We validated our method using several known genes from Campylobacter jejuni NCTC 11168.
Conclusions: The method presented offers a new approach for screening bacterial expression libraries to illuminate novel proteins with immunogenic features. It could provide a powerful and attractive alternative to existing methods and help to detect and identify vaccine candidates, biomarkers and potential virulence-associated factors with immunogenic behaviour, furthering the knowledge of virulence and pathogenicity of the studied bacteria.
Background
High blood glucose and diabetes are amongst the conditions causing the greatest losses in years of healthy life worldwide. Therefore, numerous studies aim to identify reliable risk markers for the development of impaired glucose metabolism and type 2 diabetes. However, the molecular basis of impaired glucose metabolism is so far insufficiently understood. The development of so-called 'omics' approaches in recent years promises to identify molecular markers and to further the understanding of the molecular basis of impaired glucose metabolism and type 2 diabetes. Although univariate statistical approaches are often applied, we demonstrate here that the application of multivariate statistical approaches is highly recommended to fully capture the complexity of data gained using high-throughput methods.
Methods
We took blood plasma samples from 172 subjects who participated in the prospective Metabolic Syndrome Berlin Potsdam follow-up study (MESY-BEPO Follow-up). We analysed these samples using gas chromatography coupled with mass spectrometry (GC-MS) and measured 286 metabolites. Furthermore, fasting glucose levels were measured using standard methods at baseline and after an average of six years. We performed correlation analyses and built linear regression models as well as Random Forest regression models to identify metabolites that predict the development of fasting glucose in our cohort.
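For illustration, the following minimal sketch (not the authors' exact pipeline) shows how a Random Forest regression with tenfold cross-validation of the kind described above can be set up with scikit-learn; the data arrays are synthetic placeholders, and only the cohort dimensions (172 subjects, 286 metabolites) are taken from the abstract.

```python
# Minimal sketch (not the authors' exact pipeline): predicting a continuous
# outcome (e.g. follow-up fasting glucose) from a metabolite matrix with
# Random Forest regression and ten-fold cross-validation using scikit-learn.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_metabolites = 172, 286                # sizes taken from the abstract
X = rng.normal(size=(n_subjects, n_metabolites))    # metabolite intensities (placeholder)
y = rng.normal(size=n_subjects)                     # fasting glucose outcome (placeholder)

model = RandomForestRegressor(n_estimators=500, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print("mean cross-validated R^2:", scores.mean())
```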
Results
We found a metabolic pattern consisting of nine metabolites that predicted fasting glucose development with an accuracy of 0.47 in tenfold cross-validation using Random Forest regression. We also showed that adding established risk markers did not improve the model accuracy. However, external validation remains desirable. Although not all metabolites belonging to the final pattern have been identified yet, the pattern directs attention to amino acid metabolism, energy metabolism and redox homeostasis.
Conclusions
We demonstrate that metabolites identified using a high-throughput method (GC-MS) perform well in predicting the development of fasting plasma glucose over several years. Notably, it is not a single metabolite but a complex pattern of metabolites that drives the prediction, thereby reflecting the complexity of the underlying molecular mechanisms. This result could only be captured by the application of multivariate statistical approaches. We therefore highly recommend the use of statistical methods that capture the complexity of the information provided by high-throughput methods.
Ten polyQ (polyglutamine) diseases constitute a group of hereditary, neurodegenerative, lethal disorders, characterized by neuronal loss and motor and cognitive impairments. The only common molecular feature of polyQ disease-associated proteins is the homopolymeric polyglutamine repeat. The pathological expansion of the polyQ tract invariably leads to protein misfolding and aggregation, resulting in the formation of fibrillar intraneuronal deposits (aggregates) of the disease protein. The polyQ-related cellular toxicity is currently attributed to early, small, soluble aggregate species (oligomers), whereas end-stage, fibrillar, insoluble aggregates are considered to be benign. In the complex cellular environment, aggregation and toxicity of mutant polyQ proteins can be affected by both the sequence of the corresponding disease protein (factors acting in cis) and the cellular environment (factors acting in trans). Additionally, the nucleus has been suggested to be the primary site of toxicity in polyQ-based neurodegeneration. In this study, the dynamics and structure of nuclear and cytoplasmic inclusions were examined to determine the intrinsic and extrinsic factors influencing the cellular aggregation of atrophin-1, a protein implicated in the pathology of dentatorubral-pallidoluysian atrophy (DRPLA), a polyQ-based disease with complex clinical features. Dynamic imaging, combined with biochemical and biophysical approaches, revealed a large heterogeneity in the dynamics of atrophin-1 within the nuclear inclusions compared with the compact and immobile cytoplasmic aggregates. At least two types of inclusions of polyQ-expanded atrophin-1, with different mobility of the molecular species and different ability to exchange with the surrounding monomer pool, coexist in the nucleus of the model cell system, neuroblastoma N2a cells. Furthermore, our novel cross-seeding approach, which allows the architecture of the aggregate core to be monitored directly in the cell, revealed an evolution of the aggregate core of polyQ-expanded ATN1 from one composed of the sequences flanking the polyQ domain at early aggregation phases to one dominated by the polyQ stretch in the later aggregation phase. Intriguingly, these changes in the aggregate core architecture of nuclear and cytoplasmic inclusions mirrored the changes in the protein dynamics and physico-chemical properties of the aggregates over the aggregation time course. 2D-gel analyses followed by MALDI-TOF MS (matrix-assisted laser desorption/ionization time of flight mass spectrometry) were used to detect alterations in the interaction partners of the pathological ATN1 variant compared to the non-pathological ATN1. Based on these results, we propose that the observed complexity in the dynamics of the nuclear inclusions provides a molecular explanation for the enhanced cellular toxicity of the nuclear aggregates in polyQ-based neurodegeneration.
Structural dynamics of photoexcited nanolayered perovskites studied by ultrafast x-ray diffraction
(2012)
This publication-based thesis represents a contribution to the active research field of ultrafast structural dynamics in laser-excited nanostructures. The investigation of such dynamics is mandatory for understanding the various physical processes on microscopic scales in complex materials, which hold great potential for advances in many technological applications. I theoretically and experimentally examine the coherent, incoherent and anharmonic lattice dynamics of epitaxial metal-insulator heterostructures on timescales ranging from femtoseconds up to nanoseconds. To infer information on the transient dynamics in the photoexcited crystal lattices, experimental techniques using ultrashort optical and x-ray pulses are employed. The experimental setups include table-top sources as well as large-scale facilities such as synchrotron sources. At the core of my work lies the development of a linear-chain model to simulate and analyze the photoexcited atomic-scale dynamics. The calculated strain fields are then used to simulate the optical and x-ray response of the considered thin films and multilayers in order to relate the experimental signatures to particular structural processes. In this way, one obtains insight into the rich lattice dynamics exhibiting coherent transport of vibrational energy from local excitations via delocalized phonon modes of the samples. The complex deformations in tailored multilayers are identified to give rise to highly nonlinear x-ray diffraction responses due to transient interference effects. The understanding of such effects and the ability to precisely calculate them are exploited for the design of novel ultrafast x-ray optics. In particular, I present several Phonon Bragg Switch concepts to efficiently generate ultrashort x-ray pulses for time-resolved structural investigations. By extending the numerical models to include incoherent phonon propagation and anharmonic lattice potentials, I present a new view on the fundamental research topics of nanoscale thermal transport and anharmonic phonon-phonon interactions such as nonlinear sound propagation and phonon damping. The former issue is exemplified by the time-resolved heat conduction from thin SrRuO3 films into a SrTiO3 substrate, which reveals an unexpectedly low thermal conductivity. Furthermore, I discuss various experiments which can be well reproduced by the versatile numerical models and thus evidence strong lattice anharmonicities in the perovskite oxide SrTiO3. The thesis also presents several advances of experimental techniques, such as time-resolved phonon spectroscopy with optical and x-ray photons, as well as concepts for the implementation of x-ray diffraction setups at standard synchrotron beamlines with greatly improved time resolution for investigations of ultrafast structural processes. This work forms the basis for ongoing research topics in complex oxide materials including electronic correlations and phase transitions related to the elastic, magnetic and polarization degrees of freedom.
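As an illustration of the general idea of a linear-chain model (not the specific model or parameters of the thesis), the following sketch propagates a strain wave through a chain of point masses coupled by harmonic springs after an impulsive shift of the equilibrium spacing in the "excited" cells; all values are arbitrary.

```python
# Illustrative sketch of a linear-chain (mass-spring) model of lattice dynamics:
# unit cells are point masses coupled by harmonic springs; an impulsive
# photoinduced stress shifts the equilibrium spacing of the "excited" cells,
# and the resulting strain wave is propagated with velocity-Verlet integration.
# Parameters are arbitrary illustrative values, not those of the thesis.
import numpy as np

n = 200                      # number of unit cells in the chain
m = 1.0                      # mass per cell (arbitrary units)
k = 1.0                      # spring constant (arbitrary units)
excited = np.arange(0, 40)   # cells standing in for the photoexcited metal film

eq_shift = np.zeros(n)       # photoexcitation as a sudden equilibrium expansion
eq_shift[excited] = 0.01

x = np.zeros(n)              # displacements from the unexcited equilibrium
v = np.zeros(n)
dt = 0.05

def forces(x):
    # Harmonic nearest-neighbour forces with shifted equilibria in the film
    f = np.zeros_like(x)
    for i in range(n - 1):
        stretch = (x[i + 1] - x[i]) - 0.5 * (eq_shift[i] + eq_shift[i + 1])
        f[i] += k * stretch
        f[i + 1] -= k * stretch
    return f

f = forces(x)
for step in range(2000):     # velocity-Verlet time stepping
    x += v * dt + 0.5 * (f / m) * dt ** 2
    f_new = forces(x)
    v += 0.5 * (f + f_new) / m * dt
    f = f_new

strain = np.diff(x)          # strain profile after propagation into the "substrate"
```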
This article deals with Spanish modal adverbs and verbs of cognitive attitude (Capelli 2007) and their epistemic and/or evidential use. The article is based upon the hypothesis that the study of the use of these linguistic devices has to be highly context-sensitive, as it is not always (only) the sentence level that has to be looked at if one wants to find out whether a certain adverb or verb of cognitive attitude is used evidentially or epistemically. In this article, therefore, the context is used to determine which meaning aspects of an element are encoded and which are contributed by the context. The data were retrieved from the daily newspaper El País; the present study is, however, not a quantitative but a qualitative one. My corpus analysis indicates that it is not possible to differentiate between the linguistic categories of evidentiality and epistemic modality in every case, although it is indeed possible in the vast majority of cases. For verbs of cognitive attitude, evidentiality and epistemic modality seem to be two interwoven categories, whereas for modal adverbs it is usually possible to separate the categories and to distinguish between different subtypes of evidentiality, such as visual evidence, hearsay and inference.
MDE techniques are increasingly used in practice. However, there is currently a lack of detailed reports about how different MDE techniques are integrated into development and combined with each other. To learn more about such MDE settings, we performed a descriptive and exploratory field study with SAP, a worldwide operating company with around 50,000 employees that builds enterprise software applications. This technical report describes the insights we gained during this study. For example, we identified that MDE settings are subject to evolution. Finally, this report outlines directions for future research to provide practical advice for the application of MDE settings.
Thermal and quantum fluctuations of the electromagnetic near field of atoms and macroscopic bodies play a key role in quantum electrodynamics (QED), as in the Lamb shift. They lead, e.g., to atomic level shifts, dispersion interactions (Van der Waals-Casimir-Polder interactions), and state broadening (Purcell effect) because the field is subject to boundary conditions. Such effects can be observed with high precision on the mesoscopic scale which can be accessed in micro-electro-mechanical systems (MEMS) and solid-state-based magnetic microtraps for cold atoms (‘atom chips’). A quantum field theory of atoms (molecules) and photons is adapted to nonequilibrium situations. Atoms and photons are described as fully quantized while macroscopic bodies can be included in terms of classical reflection amplitudes, similar to the scattering approach of cavity QED. The formalism is applied to the study of nonequilibrium two-body potentials. We then investigate the impact of the material properties of metals on the electromagnetic surface noise, with applications to atomic trapping in atom-chip setups and quantum computing, and on the magnetic dipole contribution to the Van der Waals-Casimir-Polder potential in and out of thermal equilibrium. In both cases, the particular properties of superconductors are of high interest. Surface-mode contributions, which dominate the near-field fluctuations, are discussed in the context of the (partial) dynamic atomic dressing after a rapid change of a system parameter and in the Casimir interaction between two conducting plates, where nonequilibrium configurations can give rise to repulsion.
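As a reference point only, the textbook retarded-limit Casimir-Polder potential of a ground-state atom at distance z from a perfectly conducting plane reads

\[ U_{\mathrm{CP}}(z) = -\,\frac{3\hbar c\,\alpha(0)}{32\pi^{2}\varepsilon_{0}\,z^{4}}, \]

where \(\alpha(0)\) is the static polarizability; the thesis goes beyond this idealization by treating real material response (including superconductors), magnetic dipole contributions and nonequilibrium situations.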
We consider the Dirichlet, Neumann and Zaremba problems for harmonic functions in a bounded plane domain with nonsmooth boundary. The boundary curve belongs to one of the following three classes: sectorial curves, logarithmic spirals and spirals of power type. To study the problem we apply a familiar method of Vekua-Muskhelishvili, which consists in using a conformal mapping of the unit disk onto the domain to pull back the problem to a boundary problem for harmonic functions in the disk. The latter is in turn reduced to a Toeplitz operator equation on the unit circle with a symbol bearing discontinuities of the second kind. We develop a constructive invertibility theory for Toeplitz operators and thus derive solvability conditions as well as explicit formulas for solutions.
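For orientation, a minimal sketch of the objects involved (the precise symbol arising from each boundary problem and the invertibility analysis are developed in the thesis): with a conformal map \(\varphi:\mathbb{D}\to\Omega\) of the unit disk onto the domain, the pulled-back harmonic function is written as \(u\circ\varphi=\operatorname{Re}f\) with \(f\) holomorphic in \(\mathbb{D}\). Denoting by \(P\) the analytic (Riesz) projection on the unit circle, the Toeplitz operator with symbol \(a\) is

\[ T(a)f = P(af), \qquad f \in H^{2}(\mathbb{T}), \]

and the boundary condition translates into an equation \(T(a)f = g\) whose symbol inherits discontinuities of the second kind from the corner and spiral points of the boundary curve.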
Virtual 3D city and landscape models are the main subject investigated in this thesis. They digitally represent urban space and have many applications in different domains, e.g., simulation, cadastral management, and city planning. Visualization is an elementary component of these applications. Photo-realistic visualization with an increasingly high degree of detail leads to fundamental problems for comprehensible visualization. A large number of highly detailed and textured objects within a virtual 3D city model may create visual noise and overload the users with information. Objects are subject to perspective foreshortening and may be occluded or, when they are too small, not displayed in a meaningful way. In this thesis we present abstraction techniques that automatically process virtual 3D city and landscape models to derive abstracted representations. These have a reduced degree of detail, while essential characteristics are preserved. After introducing definitions for model, scale, and multi-scale representations, we discuss the fundamentals of map generalization as well as techniques for 3D generalization. The first presented technique is a cell-based generalization of virtual 3D city models. It creates abstract representations that have a highly reduced level of detail while maintaining essential structures, e.g., the infrastructure network, landmark buildings, and free spaces. The technique automatically partitions the input virtual 3D city model into cells based on the infrastructure network. The single building models contained in each cell are aggregated to abstracted cell blocks. Using weighted infrastructure elements, cell blocks can be computed on different hierarchical levels, storing the hierarchy relation between the cell blocks. Furthermore, we identify initial landmark buildings within a cell by comparing the properties of individual buildings with the aggregated properties of the cell. For each block, the identified landmark building models are subtracted using Boolean operations and integrated in a photo-realistic way. Finally, for the interactive 3D visualization we discuss the creation of the virtual 3D geometry and its appearance styling through colors, labeling, and transparency. We demonstrate the technique with example data sets. Additionally, we discuss applications of generalization lenses and transitions between abstract representations. The second technique is a real-time rendering technique for the geometric enhancement of landmark objects within a virtual 3D city model. Depending on the virtual camera distance, landmark objects are scaled to ensure their visibility within a specific distance interval while deforming their environment. First, in a preprocessing step, a landmark hierarchy is computed; it is then used to derive distance intervals for the interactive rendering. At runtime, using the virtual camera distance, a scaling factor is computed and applied to each landmark. The scaling factor is interpolated smoothly at the interval boundaries using cubic Bézier splines. Non-landmark geometry that is near landmark objects is deformed with respect to a limited number of landmarks. We demonstrate the technique by applying it to a highly detailed virtual 3D city model and a generalized 3D city model. In addition we discuss an adaptation of the technique for non-linear projections and mobile devices. The third technique is a real-time rendering technique to create abstract 3D isocontour visualization of virtual 3D terrain models.
The virtual 3D terrain model is visualized as a layered or stepped relief. The technique works without preprocessing and, as it is implemented using programmable graphics hardware, can be integrated with minimal changes into common terrain rendering techniques. Consequently, the computation is done in the rendering pipeline for each vertex, primitive, i.e., triangle, and fragment. For each vertex, the height is quantized to the nearest isovalue. For each triangle, the vertex configuration with respect to their isovalues is determined first. Using the configuration, the triangle is then subdivided. The subdivision forms a partial step geometry aligned with the triangle. For each fragment, the surface appearance is determined, e.g., depending on the surface texture, shading, and height-color-mapping. Flexible usage of the technique is demonstrated with applications from focus+context visualization, out-of-core terrain rendering, and information visualization. This thesis presents components for the creation of abstract representations of virtual 3D city and landscape models. Re-using visual language from cartography, the techniques enable users to build on their experience with maps when interpreting these representations. Simultaneously, characteristics of 3D geovirtual environments are taken into account by addressing and discussing, e.g., continuous scale, interaction, and perspective.
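The per-vertex step of the isocontour technique can be pictured with the following CPU-side sketch: each height is snapped to the nearest isovalue of an equidistant isovalue set (the spacing chosen here is arbitrary). In the thesis this runs per vertex on programmable graphics hardware, and triangles straddling isovalues are additionally subdivided, which the sketch omits.

```python
# Minimal CPU-side sketch of the per-vertex step of the isocontour technique:
# each terrain height is quantized to the nearest isovalue of an equidistant
# isovalue set. In the thesis this runs in the programmable rendering pipeline,
# and triangles straddling isovalues are additionally subdivided into a
# partial step geometry (omitted here).
import numpy as np

def quantize_heights(heights, iso_spacing):
    """Snap each height to the nearest multiple of the isovalue spacing."""
    return np.round(np.asarray(heights) / iso_spacing) * iso_spacing

heights = np.array([12.3, 47.9, 101.2, 55.0])        # placeholder terrain heights (m)
print(quantize_heights(heights, iso_spacing=25.0))   # -> [  0.  50. 100.  50.]
```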
During the overall development of complex engineering systems, different modeling notations are employed. For example, in the domain of automotive systems, system engineering models are employed quite early to capture the requirements and basic structuring of the entire system, while software engineering models are used later on to describe the concrete software architecture. Each model helps in addressing the specific design issue with appropriate notations and at a suitable level of abstraction. However, when we step forward from system design to software design, the engineers have to ensure that all decisions captured in the system design model are correctly transferred to the software engineering model. Even worse, when changes occur later in either model, consistency currently has to be re-established in a cumbersome manual step. In this report, we present, in an extended version of [Holger Giese, Stefan Neumann, and Stephan Hildebrandt. Model Synchronization at Work: Keeping SysML and AUTOSAR Models Consistent. In Gregor Engels, Claus Lewerentz, Wilhelm Schäfer, Andy Schürr, and B. Westfechtel, editors, Graph Transformations and Model-Driven Engineering - Essays Dedicated to Manfred Nagl on the Occasion of his 65th Birthday, volume 5765 of Lecture Notes in Computer Science, pages 555–579. Springer Berlin / Heidelberg, 2010.], how model synchronization and consistency rules can be applied to automate this task and ensure that the different models are kept consistent. We also introduce a general approach for model synchronization. Besides synchronization, the approach consists of tool adapters as well as consistency rules covering the overlap between the synchronized parts of a model and the rest. We present the model synchronization algorithm based on triple graph grammars in detail and further exemplify the general approach by means of a model synchronization solution between system engineering models in SysML and software engineering models in AUTOSAR, which has been developed for an industrial partner. In the appendix, as an extension to [19], the meta-models and all TGG rules for the SysML-to-AUTOSAR model synchronization are documented.
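As a deliberately simplified illustration of change propagation through correspondence links (a toy sketch only; the report's actual approach is the incremental, TGG-based synchronization with tool adapters described above), consider two dictionaries standing in for a SysML-like and an AUTOSAR-like model. All identifiers below are hypothetical.

```python
# Toy illustration only: two "models" (SysML-like and AUTOSAR-like components)
# linked by correspondence entries; a change in the source model is propagated
# to the target model along the correspondences. This is NOT the TGG-based
# algorithm of the report, just a sketch of the correspondence idea.
sysml = {"c1": {"name": "BrakeController", "ports": 3}}
autosar = {"swc7": {"shortName": "BrakeController", "ports": 3}}
correspondence = [("c1", "swc7")]          # links between the two models

def synchronize(source, target, links):
    """Propagate name/port changes from the source model to the target model."""
    for src_id, tgt_id in links:
        target[tgt_id]["shortName"] = source[src_id]["name"]
        target[tgt_id]["ports"] = source[src_id]["ports"]

sysml["c1"]["ports"] = 4                   # a change in the system model
synchronize(sysml, autosar, correspondence)
print(autosar)                             # the software model is consistent again
```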
Nucleation and growth of unsubstituted metal phthalocyanine films from solution on planar substrates
(2012)
In recent years, cost-efficient wet-chemical coating methods for the fabrication of organic thin films for various opto-electronic applications have been discovered and further developed. Among others, phthalocyanine molecules in photoactive layers for the fabrication of solar cells have been intensively investigated. Owing to their low or unknown solubility, phthalocyanine layers have so far been prepared by vacuum evaporation. Furthermore, the solubility has been increased by chemical synthesis, which, however, impaired the properties of the Pc. In this work, the solubility, optical absorption and stability of 8 different unsubstituted metal phthalocyanines in 28 different solvents were measured quantitatively. Because of its sufficient solubility, stability and applicability in organic solar cells, copper phthalocyanine (CuPc) in trifluoroacetic acid (TFA) was selected for further investigations. By spin coating of CuPc from TFA solution, a thin film was deposited on the substrate from the evaporating solution. After evaporation of the solvent, CuPc nanoribbons cover the substrate. The nanoribbons have a thickness of about 1 nm (the typical dimension of a CuPc molecule) and varying width and length, depending on the amount of material. Such nanoribbons can be produced by spin coating as well as by other wet-coating methods such as dip coating. Similar fibrillar structures form upon wet coating of other metal phthalocyanines, such as iron and magnesium phthalocyanine, from TFA solution, and also on other substrates, such as glass or indium tin oxide. The material properties of CuPc deposited from TFA solution and of CuPc in solution were investigated in detail by X-ray diffraction, spectroscopy and microscopy methods. It is shown that the nanoribbons do not form in solution but rather through evaporation of the solvent and supersaturation of the solution. Atomic force microscopy was used to study the morphology of the dried film at different concentrations. The mechanism of nanoribbon formation was studied in detail, and the formation of the CuPc nanoribbons from a supersaturated solution was discussed in terms of nucleation and growth theory. The shape of the nanoribbons was discussed with regard to the interactions between the molecules and the substrate. The wet-processed CuPc thin film was employed as the donor layer in organic bilayer solar cells with the C60 molecule as the acceptor. The power conversion efficiency of such a cell was investigated as a function of the thickness of the CuPc layer.
We present the new multi-threaded version of the state-of-the-art answer set solver clasp. We detail its component and communication architecture and illustrate how they support the principal functionalities of clasp. Also, we provide some insights into the data representation used for different constraint types handled by clasp. All this is accompanied by an extensive experimental analysis of the major features related to multi-threading in clasp.
Assuming that liquid iron alloy from the outer core interacts with the solid, silicate-rich lower mantle, the influence on the core-reflected phase PcP is studied. If the core-mantle boundary is not a sharp discontinuity, this becomes apparent in the waveform and amplitude of PcP. Iron-silicate mixing would lead to regions of partial melting with higher density, which in turn reduces the velocity of seismic waves. On the basis of the calculation and interpretation of short-period synthetic seismograms, using the reflectivity and Gaussian beam methods, a model space is evaluated for these ultra-low velocity zones (ULVZs). The aim of this thesis is to analyse the behaviour of PcP between 10° and 40° source distance for such models using different velocity and density configurations. Furthermore, the resolution limits of seismic data are discussed, and the influence of the assumed layer thickness, dominant source frequency and ULVZ topography is analysed. The Gräfenberg and NORSAR arrays are then used to investigate PcP from deep earthquakes and nuclear explosions. The seismic resolution of an ULVZ is limited both for velocity and density contrasts and for layer thicknesses. Even a very thin global core-mantle transition zone (CMTZ), rather than a discrete boundary, and also one with strong impedance contrasts, seems possible: if no precursor is observable but the PcP_model/PcP_smooth amplitude reduction amounts to more than 10%, a very thin ULVZ of 5 km with a first-order discontinuity may exist. Otherwise, if amplitude reductions of less than 10% are obtained, this could indicate either a moderate, thin ULVZ or a gradient mantle-side CMTZ. Synthetic computations reveal notable amplitude variations as a function of distance and impedance contrasts; a primarily density-controlled effect is predicted in the very steep-angle range and a pronounced velocity dependence in the wide-angle region. In view of the modelled findings, there is evidence for a 10 to 13.5 km thick ULVZ 600 km south-east of Moscow with a NW-SE extension of about 450 km. Here, no single specific assumption about the velocity and density anomaly is possible; this is in agreement with the synthetic results, in which several models create similar amplitude-waveform characteristics. For example, a ULVZ model with contrasts of -5% VP, -15% VS and +5% density explains the measured PcP amplitudes. Moreover, CMB topography can be assumed below SW Finland and NNW of the Caspian Sea. The amplitude measurements indicate a topography wavelength of 200 km and a height of 1 km, as previously also shown in the study by Kampfmann and Müller (1989). Better constraints might be provided by a joint analysis of seismological data, mineralogical experiments and geodynamic modelling.
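For a rough feeling of the quantities involved (and not the reflectivity or Gaussian beam synthetics used in the thesis), the following sketch evaluates the normal-incidence reflection coefficient R = (Z2 - Z1)/(Z2 + Z1), with acoustic impedance Z = density x Vp, for a sharp CMB and for a mantle-side ULVZ with the -5% VP / +5% density contrasts mentioned above. The numbers are approximate PREM-like values, and interference between the top and bottom of a thin layer, which drives the modelled amplitude reductions, is not captured here.

```python
# Rough normal-incidence sketch: amplitude reflection coefficient at an
# interface, R = (Z2 - Z1)/(Z2 + Z1), with Z = density * Vp. Approximate
# PREM-like values; the thesis uses full reflectivity and Gaussian-beam
# synthetics at oblique incidence, not this simplification.

def reflection_coefficient(rho1, vp1, rho2, vp2):
    z1, z2 = rho1 * vp1, rho2 * vp2
    return (z2 - z1) / (z2 + z1)

# Lowermost mantle (side 1) and topmost outer core (side 2), approx. PREM values
rho_m, vp_m = 5.57, 13.7     # g/cm^3, km/s
rho_c, vp_c = 9.90, 8.06     # g/cm^3, km/s

# ULVZ on the mantle side: -5% Vp and +5% density (one model from the text)
rho_u, vp_u = 1.05 * rho_m, 0.95 * vp_m

r_sharp = reflection_coefficient(rho_m, vp_m, rho_c, vp_c)   # sharp CMB
r_top   = reflection_coefficient(rho_m, vp_m, rho_u, vp_u)   # mantle / ULVZ top
r_base  = reflection_coefficient(rho_u, vp_u, rho_c, vp_c)   # ULVZ / core

print(f"sharp CMB:    R = {r_sharp:+.4f}")
print(f"mantle/ULVZ:  R = {r_top:+.4f}")   # velocity and density contrasts nearly cancel
print(f"ULVZ/core:    R = {r_base:+.4f}")
```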
The project of public-reason liberalism faces a basic problem: publicly justified principles are typically too abstract and vague to be directly applied to practical political disputes, whereas applicable specifications of these principles are not uniquely publicly justified. One solution could be a legislative procedure that selects one member from the eligible set of inconclusively justified proposals. Yet if liberal principles are too vague to select sufficiently specific legislative proposals, can they, nevertheless, select specific legislative procedures? Based on the work of Gerald Gaus, this article argues that the only candidate for a conclusively justified decision procedure is a majoritarian or otherwise ‘neutral’ democracy. If the justification of democracy requires an equality baseline in the design of political regimes and if justifications for departure from this baseline are subject to reasonable disagreement, a majoritarian design is justified by default. Gaus’s own preference for super-majoritarian procedures is based on disputable specifications of justified liberal principles. These procedures can only be defended as a sectarian preference if the equality baseline is rejected, but then it is not clear how the set of justifiable political regimes can be restricted to full democracies.
In this thesis, different aspects within the research field of protein spectro- and electrochemistry on nanostructured materials are addressed. The first part of this work concerns the investigation of nanostructured transparent and conductive metal oxides as a platform for the immobilization of electroactive enzymes. The second part is related to the immobilization of sulfite oxidase on a gold nanoparticle-modified electrode. Finally, the direct and mediated spectroelectrochemistry of proteins with high structural complexity, such as xanthine dehydrogenase from Rhodobacter capsulatus and its close homologue, the mouse aldehyde oxidase homolog 1, is addressed. Stable immobilization and reversible electrochemistry of cytochrome c in a transparent and conductive tin-doped and tin-rich indium oxide film with a well-defined mesoporosity is reported. The transparency and good conductivity, in combination with the large surface area of these materials, allow the incorporation of a high amount of electroactive biomolecules (between 250 and 2500 pmol cm-2) and their electrochemical and spectroscopic investigation. Both the electrochemical behavior and the immobilization of proteins are influenced by the geometric parameters of the porous material, such as the structure and pore shape and the surface chemistry, as well as by the protein size and charge. UV-Vis and resonance Raman spectroscopy, in combination with direct protein voltammetry, are employed for the characterization of cytochrome c immobilized in the mesoporous indium tin oxide and reveal no perturbation of the structural integrity of the redox protein. Long-term protein immobilization, i.e. more than two weeks even at high ionic strength, is achieved using these unmodified mesoporous indium oxide based materials. The potential of this modified material as an amperometric biosensor for the detection of superoxide anions is demonstrated. A sensitivity of about 100 A M-1 m-2, in a linear measuring range of the superoxide concentration between 0.13 and 0.67 μM, is estimated. In addition, an electrochemically switchable protein-based optical device is designed, with the core part composed of cytochrome c immobilized on a mesoporous indium tin oxide film. A color-developing redox-sensitive dye is used as the switchable component of the system. The cytochrome c-catalyzed oxidation of the dye by hydrogen peroxide is spectroscopically investigated. When the dye is co-immobilized with the protein, its redox state is easily controlled by applying an electrical potential to the supporting material. This enables an electrochemical reset of the system to its initial state and repetitive signal generation. The case of negatively charged proteins, which do not interact well with the negatively charged indium oxide based films, is also explored. The modification of an indium tin oxide film with a positively charged polymer and the employment of an antimony-doped tin oxide film were investigated in this work in order to overcome the repulsion induced by the like charges of the protein and the electrode. Human sulfite oxidase and its separated heme-containing domain are able to exchange electrons directly with the supporting material. A study of a new approach for sulfite biosensing, based on enhanced direct electron transfer of human sulfite oxidase immobilized on a gold nanoparticle-modified electrode, is reported.
The spherical gold nanoparticles were prepared via a novel method by reduction of HAuCl4 with branched poly(ethyleneimine) in an ionic liquid, resulting in particles of about 10 nm in hydrodynamic diameter. These nanoparticles were covalently attached to a mercaptoundecanoic acid-modified Au electrode and act as a platform onto which human sulfite oxidase is adsorbed. Enhanced interfacial electron transfer and electrocatalysis are thereby achieved. UV-Vis and resonance Raman spectroscopy, in combination with direct protein voltammetry, were employed for the characterization of the system and reveal no perturbation of the structural integrity of the redox protein. The proposed biosensor exhibited a quick steady-state current response (within 2 s) and a linear detection range between 0.5 and 5.4 μM with high sensitivity (1.85 nA μM-1). The investigated system provides remarkable advantages, since it works at a low applied potential and at very high ionic strength. These properties could make the proposed system useful for the development of bioelectronic devices and their application to real samples. Finally, proteins with high structural complexity, such as xanthine dehydrogenase from Rhodobacter capsulatus and the mouse aldehyde oxidase homolog 1, were studied spectroelectrochemically. It could be demonstrated that different cofactors present in the protein structure, such as the FAD and the molybdenum cofactor, are able to exchange electrons directly with an electrode and appear as a single peak in a square wave voltammogram. Protein mutants in which the cysteines binding the most exposed iron-sulfur cluster were substituted by serine additionally showed direct electron transfer attributable to this cluster. Furthermore, a mediated spectroelectrochemical titration of the protein-bound FAD cofactor was performed in the presence of transparent iron and cobalt complex mediators. The results showed the formation of the stable semiquinone and the fully reduced flavin. The formal potentials for each of the two single-electron exchange steps were then determined.
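Within the stated linear range, the reported sensitivity translates directly into expected steady-state currents. The following back-of-the-envelope sketch assumes a strictly linear response and uses only the figures quoted above.

```python
# Back-of-the-envelope sketch: within the reported linear range the
# steady-state current is I = sensitivity * concentration. The sensitivity
# and range limits are the figures quoted in the abstract; linearity outside
# 0.5-5.4 uM sulfite is not assumed.
sensitivity_nA_per_uM = 1.85

def current_nA(conc_uM):
    return sensitivity_nA_per_uM * conc_uM

def concentration_uM(measured_current_nA):
    return measured_current_nA / sensitivity_nA_per_uM

print(current_nA(0.5), current_nA(5.4))   # ~0.93 nA and ~10.0 nA at the range limits
print(concentration_uM(3.7))              # a 3.7 nA response corresponds to ~2.0 uM sulfite
```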
This study follows the debate in comparative public administration research on the role of advisory arrangements in central governments. The aim of this study is to explain the mechanisms by which these actors gain their alleged role in government decision-making. Hence, it analyses advisory arrangements that are proactively involved in executive decision-making and may compete with the permanent bureaucracy by offering policy advice to political executives. The study argues that these advisory arrangements influence government policy-making by "institutional politics", i.e. by shaping the institutional underpinnings to govern, or rather the "rules of the executive game", in order to strengthen their own position or that of their clients. The theoretical argument of this study follows the neo-institutionalist turn in organization theory and defines institutional politics as gradual institutionalization processes between institutions and organizational actors. It applies a broader definition of institutions as sets of regulative, normative and cognitive pillars. Following the "power-distributional approach", such gradual institutionalization processes are influenced by structure-oriented characteristics, i.e. the nature of the objects of institutional politics, in particular the freedom of interpretation in their application, as well as the distinct constraints of the institutional context. In addition, institutional politics are influenced by agency-oriented characteristics, i.e. the ambitions of actors to act as "would-be change agents". These two explanatory dimensions result in four ideal-typical mechanisms of institutional politics: layering, displacement, drift, and conversion, which correspond to four ideal-types of would-be change agents. The study examines the ambitions of advisory arrangements in institutional politics in an exploratory manner; the relevance of the institutional context is analyzed via expectation hypotheses on the effects of four institutional context features that are regarded as relevant in the scholarly debate: (1) the party composition of governments, (2) the structuring principles in cabinet, (3) the administrative tradition, and (4) the formal politicization of the ministerial bureaucracy. The study follows a "most similar systems design" and conducts qualitative case studies on the role of advisory arrangements at the center of German and British governments, i.e. the Prime Minister’s Office and the Ministry of Finance, over an extended period (1969/1970-2005). Three time periods are scrutinized per country; the British case studies examine the role of advisory arrangements at the Cabinet Office, the Prime Minister's Office, and the Ministry of Finance under Prime Ministers Heath (1970-74), Thatcher (1979-87) and Blair (1997-2005). The German case studies examine the role of advisory arrangements at the Federal Chancellery and the Federal Ministry of Finance during the Brandt government (1969-74), the Kohl government (1982-1987) and the Schröder government (1998-2005). For the empirical analysis, the results of a document analysis and the findings of 75 semi-structured expert interviews have been triangulated. The comparative analysis reveals different patterns of institutional politics. The German advisory arrangements engaged initially in displacement but soon turned towards layering and drift, i.e.
after an initial displacement of the pre-existing institutional underpinnings to govern, they increasingly layered new elements onto existing ones and took the non-deliberative decision to neglect the adaptation of the existing rules of the executive game to changing environmental demands. The British advisory arrangements were mostly involved in displacement and conversion, despite occasional layering, i.e. they displaced the pre-existing institutional underpinnings to govern with new rules of the executive game and transformed and realigned them, sometimes also layering new elements onto pre-existing ones. The structure- and agency-oriented characteristics explain these patterns of institutional politics. First, the study shows that the institutional context limits institutional politics in Germany and facilitates institutional politics in the UK. Second, the freedom of interpreting the application of institutional targets is relevant and could be observed via the different ambitions of advisory arrangements across countries and over time, confirming, third, that the interests of such would-be change agents are likewise important for understanding the patterns of institutional politics. The study concludes that the role of advisory arrangements in government policy-making rests not only upon their policy-related, party-political or media-advisory role for political executives, but especially upon their activities in institutional politics, resulting in distinct institutional constraints on all actors in government policy-making – including their own role in these processes.
Although all bilinguals encounter cross-language interference (CLI), some bilinguals are more susceptible to interference than others. Here, we report on language performance of late bilinguals (Russian/German) on two bilingual tasks (interview, verbal fluency), their language use and switching habits. The only between-group difference was CLI: one group consistently produced significantly more errors of CLI on both tasks than the other (thereby replicating our findings from a bilingual picture naming task). This striking group difference in language control ability can only be explained by differences in cognitive control, not in language proficiency or language mode.
Particles in Saturn’s main rings range in size from dust to kilometer-sized objects. Their size distribution is thought to be a result of competing accretion and fragmentation processes. While growth is naturally limited in tidal environments, frequent collisions among these objects may contribute to both accretion and fragmentation. As ring particles are primarily made of water ice, attractive surface forces like adhesion could significantly influence these processes, finally determining the resulting size distribution. Here, we derive analytic expressions for the specific self-energy Q and the related specific break-up energy Q⋆ of aggregates. These expressions can be used for any aggregate type composed of monomeric constituents. We compare these expressions to numerical experiments in which we create aggregates of various types, including regular packings like the face-centered cubic (fcc), Ballistic Particle Cluster Aggregates (BPCA), and modified BPCAs with, e.g., different constituent size distributions. We show that a simple approach accounting for attractive surface forces such as adhesion is able to: a) generally account for the size dependence of the specific break-up energy required for fragmentation reported in the literature, namely the division into “strength” and “gravity” regimes, and b) estimate the maximum aggregate size in a collisional ensemble to be on the order of a few meters, consistent with the maximum aggregate size of about 10 m observed in Saturn’s rings.
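The division into strength and gravity regimes can be pictured with the following illustrative sketch, which does not reproduce the analytic expressions derived in the paper: the gravity term is the specific gravitational binding energy of a homogeneous sphere, (3/5)GM/R = (4π/5)GρR², while the strength term assumes a simple ~1/R adhesive scaling with an arbitrary prefactor.

```python
# Illustrative sketch (not the paper's derived expressions): a specific
# break-up energy combining an adhesion-dominated "strength" term that
# decreases with aggregate radius and the specific gravitational binding
# energy of a homogeneous sphere, (3/5) G M / R = (4*pi/5) G rho R^2.
# The strength-term prefactor A and the bulk density are assumed values.
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
rho = 500.0            # bulk density of an icy aggregate, kg/m^3 (assumed)
A = 1e-4               # illustrative strength-regime prefactor, J m / kg (assumed)

def q_star(R):
    strength = A / R                         # assumed ~1/R adhesive scaling
    gravity = 0.8 * np.pi * G * rho * R**2   # (4*pi/5) G rho R^2
    return strength + gravity

R = np.logspace(-3, 3, 200)                  # aggregate radii from 1 mm to 1 km
q = q_star(R)
print("weakest aggregate size ~ %.1f m" % R[np.argmin(q)])
```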
Using the eye-movement monitoring technique in two reading comprehension experiments, this study investigated the timing of constraints on wh-dependencies (so-called island constraints) in first- and second-language (L1 and L2) sentence processing. The results show that both L1 and L2 speakers of English are sensitive to extraction islands during processing, suggesting that memory storage limitations affect L1 and L2 comprehenders in essentially the same way. Furthermore, these results show that the timing of island effects in L1 compared to L2 sentence comprehension is affected differently by the type of cue (semantic fit versus filled gaps) signaling whether dependency formation is possible at a potential gap site. Even though L1 English speakers showed immediate sensitivity to filled gaps but not to lack of semantic fit, proficient German-speaking learners of English as a L2 showed the opposite sensitivity pattern. This indicates that initial wh-dependency formation in L2 processing is based on semantic feature matching rather than being structurally mediated as in L1 comprehension.