The potential increase in frequency and magnitude of extreme floods is currently discussed in terms of global warming and the intensification of the hydrological cycle. Profound knowledge of the past natural variability of floods is of utmost importance for assessing future flood risk. Since instrumental flood series cover only the last ~150 years, other approaches are needed to reconstruct historical and pre-historical flood events. Annually laminated (varved) lake sediments are valuable natural geoarchives because they provide continuous records of environmental change over more than 10,000 years, down to seasonal resolution. Since lake basins additionally act as natural sediment traps, the riverine sediment supply, preserved as detrital event layers in the lake sediments, can be used as a proxy for extreme discharge events. Within my thesis I examined a ~8.50 m long sedimentary record from the pre-Alpine Lake Mondsee (Northeast European Alps), covering the last 7000 years. The record consists of calcite varves and intercalated detrital layers, which range in thickness from 0.05 to 32 mm. Detrital layer deposition was analysed by a combination of microfacies analysis on thin sections, scanning electron microscopy (SEM), μX-ray fluorescence (μXRF) scanning and magnetic susceptibility measurements. This approach allows individual detrital event layers to be characterized and assigned to a corresponding input mechanism and catchment. Based on a chronology established by varve counting and verified by 14C dating, the main goals of this thesis are (i) to identify the seasonal runoff processes that lead to significant sediment supply from the catchment into the lake basin and (ii) to investigate flood frequency under changing climate boundary conditions. The thesis proceeds through a series of time slices, presenting an integrative approach that links instrumental and historical flood data from Lake Mondsee in order to evaluate the flood record inferred from the lake sediments.
The investigation of eleven short cores covering the last 100 years reveals 12 detrital layers, of which two types are distinguished by grain size, geochemical composition and distribution pattern within the lake basin. Detrital layers enriched in both siliciclastic and dolomitic material indicate sediment supply from the Flysch sediments and the Northern Calcareous Alps into the lake basin. These layers are thicker in the northern lake basin (0.1-3.9 mm) and thinner in the southern lake basin (0.05-1.6 mm). Detrital layers enriched in dolomitic components, forming graded layers (turbidites), indicate provenance from the Northern Calcareous Alps. These layers are generally thicker (0.65-32 mm) and are recorded solely within the southern lake basin. Comparison with instrumental data shows that the thicker graded layers result from local debris-flow events in summer, whereas the thin layers are deposited during regional flood events in spring/summer. Extreme summer floods recorded as flood layers are principally caused by cyclonic activity from the Mediterranean Sea, e.g. in July 1954, July 1997 and August 2002. During the last two millennia, the Lake Mondsee sediments reveal two significant flood intervals with decadal-scale flood episodes, during the Dark Ages Cold Period (DACP) and during the transition from the Medieval Climate Anomaly (MCA) into the Little Ice Age (LIA), suggesting a link between transitions to cooler climate and summer flood recurrence in the Northeastern Alps. In contrast, intermediate or decreased flood activity occurred during the MCA and within the LIA itself. This indicates a non-straightforward relationship between temperature and flood recurrence, suggesting enhanced cyclonic activity during climate transitions in the Northeastern Alps.
The 7000-year flood chronology documents 47 debris flows and 269 floods, with shifts towards increased flood activity around 3500 and 1500 varve yr BP (varve yr BP = varve years before present, where present = AD 1950). This significant increase in flood activity coincides with millennial-scale climate cooling reported from major Alpine glacier advances and lowered tree lines in the European Alps since about 3300 cal. yr BP (calibrated years before present). Despite relatively low flood occurrence prior to 1500 varve yr BP, floods at Lake Mondsee could also have influenced human life in the early Neolithic lake dwellings (5750-4750 cal. yr BP). While the first lake dwellings were constructed on wetlands, later dwellings were built on piles in the water, suggesting an early adaptation to flood risk and/or a general, socio-economically driven change in the Late Neolithic culture of the lake-dwellers. A direct relationship between the final abandonment of the lake dwellings and higher flood frequencies, however, is not evidenced.
In the context of cosmological structure formation, sheets, filaments and eventually halos form due to gravitational instabilities. It is noteworthy that, at all times, the majority of the baryons in the universe resides not in the dense halos but in the filaments and sheets of the intergalactic medium. While at higher redshifts of z > 2 these baryons can be detected via the absorption of light (originating from more distant sources) by neutral hydrogen at temperatures of T ~ 10^4 K (the Lyman-alpha forest), at lower redshifts only about 20 % can be found in this state. The remainder (about 50 to 70 % of the total baryon mass) is unaccounted for by observational means. Numerical simulations predict that these missing baryons could reside in the filaments and sheets of the cosmic web at high temperatures of T = 10^4.5 - 10^7 K, but only at low to intermediate densities, constituting the warm-hot intergalactic medium (WHIM). The high temperatures of the WHIM are caused by the formation of shocks and the subsequent shock-heating of the gas. This results in a high degree of ionization and renders the reliable detection of the WHIM a challenging task. Recent high-resolution hydrodynamical simulations indicate that, at redshifts of z ~ 2, filaments are able to provide very massive galaxies with a significant amount of cool gas at temperatures of T ~ 10^4 K. This could have an important impact on star formation in those galaxies. It is therefore of principal importance to investigate the particular hydro- and thermodynamical conditions of these large filamentary structures. Density and temperature profiles, as well as velocity fields, are expected to leave their particular imprint on spectroscopic observations. A potential multiphase structure may act as a tracer in observational studies of the WHIM. In the context of cold streams, it is important to explore the processes that regulate the amount of gas transported by the streams.
This includes the time evolution of filaments as well as possible quenching mechanisms. In this context, the halo mass range in which cold-stream accretion occurs is of particular interest. In order to address these questions, we perform dedicated hydrodynamical simulations at very high resolution and investigate the formation and evolution of prototype structures representing the typical filaments and sheets of the WHIM. We start with a comprehensive study of the one-dimensional collapse of a sinusoidal density perturbation (pancake formation) and examine the influence of radiative cooling, heating by a UV background, thermal conduction, and the effect of small-scale perturbations given by the cosmological power spectrum. We use a set of simulations parametrized by the wavelength L of the initial perturbation. For L ~ 2 Mpc/h the collapse leads to shock-confined structures. As a result of radiative cooling and of heating by the UV background, a relatively cold and dense core forms. With increasing L the core becomes denser and more concentrated. Thermal conduction enhances this trend and may lead to an evaporation of the core at very large L ~ 30 Mpc/h. When extending our simulations into three dimensions, instead of a pancake structure we obtain a configuration consisting of well-defined sheets, filaments, and a gaseous halo. For L > 4 Mpc/h, filaments form which are fully confined by an accretion shock. As with the one-dimensional pancakes, they exhibit an isothermal core. Thus, our results confirm a multiphase structure, which may generate particular spectral tracers. We find that, after its formation, the core becomes shielded against further infall of gas onto the filament, and its mass content decreases with time. In the vicinity of the halo, the filament's core can be attributed to the cold streams found in other studies. We show that the basic structure of these cold streams exists from the very beginning of the collapse process.
Furthermore, the cross section of the streams is constricted by the outward-moving accretion shock of the halo. Thermal conduction leads to a complete evaporation of the cold streams for L > 6 Mpc/h. This corresponds to halos with a total mass above M_halo = 10^13 M_sun, and predicts that in more massive halos star formation cannot be sustained by cold streams. Far away from the gaseous halo, the temperature gradients in the filament are not strong enough for thermal conduction to be effective.
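The one-dimensional pancake collapse of a sinusoidal perturbation can be illustrated, in much simplified form, with the Zel'dovich approximation for a single mode. This is a pressureless toy model, not the hydrodynamical simulations of the thesis; the amplitude A, wavenumber k and growth-factor values below are purely illustrative:

```python
import numpy as np

def zeldovich_1d(q, D, A=1.0, k=2 * np.pi):
    """Eulerian position and density for a 1D sinusoidal perturbation.

    q : Lagrangian coordinates, D : linear growth factor,
    A : perturbation amplitude (illustrative), k : wavenumber.
    """
    x = q - D * (A / k) * np.sin(k * q)   # Zel'dovich mapping q -> x
    jac = 1.0 - D * A * np.cos(k * q)     # dx/dq
    rho = 1.0 / np.abs(jac)               # density in units of the mean
    return x, rho

q = np.linspace(0.0, 1.0, 2001)
for D in (0.5, 0.9, 0.99):
    _, rho = zeldovich_1d(q, D)
    # Overdensity grows as 1/(1 - D*A) and diverges at shell crossing (D -> 1/A).
    print(f"D = {D:.2f}: peak overdensity = {rho.max():.1f}")
```

At shell crossing (D = 1/A) the density formally diverges; in a gas this is where the accretion shock described above forms instead.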
Background
High blood glucose and diabetes are amongst the conditions causing the greatest losses in years of healthy life worldwide. Therefore, numerous studies aim to identify reliable risk markers for the development of impaired glucose metabolism and type 2 diabetes. However, the molecular basis of impaired glucose metabolism is so far insufficiently understood. The development of so-called 'omics' approaches in recent years promises to identify molecular markers and to further the understanding of the molecular basis of impaired glucose metabolism and type 2 diabetes. Although univariate statistical approaches are often applied, we demonstrate here that the application of multivariate statistical approaches is highly recommended to fully capture the complexity of data gained using high-throughput methods.
Methods
We took blood plasma samples from 172 subjects who participated in the prospective Metabolic Syndrome Berlin Potsdam follow-up study (MESY-BEPO Follow-up). We analysed these samples using gas chromatography coupled with mass spectrometry (GC-MS) and measured 286 metabolites. Furthermore, fasting glucose levels were measured using standard methods at baseline and after an average of six years. We performed correlation analyses and built linear regression models as well as Random Forest regression models to identify metabolites that predict the development of fasting glucose in our cohort.
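The Random Forest workflow described above can be sketched as follows. The data here are a purely synthetic stand-in for the 172-subject, 286-metabolite matrix, and scoring by R^2 is an assumption about the accuracy measure reported below:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the metabolite matrix: 172 subjects x 286 metabolites,
# with only a handful of informative features (mimicking a small metabolite pattern).
X, y = make_regression(n_samples=172, n_features=286, n_informative=9,
                       noise=10.0, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
# Tenfold cross-validated R^2 of the Random Forest regression.
scores = cross_val_score(model, X, y, cv=10, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```

With real metabolomics data, feature importances of the fitted forest would point to the metabolites forming the predictive pattern.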
Results
We found a metabolic pattern consisting of nine metabolites that predicted fasting glucose development with an accuracy of 0.47 in tenfold cross-validation using Random Forest regression. We also showed that adding established risk markers did not improve the model accuracy. However, external validation remains desirable. Although not all metabolites belonging to the final pattern have been identified yet, the pattern directs attention to amino acid metabolism, energy metabolism and redox homeostasis.
Conclusions
We demonstrate that metabolites identified using a high-throughput method (GC-MS) perform well in predicting the development of fasting plasma glucose over several years. Notably, not a single metabolite, but a complex pattern of metabolites drives the prediction, thereby reflecting the complexity of the underlying molecular mechanisms. This result could only be captured by the application of multivariate statistical approaches. We therefore strongly recommend the use of statistical methods that capture the complexity of the information provided by high-throughput methods.
The Riemann hypothesis is equivalent to the statement that the reciprocal function 1/zeta(s) extends from the interval (1/2, 1) to an analytic function in the quarter-strip 1/2 < Re s < 1, Im s > 0. Function theory allows one to rewrite the condition of analytic continuability in an elegant form amenable to numerical experiments.
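A minimal numerical experiment in this spirit samples 1/zeta(s) inside the quarter-strip: under the Riemann hypothesis no poles of 1/zeta (i.e. zeros of zeta) should be encountered there. The grid of sample points below is arbitrary:

```python
import mpmath as mp

# Sample 1/zeta(s) on a small grid in the quarter-strip 1/2 < Re s < 1, Im s > 0.
# If zeta had a zero at a sampled point, 1/zeta would blow up there.
mp.mp.dps = 30
for sigma in (0.6, 0.75, 0.9):
    for t in (5, 50, 500):
        val = 1 / mp.zeta(mp.mpc(sigma, t))
        assert mp.isfinite(val.real) and mp.isfinite(val.imag)
        print(f"1/zeta({sigma} + {t}i) = {mp.nstr(val, 6)}")
```

Of course, finite sampling can only fail to falsify the hypothesis; it proves nothing about the full strip.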
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. 
We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
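The core idea of active evaluation — drawing test instances from an instrumental distribution and reweighting to obtain a consistent risk estimate — can be sketched as follows. The boundary-oversampling distribution q here is a heuristic stand-in for the variance-optimal distributions derived in the thesis, and the pool and model are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pool of instances with a known (here: uniform) test distribution p.
n = 10_000
x = rng.uniform(-3, 3, n)
true_label = (x > 0).astype(int)
model_pred = (x > 0.3).astype(int)       # a hypothetical, slightly biased model
loss = (true_label != model_pred)        # 0/1 loss, revealed only when queried

p = np.full(n, 1 / n)                    # test distribution over the pool
# Instrumental distribution q: oversample near the decision boundary, where
# the model is most likely to err (stand-in for the optimal distribution).
q = np.exp(-np.abs(x - 0.3))
q /= q.sum()

m = 500                                  # labeling budget
idx = rng.choice(n, size=m, p=q)
weights = p[idx] / q[idx]
# Self-normalized importance-weighted estimate of the error rate.
est = np.sum(weights * loss[idx]) / np.sum(weights)
print(f"true error {loss.mean():.4f}, active estimate {est:.4f}")
```

The estimate is consistent for the true error rate even though instances were not drawn from the test distribution; a well-chosen q reduces its variance for a fixed labeling budget.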
Duplicate detection is the task of identifying all groups of records within a data set that represent the same real-world entity. This task is difficult because (i) representations might differ slightly, so some similarity measure must be defined to compare pairs of records, and (ii) data sets might be very large, making a pairwise comparison of all records infeasible. To tackle the second problem, many algorithms have been suggested that partition the data set and compare record pairs only within each partition. One well-known such approach is the Sorted Neighborhood Method (SNM), which sorts the data according to some key and then advances a window over the data, comparing only records that appear within the same window. We propose several variations of SNM that have in common a varying window size and advancement. The general intuition behind such adaptive windows is that there might be regions of high similarity suggesting a larger window size and regions of lower similarity suggesting a smaller window size. We propose and thoroughly evaluate several adaptation strategies, some of which are provably better than the original SNM in terms of efficiency (same results with fewer comparisons).
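A minimal sketch of one possible adaptive strategy (not necessarily any of the variants evaluated here) grows the window while neighbouring records at its boundary remain similar; the similarity measure, threshold and sort key are illustrative:

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.8):
    """Illustrative record similarity: normalized string similarity."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

def adaptive_snm(records, key=lambda r: r, min_window=3):
    """Sorted Neighborhood with a window that grows in regions of high
    similarity and shrinks again in dissimilar regions (a sketch of the
    adaptive idea, not the authors' exact algorithms)."""
    data = sorted(records, key=key)
    duplicates = set()
    for i in range(len(data)):
        w = min_window
        # Grow the window while records at its boundary still look similar.
        while i + w < len(data) and similar(data[i + w - 1], data[i + w]):
            w += 1
        for j in range(i + 1, min(i + w, len(data))):
            if similar(data[i], data[j]):
                duplicates.add((data[i], data[j]))
    return duplicates

records = ["Jon Smith", "John Smith", "John Smth", "Mary Jones", "Marc Stone"]
print(adaptive_snm(records))
```

On this toy input the window stays small around the dissimilar names and the three "Smith" variants are paired up without comparing every record to every other.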
In the limit, we analyze the generators of families of reversible jump processes in n-dimensional space associated with a class of symmetric non-local Dirichlet forms and show exponential decay of the eigenfunctions. The exponential rate function is a Finsler distance, given as the solution of a certain eikonal equation. The finer results are sensitive to whether the rate function is twice differentiable or merely Lipschitz. Our estimates are analogous to the semiclassical Agmon estimates for second-order differential operators. They generalize and strengthen previous results on the lattice.
All's well that ends well
(2012)
The transition from cell proliferation to cell expansion is critical for determining leaf size. Andriankaja et al. (2012) demonstrate that in leaves of dicotyledonous plants, a basal proliferation zone is maintained for several days before abruptly disappearing, and that chloroplast differentiation is required to trigger the onset of cell expansion.
One of the major problems for the implementation of water resources planning and management in arid and semi-arid environments is the scarcity of hydrological data and, consequently, of research studies. In this thesis, the hydrology of dryland river systems was analyzed, and a semi-distributed hydrological model and a forecasting approach were developed for flow transmission processes in river systems, with a focus on semi-arid conditions. Three different sources of hydrological data (streamflow series, groundwater level series and multi-temporal satellite data) were combined in order to analyze the channel transmission losses of a large reach of the Jaguaribe River in NE Brazil. A perceptual model of this reach was derived, suggesting that models developed for sub-humid and temperate regions may be more suitable for this reach than classical models developed for arid and semi-arid regions. In summary, it was shown that this river reach is hydraulically connected with the groundwater and shifts from being a losing river in the dry season and at the beginning of the rainy season to a losing/gaining (mostly losing) river in the middle and at the end of the rainy season. A new semi-distributed channel transmission losses model was developed, based primarily on the capability to simulate very different dryland environments and on flexible model structures for testing hypotheses about the dominant hydrological processes of rivers. This model was successfully tested on a large reach of the Jaguaribe River in NE Brazil and on a small stream in the Walnut Gulch Experimental Watershed in the SW USA.
Hypotheses about the dominant processes of channel transmission losses (different model structures) in the Jaguaribe River were evaluated, showing that both lateral stream-aquifer water fluxes and groundwater flow in the underlying alluvium parallel to the river course are necessary to predict streamflow and channel transmission losses, the former process being more relevant than the latter. This procedure not only reduced model structure uncertainties, but also revealed modelling failures, leading to the rejection of model structure hypotheses, namely streamflow without river-aquifer interaction and stream-aquifer flow without groundwater flow parallel to the river course. The application of the model to different dryland environments enabled learning about the model itself from differences in channel reach responses. For example, the parameters related to the unsaturated part of the model, which were active for the small reach in the USA, presented a much greater variation in the sensitivity coefficients than those driving the saturated part of the model, which were active for the large reach in Brazil. Moreover, a nonparametric approach dealing with both the deterministic evolution and the inherent fluctuations in river discharge data was developed, based on a qualitative dynamical-systems criterion that involves a learning process about the structure of the time series rather than a mere fitting procedure. This approach, based only on the discharge time series itself, was applied to a headwater catchment in Germany, in which runoff is induced either by convective rainfall in the summer or by snow melt in the spring.
The application showed the following important features:
• the differences between runoff measurements were more suitable than the actual runoff measurements when using regression models;
• the catchment runoff system shifted from a possible dynamical system contaminated with noise to a linear random process as the sampling interval of the discharge time series increased;
• runoff underestimation can be expected for rising limbs and overestimation for falling limbs.
This nonparametric approach was compared with a distributed hydrological model designed for real-time flood forecasting, with both presenting similar results on average. Finally, a benchmark for hydrological research using semi-distributed modelling was proposed, based on the aforementioned analysis, modelling and forecasting of flow transmission processes. The aim of this benchmark is not to describe a blueprint for hydrological model design, but rather to propose a scientific method for improving hydrological knowledge using semi-distributed hydrological modelling. Following the application of the proposed benchmark to a case study, the actual state of hydrological knowledge and its predictive uncertainty can be determined, primarily through rejected hypotheses about the dominant hydrological processes and through differences in catchment/variable responses.
ASP modulo CSP
(2012)
We present the hybrid ASP solver clingcon, combining the simple modeling language and the high-performance Boolean solving capabilities of Answer Set Programming (ASP) with techniques for handling non-Boolean constraints from the area of Constraint Programming (CP). The new clingcon system features an extended syntax supporting global constraints and optimize statements for constraint variables. The major technical innovation improves the interaction between the ASP and CP solvers through elaborate learning techniques based on irreducible inconsistent sets. A broad empirical evaluation shows that these techniques yield a performance improvement of an order of magnitude.
Assignments, curriculum framework and background information as the base of developing lessons
(2012)
1. What are the general strengths of the assignments?
2. Structure of the assignment
3. Resources of the assignment
4. Fostering self-expression
5. How could you improve the assignment?
6. Lack of specific examples
7. Not relating the issue to the students
8. Language problems
9. Infeasibility of adaptation
10. In what ways was the additional information useful? How could this be improved?
11. Was the framework useful for you, and in what way?
12. In what ways did the assignments reflect the steps identified in the framework?
Asymptotic solutions of the Dirichlet problem for the heat equation at a characteristic point
(2012)
The Dirichlet problem for the heat equation in a bounded domain is characteristic, for there are boundary points at which the boundary touches a characteristic hyperplane t = c, c being a constant. It was I.G. Petrovskii (1934) who first found necessary and sufficient conditions on the boundary which guarantee that the solution is continuous up to the characteristic point, provided that the Dirichlet data are continuous. His paper initiated a lasting interest in the study of general boundary value problems for parabolic equations in bounded domains. We contribute to this study by constructing a formal solution of the Dirichlet problem for the heat equation in a neighbourhood of a characteristic boundary point and by showing its asymptotic character.
This document presents an axiom selection technique for classical first-order theorem proving based on the relevance of axioms for the proof of a conjecture. It is based on the unifiability of predicates and does not need statistical information such as symbol frequencies. The technique aims to reduce the set of axioms and to increase the number of conjectures provable within a given time. Since the technique generates a subset of the axiom set, it can be used as a preprocessor for automated theorem proving. This technical report describes the conception, implementation and evaluation of ARDE. The selection method, based on a breadth-first graph search over the unifiability of predicates, is a weakened form of the connection calculus and uses specialised variants of unifiability to speed up the selection. The implementation of the concept is evaluated by comparison with the results of the world championship of theorem provers of the year 2012 (CASC-J6). It is shown that both the theorem prover leanCoP, which uses the connection calculus, and E, which uses equality reasoning, can benefit from the selection approach. The evaluation also shows that the concept is applicable to theorem-proving problems with thousands of formulae and that the selection is independent of the calculus used by the theorem prover.
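The breadth-first selection idea can be sketched as follows, with full unifiability deliberately simplified to shared predicate symbols (ARDE's actual check is finer-grained); axiom names and predicate symbols are made up for illustration:

```python
def select_axioms(conjecture_preds, axioms, max_depth=2):
    """Breadth-first axiom selection by predicate relevance (a sketch:
    relevance via shared predicate symbols, standing in for unifiability).

    axioms: dict mapping axiom name -> set of predicate symbols it uses.
    Returns the names of axioms reached within max_depth BFS levels."""
    selected = set()
    frontier = set(conjecture_preds)       # predicates reached so far
    seen_preds = set(frontier)
    for _ in range(max_depth):
        newly_reached = set()
        for name, preds in axioms.items():
            if name not in selected and preds & frontier:
                selected.add(name)                 # axiom is relevant
                newly_reached |= preds - seen_preds
        seen_preds |= newly_reached
        frontier = newly_reached
        if not frontier:
            break
    return selected

axioms = {
    "ax1": {"subset", "member"},
    "ax2": {"member", "union"},
    "ax3": {"union", "intersect"},
    "ax4": {"group", "inverse"},     # unrelated to the conjecture
}
print(select_axioms({"subset"}, axioms, max_depth=2))  # selects ax1 and ax2
```

The depth bound trades completeness for a smaller axiom set, which is exactly the preprocessor trade-off described above.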
Estimation of the self-similarity exponent has attracted growing interest in recent decades and has become a research subject in various fields and disciplines. Real-world data exhibiting self-similar behavior and/or parametrized by a self-similarity exponent (in particular the Hurst exponent) have been collected in fields ranging from finance and the human sciences to hydrologic and traffic networks. Such a rich class of possible applications obliges researchers to investigate qualitatively new methods for the estimation of the self-similarity exponent as well as for the identification of long-range dependence (or long memory). In this thesis I present the Bayesian estimation of the Hurst exponent. In contrast to previous methods, the Bayesian approach makes it possible to calculate the point estimator and confidence intervals at the same time, bringing significant advantages for data analysis, as discussed in this thesis. Moreover, it is also applicable to short and unevenly sampled data, thus broadening the range of systems where the estimation of the Hurst exponent is possible. Taking into account that one of the substantial classes of great interest in modeling is the class of Gaussian self-similar processes, this thesis considers realizations of fractional Brownian motion and fractional Gaussian noise. Additionally, applications to real-world data, such as water-level records of the Nile River and fixational eye movements, are also discussed.
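As a point of reference for the processes considered here, a classical (non-Bayesian) Hurst estimate from a fractional Gaussian noise realization can be sketched as follows; the aggregated-variance method below stands in for the Bayesian procedure developed in the thesis:

```python
import numpy as np

def fgn(n, H, rng):
    """Exact fractional Gaussian noise via Cholesky factorization of the
    autocovariance gamma(k) = (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H)) / 2."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64)):
    """Aggregated-variance estimator: the variance of block means scales as
    m^(2H - 2), so H is read off a log-log regression slope."""
    logm, logv = [], []
    for m in block_sizes:
        means = x[: len(x) // m * m].reshape(-1, m).mean(axis=1)
        logm.append(np.log(m))
        logv.append(np.log(means.var()))
    slope = np.polyfit(logm, logv, 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(1)
x = fgn(2048, H=0.7, rng=rng)
print(f"estimated H = {hurst_aggvar(x):.2f}")
```

The estimate should land in the neighbourhood of the true H = 0.7; unlike the Bayesian approach, this point estimator comes without confidence intervals and degrades for short or unevenly sampled series.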
In this thesis, different aspects within the research field of protein spectro- and electrochemistry on nanostructured materials are addressed. The first part of this work investigates nanostructured, transparent and conductive metal oxides as platforms for the immobilization of electroactive enzymes. The second part concerns the immobilization of sulfite oxidase on a gold nanoparticle-modified electrode. Finally, the direct and mediated spectroelectrochemistry of proteins of high structural complexity, such as the xanthine dehydrogenase from Rhodobacter capsulatus and its close homologue, the mouse aldehyde oxidase homolog 1, is investigated. Stable immobilization and reversible electrochemistry of cytochrome c in a transparent and conductive tin-doped and tin-rich indium oxide film with a well-defined mesoporosity are reported. The transparency and good conductivity, in combination with the large surface area of these materials, allow the incorporation of a high amount of electroactive biomolecules (between 250 and 2500 pmol cm^-2) and their electrochemical and spectroscopic investigation. Both the electrochemical behavior and the immobilization of proteins are influenced by the geometric parameters of the porous material, such as the structure and pore shape and the surface chemistry, as well as by the protein size and charge. UV-Vis and resonance Raman spectroscopy, in combination with direct protein voltammetry, are employed for the characterization of cytochrome c immobilized in the mesoporous indium tin oxide and reveal no perturbation of the structural integrity of the redox protein. Long-term protein immobilization is achieved using these unmodified mesoporous indium oxide-based materials, i.e. for more than two weeks even at high ionic strength. The potential of this modified material as an amperometric biosensor for the detection of superoxide anions is demonstrated.
A sensitivity of about 100 A M^-1 m^-2, in a linear measuring range of the superoxide concentration between 0.13 and 0.67 μM, is estimated. In addition, an electrochemically switchable protein-based optical device is designed, with the core part composed of cytochrome c immobilized on a mesoporous indium tin oxide film. A color-developing, redox-sensitive dye is used as the switchable component of the system. The cytochrome c-catalyzed oxidation of the dye by hydrogen peroxide is investigated spectroscopically. When the dye is co-immobilized with the protein, its redox state is easily controlled by applying an electrical potential to the supporting material. This enables the system to be electrochemically reset to its initial state and allows repetitive signal generation. The case of negatively charged proteins, which do not interact well with the negatively charged indium oxide-based films, is also explored. The modification of an indium tin oxide film with a positively charged polymer and the employment of an antimony-doped tin oxide film were investigated in this work in order to overcome the repulsion between the like charges of protein and electrode. Human sulfite oxidase and its separated heme-containing domain are able to exchange electrons directly with the supporting material. A study of a new approach for sulfite biosensing, based on the enhanced direct electron transfer of human sulfite oxidase immobilized on a gold nanoparticle-modified electrode, is reported. The spherical gold nanoparticles were prepared via a novel method by reduction of HAuCl4 with branched poly(ethyleneimine) in an ionic liquid, resulting in particles of about 10 nm in hydrodynamic diameter. These nanoparticles were covalently attached to a mercaptoundecanoic acid-modified Au electrode and act as a platform onto which human sulfite oxidase is adsorbed. Enhanced interfacial electron transfer and electrocatalysis are thereby achieved.
UV-Vis and resonance Raman spectroscopy, in combination with direct protein voltammetry, were employed for the characterization of the system and reveal no perturbation of the structural integrity of the redox protein. The proposed biosensor exhibited a fast steady-state current response within 2 s and a linear detection range between 0.5 and 5.4 μM with high sensitivity (1.85 nA μM-1). The investigated system provides remarkable advantages, since it works at low applied potential and at very high ionic strength. These properties could make the proposed system useful for the development of bioelectronic devices and for application to real samples. Finally, proteins with high structural complexity, namely xanthine dehydrogenase from Rhodobacter capsulatus and the mouse aldehyde oxidase homolog 1, were studied spectroelectrochemically. It could be demonstrated that different cofactors present in the protein structure, like the FAD and the molybdenum cofactor, are able to exchange electrons directly with an electrode and appear as a single peak in a square wave voltammogram. Protein mutants in which the cysteines binding the most exposed iron-sulfur cluster were substituted by serine additionally showed direct electron transfer attributable to this cluster. Furthermore, a mediated spectroelectrochemical titration of the protein-bound FAD cofactor was performed in the presence of transparent iron and cobalt complex mediators. The results showed the formation of the stable semiquinone and of the fully reduced flavin. Two formal potentials, one for each single-electron exchange step, were then determined.
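Taken at face value, the reported calibration defines a simple linear response. The following minimal sketch uses only the numbers quoted above (sensitivity 1.85 nA μM-1 over the 0.5-5.4 μM range); the function names are our own hypothetical helpers, not part of the thesis:

```python
# Minimal sketch of a linear amperometric calibration, using the figures
# reported above: sensitivity 1.85 nA/uM over a 0.5-5.4 uM linear range.
SENSITIVITY_NA_PER_UM = 1.85   # nA per micromolar sulfite
LINEAR_RANGE_UM = (0.5, 5.4)   # linear detection range in micromolar

def current_response_nA(conc_uM: float) -> float:
    """Predicted steady-state current (nA) for a sulfite concentration.

    Only valid inside the linear range; raises otherwise.
    """
    lo, hi = LINEAR_RANGE_UM
    if not lo <= conc_uM <= hi:
        raise ValueError("concentration outside the linear range")
    return SENSITIVITY_NA_PER_UM * conc_uM

def concentration_from_current_uM(current_nA: float) -> float:
    """Invert the calibration: concentration (uM) from measured current."""
    return current_nA / SENSITIVITY_NA_PER_UM
```

Inverting such a one-parameter calibration is how a measured current would be converted back into a sulfite concentration in practice.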
Carbohydrate recognition is a ubiquitous principle underlying many fundamental biological processes like fertilization, embryogenesis and viral infection. How carbohydrate specificity and affinity induce a molecular event is, however, not well understood. One example is bacteriophage P22, which binds and infects three distinct Salmonella enterica (S.) hosts. It recognizes and depolymerizes repetitive carbohydrate structures of the O antigen in its host's outer membrane lipopolysaccharide. This is mediated by tailspikes, mainly β-helical appendages on the short, non-contractile tail apparatus of phage P22 (a podovirus). The O antigens of all three Salmonella enterica hosts are built from tetrasaccharide repeating units consisting of an identical main chain with a distinguishing 3,6-dideoxyhexose substituent that is crucial for P22 tailspike recognition: tyvelose in S. Enteritidis, abequose in S. Typhimurium and paratose in S. Paratyphi. In the first study, the complexes of the P22 tailspike with O antigen octasaccharides of its hosts were characterized. The S. Paratyphi octasaccharide binds less tightly (ΔΔG ≈ 7 kJ/mol) to the tailspike than those of the other two hosts. Crystal structure analysis of the P22 tailspike co-crystallized with S. Paratyphi octasaccharides revealed different interactions than those observed before in tailspike complexes with S. Enteritidis and S. Typhimurium octasaccharides. These different interactions arise from a structural rearrangement in the S. Paratyphi octasaccharide. It results in an unfavorable glycosidic bond Φ/Ψ angle combination, which had also occurred when the S. Paratyphi octasaccharide conformation was analyzed in an aprotic environment. Contributions of individual protein surface contacts to binding affinity were analyzed, showing that conserved structural waters mediate specific recognition of all three different Salmonella host O antigens.
Although the different O antigen structures display distinct binding behavior on the tailspike surface, all are recognized, and the corresponding hosts infected, by phage P22. In a second study, binding measurements revealed that multivalent O antigen binds with high avidity to the P22 tailspike. Dissociation rates of the polymer were three times slower than for an octasaccharide fragment, pointing towards high affinity for O antigen polysaccharide. Furthermore, when phage P22 was incubated with lipopolysaccharide aggregates before plating on S. Typhimurium cells, P22 infectivity was significantly reduced. Therefore, in a third study, the function of carbohydrate recognition in the infection process was characterized. It was shown that large S. Typhimurium lipopolysaccharide aggregates triggered DNA release from the phage capsid in vitro. This provides evidence that phage P22 does not use a second receptor on the Salmonella surface for infection. P22 tailspike binding and cleavage activity modulate DNA egress from the phage capsid. DNA release occurred more slowly when the phage possessed mutant tailspikes with less hydrolytic activity, and it was not induced if the lipopolysaccharides contained O antigen polymer shortened by tailspikes. Furthermore, the onset of DNA release was delayed by tailspikes with reduced binding affinity. The results suggest a model for P22 infection induced by carbohydrate recognition: tailspikes position the phage on Salmonella enterica, and their hydrolytic activity forces a central structural protein of the phage assembly, the plug protein, onto the host's membrane surface. Upon membrane contact, a conformational change has to occur in the assembly to eject DNA and pilot proteins from the phage and establish infection. Earlier studies had investigated DNA ejection in vitro solely for viruses with long non-contractile tails (siphoviruses) that recognize protein receptors.
The podovirus P22 studied in this work was therefore the first example of a short-tailed phage with an LPS recognition organelle that can trigger DNA ejection in vitro. O antigen binding and cleaving tailspikes are, however, widely distributed in the phage biosphere, for example in siphovirus 9NA. Crystal structure analysis of the 9NA tailspike revealed a fold very similar to that of the P22 tailspike, although the two share only 36 % sequence identity. Moreover, the 9NA tailspike possesses similar enzyme activity towards S. Typhimurium O antigen, mediated by conserved amino acids. Accordingly, lipopolysaccharide aggregates also trigger a DNA ejection process from siphovirus 9NA. 9NA expelled its DNA 30 times faster than podovirus P22, although the associated conformational change is controlled by a similarly high activation barrier. The difference in DNA ejection velocity mirrors the different tail morphologies and their efficiency in translating a carbohydrate recognition signal into action.
We study boundary value problems for linear elliptic differential operators of order one. The underlying manifold may be noncompact, but the boundary is assumed to be compact. We require a symmetry property of the principal symbol of the operator along the boundary. This is satisfied by Dirac type operators, for instance. We provide a self-contained introduction to (nonlocal) elliptic boundary conditions, boundary regularity of solutions, and index theory. In particular, we simplify and generalize the traditional theory of elliptic boundary value problems for Dirac type operators. We also prove a related decomposition theorem, a general version of Gromov and Lawson's relative index theorem and a generalization of the cobordism theorem.
Among the most exciting predictions of Einstein's theory of gravitation that have not yet been confirmed experimentally by a direct detection are gravitational waves. These are tiny distortions of spacetime itself, and a world-wide effort to measure them directly for the first time with a network of large-scale laser interferometers is currently ongoing and expected to provide positive results within this decade. One potential source of measurable gravitational waves is the inspiral and merger of two compact objects, such as binary black holes. Successfully finding their signature in the noise-dominated data of the detectors crucially relies on accurate predictions of what we are looking for. In this thesis, we present a detailed study of how the most complete waveform templates can be constructed by combining the results from (A) analytical expansions within the post-Newtonian framework and (B) numerical simulations of the full relativistic dynamics. We analyze various strategies to construct complete hybrid waveforms that consist of a post-Newtonian inspiral part matched to numerical-relativity data. We elaborate on existing approaches for nonspinning systems by extending the accessible parameter space and introducing an alternative scheme based in the Fourier domain. Our methods can now be readily applied to multiple spherical-harmonic modes and precessing systems. In addition, we analyze in detail the accuracy of hybrid waveforms with the goal of quantifying how the numerous sources of error in the approximation techniques affect the application of such templates in real gravitational-wave searches. This is of major importance for the future construction of improved models, but also for the correct interpretation of gravitational-wave observations made utilizing any complete waveform family.
In particular, we comprehensively discuss how long the numerical-relativity contribution to the signal has to be in order to make the resulting hybrids accurate enough, and for currently feasible simulation lengths we assess the physics one can potentially do with template-based searches.
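A standard figure of merit in such accuracy studies is the match between two waveforms. The sketch below is our own illustration, not the thesis' actual analysis pipeline: it assumes white detector noise (a flat power spectral density) instead of a realistic detector curve, and maximizes the normalized overlap over a relative time shift via FFT cross-correlation:

```python
import numpy as np

# Illustrative sketch (not the thesis' actual pipeline): the "match"
# between two discretized waveforms h1, h2 is their normalized inner
# product, maximized here over a relative time shift. White noise
# (flat PSD) is assumed for simplicity.
def match(h1: np.ndarray, h2: np.ndarray) -> float:
    n = len(h1) + len(h2)                     # zero-pad for linear correlation
    H1 = np.fft.rfft(h1, n)
    H2 = np.fft.rfft(h2, n)
    corr = np.fft.irfft(H1 * np.conj(H2), n)  # correlation at every time shift
    norm = np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))
    return float(np.max(np.abs(corr)) / norm)

# A chirp-like stand-in signal: identical waveforms match perfectly,
# and the match is insensitive to an overall amplitude rescaling.
t = np.linspace(0.0, 1.0, 4096)
h = np.sin(2.0 * np.pi * (30.0 * t + 40.0 * t**2))
print(match(h, h))        # ≈ 1.0 (up to rounding)
print(match(h, 0.5 * h))  # ≈ 1.0 (up to rounding)
```

In a realistic search the inner product would be weighted by the detector's noise spectrum and maximized over phase as well, but the normalized-overlap structure is the same.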
Taking advantage of ATRP and using functionalized initiators, different functionalities were introduced at both the α and ω chain ends of synthetic polymers. These functionalized polymers could then undergo modular synthetic pathways such as click cycloaddition (copper-catalyzed or copper-free) or amidation to couple them to other synthetic polymers, biomolecules or silica monoliths. Using this general strategy and designing the co/polymers so that they are thermoresponsive, yet bioinert and biocompatible with adjustable cloud point values (as is the case in the present thesis), the whole generated system becomes "smart" and potentially applicable in different fields. The applications considered in the present thesis were polymer post-functionalization (in situ functionalization of micellar aggregates with low and high molecular weight molecules), hydrophilic/hydrophobic tuning, chromatography and bioconjugation (enzyme thermoprecipitation and recovery, improvement of enzyme activity). Different α-functionalized co/polymers containing a cholesterol moiety, an aldehyde, a t-Boc-protected amine, a TMS-protected alkyne or an NHS-activated ester were designed and synthesized in this work.
This work investigates diffusion in nonlinear Hamiltonian systems. The diffusion, more precisely subdiffusion, in such systems is induced by the intrinsic chaotic behavior of trajectories and is thus called "chaotic diffusion". Its properties are studied on the example of one- or two-dimensional lattices of harmonic or nonlinear oscillators with nearest-neighbor couplings. The fundamental observation is the spreading of energy for localized initial conditions. Methods of quantifying this spreading behavior are presented, including a new quantity called the excitation time, which allows for a more precise analysis of the spreading than traditional methods. Furthermore, the nonlinear diffusion equation (NDE) is introduced as a phenomenological description of the spreading process, and a number of predictions on the density dependence of the spreading are drawn from this equation. Two mathematical techniques for analyzing nonlinear Hamiltonian systems are introduced. The first is based on a scaling analysis of the Hamiltonian equations, and the results are related to similar scaling properties of the NDE; from this relation, exact spreading predictions are deduced. Secondly, the microscopic dynamics at the edge of spreading states are thoroughly analyzed, which again suggests a scaling behavior that can be related to the NDE. Such a microscopic treatment of chaotically spreading states in nonlinear Hamiltonian systems has not been done before, and the results present a new technique for connecting microscopic dynamics with macroscopic descriptions like the nonlinear diffusion equation. All theoretical results are supported by extensive numerical simulations, partly obtained on one of Europe's fastest supercomputers, located in Bologna, Italy. Finally, the highly interesting case of harmonic oscillators with random frequencies and nonlinear coupling is studied, which resembles to some extent the famous discrete Anderson nonlinear Schroedinger equation.
For this model, a deviation from the widely believed power-law spreading is observed in numerical experiments. Some ideas towards a theoretical explanation for this deviation are presented, but a conclusive theory could not be found due to the complicated phase space structure in this case. Nevertheless, it is hoped that the techniques and results presented in this work will help to eventually understand this controversially discussed case as well.
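The kind of density-dependent, subdiffusive spreading described by the nonlinear diffusion equation can be illustrated with a minimal explicit finite-difference sketch. This is our own illustration with assumed parameters, not the thesis' production code: for d(rho)/dt = d/dx(rho^a d(rho)/dx), the self-similar solution predicts a second moment growing as t^(2/(a+2)), i.e. slower than normal diffusion.

```python
import numpy as np

# Minimal illustration (our own sketch, not the thesis' production code)
# of subdiffusive spreading under the nonlinear diffusion equation
#   d(rho)/dt = d/dx( rho^a * d(rho)/dx ).
# Its self-similar solution has a second moment growing as t^(2/(a+2)).
def spread_nde(a=2, dx=0.5, dt=0.1, steps=2000):
    """Explicit conservative scheme; returns grid, final density, moments."""
    x = np.arange(-100.0, 100.0 + dx, dx)
    rho = np.where(np.abs(x) < 2.0, 1.0, 0.0)      # localized initial state
    second_moments = []
    for _ in range(steps):
        mid = 0.5 * (rho[1:] + rho[:-1])            # density at cell faces
        flux = mid**a * (rho[1:] - rho[:-1]) / dx   # rho^a * d(rho)/dx
        rho[1:-1] += dt / dx * (flux[1:] - flux[:-1])
        second_moments.append(np.sum(x**2 * rho) / np.sum(rho))
    return x, rho, np.array(second_moments)

x, rho, m2 = spread_nde()
# Effective exponent between t = 50 and t = 200; for a = 2 the
# self-similar prediction is 2/(a+2) = 0.5.
print(np.log(m2[1999] / m2[499]) / np.log(4.0))
```

The measured exponent stays well below the value of 1 expected for normal diffusion, which is the hallmark of the subdiffusive spreading discussed above.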
We investigate properties of quantum mechanical systems in the light of quantum information theory. We put an emphasis on systems with infinite-dimensional Hilbert spaces, so-called "continuous-variable systems", which are needed to describe quantum optics beyond the single-photon regime and other bosonic quantum systems. We present methods to obtain a description of such systems from a series of measurements in an efficient manner and demonstrate their performance in realistic situations by means of numerical simulations. We consider both unconditional quantum state tomography, which is applicable to arbitrary systems, and tomography of matrix product states. The latter allows for the tomography of many-body systems because the necessary number of measurements scales merely polynomially with the particle number, compared to an exponential scaling in the generic case. We also present a method to realize such a tomography scheme for a system of ultra-cold atoms in optical lattices. Furthermore, we discuss in detail the possibilities and limitations of using continuous-variable systems for measurement-based quantum computing. We will see that the distinction between Gaussian and non-Gaussian quantum states and measurements plays a crucial role. We also provide an algorithm to solve the large and interesting class of naturally occurring Hamiltonians, namely frustration-free ones, efficiently, and use this insight to obtain a simple approximation method for slightly frustrated systems. To achieve these goals, we make use of, among various other techniques, the well-developed theory of matrix product states, tensor networks, semidefinite programming, and matrix analysis.
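The polynomial-versus-exponential scaling behind matrix product state tomography can be made concrete with a simple parameter count (our own back-of-the-envelope illustration): a generic N-site state of local dimension d has d^N amplitudes, while a matrix product state of bond dimension D is described by roughly N·d·D² numbers.

```python
# Back-of-the-envelope parameter counts (our illustration) behind the
# scaling claim: generic state tomography is exponential in the number
# of sites, matrix product state (MPS) tomography only polynomial.
def generic_params(n_sites: int, d: int = 2) -> int:
    """Complex amplitudes of a generic n-site state of local dimension d."""
    return d ** n_sites

def mps_params(n_sites: int, d: int = 2, bond_dim: int = 8) -> int:
    """Approximate parameter count of an MPS with the given bond dimension."""
    return n_sites * d * bond_dim ** 2

for n in (10, 20, 40):
    print(n, generic_params(n), mps_params(n))
```

Already at 40 qubits the generic description requires about 10^12 amplitudes, while the MPS description stays in the thousands, which is why the number of required measurements can scale polynomially.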
We consider orthogonal connections with arbitrary torsion on compact Riemannian manifolds. For the induced Dirac operators, twisted Dirac operators and Dirac operators of Chamseddine-Connes type we compute the spectral action. In addition to the Einstein-Hilbert action and the bosonic part of the Standard Model Lagrangian we find the Holst term from Loop Quantum Gravity, a coupling of the Holst term to the scalar curvature and a prediction for the value of the Barbero-Immirzi parameter.
Agriculture is one of the most important human activities, providing food and other agricultural goods for seven billion people around the world, and is of special importance in sub-Saharan Africa. The majority of people there depend on the agricultural sector for their livelihoods and will suffer from negative climate change impacts on agriculture until the middle and end of the 21st century, all the more so if weak governments, economic crises or violent conflicts endanger the countries' food security. The impact of temperature increases and changing precipitation patterns on agricultural vegetation motivated this thesis in the first place. Analyzing the potential for reducing negative climate change impacts by adapting crop management to changing climate is a second objective of the thesis. As a precondition for simulating climate change impacts on agricultural crops with a global crop model, the timing of sowing in the tropics was first improved and validated, as this is an important factor determining the length and timing of the crops' development phases, the occurrence of water stress and final crop yield. Crop yields are projected to decline in most regions, as is evident from the results of this thesis, but the uncertainties that exist in climate projections and in the efficiency of adaptation options due to political, economic or institutional obstacles have to be considered. The effects of temperature increases and changing precipitation patterns on crop yields can be analyzed separately and vary in space across the continent. Southern Africa is clearly the region most susceptible to climate change, especially to precipitation changes.
The Sahel north of 13° N and parts of Eastern Africa with short growing seasons below 120 days and limited wet-season precipitation of less than 500 mm are also vulnerable to precipitation changes, while in most other parts of East and Central Africa, in contrast, the effect of temperature increase on crops outweighs the precipitation effect and is most pronounced in a band stretching from Angola to Ethiopia in the 2060s. The results of this thesis confirm the findings of previous studies on the magnitude of climate change impacts on crops in sub-Saharan Africa, but beyond that they help to understand the drivers of these changes and the potential of certain management strategies for adaptation in more detail. Crop yield changes depend on the initial growing conditions, on the magnitude of climate change, and on the crop, the cropping system and the adaptive capacity of African farmers, which only now becomes evident from this comprehensive study for sub-Saharan Africa. Furthermore, this study improves the representation of tropical cropping systems in a global crop model and considers the major food crops cultivated in sub-Saharan Africa and climate change impacts throughout the continent.
The clumping of massive star winds is an established paradigm, which is confirmed by multiple lines of evidence and is supported by stellar wind theory. We use the results from time-dependent hydrodynamical models of the instability in the line-driven wind of a massive supergiant star to derive the time-dependent accretion rate onto a compact object in the Bondi-Hoyle-Lyttleton approximation. The strong density and velocity fluctuations in the wind result in strong variability of the synthetic X-ray light curves. Photoionization of inhomogeneous winds differs from the photoionization of smooth winds: the degree of ionization is affected by the wind clumping. The wind clumping must also be taken into account when comparing the observed and model spectra of the photoionized stellar wind.
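The Bondi-Hoyle-Lyttleton rate underlying this derivation can be written down in a few lines. The sketch below is our own illustration: the density and velocity series are synthetic stand-ins for the hydrodynamic output, and the numerical values (neutron-star mass, wind speed, sound speed) are generic assumptions, not the paper's parameters.

```python
import numpy as np

# Back-of-the-envelope sketch (not the paper's hydrodynamic model): the
# Bondi-Hoyle-Lyttleton accretion rate onto a compact object,
#   Mdot = 4 pi G^2 M^2 rho / (v_rel^2 + c_s^2)^(3/2),
# evaluated for a fluctuating ("clumpy") wind in CGS units.
G = 6.674e-8          # cm^3 g^-1 s^-2
M_SUN = 1.989e33      # g

def bhl_accretion_rate(rho, v_rel, c_s, mass=1.4 * M_SUN):
    """Instantaneous BHL accretion rate in g/s (CGS inputs)."""
    return 4 * np.pi * G**2 * mass**2 * rho / (v_rel**2 + c_s**2) ** 1.5

# Synthetic clumpy wind: log-normal density and fluctuating velocity,
# mimicking the strong inhomogeneities of a line-driven wind.
rng = np.random.default_rng(1)
rho = 1e-14 * rng.lognormal(0.0, 1.0, 1000)          # g cm^-3
v = 1.5e8 * (1 + 0.3 * rng.standard_normal(1000))    # cm s^-1
mdot = bhl_accretion_rate(rho, np.abs(v), c_s=1e6)
print(mdot.mean(), mdot.std())
```

Because the rate is linear in density and steeply decreasing in velocity, modest wind fluctuations already produce order-of-magnitude accretion-rate (and hence X-ray) variability, which is the effect the abstract describes.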
This paper develops a spatial model to analyze the stability of a market sharing agreement between two firms. We find that the stability of the cartel depends on the relative market size of each firm. Collusion is not attractive for firms with a small home market, but the incentive for collusion increases as the firm's home market becomes larger relative to the home market of the competitor. The highest stability of a cartel, and additionally the highest social welfare, is found when regions are symmetric. Furthermore, we show that a monetary transfer can stabilize the market sharing agreement.
Contents: Introduction; Developments in creating corpora: dlexDB, subtitles, and tabloid newspapers; Rating corpus emotionality; Current study; Method; Materials; Corpora; Results; Type-token ratio; Validity: effects of task difficulty; Emotionality of a corpus; Validity: effects of emotionality; Discussion; Outlook; References
We consider the Dirichlet, Neumann and Zaremba problems for harmonic functions in a bounded plane domain with nonsmooth boundary. The boundary curve belongs to one of the following three classes: sectorial curves, logarithmic spirals and spirals of power type. To study the problem we apply a familiar method of Vekua and Muskhelishvili, which consists in using a conformal mapping of the unit disk onto the domain to pull back the problem to a boundary problem for harmonic functions in the disk. The latter is reduced in turn to a Toeplitz operator equation on the unit circle with a symbol bearing discontinuities of second kind. We develop a constructive invertibility theory for such Toeplitz operators and thus derive solvability conditions as well as explicit formulas for solutions.
Connecting the new world
(2012)
This article explores the link between the profound technological transformations of the nineteenth century and the life and work of the Prussian scholar Alexander von Humboldt (1769-1859). It analyses how Humboldt sought to appropriate the revolutionary new communication and transportation technologies of the time in order to integrate the American continent into global networks of commercial, intellectual and material exchange. Recent scholarship on Humboldt’s expedition to the New World (1799-1804) has claimed that his descriptions of tropical landscapes opened up South America to a range of ‘transformative interventions’ (Pratt) by European capitalists and investors. These studies, however, have not analysed the motivations underlying Humboldt’s support for such intrusions into nature. Furthermore, they have not explored the role that such projects played in shaping Humboldt’s understanding of the forces behind the progress of societies. To comprehend Humboldt’s approval for human interventions in America’s natural world, this study first explores the role that eighteenth-century theories of progress and the notion of geographical determinism played in shaping his conception of civilisational development. It will look at concrete examples of transformative interventions in the American hemisphere that were actively proposed by Humboldt and intended to overcome natural obstacles to human interaction. These were the use of steamships, electric telegraphy, railroads and large-scale canals that together enabled global trade and communication to occur at an unprecedented pace. All these contemporary innovations will be linked to the four motifs of nets, mobility, progress and acceleration, which were driving forces behind the ‘transformation of the world’ that took place in the course of the nineteenth century.
The size of plant organs, such as leaves and flowers, is determined by an interaction of genotype and environmental influences. Organ growth occurs through the two successive processes of cell proliferation followed by cell expansion. A number of genes influencing either or both of these processes, and thus contributing to the control of final organ size, have been identified in the last decade. Although the overall picture of the genetic regulation of organ size remains fragmentary, two transcription factor/microRNA-based genetic pathways are emerging in the control of cell proliferation. However, despite this progress, fundamental questions remain unanswered, such as the problem of how the size of a growing organ could be monitored to determine the appropriate time for terminating growth. While genetic analysis will undoubtedly continue to advance our knowledge about size control in plants, a deeper understanding of this and other basic questions will require the inclusion of advanced live imaging and mathematical modeling, as impressively demonstrated by some recent examples. This should ultimately allow the comparison of the mechanisms underlying size control in plants and in animals to extract common principles and lineage-specific solutions.
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes; only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Further, we define quality measures for conditions, inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
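To make the covering/completeness distinction concrete, here is a small sketch with illustrative definitions inspired by precision and recall. The function names and the toy relation are our own, and the paper's exact measures may differ: a "condition" is a predicate over tuples, and "included" marks tuples whose value actually appears in the referenced relation.

```python
# Illustrative quality measures for conditional inclusion dependencies,
# inspired by precision and recall (our own sketch; the paper's exact
# definitions may differ).

def covering_quality(tuples, condition, included):
    """Precision-like: of the tuples selected by the condition,
    what fraction satisfies the inclusion?"""
    selected = [t for t in tuples if condition(t)]
    if not selected:
        return 0.0
    return sum(included(t) for t in selected) / len(selected)

def completeness_quality(tuples, condition, included):
    """Recall-like: of the tuples satisfying the inclusion,
    what fraction does the condition select?"""
    satisfying = [t for t in tuples if included(t)]
    if not satisfying:
        return 0.0
    return sum(condition(t) for t in satisfying) / len(satisfying)

# Toy example: orders whose product id must appear in a catalog relation.
catalog = {"p1", "p2"}
orders = [
    {"pid": "p1", "region": "EU"},
    {"pid": "p2", "region": "EU"},
    {"pid": "p9", "region": "US"},   # violates the inclusion
    {"pid": "p1", "region": "US"},
]
cond = lambda t: t["region"] == "EU"
incl = lambda t: t["pid"] in catalog
print(covering_quality(orders, cond, incl))      # 1.0: every EU order is included
print(completeness_quality(orders, cond, incl))  # 2/3: misses one included US order
```

A condition with high covering quality carves out a clean scope where the inclusion holds, while high completeness quality means the condition captures most of the tuples that satisfy the inclusion; the algorithms in the paper search for conditions meeting thresholds on such measures.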
Cyber-physical systems achieve sophisticated system behavior by exploiting the tight interconnection of the physical coupling present in classical engineering systems and the coupling established by information technology. A particularly challenging case are systems in which such cyber-physical systems are formed ad hoc according to the specific local topology, the available networking capabilities, and the goals and constraints of the subsystems captured by the information-processing part. In this paper we present a formalism that permits modeling the sketched class of cyber-physical systems. The ad hoc formation of tightly coupled subsystems of arbitrary size is specified using a UML-based graph transformation system approach. Differential equations are employed to define the resulting tightly coupled behavior. Together, both form hybrid graph transformation systems, where the graph transformation rules define the discrete steps at which the topology or modes may change, while the differential equations capture the continuous behavior in between such discrete changes. In addition, we demonstrate that automated analysis techniques known for timed graph transformation systems for inductive invariants can be extended to also cover the hybrid case for an expressive class of hybrid models in which the formed tightly coupled subsystems are restricted to smaller local networks.
Governments at central and sub-national levels are increasingly pursuing participatory mechanisms in a bid to improve governance and service delivery. This has largely happened in the context of decentralization reforms, in which central governments transfer (or share) political, administrative, fiscal and economic powers and functions to sub-national units. Despite the great international support and advocacy for participatory governance, in which citizens' voice plays a key role in the decision making of decentralized service delivery, there is a notable dearth of empirical evidence as to the effect of such participation. This is the question this study sought to answer, based on a case study of direct citizen participation in Local Authorities (LAs) in Kenya. Such participation is formally provided for by the Local Authority Service Delivery Action Plan (LASDAP) framework, which was established to ensure that citizens play a central role in the planning and budgeting, implementation and monitoring of locally identified services towards improving livelihoods and reducing poverty. The influence of participation was assessed in terms of how it affected five key determinants of effective service delivery, namely: efficient allocation of resources; equity in service delivery; accountability and reduction of corruption; quality of services; and cost recovery. The study finds that the participation of citizens is minimal and its influence on decentralized service delivery negligible. It concludes that despite the dismal performance of citizen participation, LASDAP has played a key role in institutionalizing citizen participation that future structures will build on.
It recommends that an effective framework of citizen participation should be one that is not directly linked to politicians; one that is founded on a legal framework and in which citizens have a legal recourse opportunity; and one that obliges LA officials both to implement citizens' proposals that meet the set criteria and to account for their actions in the management of public resources.
Deepening Understanding
(2012)
We study the Dirichlet problem in a bounded plane domain for the heat equation with a small parameter multiplying the derivative in t. The behaviour of the solution at characteristic points of the boundary is of special interest. This behaviour is well understood if a characteristic line is tangent to the boundary with contact degree at least 2. We allow the boundary not only to have contact of degree less than 2 with a characteristic line but also a cuspidal singularity at a characteristic point. We construct an asymptotic solution of the problem near the characteristic point to describe how the boundary layer degenerates.
In this work, the synthesis of biopolymer-based hydrogel networks with defined architecture is presented. In order to obtain materials with defined properties, the chemoselective copper-catalyzed azide-alkyne cycloaddition (click chemistry) was used for the synthesis of gelatin-based hydrogels. Alkyne-functionalized gelatin was reacted with four different diazide crosslinkers above its sol-gel transition to suppress the formation of triple helices. By variation of the crosslinking density and the crosslinker flexibility, the swelling (Q: 150-470 vol.-%) as well as the Young's and shear moduli (E: 50 kPa - 635 kPa, G': 0.1 kPa - 16 kPa) could be tuned in the kPa range. In order to understand the network structure, a method based on the labelling of free functional groups within the hydrogel was developed. Gelatin-based hydrogels were incubated with alkyne-functionalized fluorescein to detect the free azide groups resulting from the formation of dangling chains. Gelatin hydrogels were also incubated with azido-functionalized fluorescein to check for alkyne groups available for the attachment of bioactive molecules. Using confocal laser scanning microscopy and fluorescence spectroscopy, the amounts of crosslinking, grafting and free alkyne groups could be determined. Dangling chains were observed in samples prepared with an excess of crosslinker and also with equimolar amounts of alkyne:azide. In the latter case, the amount of dangling chains was affected by the crosslinker structure: 0.1 % of dangling chains were found using 4,4'-diazido-2,2'-stilbene-disulfonic acid as crosslinker, 0.06 % with 1,8-diazidooctane, 0.05 % with 1,12-diazidododecane and 0.022 % with PEG-diazide. This observation can be explained by the structure of the crosslinkers: during network formation, the movements of the gelatin chains are restricted due to the formation of covalent netpoints.
Further crosslinking is then possible only with crosslinkers that are flexible and long enough to reach another chain. The method used to obtain defined gelatin-based hydrogels also enabled the synthesis of hyaluronic acid-based hydrogels with tailorable properties. Alkyne-functionalized hyaluronic acid was crosslinked with three different linkers bearing two terminal azide functionalities. By variation of the crosslinking density and crosslinker type, hydrogels with elastic moduli in the range of 0.5-3 kPa were prepared. The variation of the crosslinking density and crosslinker type furthermore influenced the hydrolytic and enzymatic degradation of the gelatin-based hydrogels. Hydrogels with a low crosslinker amount showed a faster loss of mass and elastic modulus than hydrogels with a higher crosslinker content. Moreover, the structure of the crosslinker had a strong influence on the enzymatic degradation: hydrogels containing a crosslinker with a rigid structure were much more resistant to enzymatic degradation than hydrogels containing a flexible crosslinker. During hydrolytic degradation, the hydrogels became softer while maintaining the same outer dimensions. These observations are in agreement with a bulk degradation mechanism, while the decrease in size of the hydrogels during enzymatic degradation suggests a surface erosion mechanism. Because of the small amounts of crosslinker used (0.002 mol.% to 0.02 mol.%), the synthesized networks can still be defined as biopolymer-based hydrogels; however, they contain a small percentage of synthetic residues. Alternatively, a possible method to obtain biopolymer-based telechelics, which could be used as crosslinkers, was investigated. Gelatin-based fragments with defined molecular weights were obtained by controlled degradation of gelatin with hydroxylamine, owing to its specific action on asparaginyl-glycine bonds.
The reaction of gelatin with hydroxylamine resulted in fragments with molecular weights of 15, 25, 37, and 50 kDa (determined by SDS-PAGE), independently of the reaction time and conditions. Each of these fragments could potentially be used for the synthesis of hydrogels in which all components are biopolymer-based materials.
Developing Critical Thinking
(2012)
Cargo transport by molecular motors is ubiquitous in all eukaryotic cells and is typically driven cooperatively by several molecular motors, which may belong to one or several motor species like kinesin, dynein or myosin. These motor proteins transport cargos such as RNAs, protein complexes or organelles along filaments, from which they unbind after a finite run length. Understanding how these motors interact and how their movements are coordinated and regulated is a central and challenging problem in studies of intracellular transport. In this thesis, we describe a general theoretical framework for the analysis of such transport processes, which enables us to explain the behavior of intracellular cargos based on the transport properties of individual motors and their interactions. Motivated by recent in vitro experiments, we address two different modes of transport: unidirectional transport by two identical motors and cooperative transport by actively walking and passively diffusing motors. The case of cargo transport by two identical motors involves an elastic coupling between the motors that can reduce the motors’ velocity and/or the binding time to the filament. We show that this elastic coupling leads, in general, to four distinct transport regimes. In addition to a weak coupling regime, kinesin and dynein motors are found to exhibit a strong coupling and an enhanced unbinding regime, whereas myosin motors are predicted to attain a reduced velocity regime. All of these regimes, which we derive both by analytical calculations and by general time scale arguments, can be explored experimentally by varying the elastic coupling strength. In addition, using the time scale arguments, we explain why previous studies came to different conclusions about the effect and relevance of motor-motor interference. In this way, our theory provides a general and unifying framework for understanding the dynamical behavior of two elastically coupled molecular motors. 
The second mode of transport studied in this thesis is cargo transport by actively pulling and passively diffusing motors. Although these passive motors do not participate in active transport, they strongly enhance the overall cargo run length. When an active motor unbinds, the cargo is still tethered to the filament by the passive motors, giving the unbound motor the chance to rebind and continue its active walk. We develop a stochastic description for such cooperative behavior and explicitly derive the enhanced run length for a cargo transported by one actively pulling and one passively diffusing motor. We generalize our description to the case of several pulling and diffusing motors and find an exponential increase of the run length with the number of involved motors.
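The tethering effect described above can be illustrated with a minimal stochastic sketch. All rates and the model structure below are hypothetical simplifications for illustration, not the thesis's actual model: the active motor walks at speed v and unbinds at rate eps_a; while the passive motor (unbinding rate eps_p) still tethers the cargo, the active motor may rebind at rate pi_a and continue its run.

```python
import random

def run_length(v=1.0, eps_a=1.0, eps_p=0.5, pi_a=5.0, passive=True):
    """One stochastic run of a cargo pulled by a single active motor.

    The active motor walks at speed v and unbinds at rate eps_a. If a
    passive motor tethers the cargo, the active motor may rebind (rate
    pi_a) before the passive motor unbinds (rate eps_p); otherwise the
    run ends. Returns the total distance walked."""
    x = 0.0
    while True:
        x += v * random.expovariate(eps_a)   # distance until active motor unbinds
        if not passive:
            return x
        # race between rebinding of the active and unbinding of the passive motor
        if random.expovariate(pi_a) > random.expovariate(eps_p):
            return x

random.seed(1)
n = 20000
single = sum(run_length(passive=False) for _ in range(n)) / n   # ~ v/eps_a = 1
tethered = sum(run_length() for _ in range(n)) / n              # ~ 1/(1-p), p = pi_a/(pi_a+eps_p)
print(single, tethered)
```

With these illustrative rates the rebinding probability per unbinding event is p = pi_a/(pi_a + eps_p) ≈ 0.91, so the mean run length grows by a factor 1/(1-p) = 11; each additional tethering motor multiplies the survival probability of the tether again, which is the origin of the exponential growth of the run length with motor number.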
Much of our knowledge about the solar dynamo is based on sunspot observations. It is thus desirable to extend the set of positional and morphological data of sunspots into the past. Gustav Spörer observed in Germany from Anklam (1861–1873) and Potsdam (1874–1894). He left detailed prints of sunspot groups, which we digitized and processed to mitigate artifacts left in the print by the passage of time. After careful geometrical correction, the sunspot data are now available as synoptic charts for almost 450 solar rotation periods. Individual sunspot positions can thus be precisely determined and spot areas can be accurately measured using morphological image processing techniques. These methods also allow us to determine tilt angles of active regions (Joy’s law) and to assess the complexity of an active region.
The authors discuss the use of the discrepancy principle for statistical inverse problems when the underlying operator is of trace class. Under this assumption the discrepancy principle is well defined; however, a plain use of it may occasionally fail, and it will yield sub-optimal rates. Therefore, a modification of the discrepancy principle is introduced, which takes both of these deficiencies into account. For a variety of linear regularization schemes as well as for conjugate gradient iteration, this modification is shown to yield order-optimal a priori error bounds under general smoothness assumptions. A posteriori error control is also possible, although in general at a sub-optimal rate. This study uses and complements previous results for bounded deterministic noise.
The project of public-reason liberalism faces a basic problem: publicly justified principles are typically too abstract and vague to be directly applied to practical political disputes, whereas applicable specifications of these principles are not uniquely publicly justified. One solution could be a legislative procedure that selects one member from the eligible set of inconclusively justified proposals. Yet if liberal principles are too vague to select sufficiently specific legislative proposals, can they, nevertheless, select specific legislative procedures? Based on the work of Gerald Gaus, this article argues that the only candidate for a conclusively justified decision procedure is a majoritarian or otherwise ‘neutral’ democracy. If the justification of democracy requires an equality baseline in the design of political regimes and if justifications for departure from this baseline are subject to reasonable disagreement, a majoritarian design is justified by default. Gaus’s own preference for super-majoritarian procedures is based on disputable specifications of justified liberal principles. These procedures can only be defended as a sectarian preference if the equality baseline is rejected, but then it is not clear how the set of justifiable political regimes can be restricted to full democracies.
Dynamic regulatory on/off minimization for biological systems under internal temporal perturbations
(2012)
Background: Flux balance analysis (FBA), together with its extension, dynamic FBA, has proven instrumental for analyzing the robustness and dynamics of metabolic networks by employing only the stoichiometry of the included reactions coupled with an adequately chosen objective function. In addition, under the assumption of minimization of metabolic adjustment, dynamic FBA has recently been employed to analyze the transition between metabolic states.
Results: Here, we propose a suite of novel methods for analyzing the dynamics of (internally perturbed) metabolic networks and for quantifying their robustness with limited knowledge of kinetic parameters. Following the biochemically meaningful premise that metabolite concentrations exhibit smooth temporal changes, the proposed methods rely on minimizing the significant fluctuations of metabolic profiles to predict the time-resolved metabolic state, characterized by both fluxes and concentrations. By conducting a comparative analysis with a kinetic model of the Calvin-Benson cycle and a model of plant carbohydrate metabolism, we demonstrate that the principle of regulatory on/off minimization coupled with dynamic FBA can accurately predict the changes in metabolic states.
Conclusions: Our methods outperform the existing dynamic FBA-based modeling alternatives, and could help in revealing the mechanisms for maintaining robustness of dynamic processes in metabolic networks over time.
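At its core, FBA is a linear program over the stoichiometric matrix. The following minimal sketch uses a hypothetical three-reaction toy network, not one of the models analyzed here: a "biomass" flux is maximized subject to the steady-state constraint S v = 0 and flux bounds.

```python
import numpy as np
from scipy.optimize import linprog

# toy network: uptake --v1--> A --v2--> B --v3--> biomass
S = np.array([[1, -1, 0],    # metabolite A: produced by v1, consumed by v2
              [0, 1, -1]])   # metabolite B: produced by v2, consumed by v3
bounds = [(0, 10), (0, None), (0, None)]   # uptake v1 capped at 10

# FBA: maximize v3 (i.e. minimize -v3) subject to steady state S v = 0
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # steady state forces v1 = v2 = v3, so the optimum is (10, 10, 10)
```

Dynamic FBA and the regulatory on/off minimization proposed here build temporal constraints on top of this static program, penalizing abrupt changes of fluxes and concentrations between successive time steps.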
Early acquisition of a second language influences the development of language abilities and cognitive functions. In the present study, we used functional Magnetic Resonance Imaging (fMRI) to investigate the impact of early bilingualism on the organization of the cortical language network during sentence production. Two groups of adult multilinguals, proficient in three languages, were tested on a narrative task; early multilinguals acquired the second language before the age of three years, late multilinguals after the age of nine. All participants learned a third language after nine years of age. Comparison of the two groups revealed substantial differences in language-related brain activity for early as well as late acquired languages. Most importantly, early multilinguals preferentially activated a fronto-striatal network in the left hemisphere, whereas the left posterior superior temporal gyrus (pSTG) was activated to a lesser degree than in late multilinguals. The same brain regions were highlighted in previous studies when a non-target language had to be controlled. Hence the engagement of language control in adult early multilinguals appears to be influenced by the specific learning and acquisition conditions during early childhood. Remarkably, our results reveal that the functional control of early and subsequently later acquired languages is similarly affected, suggesting that language experience has a pervasive influence into adulthood. As such, our findings extend the current understanding of control functions in multilinguals.
The Sun is surrounded by a 10^6 K hot atmosphere, the corona. The corona and the solar wind are fully ionized and therefore in the plasma state. Magnetic fields play an important role in a plasma, since they bind electrically charged particles to their field lines. EUV spectrometers, like the SUMER instrument on board the SOHO spacecraft, reveal a preferential heating of coronal ions and strong temperature anisotropies. Velocity distributions of electrons can be measured directly in the solar wind, e.g. with the 3DPlasma instrument on board the WIND satellite. They show a thermal core, an anisotropic suprathermal halo, and an anti-solar, magnetic-field-aligned beam or "strahl". For an understanding of the physical processes in the corona, an adequate description of the plasma is needed. Magnetohydrodynamics (MHD) treats the plasma simply as an electrically conductive fluid. Multi-fluid models consider e.g. protons and electrons as separate fluids. They enable a description of many macroscopic plasma processes. However, fluid models are based on the assumption of a plasma near thermodynamic equilibrium, whereas the solar corona is far from equilibrium. Furthermore, fluid models cannot describe processes like the interaction with electromagnetic waves on a microscopic scale. Kinetic models, which are based on particle velocity distributions, do not show these limitations and are therefore well suited for an explanation of the observations listed above. In the simplest kinetic models, the mirror force in the interplanetary magnetic field focuses solar wind electrons into an extremely narrow beam, which is contradicted by observations. Therefore, a scattering mechanism must exist that counteracts the mirror force. In this thesis, a kinetic model for electrons in the solar corona and wind is presented that provides electron scattering by resonant interaction with whistler waves. The kinetic model reproduces the observed components of solar wind electron distributions, i.e. 
core, halo, and a "strahl" with finite width. But the model is not only applicable on the quiet Sun. The propagation of energetic electrons from a solar flare is studied, and it is found that scattering in the direction of propagation and energy diffusion influence the arrival times of flare electrons at Earth approximately to the same degree. In the corona, the interaction of electrons with whistler waves does not only lead to scattering, but also to the formation of a suprathermal halo, as it is observed in interplanetary space. This effect is studied both for the solar wind as well as the closed volume of a coronal magnetic loop. The result is of fundamental importance for solar-stellar relations. The quiet solar corona always produces suprathermal electrons. This process is closely related to coronal heating, and can therefore be expected in any hot stellar corona. In the second part of this thesis it is detailed how to calculate growth or damping rates of plasma waves from electron velocity distributions. The emission and propagation of electron cyclotron waves in the quiet solar corona, and that of whistler waves during solar flares, is studied. The latter can be observed as so-called fiber bursts in dynamic radio spectra, and the results are in good agreement with observed bursts.
The microscopic origin of ultrafast demagnetization, i.e. the quenching of the magnetization of a ferromagnetic metal on a sub-picosecond timescale after laser excitation, is still only incompletely understood, despite a large body of experimental and theoretical work performed since the discovery of the effect more than 15 years ago. Time- and element-resolved x-ray magnetic circular dichroism measurements can provide insight into the microscopic processes behind ultrafast demagnetization as well as its dependence on materials properties. Using the BESSY II Femtoslicing facility, a storage ring based source of 100 fs short soft x-ray pulses, ultrafast magnetization dynamics of ferromagnetic NiFe and GdTb alloys as well as a Au/Ni layered structure were investigated in laser pump – x-ray probe experiments. After laser excitation, the constituents of Ni50Fe50 and Ni80Fe20 exhibit distinctly different time constants of demagnetization, leading to decoupled dynamics, despite the strong exchange interaction that couples the Ni and Fe sublattices under equilibrium conditions. Furthermore, the time constants of demagnetization for Ni and Fe are different in Ni50Fe50 and Ni80Fe20, and also different from the values for the respective pure elements. These variations are explained by taking the magnetic moments of the Ni and Fe sublattices, which are changed from the pure element values due to alloying, as well as the strength of the intersublattice exchange interaction into account. GdTb exhibits demagnetization in two steps, typical for rare earths. The time constant of the second, slower magnetization decay was previously linked to the strength of spin-lattice coupling in pure Gd and Tb, with the stronger, direct spin-lattice coupling in Tb leading to a faster demagnetization. In GdTb, the demagnetization of Gd follows Tb on all timescales. 
This is due to the opening of an additional channel for the dissipation of spin angular momentum to the lattice, since Gd magnetic moments in the alloy are coupled via indirect exchange interaction to neighboring Tb magnetic moments, which are in turn strongly coupled to the lattice. Time-resolved measurements of the ultrafast demagnetization of a Ni layer buried under a Au cap layer, thick enough to absorb nearly all of the incident pump laser light, showed a somewhat slower but still sub-picosecond demagnetization of the buried Ni layer in Au/Ni compared to a Ni reference sample. Supported by simulations, I conclude that demagnetization can thus be induced by transport of hot electrons excited in the Au layer into the Ni layer, without the need for direct interaction between photons and spins.
Especially over the last twenty years, the study of Linguistic Landscapes (LLs) has been gaining the status of an autonomous linguistic discipline. The LL of a (mostly) geographically limited area – which consists of e.g. billboards, posters, shop signs, material for election campaigns, etc. – gives deep insights into the presence or absence of languages in that particular area. Thus, the LL not only allows one to infer a language's dominance from its presence, but also the oppression of minorities from its absence, above all in areas where minority languages should – demographically seen – be visible. The LLs of big cities are fruitful research areas due to the mass of linguistic data. The first part of this paper deals with the theoretical and practical research that has been conducted in LL studies so far. A summary of the theory, methodologies and different approaches is given. In the second part I apply the theoretical basis to my own case study, for which the LLs of two shopping streets in different areas of Hong Kong were examined in 2010. It seems likely that linguistic competence in English is rather high in Hong Kong, due to the long-lasting influence of British culture and mentality and the official status of the language. The case study's results are based on empirical data showing the objectively visible presence of English in both examined areas, as well as on two surveys, conducted both openly and anonymously. The surveys serve as a cross-check on the level of linguistic competence in English in Hong Kong, which was first estimated from an analysis of the LL. Hence, this case study is a new approach to LL analysis which does not end with the description of the LL's material composition (as most previous studies have done), but rather includes its creators by asking in what way people's actual linguistic competence is reflected in Hong Kong's LL.
Immune genes of the major histocompatibility complex (MHC) constitute a central component of the adaptive immune system and play an essential role in parasite resistance and associated life-history strategies. In addition to pathogen-mediated selection, sexual selection mechanisms have also been identified as main drivers of the typically observed high levels of polymorphism in functionally important parts of the MHC. The recognition of the individual MHC constitution is presumed to be mediated through olfactory cues. Indeed, MHC genes are in physical linkage with olfactory receptor genes and alter the individual body odour. Moreover, they are expressed on sperm and trophoblast cells. Thus, MHC-mediated sexual selection processes might act not only in direct mate choice decisions, but also through cryptic processes during reproduction. Bats (Chiroptera) represent the second largest mammalian order and have been identified as important vectors of newly emerging infectious diseases affecting humans and wildlife. In addition, they are interesting study subjects in evolutionary ecology in the context of olfactory communication, mate choice and associated fitness benefits. It is therefore surprising that Chiroptera belong to the least studied mammalian taxa in terms of their MHC evolution. In my doctoral thesis I aimed to gain insights into the evolution and diversity patterns of functional MHC genes in some of the major New World bat families by establishing species-specific primers through genome-walking into unknown flanking parts of familiar sites. Further, I took a free-ranging population of the lesser bulldog bat (Noctilio albiventris) in Panama as an example to understand the functional importance of the individual MHC constitution in parasite resistance and reproduction, as well as the possible underlying selective forces shaping the observed diversity. 
My studies indicated that the typical MHC characteristics observed in other mammalian orders, like evidence for balancing and positive selection as well as recombination and gene conversion events, are also present in bats and shape their MHC diversity. I found a wide range of copy number variation of expressed DRB loci in the investigated species. In Saccopteryx bilineata, a species with a highly developed olfactory communication system, I found an exceptionally high number of MHC locus duplications generating high levels of variability at the individual level, which has never been described for any other mammalian species so far. My studies included for the first time phylogenetic relationships of MHC genes in bats, and I found signs of a family-specific, independent mode of evolution of duplicated genes, regardless of whether the highly variable exon 2 (coding for the antigen binding region of the molecule) or the more conserved exons (3, 4; encoding protein-stabilizing parts) were considered, indicating a monophyletic origin of duplicated loci within families. This result questions the generally assumed pattern of MHC evolution in mammals, where duplicated genes of different families usually cluster together, suggesting that duplication occurred before speciation took place, which implies a trans-species mode of evolution. However, I found a trans-species mode of evolution within genera (Noctilio, Myotis) based on exon 2, signified by an intermingled clustering of DRB alleles. The knowledge gained on MHC sequence evolution in major New World bat families will facilitate future MHC investigations in this order. In the N. albiventris study population, the single expressed MHC class II DRB gene showed high sequence polymorphism, moderate allelic variability and high levels of population-wide heterozygosity. Whereas demographic processes had minor relevance in shaping the diversity pattern, I found clear evidence for parasite-mediated selection. 
This was evident from historical positive Darwinian selection maintaining diversity in the functionally important antigen binding sites, and from specific MHC alleles associated with low and high ectoparasite burden, in accordance with predictions of the ‘frequency-dependent selection hypothesis’. Parasite resistance has been suggested to play an important role in mediating costly life-history trade-offs, leading to e.g. MHC-mediated benefits in sexual selection. The ‘good genes model’ predicts that males with a genetically well-adapted immune system for defending against harmful parasites are able to allocate more resources to reproductive effort. I found support for this prediction, since non-reproductive adult N. albiventris males more often carried an allele associated with high parasite loads, which differentiated them genetically from reproductively active males as well as from subadults, indicating a reduced transmission of this allele to subsequent generations. In addition, they suffered from increased ectoparasite burden, which presumably reduced the resources available to invest in reproduction. Another sign of sexual selection was the observation of a sex-specific difference in heterozygosity, with females showing lower levels of heterozygosity than males. This signifies that the sexes differ in their selection pressures, presumably through MHC-mediated molecular processes during reproduction resulting in a male-specific heterozygosity advantage. My data make clear that parasite-mediated selection and sexual selection are interactive and operate together to shape diversity at the MHC. Furthermore, my thesis is one of the rare studies contributing to filling the gap between MHC-mediated effects on co-evolutionary processes in parasite-host interactions and on aspects of life-history evolution.
F2C2
(2012)
Background: Flux coupling analysis (FCA) has become a useful tool in the constraint-based analysis of genome-scale metabolic networks. FCA allows detecting dependencies between reaction fluxes of metabolic networks at steady-state. On the one hand, this can help in the curation of reconstructed metabolic networks by verifying whether the coupling between reactions is in agreement with the experimental findings. On the other hand, FCA can aid in defining intervention strategies to knock out target reactions.
Results: We present a new method F2C2 for FCA, which is orders of magnitude faster than previous approaches. As a consequence, FCA of genome-scale metabolic networks can now be performed in a routine manner.
Conclusions: We propose F2C2 as a fast tool for the computation of flux coupling in genome-scale metabolic networks. F2C2 is freely available for non-commercial use at https://sourceforge.net/projects/f2c2/files/.
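The notion of flux coupling can be made concrete with a small sketch, again on an invented three-reaction chain and using a naive LP formulation rather than F2C2's algorithm: two reactions are fully coupled if the ratio of their fluxes is fixed over all admissible steady-state flux distributions.

```python
import numpy as np
from scipy.optimize import linprog

# toy chain: uptake --v1--> A --v2--> B --v3--> export
S = np.array([[1, -1, 0],
              [0, 1, -1]])

def flux_range(S, i, j):
    """Min and max of v_i over {v >= 0 : S v = 0, v_j = 1}.

    Equal min and max indicate that reactions i and j are fully
    coupled; this is a naive illustration, not F2C2's algorithm."""
    m, n = S.shape
    A_eq = np.vstack([S, np.eye(n)[j]])          # add the normalization v_j = 1
    b_eq = np.append(np.zeros(m), 1.0)
    bnds = [(0, None)] * n                       # irreversible reactions
    lo = linprog(np.eye(n)[i], A_eq=A_eq, b_eq=b_eq, bounds=bnds).fun
    hi = -linprog(-np.eye(n)[i], A_eq=A_eq, b_eq=b_eq, bounds=bnds).fun
    return lo, hi

lo, hi = flux_range(S, 0, 2)
print(lo, hi)   # (1.0, 1.0): v1 is fully coupled to v3 in this linear chain
```

F2C2's speedup comes from avoiding most of these pairwise LPs; the brute-force version above solves two LPs per reaction pair, which is what makes naive FCA expensive at genome scale.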
Ten polyQ (polyglutamine) diseases constitute a group of hereditary, neurodegenerative, lethal disorders, characterized by neuronal loss and motor and cognitive impairments. The only common molecular feature of polyQ disease-associated proteins is the homopolymeric polyglutamine repeat. The pathological expansion of polyQ tract invariably leads to protein misfolding and aggregation, resulting in formation of the fibrillar intraneuronal deposits (aggregates) of the disease protein. The polyQ-related cellular toxicity is currently attributed to early, small, soluble aggregate species (oligomers), whereas end-stage, fibrillar, insoluble aggregates are considered to be benign. In the complex cellular environment aggregation and toxicity of mutant polyQ proteins can be affected by both the sequences of the corresponding disease protein (factors acting in cis) and the cellular environment (factors acting in trans). Additionally, the nucleus has been suggested to be the primary site of toxicity in the polyQ-based neurodegeneration. In this study, the dynamics and structure of nuclear and cytoplasmic inclusions were examined to determine the intrinsic and extrinsic factors influencing the cellular aggregation of atrophin-1, a protein implicated in the pathology of dentatorubral-pallidoluysian atrophy (DRPLA), a polyQ-based disease with complex clinical features. Dynamic imaging, combined with biochemical and biophysical approaches revealed a large heterogeneity in the dynamics of atrophin-1 within the nuclear inclusions compared with the compact and immobile cytoplasmic aggregates. At least two types of inclusions of polyQ-expanded atrophin-1 with different mobility of the molecular species and ability to exchange with the surrounding monomer pool coexist in the nucleus of the model cell system, neuroblastoma N2a cells. 
Furthermore, our novel cross-seeding approach which allows for monitoring of the architecture of the aggregate core directly in the cell revealed an evolution of the aggregate core of the polyQ-expanded ATN1 from one composed of the sequences flanking the polyQ domain at early aggregation phases to one dominated by the polyQ stretch in the later aggregation phase. Intriguingly, these changes in the aggregate core architecture of nuclear and cytoplasmic inclusions mirrored the changes in the protein dynamics and physico-chemical properties of the aggregates in the aggregation time course. 2D-gel analyses followed by MALDI-TOF MS (matrix-assisted laser desorption/ionization time of flight mass spectrometry) were used to detect alterations in the interaction partners of the pathological ATN1 variant compared to the non-pathological ATN1. Based on these results, we propose that the observed complexity in the dynamics of the nuclear inclusions provides a molecular explanation for the enhanced cellular toxicity of the nuclear aggregates in polyQ-based neurodegeneration.
Metals are often used in environments that are conducive to corrosion, which leads to a reduction in their mechanical properties and durability. Coatings are applied to corrosion-prone metals such as aluminum alloys to inhibit the destructive surface process of corrosion in a passive or active way. Standard anticorrosive coatings function as a physical barrier between the material and the corrosive environment and provide passive protection only when intact. In contrast, active protection prevents or slows down corrosion even when the main barrier is damaged. The most effective industrially used active corrosion inhibition for aluminum alloys is provided by chromate conversion coatings. However, their toxicity and worldwide restriction provoke an urgent need for finding environmentally friendly corrosion preventing systems. A promising approach to replace the toxic chromate coatings is to embed particles containing nontoxic inhibitor in a passive coating matrix. This work presents the development and optimization of effective anticorrosive coatings for the industrially important aluminum alloy, AA2024-T3 using this approach. The protective coatings were prepared by dispersing mesoporous silica containers, loaded with the nontoxic corrosion inhibitor 2-mercaptobenzothiazole, in a passive sol-gel (SiOx/ZrOx) or organic water-based layer. Two types of porous silica containers with different sizes (d ≈ 80 and 700 nm, respectively) were investigated. The studied robust containers exhibit high surface area (≈ 1000 m² g-1), narrow pore size distribution (dpore ≈ 3 nm) and large pore volume (≈ 1 mL g-1) as determined by N2 sorption measurements. These properties favored the subsequent adsorption and storage of a relatively large amount of inhibitor as well as its release in response to pH changes induced by the corrosion process. 
The concentration, position and size of the embedded containers were varied to ascertain the optimum conditions for overall anticorrosion performance. Attaining high anticorrosion efficiency was found to require a compromise between delivering an optimal amount of corrosion inhibitor and preserving the coating barrier properties. This study broadens the knowledge about the main factors influencing the coating anticorrosion efficiency and assists the development of optimum active anticorrosive coatings doped with inhibitor loaded containers.
During reading, saccadic eye movements are generated to shift words into the center of the visual field for lexical processing. Recently, Krugel and Engbert (Vision Research 50:1532-1539, 2010) demonstrated that within-word fixation positions are largely shifted to the left after skipped words. However, explanations of the origin of this effect cannot be drawn from normal reading data alone. Here we show that the large effect of skipped words on the distribution of within-word fixation positions is primarily based on rather subtle differences in the low-level visual information acquired before saccades. Using arrangements of "x" letter strings, we reproduced the effect of skipped character strings in a highly controlled single-saccade task. Our results demonstrate that the effect of skipped words in reading is the signature of a general visuomotor phenomenon. Moreover, our findings extend beyond the scope of the widely accepted range-error model, which posits that within-word fixation positions in reading depend solely on the distances of target words. We expect that our results will provide critical boundary conditions for the development of visuomotor models of saccade planning during reading.
Thermal and quantum fluctuations of the electromagnetic near field of atoms and macroscopic bodies play a key role in quantum electrodynamics (QED), as in the Lamb shift. They lead, e.g., to atomic level shifts, dispersion interactions (Van der Waals-Casimir-Polder interactions), and state broadening (Purcell effect) because the field is subject to boundary conditions. Such effects can be observed with high precision on the mesoscopic scale which can be accessed in micro-electro-mechanical systems (MEMS) and solid-state-based magnetic microtraps for cold atoms (‘atom chips’). A quantum field theory of atoms (molecules) and photons is adapted to nonequilibrium situations. Atoms and photons are described as fully quantized while macroscopic bodies can be included in terms of classical reflection amplitudes, similar to the scattering approach of cavity QED. The formalism is applied to the study of nonequilibrium two-body potentials. We then investigate the impact of the material properties of metals on the electromagnetic surface noise, with applications to atomic trapping in atom-chip setups and quantum computing, and on the magnetic dipole contribution to the Van der Waals-Casimir-Polder potential in and out of thermal equilibrium. In both cases, the particular properties of superconductors are of high interest. Surface-mode contributions, which dominate the near-field fluctuations, are discussed in the context of the (partial) dynamic atomic dressing after a rapid change of a system parameter and in the Casimir interaction between two conducting plates, where nonequilibrium configurations can give rise to repulsion.
Flux-P
(2012)
Quantitative knowledge of intracellular fluxes in metabolic networks is invaluable for inferring metabolic system behavior and the design principles of biological systems. However, intracellular reaction rates often cannot be calculated directly but have to be estimated, for instance via 13C-based metabolic flux analysis, a model-based interpretation of stable carbon isotope patterns in intermediates of metabolism. Existing software such as FiatFlux, OpenFLUX or 13CFLUX supports experts in this complex analysis, but requires several steps that have to be carried out manually, hence restricting the use of this software for data interpretation to a rather small number of experiments. In this paper, we present Flux-P as an approach to automate and standardize 13C-based metabolic flux analysis, using the Bio-jETI workflow framework. Based exemplarily on the FiatFlux software, it demonstrates how services can be created that carry out the different analysis steps autonomously and how these can subsequently be assembled into software workflows that perform automated, high-throughput intracellular flux analysis of high quality and reproducibility. Besides significant acceleration and standardization of the data analysis, the agile workflow-based realization supports flexible changes of the analysis workflows on the user level, making it easy to perform custom analyses.
This thesis deals with two theories of international trade: the theory of comparative advantage, which is associated with the name of David Ricardo and dominates current trade theory, and Adam Smith’s theory of absolute advantage. Both theories are compared and their assumptions are scrutinised. The former theory is rejected on theoretical and empirical grounds in favour of the latter. On the basis of the theory of absolute advantage, developments under free international trade are examined, with a focus on trade between industrial and underdeveloped countries. The main conclusions are that trade patterns are determined by absolute production cost advantages and that the gap between developed and poor countries is not reduced but rather increased by free trade.
The underlying motivation for the work carried out for this thesis was the growing need for more sustainable technologies. The aim was to synthesize a “palette” of functional nanomaterials using the established technique of hydrothermal carbonization (HTC). The remarkable versatility of HTC was demonstrated together with small but steady advances in how HTC can be manipulated to tailor material properties for specific applications. Two main strategies were used to modify the materials obtained by HTC of glucose, a model precursor representing biomass. The first approach was the introduction of heteroatoms, or “doping”, of the carbon framework. Sulfur was introduced for the first time as a dopant in hydrothermal carbon. The synthesis of sulfur- and sulfur/nitrogen-doped microspheres was presented, and it was shown that the binding state of sulfur could be influenced by varying the type of sulfur source. Pyrolysis may additionally be used to tune the heteroatom binding states, which move to more stable motifs with increasing pyrolysis temperature. Importantly, the presence of aromatic binding states in the as-synthesized hydrothermal carbon allows for higher heteroatom retention after pyrolysis and hence more efficient use of dopant sources. In this regard, HTC may be considered an “intermediate” step in the formation of conductive heteroatom-doped carbon. To assess the novel hydrothermal carbons in terms of their potential for electrochemical applications, materials with defined nano-architectures and high surface areas were synthesized via templated as well as template-free routes. Sulfur- and/or nitrogen-doped carbon hollow spheres (CHS) were synthesized using a polystyrene hard-templating approach, and doped carbon aerogels (CA) were synthesized using either the albumin-directed or borax-mediated hydrothermal carbonization of glucose.
Electrochemical testing showed that S/N dual-doped CHS and aerogels derived via the albumin approach exhibited superior catalytic performance compared to solely nitrogen- or sulfur-doped counterparts in the oxygen reduction reaction (ORR) relevant to fuel cells. Using the borax-mediated aerogel formation, nitrogen content and surface area could be tuned, and a carbon aerogel was engineered to maximize electrochemical performance. The obtained sample exhibited drastically improved current densities compared to a platinum catalyst (but a lower onset potential), as well as excellent long-term stability. In the second approach, HTC was carried out at elevated temperature (550 °C) and pressure (50 bar), corresponding to the superheated vapor regime (high-temperature HTC, htHTC). It was demonstrated that the carbon materials obtained via htHTC are distinct from those obtained via low-temperature HTC (ltHTC) and subsequent pyrolysis at 550 °C. No difference in htHTC-derived material properties could be observed between pentoses and hexoses. The material obtained from a polysaccharide exhibited a slightly lower degree of carbonization but was otherwise similar to the monosaccharide-derived samples. It was shown that, in addition to thermally induced carbonization at 550 °C, the superheated vapor (SHV) environment exerts a catalytic effect on the carbonization process. The resulting materials are chemically inert (i.e. they contain a negligible amount of reactive functional groups) and possess low surface area and electronic conductivity, which distinguishes them from carbon obtained by pyrolysis. Compared to the materials presented in the previous chapters on chemical modifications of hydrothermal carbon, this makes them ill-suited candidates for electronic applications like lithium-ion batteries or electrocatalysts. However, htHTC-derived materials could be interesting for applications that require chemical inertness but no specific electronic properties.
The final section of this thesis therefore revisited the latex hard-templating approach to synthesize carbon hollow spheres using htHTC. By using htHTC it was possible to carry out template removal in situ, because the second heating step at 550 °C was above the polystyrene latex decomposition temperature. Preliminary tests showed that the CHS could be dispersed in an aqueous polystyrene latex without monomer penetrating into the hollow-sphere voids. This leaves the stagnant air inside the CHS intact, which in turn is promising for their application in heat- and sound-insulating coatings. Overall, the work carried out in this thesis represents a noteworthy step in demonstrating the great potential of sustainable carbon materials.
The distinctness of, and overlap between, pea genotypes held in several Pisum germplasm collections has been used to determine their relatedness and to test previous ideas about the genetic diversity of Pisum. Our characterisation of genetic diversity among 4,538 Pisum accessions held in 7 European Genebanks has identified sources of novel genetic variation, and both reinforces and refines previous interpretations of the overall structure of genetic diversity in Pisum. Molecular marker analysis was based upon the presence/absence polymorphism of retrotransposon insertions scored by a high-throughput microarray and SSAP approaches. We conclude that the diversity of Pisum constitutes a broad continuum, with graded differentiation into sub-populations which display various degrees of distinctness. The most distinct genetic groups correspond to the named taxa, while the cultivars and landraces of Pisum sativum can be divided into two broad types, one of which is strongly enriched for modern cultivars. The addition of germplasm sets from six European Genebanks, chosen to represent high diversity, to a single collection previously studied with these markers resulted in modest additions to the overall diversity observed, suggesting that the great majority of the total genetic diversity collected for the Pisum genus has now been described. Two interesting sources of novel genetic variation have been identified. Finally, we have proposed reference sets of core accessions with a range of sample sizes to represent Pisum diversity for future study and exploitation by researchers and breeders.
Particles in Saturn’s main rings range in size from dust to kilometer-sized objects. Their size distribution is thought to be a result of competing accretion and fragmentation processes. While growth is naturally limited in tidal environments, frequent collisions among these objects may contribute to both accretion and fragmentation. As ring particles are primarily made of water ice, attractive surface forces like adhesion could significantly influence these processes, ultimately determining the resulting size distribution. Here, we derive analytic expressions for the specific self-energy Q and the related specific break-up energy Q⋆ of aggregates. These expressions can be used for any aggregate type composed of monomeric constituents. We compare these expressions to numerical experiments in which we create aggregates of various types, including regular packings like the face-centered cubic (fcc), Ballistic Particle Cluster Aggregates (BPCA), and modified BPCAs with, e.g., different constituent size distributions. We show that, by accounting for attractive surface forces such as adhesion, a simple approach is able to: a) account for the size dependence of the specific break-up energy reported in the literature, namely the division into “strength” and “gravity” regimes, and b) estimate the maximum aggregate size in a collisional ensemble to be on the order of a few meters, consistent with the maximum aggregate size observed in Saturn’s rings of about 10 m.
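The division into “strength” and “gravity” regimes mentioned above is commonly expressed in the literature as a two-branch power law; the following form is a generic illustration with schematic coefficients, not the thesis’ own parameterization:

\[
Q^{\star}(R) \;=\; \underbrace{Q_s\,R^{-a}}_{\text{strength regime}} \;+\; \underbrace{q_g\,\rho\,R^{\,b}}_{\text{gravity regime}},
\qquad a,\,b > 0 .
\]

The first branch, set by material strength and adhesive surface forces, decreases with aggregate radius \(R\); the second grows roughly like the specific self-gravitational binding energy, \(Q_{\mathrm{grav}} \sim GM/R = \tfrac{4\pi}{3}\,G\rho R^{2}\). The minimum of \(Q^{\star}(R)\) between the two branches marks the weakest bodies and thereby bounds the aggregate sizes that survive in a collisional ensemble.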
Cellulose is the most abundant biopolymer on earth and the main load-bearing structure in plant cell walls. Cellulose microfibrils are laid down in a tight parallel array, surrounding plant cells like a corset. The orientation of microfibrils determines the direction of growth by directing turgor pressure to points of expansion (Somerville et al., 2004). Hence, cellulose-deficient mutants usually show cell and organ swelling due to disturbed anisotropic cell expansion (reviewed in Endler and Persson, 2011). How do cellulose microfibrils gain their parallel orientation? First experiments in the 1960s suggested that cortical microtubules guide the cellulose synthases on their way around the cell (Green, 1962; Ledbetter and Porter, 1963). This was proven in 2006 through live-cell imaging (Paredez et al., 2006). However, how this guidance is facilitated remained unknown. Through a combinatorial approach, including forward and reverse genetics together with advanced co-expression analysis, we identified pom2 as a cellulose-deficient mutant. Map-based cloning revealed that the gene locus of POM2 corresponds to CELLULOSE SYNTHASE INTERACTING 1 (CSI1). Intriguingly, we had previously found the CSI1 protein to interact with the putative cytosolic part of the primary cellulose synthases in a yeast two-hybrid screen (Gu et al., 2010). Exhaustive cell-biological analysis of the POM2/CSI1 protein allowed us to determine its cellular function. Using spinning-disc confocal microscopy, we could show that in the absence of POM2/CSI1, cellulose synthase complexes lose their microtubule-dependent trajectories in the plasma membrane. The loss of POM2/CSI1, however, does not influence the microtubule-dependent delivery of cellulose synthases (Bringmann et al., 2012). Consequently, POM2/CSI1 acts as a bridging protein between active cellulose synthases and cortical microtubules.
This thesis summarizes three publications of the author, regarding the identification of proteins that connect cellulose synthases to the cytoskeleton. This involves the development of bioinformatics tools allowing candidate gene prediction through co-expression studies (Mutwil et al., 2009), identification of candidate genes through interaction studies (Gu et al., 2010), and determination of the cellular function of the candidate gene (Bringmann et al., 2012).
MHC genes encode proteins that are responsible for the recognition of foreign antigens and the triggering of a subsequent, adequate immune response of the organism. Thus they hold a key position in the immune system of vertebrates. It is believed that the extraordinary genetic diversity of MHC genes is shaped by adaptive selectional processes in response to the recurring adaptations of parasites and pathogens. A large number of MHC studies have been performed in a wide range of wildlife species, aiming to understand the role of immune gene diversity in parasite resistance under natural selection conditions. Methodically, most of this work, with very few exceptions, has focused only upon the structural, i.e. sequence, diversity of regions responsible for antigen binding and presentation. Most of these studies found evidence that MHC gene variation did indeed underlie adaptive processes and that an individual’s allelic diversity explains parasite and pathogen resistance to a large extent. Nevertheless, our understanding of the effective mechanisms is incomplete. A neglected, but potentially highly relevant component concerns the transcriptional differences between MHC alleles. Indeed, differences in the expression levels of MHC alleles and their potential functional importance have remained unstudied. The idea that transcriptional differences might also play an important role relies on the fact that lower MHC gene expression is tantamount to reduced induction of CD4+ T helper cells and thus to a reduced immune response. Hence, I studied the expression of MHC genes and of immunoregulatory cytokines as additional factors to reveal the functional importance of MHC diversity in two free-ranging rodent species (Delomys sublineatus, Apodemus flavicollis) in association with their gastrointestinal helminths under natural selection conditions. I established the method of relative quantification of mRNA on liver and spleen samples of both species in our laboratory.
As no information on nucleic acid sequences of potential reference genes was available for either species, PCR primer systems established in laboratory mice had to be tested and adapted for both non-model organisms. In due course, sets of stable reference genes for both species were found, and thus the preconditions for reliable measurements of mRNA levels were established. For D. sublineatus it could be demonstrated that helminth infection elicits aspects of a typical Th2 immune response. While mRNA levels of the cytokine interleukin Il4 increased with the intensity of infection by strongyle nematodes, neither MHC nor cytokine expression otherwise played a significant role in D. sublineatus. For A. flavicollis I found a negative association between the parasitic nematode Heligmosomoides polygyrus and hepatic MHC mRNA levels. As lower MHC expression entails a lower immune response, this could be evidence for an immune-evasive strategy of the nematode, as has been suggested for many micro-parasites. This implies that H. polygyrus is capable of actively interfering with MHC transcription. Indeed, this parasite species has long been suspected to be immunosuppressive, e.g. by induction of regulatory T helper cells that respond with higher interleukin Il10 and transforming growth factor Tgfb production. Both cytokines in turn cause abated MHC expression. By disabling recognition by the MHC molecule, H. polygyrus might be able to prevent an activation of the immune system. Indeed, I found a strong tendency for animals carrying the allele Apfl-DRB*23 to have an increased infection intensity with H. polygyrus. Furthermore, I found positive and negative associations between specific MHC alleles and other helminth species, as well as typical signs of positive selection acting on the nucleotide sequences of the MHC.
The latter was evident from an elevated rate of non-synonymous to synonymous substitutions in the MHC sequences of exon 2, which encodes the functionally important antigen binding sites, whereas the first and third exons of the MHC DRB gene were highly conserved. In conclusion, the studies in this thesis demonstrate that valid procedures to quantify the expression of immune-relevant genes are feasible in non-model wildlife organisms as well. In addition to structural MHC diversity, MHC gene expression should also be considered to obtain a more complete picture of host-pathogen coevolutionary selection processes. This is especially true if parasites are able to interfere with systemic MHC expression, in which case advantageous or disadvantageous effects of allelic binding motifs are abated. The studies could not define the role of MHC gene expression in antagonistic coevolution as such, but the results suggest that it depends strongly on the specific parasite species involved.
During the overall development of complex engineering systems, different modeling notations are employed. For example, in the domain of automotive systems, system engineering models are employed quite early to capture the requirements and the basic structuring of the entire system, while software engineering models are used later on to describe the concrete software architecture. Each model helps in addressing the specific design issue with appropriate notations and at a suitable level of abstraction. However, when stepping forward from system design to software design, the engineers have to ensure that all decisions captured in the system design model are correctly transferred to the software engineering model. Even worse, when changes occur later on in either model, today the consistency has to be reestablished in a cumbersome manual step. In this report, we present, in an extended version of [Holger Giese, Stefan Neumann, and Stephan Hildebrandt. Model Synchronization at Work: Keeping SysML and AUTOSAR Models Consistent. In Gregor Engels, Claus Lewerentz, Wilhelm Schäfer, Andy Schürr, and B. Westfechtel, editors, Graph Transformations and Model-Driven Engineering - Essays Dedicated to Manfred Nagl on the Occasion of his 65th Birthday, volume 5765 of Lecture Notes in Computer Science, pages 555–579. Springer Berlin / Heidelberg, 2010.], how model synchronization and consistency rules can be applied to automate this task and ensure that the different models are kept consistent. We also introduce a general approach to model synchronization. Besides synchronization, the approach consists of tool adapters as well as consistency rules covering the overlap between the synchronized parts of a model and the rest.
We present the model synchronization algorithm, which is based on triple graph grammars, in detail and further exemplify the general approach by means of a model synchronization solution between system engineering models in SysML and software engineering models in AUTOSAR, which has been developed for an industrial partner. In the appendix, as an extension to [19], the meta-models and all TGG rules for the SysML-to-AUTOSAR model synchronization are documented.
Much previous experimental research on morphological processing has focused on surface and meaning-level properties of morphologically complex words, without paying much attention to the morphological differences between inflectional and derivational processes. Realization-based theories of morphology, for example, assume specific morpholexical representations for derived words that distinguish them from the products of inflectional or paradigmatic processes. The present study reports results from a series of masked priming experiments investigating the processing of inflectional and derivational phenomena in native (L1) and non-native (L2) speakers in a non-Indo-European language, Turkish. We specifically compared regular (Aorist) verb inflection with deadjectival nominalization, both of which are highly frequent, productive and transparent in Turkish. The experiments demonstrated different priming patterns for inflection and derivation, specifically within the L2 group. Implications of these findings are discussed both for accounts of L2 morphological processing and for the controversial linguistic distinction between inflection and derivation.
Irrwege der Klimapolitik
(2012)
Contents: I. Introduction II. There is no normal climate III. Consequences of climate change IV. Consequences of climate policy V. Conclusions
This article considers one of the major weaknesses in the existing historiography of Irish Jewry, the failure to consider the true extent and impact of antisemitism on Ireland’s Jewish community. This is illustrated through a brief survey of one small area of the Irish-Jewish narrative, the Jewish relationship with Irish nationalist politics. Throughout, the focus remains on the need for a fresh approach to the sources and the issues at hand, in order to create a more holistic, objective and inclusive history of the Jewish experience in Ireland.
The Indian summer monsoon (ISM) is one of the largest climate systems on earth and impacts the livelihood of nearly 40% of the world’s population. Despite dedicated efforts, a comprehensive picture of monsoon variability has proved elusive, largely due to the absence of long-term high-resolution records, the spatial inhomogeneity of monsoon precipitation, and the complex forcing mechanisms (solar insolation; internal teleconnections, e.g., the El Niño-Southern Oscillation; tropical-midlatitude interactions). My work aims to improve the understanding of monsoon variability through the generation of long-term high-resolution palaeoclimate data from climatically sensitive regions in the ISM and westerlies domain. To achieve this aim I have (i) identified proxies (sedimentological, geochemical, isotopic, and mineralogical) that are sensitive to environmental changes; (ii) used the identified proxies to generate long-term palaeoclimate data from two climatically sensitive regions, one in the NW Himalayas (the transitional westerlies and ISM domain in the Spiti valley) and one in the core monsoon zone (Lonar lake) in central India; (iii) undertaken a regional overview to generate “snapshots” of selected time slices; and (iv) interpreted the spatial precipitation anomalies in terms of those caused by modern teleconnections. This approach must be considered only a first step towards identifying past teleconnections, as the boundary conditions in the past were significantly different from today and would have impacted the precipitation anomalies. As the Spiti valley is located in the tectonically active Himalayan orogen, it was essential to understand the role of regional tectonics in order to make valid interpretations of catchment erosion and detrital influx into the lake.
My approach of using integrated structural/morphometric and geomorphic signatures provided clear evidence for active tectonics in this area and demonstrated the suitability of these lacustrine sediments as palaeoseismic archives. The investigations on the lacustrine outcrops in the Spiti valley also provided information on changes in the seasonality of precipitation and on the occurrence of frequent and intense periods (ca. 6.8-6.1 cal ka BP) of detrital influx, indicating extreme hydrological events in the past. A regional comparison for this time slice indicates a possible extended “break-monsoon-like” mode of the monsoon that favors enhanced precipitation over the Tibetan plateau, the Himalayas, and their foothills. My studies on surface sediments from Lonar lake helped to identify environmentally sensitive proxies, which could also be used to interpret palaeodata obtained from a ca. 10 m long core raised from the lake in 2008. The core encompasses the entire Holocene and is the first well-dated (by 14C) archive from the core monsoon zone of central India. My identification of authigenic evaporite gaylussite crystals within the core sediments provided evidence of exceptionally dry conditions during 4.7-3.9 and 2.0-0.5 cal ka BP. Additionally, isotopic investigations on these crystals provided information on eutrophication, stratification, and carbon cycling processes in the lake.
Although all bilinguals encounter cross-language interference (CLI), some bilinguals are more susceptible to interference than others. Here, we report on language performance of late bilinguals (Russian/German) on two bilingual tasks (interview, verbal fluency), their language use and switching habits. The only between-group difference was CLI: one group consistently produced significantly more errors of CLI on both tasks than the other (thereby replicating our findings from a bilingual picture naming task). This striking group difference in language control ability can only be explained by differences in cognitive control, not in language proficiency or language mode.
Experimental and quantitative research in the field of human language processing and production strongly depends on the quality of the underlying language material: besides its size, representativeness, variety and balance have been discussed as important factors which influence the design, analysis and interpretation of experiments and their results. This volume brings together creators and users of both general-purpose and specialized lexical resources which are used in psychology, psycholinguistics, neurolinguistics and cognitive research. It aims to be a forum to report experiences and results, review problems and discuss perspectives on any linguistic data used in the field.
Bad governance causes economic, social, developmental and environmental problems in many developing countries. Developing countries have adopted a number of reforms that have assisted in achieving good governance. The success of governance reform depends on the starting point of each country: what institutional arrangements exist at the outset and who the people implementing reforms within the existing institutional framework are. This dissertation focuses on how formal institutions (laws and regulations) and informal institutions (culture, habit and conception) impact good governance. Three characteristics central to good governance, namely transparency, participation and accountability, are studied in the research.
There were a number of key findings. Governance in Hanoi and Berlin represents the two extremes of the scale: while governance in Berlin is almost at the top of the scale, governance in Hanoi is at the bottom. Good governance in Hanoi is still far from being achieved. In Berlin, information about public policies, administrative services and public finance is available, reliable and understandable. People do not encounter any problems accessing public information. In Hanoi, however, public information is not easy to access. There are big differences between Hanoi and Berlin in the three forms of participation. While voting in Hanoi to elect local deputies is a mere formality and compulsory, elections in Berlin are fair and free. The candidates in local elections in Berlin come from different parties, whereas the candidacy of local deputies in Hanoi is thoroughly controlled by the Fatherland Front. Even though the turnout of voters in local deputy elections is close to 90 percent in Hanoi, the legitimacy of both the elections and the process of representation is non-existent because the local deputy candidates are decided by the Communist Party.
The involvement of people in solving local problems is encouraged by the government in Berlin. The different initiatives include the citizenry budget, citizen activities, citizen initiatives, etc. Citizens are free to participate either individually or through an association.
Given the lack of transparency and participation, the quality of public services in Hanoi is poor. Citizens seldom receive their services on time as required by the regulations. Citizens who want to receive public services can bribe officials directly, use the power of relationships, or pay a third person, the mediator (“Cò” in Vietnamese).
In contrast, public service delivery in Berlin follows the customer-orientated principle. The quality of service is high in relation to time and cost. Paying speed money, bribery and using relationships to gain preferential public service do not exist in Berlin.
Using the examples of Berlin and Hanoi, it is clear how transparency, participation and accountability are interconnected and influence each other. Without free and fair elections, as well as the participation of non-governmental organisations, civil organisations and the media in political decision-making and public actions, it is hard to hold the Hanoi local government accountable.
The key differences in formal institutions (regulative and cognitive) between Berlin and Hanoi reflect three main principles: rule of law vs. rule by law, pluralism vs. a single-party monopoly in politics, and social market economy vs. market economy with socialist orientation.
In Berlin, the logic of appropriateness and the codes of conduct are respect for laws, respect for individual freedom and ideas, and awareness of community development. People in Berlin take it for granted that public services are delivered to them fairly. Ideas such as using money or relationships to shorten public administrative procedures do not exist in the minds of either public officials or citizens.
In Hanoi, under a weak formal framework of good governance, new values and norms (prosperity, achievement) generated in the economic transition interact with the habits of the centrally planned economy (lying, dependence, passivity) and traditional values (hierarchy, harmony, family, collectivism), influencing the behaviours of those involved.
In Hanoi, “doing the right thing”, such as compliance with the law, does not become “the way it is”.
The unintended consequence of the deliberate reform actions of the Party is the prevalence of corruption. The socialist orientation seems not to have been achieved as the gap between the rich and the poor has widened.
Good governance is not achievable if citizens and officials are concerned only with their self-interest. State and society depend on each other. Theoretically to achieve good governance in Hanoi, institutions (formal and informal) able to create good citizens, officials and deputies should be generated. Good citizens are good by habit rather than by nature.
The rule of law principle is necessary for the professional performance of local administrations and People’s Councils. When the rule of law is applied consistently, the room for informal institutions to function will be reduced.
Promoting good governance in Hanoi is dependent on the need and desire to change the government and people themselves. Good governance in Berlin can be seen to be the result of the efforts of the local government and citizens after a long period of development and continuous adjustment.
Institutional transformation is always a long and complicated process because the change in formal regulations as well as in the way they are implemented may meet strong resistance from the established practice. This study has attempted to point out the weaknesses of the institutions of Hanoi and has identified factors affecting future development towards good governance. But it is not easy to determine how long it will take to change the institutional setting of Hanoi in order to achieve good governance.
Complex networks have been successfully employed to represent different levels of biological systems, ranging from gene regulation to protein-protein interactions and metabolism. Network-based research has mainly focused on identifying unifying structural properties, including small average path length, large clustering coefficient, heavy-tail degree distribution, and hierarchical organization, viewed as requirements for efficient and robust system architectures. Existing studies estimate the significance of network properties using a generic randomization scheme, a Markov-chain switching algorithm, which generates unrealistic reactions in metabolic networks, as it does not account for the physical principles underlying metabolism. Therefore, it is unclear whether the properties identified with this generic approach are related to the functions of metabolic networks. Within this doctoral thesis, I have developed an algorithm for mass-balanced randomization of metabolic networks, which runs in polynomial time and samples networks almost uniformly at random. The properties of biological systems result from two fundamental origins: ubiquitous physical principles and a complex history of evolutionary pressure. The latter determines the cellular functions and abilities required for an organism’s survival. Consequently, the functionally important properties of biological systems result from evolutionary pressure. By employing randomization under physical constraints, the salient structural properties, i.e., the small-world property, degree distributions, and biosynthetic capabilities, of six metabolic networks from all kingdoms of life are shown to be independent of physical constraints, and thus likely to be related to the evolution and functional organization of metabolism. This stands in stark contrast to the results obtained from the commonly applied switching algorithm.
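For reference, the generic degree-preserving switching scheme criticized above (the Markov-chain edge-swap randomization, not the mass-balanced method developed in the thesis) can be sketched as follows for a plain undirected graph:

```python
import random

def switch_randomize(edges, n_swaps, seed=0):
    """Degree-preserving randomization by repeated edge switching.

    edges: list of (u, v) tuples of a simple undirected graph.
    Repeatedly picks two edges (a, b), (c, d) and rewires them to
    (a, d), (c, b); swaps that would create self-loops or duplicate
    edges are rejected, so every node keeps its original degree.
    """
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    done = 0
    while done < n_swaps:
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:               # would create a self-loop
            continue
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if new1 in present or new2 in present:  # would create a multi-edge
            continue
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {new1, new2}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges
```

Applied to a metabolic network, such blind endpoint swaps are exactly what can produce mass-imbalanced, chemically impossible reactions, which motivates the physically constrained randomization developed here.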
In addition, a novel network property is devised to quantify the importance of reactions by simulating the impact of their knockout. The relevance of the identified reactions is verified by the findings of existing experimental studies demonstrating the severity of the respective knockouts. The results suggest that the novel property may be used to determine the reactions important for the viability of organisms. Next, the algorithm is employed to analyze the dependence between mass balance and thermodynamic properties of Escherichia coli metabolism. The thermodynamic landscape in the vicinity of the metabolic network reveals two regimes of randomized networks: those with thermodynamically favorable reactions, similar to the original network, and those with less favorable reactions. The results suggest that there is an intrinsic dependency between thermodynamic favorability and evolutionary optimization. The method is further extended to optimizing metabolic pathways by introducing novel, chemically feasible reactions. The results suggest that, in three organisms of biotechnological importance, introduction of the identified reactions may allow for optimizing their growth. The approach is general and allows identifying chemical reactions which modulate the performance with respect to any given objective function, such as the production of valuable compounds or the targeted suppression of pathway activity. These theoretical developments can find applications in metabolic engineering or disease treatment. The developed randomization method proposes a novel approach to measuring the significance of biological network properties, and establishes a connection between large-scale approaches and biological function. The results may provide important insights into the functional principles of metabolic networks, and open up new possibilities for their engineering.
This paper examines and develops matrix methods to approximate the eigenvalues of a fourth-order Sturm-Liouville problem subject to fixed boundary conditions, and extends the matrix methods to a class of general boundary conditions. The idea of the methods comes from finite differences and Numerov's method, as well as from boundary value methods for second-order regular Sturm-Liouville problems. Moreover, correction-term formulas for the matrix methods are derived in order to obtain better approximations for the problem with fixed boundary conditions, since the exact eigenvalues for q = 0 are known in this case. Finally, some numerical examples are presented.
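The correction-term idea rests on comparing a matrix method's eigenvalues against exactly known ones. As a minimal sketch of the underlying matrix approach, the following applies the standard central-difference discretization to the second-order analogue -u'' = λu, u(0) = u(1) = 0, whose exact eigenvalues are (kπ)²; the fourth-order case in the paper is handled analogously with Numerov-type schemes, and this simplified example is not taken from the paper itself:

```python
import numpy as np

def sl_eigenvalues_fd(n):
    """Approximate the eigenvalues of -u'' = lambda * u on (0, 1) with
    u(0) = u(1) = 0 by the standard second-order central-difference
    matrix method on n interior grid points. The exact eigenvalues are
    (k*pi)**2, so the discretization error can be measured directly,
    which is the basis of the correction-term idea."""
    h = 1.0 / (n + 1)
    # Tridiagonal matrix with 2/h^2 on the diagonal, -1/h^2 off-diagonal
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.sort(np.linalg.eigvalsh(A))

lams = sl_eigenvalues_fd(200)  # lams[0] approximates pi**2 ~ 9.8696
```

The discrete eigenvalues satisfy λₖʰ = (4/h²) sin²(kπh/2), so the error grows like k⁴h², motivating correction terms for the higher eigenvalues.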
We study maximal subsemigroups of the monoid T(X) of all full transformations on the set X = N of natural numbers that contain a given subsemigroup W of T(X) and for which each element of a given set U is a generator of T(X) modulo W. This note continues the study of maximal subsemigroups of the monoid of all full transformations on an infinite set.
MDE techniques are increasingly used in practice. However, detailed reports about how different MDE techniques are integrated into the development process and combined with each other are currently lacking. To learn more about such MDE settings, we performed a descriptive and exploratory field study with SAP, a globally operating company with around 50,000 employees that builds enterprise software applications. This technical report describes insights gained during this study. For example, we identified that MDE settings are subject to evolution. Finally, this report outlines directions for future research to provide practical advice for the application of MDE settings.
Background: Detection of immunogenic proteins remains an important task for the life sciences, as it furthers the understanding of pathogenicity, illuminates new potential vaccine candidates and broadens the spectrum of biomarkers applicable in diagnostic tools. Traditionally, immunoscreenings of expression libraries with polyclonal sera on nitrocellulose membranes or screenings of whole proteome lysates by 2-D gel electrophoresis are performed. However, these methods have considerable disadvantages. Screening of expression libraries to expose novel antigens from bacteria often leads to an abundance of false positive signals owing to the high cross-reactivity of polyclonal antibodies towards the proteins of the expression host. A method is presented that overcomes many disadvantages of these older procedures.
Results: Four proteins that had previously been described as immunogenic were confirmed as immunogenic with our method. One protein with no previously known immunogenic behaviour showed potential immunogenicity. We incorporated a fusion tag upstream of our genes of interest and attached the expressed fusion proteins covalently to microarrays. This enhances the specific binding of the proteins compared to nitrocellulose and thus helps to reduce the number of false positives significantly. It enables us to screen for immunogenic proteins in a shorter time, with more samples and with statistical reliability. We validated our method using several known genes from Campylobacter jejuni NCTC 11168.
Conclusions: The method presented offers a new approach for screening bacterial expression libraries to illuminate novel proteins with immunogenic features. It could provide a powerful and attractive alternative to existing methods and help to detect and identify vaccine candidates, biomarkers and potential virulence-associated factors with immunogenic behaviour, furthering the knowledge of virulence and pathogenicity of the studied bacteria.
Climate is the principal driving force of hydrological extremes like floods, and attributing generating mechanisms is an essential prerequisite for understanding past, present, and future flood variability. Successively enhanced radiative forcing under global warming increases atmospheric water-holding capacity and is expected to increase the likelihood of strong floods. In addition, natural climate variability affects the frequency and magnitude of these events on annual to millennial time-scales. Particularly in the mid-latitudes of the Northern Hemisphere, correlations between meteorological variables and hydrological indices suggest significant effects of changing climate boundary conditions on floods. To date, however, understanding of flood responses to changing climate boundary conditions is limited due to the scarcity of hydrological data in space and time. Exploring paleoclimate archives like annually laminated (varved) lake sediments allows this gap in knowledge to be filled, offering precisely dated time-series of flood variability over millennia. During river floods, detrital catchment material is eroded and transported in suspension by fluid turbulence into downstream lakes. In the water body, the transport capacity of the inflowing turbidity current successively diminishes, leading to the deposition of detrital layers on the lake floor. Intercalated into annual laminations, these detrital layers can be dated down to seasonal resolution. Microfacies analyses and X-ray fluorescence scanning (µ-XRF) at 200 µm resolution were conducted on the varved Mid- to Late Holocene interval of two sediment profiles from pre-alpine Lake Ammersee (southern Germany), located in a proximal (AS10prox) and a distal (AS10dist) position relative to the main tributary, the River Ammer.
To shed light on sediment distribution within the lake, particular emphasis was placed on (1) the detection of intercalated detrital layers and their micro-sedimentological features, and (2) the intra-basin correlation of these deposits. Detrital layers were dated down to the season by microscopic varve counting and determination of their microstratigraphic position within a varve. The resulting chronology is verified by accelerator mass spectrometry (AMS) 14C dating of 14 terrestrial plant macrofossils. Since ~5500 varve years before present (vyr BP), in total 1573 detrital layers were detected in either one or both of the investigated sediment profiles. Based on their microfacies, geochemistry, and proximal-distal deposition pattern, the detrital layers were interpreted as River Ammer flood deposits. Calibration of the flood layer record against instrumental daily River Ammer runoff data from AD 1926 to 1999 proves that the flood layer succession represents a significant time-series of major River Ammer floods in spring and summer, the flood season in the Ammersee region. Flood layer frequency trends are in agreement with decadal variations of the East Atlantic-Western Russia (EA-WR) atmospheric pattern back to 200 yr BP (the end of the used atmospheric data) and with solar activity back to 5500 vyr BP. Enhanced flood frequency corresponds to the negative EA-WR phase and reduced solar activity. These common links point to a central role of varying large-scale atmospheric circulation over Europe for flood frequency in the Ammersee region and suggest that these atmospheric variations, in turn, were likely modified by solar variability during the past 5500 years. Furthermore, the flood layer record indicates three shifts in mean layer thickness and frequency, of different manifestation in the two sediment profiles, at ~5500, ~2800, and ~500 vyr BP. Combining information from both sediment profiles made it possible to interpret these shifts in terms of stepwise increases in mean flood intensity.
Likely triggers of these shifts are the gradual reduction of Northern Hemisphere orbital summer forcing and long-term solar activity minima. The hypothesized atmospheric response to this forcing is hemispheric cooling that enhances equator-to-pole temperature gradients and potential energy in the troposphere. This energy is transferred into stronger westerly cyclones, more extreme precipitation, and intensified floods at Lake Ammersee. Interpretation of flood layer frequency and thickness data in combination with reanalysis models and time-series analysis made it possible to reconstruct the flood history and to decipher the flood-triggering climate mechanisms in the Ammersee region throughout the past 5500 years. Flood frequency and intensity are not stationary, but are influenced by multi-causal climate forcing of large-scale atmospheric modes on time-scales from years to millennia. These results challenge future projections that propose an increase in floods as the Earth warms based only on the assumption of an enhanced hydrological cycle.
It sometimes happens that we finish reading a passage of text just to realize that we have no idea what we just read. During these episodes of mindless reading our mind is elsewhere yet the eyes still move across the text. The phenomenon of mindless reading is common and seems to be widely recognized in lay psychology. However, the scientific investigation of mindless reading has long been underdeveloped. Recent progress in research on mindless reading has been based on self-report measures and on treating it as an all-or-none phenomenon (dichotomy-hypothesis). Here, we introduce the levels-of-inattention hypothesis proposing that mindless reading is graded and occurs at different levels of cognitive processing. Moreover, we introduce two new behavioral paradigms to study mindless reading at different levels in the eye-tracking laboratory. First (Chapter 2), we introduce shuffled text reading as a paradigm to approximate states of weak mindless reading experimentally and compare it to reading of normal text. Results from statistical analyses of eye movements that subjects perform in this task qualitatively support the ‘mindless’ hypothesis that cognitive influences on eye movements are reduced and the ‘foveal load’ hypothesis that the response of the zoom lens of attention to local text difficulty is enhanced when reading shuffled text. We introduce and validate an advanced version of the SWIFT model (SWIFT 3) incorporating the zoom lens of attention (Chapter 3) and use it to explain eye movements during shuffled text reading. Simulations of the SWIFT 3 model provide fully quantitative support for the ‘mindless’ and the ‘foveal load’ hypothesis. They moreover demonstrate that the zoom lens is an important concept to explain eye movements across reading and mindless reading tasks. 
Second (Chapter 4), we introduce the sustained attention to stimulus task (SAST) to catch episodes when external attention spontaneously lapses (i.e., attentional decoupling or mind wandering) via the overlooking of errors in the text and via signal detection analyses of error detection. Analyses of eye movements in the SAST revealed reduced influences from cognitive text processing during mindless reading. Based on these findings, we demonstrate that it is possible to predict states of mindless reading from eye movement recordings online. That cognition is not always needed to move the eyes supports autonomous mechanisms for saccade initiation. Results from analyses of error detection and eye movements support our levels-of-inattention hypothesis that errors at different levels of the text assess different levels of decoupling. Analyses of pupil size in the SAST (Chapter 5) provide further support for the levels-of-inattention hypothesis and for the decoupling hypothesis that off-line thought is a distinct mode of cognitive functioning that demands cognitive resources and is associated with deep levels of decoupling. The present work demonstrates that the elusive phenomenon of mindless reading can be rigorously investigated in the cognitive laboratory and further incorporated into the theoretical framework of cognitive science.
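Signal detection analyses of error detection, as used in the SAST, typically reduce to the sensitivity index d'. A minimal sketch with hypothetical hit/miss counts; the function name, the counts, and the choice of a log-linear correction are illustrative, not taken from the thesis:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction so that observed rates of 0 or 1
    do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for error detection in a SAST-like experiment
sensitivity = d_prime(40, 10, 5, 45)
```

Higher d' indicates better discrimination of error-containing from error-free text; a d' near zero would indicate that errors went unnoticed, the behavioral marker of decoupling used here.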
Mineral chemistry and thermobarometry of the staurolite-chloritoid schists from Poshtuk, NW Iran
(2012)
The Poshtuk metapelitic rocks in northwestern Iran underwent two main phases of regional and contact metamorphism. Microstructures, textural features and field relations indicate that these rocks underwent a polymetamorphic history. The dominant metamorphic assemblage of the metapelites is garnet, staurolite, chloritoid, chlorite, muscovite and quartz, which grew mainly syntectonically during the later contact metamorphic event. Peak metamorphic conditions of this event were reached at 580 °C and ~3–4 kbar, indicating that it occurred under high-temperature and low-pressure conditions (HT/LP metamorphism), which reflects the high heat flow in this part of the crust. This event was mainly controlled by advective heat input through magmatic intrusions into all levels of the crust. These extensive Eocene metamorphic and magmatic activities can be associated with the early Alpine Orogeny, which resulted in this area from the convergence between the Arabian and Eurasian plates and from the Cenozoic closure of the Tethys oceanic tract(s).
Modelling of environmental change impacts on water resources and hydrological extremes in Germany
(2012)
Water resources, in terms of quantity and quality, are significantly influenced by environmental changes, especially by climate and land use changes. The main objective of the present study is to project climate change impacts on the seasonal dynamics of water fluxes, spatial changes in water balance components, as well as future flood and low flow conditions in Germany. This study is based, on the one hand, on the modeling results of the process-based eco-hydrological model SWIM (Soil and Water Integrated Model) driven by various regional climate scenarios. On the other hand, it is supported by statistical analysis of long-term trends in observed and simulated time series. In addition, this study evaluates the impacts of potential land use changes on water quality in terms of NO3-N load in selected sub-regions of the Elbe basin. In the context of climate change, the actual evapotranspiration is likely to increase in most parts of Germany, while total runoff generation may decrease in southern and eastern regions in the scenario period 2051-2060. Water discharge in all six studied large rivers (Ems, Weser, Saale, Danube, Main and Neckar) would be 8–30% lower in summer and autumn compared to the reference period (1961–1990), and the strongest decline is expected for the Saale, Danube and Neckar. The 50-year low flow is likely to occur more frequently in western, southern and central Germany after 2061, as suggested by more than 80% of the model runs. The current low flow period (from August to September) may be extended until late autumn at the end of this century. Higher winter flow is expected in all of these rivers, and the increase is most significant for the Ems (about 18%). No general pattern in the direction of flood changes can be concluded from the results driven by different RCMs, emission scenarios and multiple realizations. Optimal agricultural land use and management are essential for the reduction of nutrient loads and the improvement of water quality.
In the Weiße Elster and Unstrut sub-basins (Elbe), an increase of 10% in the winter rape area can result in 12-19% more NO3-N load in rivers. In contrast, another energy plant, maize, has a moderate effect on the water environment. Mineral fertilizers have a much stronger effect on the NO3-N load than organic fertilizers. Cover crops, which play an important role in the reduction of nitrate losses from fields, should be maintained on cropland. The uncertainty in estimating future high flows and, in particular, extreme floods remains high due to different RCM structures, emission scenarios and multiple realizations. In contrast, the projection of low flows under warmer climate conditions appears to be more pronounced and consistent. The largest source of uncertainty related to NO3-N modelling originates from the input data on agricultural management.
This thesis aims at a better mechanistic understanding of animal communities. To this end, an allometry- and individual-based model was developed and used to simulate mammal and bird communities in heterogeneous landscapes and to better understand their response to landscape changes (habitat loss and fragmentation).
Virtual 3D city and landscape models are the main subject investigated in this thesis. They digitally represent urban space and have many applications in different domains, e.g., simulation, cadastral management, and city planning. Visualization is an elementary component of these applications. Photo-realistic visualization with an increasingly high degree of detail leads to fundamental problems for comprehensible visualization. A large number of highly detailed and textured objects within a virtual 3D city model may create visual noise and overload the users with information. Objects are subject to perspective foreshortening and may be occluded or not displayed in a meaningful way, as they are too small. In this thesis we present abstraction techniques that automatically process virtual 3D city and landscape models to derive abstracted representations. These have a reduced degree of detail, while essential characteristics are preserved. After introducing definitions for model, scale, and multi-scale representations, we discuss the fundamentals of map generalization as well as techniques for 3D generalization. The first presented technique is a cell-based generalization of virtual 3D city models. It creates abstract representations that have a highly reduced level of detail while maintaining essential structures, e.g., the infrastructure network, landmark buildings, and free spaces. The technique automatically partitions the input virtual 3D city model into cells based on the infrastructure network. The single building models contained in each cell are aggregated to abstracted cell blocks. Using weighted infrastructure elements, cell blocks can be computed on different hierarchical levels, storing the hierarchy relation between the cell blocks. Furthermore, we identify initial landmark buildings within a cell by comparing the properties of individual buildings with the aggregated properties of the cell. 
For each block, the identified landmark building models are subtracted using Boolean operations and integrated in a photo-realistic way. Finally, for the interactive 3D visualization we discuss the creation of the virtual 3D geometry and its appearance styling through colors, labeling, and transparency. We demonstrate the technique with example data sets. Additionally, we discuss applications of generalization lenses and transitions between abstract representations. The second technique is a real-time rendering technique for geometric enhancement of landmark objects within a virtual 3D city model. Depending on the virtual camera distance, landmark objects are scaled to ensure their visibility within a specific distance interval while deforming their environment. First, in a preprocessing step a landmark hierarchy is computed; this is then used to derive distance intervals for the interactive rendering. At runtime, using the virtual camera distance, a scaling factor is computed and applied to each landmark. The scaling factor is interpolated smoothly at the interval boundaries using cubic Bézier splines. Non-landmark geometry that is near landmark objects is deformed with respect to a limited number of landmarks. We demonstrate the technique by applying it to a highly detailed virtual 3D city model and a generalized 3D city model. In addition, we discuss an adaptation of the technique for non-linear projections and mobile devices. The third technique is a real-time rendering technique to create abstract 3D isocontour visualizations of virtual 3D terrain models. The virtual 3D terrain model is visualized as a layered or stepped relief. The technique works without preprocessing and, as it is implemented using programmable graphics hardware, can be integrated with minimal changes into common terrain rendering techniques. Consequently, the computation is done in the rendering pipeline for each vertex, primitive, i.e., triangle, and fragment.
For each vertex, the height is quantized to the nearest isovalue. For each triangle, the vertex configuration with respect to their isovalues is determined first. Using the configuration, the triangle is then subdivided. The subdivision forms a partial step geometry aligned with the triangle. For each fragment, the surface appearance is determined, e.g., depending on the surface texture, shading, and height-color-mapping. Flexible usage of the technique is demonstrated with applications from focus+context visualization, out-of-core terrain rendering, and information visualization. This thesis presents components for the creation of abstract representations of virtual 3D city and landscape models. Re-using visual language from cartography, the techniques enable users to build on their experience with maps when interpreting these representations. Simultaneously, characteristics of 3D geovirtual environments are taken into account by addressing and discussing, e.g., continuous scale, interaction, and perspective.
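The per-vertex quantization step of the isocontour technique is simple enough to sketch. The following CPU-side illustration snaps heights to the nearest isovalue; in the thesis this runs per vertex on programmable graphics hardware, and the function name and the 10 m spacing are made-up examples:

```python
def quantize_to_isovalue(height, iso_spacing):
    """Snap a terrain height to the nearest isovalue, producing the
    stepped-relief effect. In the thesis this happens per vertex in
    the rendering pipeline; this CPU version is only an illustration."""
    return round(height / iso_spacing) * iso_spacing

# With a hypothetical 10 m contour spacing, heights collapse onto steps:
steps = [quantize_to_isovalue(h, 10.0) for h in (3.0, 7.5, 12.0, 24.9)]
# steps == [0.0, 10.0, 10.0, 20.0]
```

The subsequent triangle subdivision then stitches the resulting height jumps between differently quantized vertices into the partial step geometry described above.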
We present the new multi-threaded version of the state-of-the-art answer set solver clasp. We detail its component and communication architecture and illustrate how they support the principal functionalities of clasp. Also, we provide some insights into the data representation used for different constraint types handled by clasp. All this is accompanied by an extensive experimental analysis of the major features related to multi-threading in clasp.
Narcissus and Echo
(2012)
George Eliot’s late novel Daniel Deronda tackles big, fundamental political questions that radiate from the societal circumstances of the novel’s production and reach deep into our present-day life. The novel critically analyses the capitalistic, morally flawed and standard-less English society and narrates the title hero’s proto-Zionist mission to found a Jewish nation that re-establishes history, meaning and ethical values. This study attempts to trace the novel’s two models of society and time by bringing them into resonance with the myth of Narcissus and Echo famously rendered by Ovid. The unloving, self-referential, visual Narcissus is read as the model for the capitalistic world of spectacle and speculation. Echo’s loving, memory-bearing voice forms an important part in the construction of the sublating unity of the Jewish nation-to-come. Guided by this resonance between George Eliot’s novel and Ovid’s myth, pieces of critical theory and philosophy are woven into the study’s fabric. The resulting analysis dissects and deconstructs the novel’s fascinating and highly complex patterns of conditions of possibility for the fabrication of the redeeming Jewish nation, the very same conditions that the novel presents as the conditions of possibility for narrating a meaningful story.
This work describes the synthesis and characterization of stimuli-responsive polymers made by reversible addition-fragmentation chain transfer (RAFT) polymerization and the investigation of their self-assembly into “smart” hydrogels. In particular, the hydrogels were designed to swell at low temperature and could be reversibly switched to a collapsed hydrophobic state by raising the temperature. Starting from two constituents, a short permanently hydrophobic polystyrene (PS) block and a thermo-responsive poly(methoxy diethylene glycol acrylate) (PMDEGA) block, various gelation behaviors and switching temperatures were achieved. New RAFT agents bearing tert-butyl benzoate or benzoic acid groups were developed for the synthesis of diblock, symmetrical triblock and 3-arm star block copolymers. Thus, specific end groups were attached to the polymers that facilitate efficient macromolecular characterization, e.g. by routine 1H-NMR spectroscopy. Further, the carboxyl end-groups allowed the various polymers to be functionalized with a fluorophore. Because reports on PMDEGA have been extremely rare, the thermo-responsive behavior of the polymer was investigated first, and the influence of factors such as molar mass, nature of the end-groups, and architecture was studied. The use of special RAFT agents enabled the design of polymers with specific hydrophobic and hydrophilic end-groups. Cloud points (CP) of the polymers proved to be sensitive to all molecular variables studied, namely molar mass, nature and number of the end-groups, up to relatively high molar masses. Thus, by changing molecular parameters, CPs of the PMDEGA could be easily adjusted within the physiologically interesting range of 20 to 40°C. A second responsivity, namely to light, was added to the PMDEGA system via random copolymerization of MDEGA with a specifically designed photo-switchable azobenzene acrylate.
The composition of the copolymers was varied in order to determine the optimal conditions for an isothermal cloud point variation triggered by light. Though reversible light-induced solubility changes were achieved, the differences between the cloud points before and after irradiation were small. Remarkably, the response to light differed from common observations for azobenzene-based systems, as CPs decreased after UV-irradiation, i.e., with increasing content of cis-azobenzene units. The viscosifying and gelling abilities of the various block copolymers made from PS and PMDEGA blocks were studied by rheology. Important differences were observed between the diblock copolymers, containing one hydrophobic PS block only, the telechelic symmetrical triblock copolymers made of two associating PS termini, and the star block copolymers having three associating end blocks. Regardless of their hydrophilic block length, diblock copolymers PS11-PMDEGAn were freely flowing even at concentrations as high as 40 wt. %. In contrast, all studied symmetrical triblock copolymers PS8-PMDEGAn-PS8 formed gels at low temperatures and at concentrations as low as 3.5 wt. % at best. When heated, these gels underwent a gel-sol transition at intermediate temperatures, well below the cloud point where phase separation occurs. The gel-sol transition shifted to markedly higher transition temperatures with increasing length of the hydrophilic inner block. This effect also increased with the number of arms and with the length of the hydrophobic end blocks. The mechanical properties of the gels were significantly altered at the cloud point, and liquid-like dispersions were formed. These could be reversibly transformed into hydrogels by cooling. This thesis demonstrates that high molar mass PMDEGA is an easily accessible, presumably also biocompatible and, at ambient temperature, well water-soluble, non-ionic thermo-responsive polymer.
PMDEGA can be easily molecularly engineered via the RAFT method, implementing defined end-groups, and producing different, also complex, architectures, such as amphiphilic triblock and star block copolymers, having an analogous structure to associative telechelics. With appropriate design, such amphiphilic copolymers give way to efficient, “smart” viscosifiers and gelators displaying tunable gelling and mechanical properties.
Nucleation and growth of unsubstituted metal phthalocyanine films from solution on planar substrates
(2012)
In recent years, cost-efficient wet-chemical coating methods for the fabrication of organic thin films for various opto-electronic applications have been discovered and further developed. Among others, phthalocyanine molecules have been intensively studied for photoactive layers in solar cells. Owing to their low or unknown solubility, phthalocyanine films have typically been prepared by vacuum evaporation. Alternatively, the solubility has been increased by chemical synthesis, which, however, compromises the properties of the phthalocyanine (Pc). In this work, the solubility, optical absorption and stability of 8 different unsubstituted metal phthalocyanines in 28 different solvents were measured quantitatively. Because of its sufficient solubility, stability and applicability in organic solar cells, copper phthalocyanine (CuPc) in trifluoroacetic acid (TFA) was selected for further investigation. By spin-coating CuPc from TFA solution, a thin film was deposited on the substrate from the evaporating solution. After evaporation of the solvent, CuPc nanoribbons cover the substrate. The nanoribbons have a thickness of about ~1 nm (the typical dimension of a CuPc molecule) and varying width and length, depending on the amount of material. Such nanoribbons can be produced by spin-coating as well as by other wet-coating methods, such as dip-coating. Similar fibrillar structures form upon wet-coating of other metal phthalocyanines, such as iron and magnesium phthalocyanine, from TFA solution, and also on other substrates, such as glass or indium tin oxide. The material properties of CuPc deposited from TFA solution and of CuPc in solution were investigated in detail by X-ray diffraction, spectroscopy and microscopy methods.
It is shown that the nanoribbons do not form in solution but arise through evaporation of the solvent and supersaturation of the solution. Atomic force microscopy was used to study the morphology of the dried film at different concentrations. The mechanism of nanoribbon formation was studied in detail. The formation of the CuPc nanoribbons from a supersaturated solution is discussed in terms of nucleation and growth theory. The shape of the nanoribbons is discussed taking into account the interaction between the molecules and the substrate. The wet-processed CuPc thin film was employed as the donor layer in organic bilayer solar cells with the C60 molecule as acceptor. The power conversion efficiency of such a cell was investigated as a function of the thickness of the CuPc layer.
We present 3D zero-beta ideal MHD simulations of the solar flare/CME event that occurred in Active Region 11060 on 2010 April 8. The initial magnetic configurations of the two simulations are stable nonlinear force-free field and unstable magnetic field models constructed by Su et al. (2011) using the flux rope insertion method. The MHD simulations confirm that the stable model relaxes to a stable equilibrium, while the unstable model erupts as a CME. Comparisons between observations and MHD simulations of the CME are also presented.
On completeness of root functions of Sturm-Liouville problems with discontinuous boundary operators
(2012)
We consider a Sturm-Liouville boundary value problem in a bounded domain D of R^n. By this is meant that the differential equation is given by a second order elliptic operator of divergent form in D and the boundary conditions are of Robin type on bD. The first order term of the boundary operator is the oblique derivative whose coefficients bear discontinuities of the first kind. Applying the method of weak perturbation of compact self-adjoint operators and the method of rays of minimal growth, we prove the completeness of root functions related to the boundary value problem in Lebesgue and Sobolev spaces of various types.
We consider compact Riemannian spin manifolds without boundary equipped with orthogonal connections. We investigate the induced Dirac operators and the associated commutative spectral triples. In the case of dimension four and totally anti-symmetric torsion, we compute the Chamseddine-Connes spectral action, deduce the equations of motion and discuss critical points.
Actin is one of the most abundant and highly conserved proteins in eukaryotic cells. The globular protein assembles into long filaments, which form a variety of different networks within the cytoskeleton. The dynamic reorganization of these networks - which is pivotal for cell motility, cell adhesion, and cell division - is based on cycles of polymerization (assembly) and depolymerization (disassembly) of actin filaments. Actin binds ATP and within the filament, actin-bound ATP is hydrolyzed into ADP on a time scale of a few minutes. As ADP-actin dissociates faster from the filament ends than ATP-actin, the filament becomes less stable as it grows older. Recent single filament experiments, where abrupt dynamical changes during filament depolymerization have been observed, suggest the opposite behavior, however, namely that the actin filaments become increasingly stable with time. Several mechanisms for this stabilization have been proposed, ranging from structural transitions of the whole filament to surface attachment of the filament ends. The key issue of this thesis is to elucidate the unexpected interruptions of depolymerization by a combination of experimental and theoretical studies. In new depolymerization experiments on single filaments, we confirm that filaments cease to shrink in an abrupt manner and determine the time from the initiation of depolymerization until the occurrence of the first interruption. This duration differs from filament to filament and represents a stochastic variable. We consider various hypothetical mechanisms that may cause the observed interruptions. These mechanisms cannot be distinguished directly, but they give rise to distinct distributions of the time until the first interruption, which we compute by modeling the underlying stochastic processes. 
A comparison with the measured distribution reveals that the sudden truncation of the shrinkage process neither arises from blocking of the ends nor from a collective transition of the whole filament. Instead, we predict a local transition process occurring at random sites within the filament. The combination of additional experimental findings and our theoretical approach confirms the notion of a local transition mechanism and identifies the transition as the photo-induced formation of an actin dimer within the filaments. Unlabeled actin filaments do not exhibit pauses, which implies that, in vivo, older filaments become destabilized by ATP hydrolysis. This destabilization can be identified with an acceleration of the depolymerization prior to the interruption. In the final part of this thesis, we theoretically analyze this acceleration to infer the mechanism of ATP hydrolysis. We show that the rate of ATP hydrolysis is constant within the filament, corresponding to a random as opposed to a vectorial hydrolysis mechanism.
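The logic of distinguishing mechanisms by their first-interruption statistics can be illustrated with a minimal Monte Carlo sketch. This is not the model fitted in the thesis; all rates, the filament length, and the depolymerization speed are invented for illustration. A global (whole-filament) transition yields an exponential first-interruption time, whereas a local transition at random sites stalls the shrinking tip when it reaches the first converted subunit, producing a differently shaped distribution.

```python
import random

def first_interruption_local(n0=1000, v=100.0, k=1e-2, rng=None):
    """Local mechanism (illustrative): the tip shrinks at v subunits/s,
    and each remaining subunit converts independently at rate k (1/s).
    Returns the time at which the tip reaches the first converted
    subunit, or None if the filament depolymerizes completely."""
    rng = rng or random
    for i in range(1, n0 + 1):
        arrival = i / v                    # tip reaches subunit i
        if rng.expovariate(k) < arrival:   # subunit converted before arrival
            return arrival                 # shrinkage stalls here
    return None

def first_interruption_global(kg=0.1, rng=None):
    """Global mechanism (illustrative): the whole filament switches at
    rate kg, so the first-interruption time is simply exponential."""
    rng = rng or random
    return rng.expovariate(kg)

rng = random.Random(0)
times_hi = [first_interruption_local(k=1e-2, rng=rng) for _ in range(500)]
times_lo = [first_interruption_local(k=1e-5, rng=rng) for _ in range(500)]
frac_hi = sum(t is not None for t in times_hi) / 500
frac_lo = sum(t is not None for t in times_lo) / 500
```

Comparing the empirical distribution of such interruption times with the measured one is what allows the hypothetical mechanisms to be told apart; here, a higher conversion rate simply makes pauses more frequent.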
We say that (weak/strong) time duality holds for continuous-time quasi-birth-and-death processes if, starting from a fixed level, the first hitting time of the next upper level and the first hitting time of the next lower level have the same distribution. We present a criterion for time duality in the case where transitions from one level to another have to pass through a given single state, the so-called bottleneck property. We also prove that a weaker form of reversibility, called balanced under permutation, is sufficient for time duality to hold. We then discuss the general case.
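In the simplest bottleneck situation - a symmetric nearest-neighbour walk, where each level communicates with its neighbours through a single state - time duality can be seen by a mirror coupling: reflecting every step of a trajectory maps the first passage to the next upper level onto the first passage to the next lower level. The following toy sketch (not the general criterion of the thesis) makes this pathwise swap explicit:

```python
import random

def passage_times(steps):
    """Given +/-1 steps of a walk started at 0, return the first times
    at which the walk hits +1 and -1 (None if not hit)."""
    pos, up, down = 0, None, None
    for t, s in enumerate(steps, start=1):
        pos += s
        if pos == 1 and up is None:
            up = t
        if pos == -1 and down is None:
            down = t
        if up is not None and down is not None:
            break
    return up, down

rng = random.Random(1)
steps = [rng.choice([1, -1]) for _ in range(10000)]
up, down = passage_times(steps)
# Reflecting the trajectory swaps the two passage times pathwise,
# so the upward and downward hitting times share one distribution.
mirror_up, mirror_down = passage_times([-s for s in steps])
assert up == mirror_down and down == mirror_up
```

The coupling argument is exact for this symmetric toy chain; the thesis' bottleneck criterion covers far more general level structures.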
Growing populations, continued economic development, and limited natural resources are critical factors affecting sustainable development. These factors are particularly pertinent in developing countries in which large parts of the population live at a subsistence level and options for sustainable development are limited. Therefore, addressing sustainable land use strategies in such contexts requires that decision makers have access to evidence-based impact assessment tools that can help in policy design and implementation. Ex-ante impact assessment is an emerging field poised at the science-policy interface and is used to assess the potential impacts of policy while also exploring trade-offs between economic, social and environmental sustainability targets. The objective of this study was to operationalise the impact assessment of land use scenarios in the context of developing countries that are characterised by limited data availability and quality. The Framework for Participatory Impact Assessment (FoPIA) was selected for this study because it allows for the integration of various sustainability dimensions, the handling of complexity, and the incorporation of local stakeholder perceptions. FoPIA, which was originally developed for the European context, was adapted to the conditions of developing countries, and its implementation was demonstrated in five selected case studies. 
In each case study, different land use options were assessed, including (i) alternative spatial planning policies aimed at the controlled expansion of rural-urban development in the Yogyakarta region (Indonesia), (ii) the expansion of soil and water conservation measures in the Oum Zessar watershed (Tunisia), (iii) the use of land conversion and the afforestation of agricultural areas to reduce soil erosion in Guyuan district (China), (iv) agricultural intensification and the potential for organic agriculture in Bijapur district (India), and (v) land division and privatisation in Narok district (Kenya). The FoPIA method was effectively adapted by dividing the assessment into three conceptual steps: (i) scenario development; (ii) specification of the sustainability context; and (iii) scenario impact assessment. A new methodological approach was developed for communicating alternative land use scenarios to local stakeholders and experts and for identifying recommendations for future land use strategies. Stakeholder and expert knowledge served as the main source of information for the impact assessment and was complemented by available quantitative data. Based on the findings from the five case studies, FoPIA was found to be suitable for implementing the impact assessment at the case study level while ensuring a high level of transparency. FoPIA supports the identification of causal relationships underlying regional land use problems, facilitates communication among stakeholders and illustrates the effects of alternative decision options with respect to all three dimensions of sustainable development. Overall, FoPIA is an appropriate tool for performing preliminary assessments but cannot replace a comprehensive quantitative impact assessment, and FoPIA should, whenever possible, be accompanied by evidence from monitoring data or analytical tools.
When using FoPIA for a policy oriented impact assessment, it is recommended that the process should follow an integrated, complementary approach that combines quantitative models, scenario techniques, and participatory methods.
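Participatory assessments of this kind typically have stakeholders rate each scenario's impact on a set of regional land-use functions on a bounded scale (e.g. -3 to +3) and then aggregate the ratings per sustainability dimension. The sketch below shows such an aggregation in the abstract; the dimension names, land-use functions, and scores are entirely invented and do not reproduce the actual FoPIA protocol or case-study data.

```python
# Hypothetical FoPIA-style aggregation: stakeholders score a scenario's
# impact on regional land-use functions (-3 .. +3); scores are averaged
# per sustainability dimension. All names and numbers are illustrative.
DIMENSIONS = {
    "economic":      ["farm income", "infrastructure"],
    "social":        ["food security", "cultural identity"],
    "environmental": ["soil quality", "water availability"],
}

def aggregate(scores):
    """scores: {function_name: [stakeholder scores]} -> dimension means."""
    result = {}
    for dim, functions in DIMENSIONS.items():
        vals = [s for f in functions for s in scores.get(f, [])]
        result[dim] = sum(vals) / len(vals) if vals else 0.0
    return result

scenario_a = {
    "farm income": [2, 1, 2], "infrastructure": [1, 0, 1],
    "food security": [1, 1, 0], "cultural identity": [-1, 0, -1],
    "soil quality": [-2, -1, -2], "water availability": [-1, -2, -1],
}
profile = aggregate(scenario_a)
```

A profile like this (economic gain, neutral social effect, environmental cost) is exactly the kind of trade-off picture that the combined quantitative-participatory approach is meant to surface.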
Background
Outcome quality management requires the consecutive registration of defined variables. The aim was to identify relevant parameters in order to objectively assess the in-patient rehabilitation outcome.
Methods
From February 2009 to June 2010, 1253 patients (70.9 ± 7.0 years, 78.1% men) at 12 rehabilitation clinics were enrolled. Items concerning sociodemographic data, the impairment group (surgery, conservative/interventional treatment), cardiovascular risk factors, structural and functional parameters and subjective health were tested with respect to their measurability, sensitivity to change and their propensity to be influenced by rehabilitation.
Results
The majority of patients (61.1%) were referred for rehabilitation after cardiac surgery, 38.9% after conservative or interventional treatment for an acute coronary syndrome. Functionally relevant comorbidities were seen in 49.2% (diabetes mellitus, stroke, peripheral artery disease, chronic obstructive lung disease). In three key areas 13 parameters were identified as being sensitive to change and subject to modification by rehabilitation: cardiovascular risk factors (blood pressure, low-density lipoprotein cholesterol, triglycerides), exercise capacity (resting heart rate, maximal exercise capacity, maximal walking distance, heart failure, angina pectoris) and subjective health (IRES-24 (indicators of rehabilitation status): pain, somatic health, psychological well-being and depression as well as anxiety on the Hospital Anxiety and Depression Scale).
Conclusion
The outcome of in-patient rehabilitation in elderly patients can be comprehensively assessed by the identification of appropriate key areas, that is, cardiovascular risk factors, exercise capacity and subjective health. This may well serve as a benchmark for internal and external quality management.
Since available phosphate (Pi) resources in soil are limited, symbiotic interactions between plant roots and arbuscular mycorrhizal (AM) fungi are a widespread strategy to improve plant phosphate nutrition. The repression of AM symbiosis by a high plant Pi-status indicates a link between Pi homeostasis signalling and AM symbiosis development. This assumption is supported by the systemic induction of several microRNA399 (miR399) primary transcripts in shoots and a simultaneous accumulation of mature miR399 in roots of mycorrhizal plants. However, the physiological role of this miR399 expression pattern is still elusive and raises the question of whether other miRNAs are also involved in AM symbiosis. Therefore, a deep sequencing approach was applied to investigate miRNA-mediated posttranscriptional gene regulation in M. truncatula mycorrhizal roots. Degradome analysis revealed that 185 transcripts were cleaved by miRNAs, of which the majority encoded transcription factors and disease resistance genes, suggesting a tight control of transcriptional reprogramming and a downregulation of defence responses by several miRNAs in mycorrhizal roots. Interestingly, 45 of the miRNA-cleaved transcripts were significantly differentially regulated between mycorrhizal and non-mycorrhizal roots. In addition, key components of the Pi homeostasis signalling pathway were analyzed with respect to their expression during AM symbiosis development. MtPhr1 overexpression and time course expression data suggested a strong interrelation between the components of the PHR1-miR399-PHO2 signalling pathway and AM symbiosis, predominantly during later stages of symbiosis. In situ hybridizations confirmed accumulation of mature miR399 in the phloem and in arbuscule-containing cortex cells of mycorrhizal roots. Moreover, a novel target of the miR399 family, designated MtPt8, was identified by the above-mentioned degradome analysis.
MtPt8 encodes a Pi-transporter exclusively transcribed in mycorrhizal roots, and its promoter activity was restricted to arbuscule-containing cells. At a low Pi-status, MtPt8 transcript abundance inversely correlated with the mature miR399 expression pattern. Increased MtPt8 transcript levels were accompanied by elevated symbiotic Pi-uptake efficiency, indicating its impact on balancing plant and fungal Pi-acquisition. In conclusion, this study provides evidence for a direct link between the regulatory mechanisms of plant Pi-homeostasis and AM symbiosis at a cell-specific level. The results of this study, especially the interaction of miR399 and MtPt8, provide a foundation for future studies of plant-microbe interactions with regard to agricultural and ecological aspects.
A point process is a mechanism that randomly realizes locally finite point measures. One of the main results of this thesis is an existence theorem for a new class of point processes with a so-called signed Lévy pseudo measure L, which is an extension of the class of infinitely divisible point processes. The construction approach is a combination of the classical point process theory, as developed by Kerstan, Matthes and Mecke, with the method of cluster expansions from statistical mechanics. Here the starting point is a family of signed Radon measures, which defines on the one hand the Lévy pseudo measure L, and on the other hand locally the point process. The relation between L and the process is the following: this point process solves the integral cluster equation determined by L. We show that the results from the classical theory of infinitely divisible point processes carry over in a natural way to the larger class of point processes with a signed Lévy pseudo measure. In this way we obtain, e.g., a criterion for simplicity and a characterization through the cluster equation, interpreted as an integration by parts formula, for such point processes. Our main result in chapter 3 is a representation theorem for the factorial moment measures of the above point processes. With its help we identify the permanental and determinantal point processes, which belong to the classes of Boson and Fermion processes, respectively. As a by-product we obtain a representation of the (reduced) Palm kernels of infinitely divisible point processes. In chapter 4 we see how the existence theorem enables us to construct (infinitely extended) Gibbs, quantum-Bose and polymer processes. The so-called polymer processes seem to be constructed here for the first time. In the last part of this thesis we prove that the family of cluster equations has certain stability properties with respect to the transformation of its solutions.
This is used, first, to show how large the class of solutions of such equations is, and second, to establish the cluster theorem of Kerstan, Matthes and Mecke in our setting. With its help we are able to enlarge the class of Pólya processes to the so-called branching Pólya processes. The last sections of this work are about thinning and splitting of point processes. One main result is that the classes of Boson and Fermion processes remain closed under thinning. We use the results on thinning to identify a subclass of point processes with a signed Lévy pseudo measure as doubly stochastic Poisson processes. We also pose the following question: Assume you observe a realization of a thinned point process. What is the distribution of deleted points? Surprisingly, the Papangelou kernel of the thinning, up to a constant factor, is given by the intensity measure of this conditional probability, called the splitting kernel.
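The thinning-and-splitting question has a classical, fully explicit special case: under independent thinning of a Poisson process with intensity measure λ, the retained and the deleted points form independent Poisson processes with intensities pλ and (1-p)λ. A small simulation sketch of this special case (the rate, interval length, and retention probability are illustrative):

```python
import random

def poisson_points(rate, length, rng):
    """Homogeneous Poisson process on [0, length] via exponential gaps."""
    points, t = [], rng.expovariate(rate)
    while t < length:
        points.append(t)
        t += rng.expovariate(rate)
    return points

def thin(points, p, rng):
    """Independent thinning: keep each point with probability p.
    Returns (retained, deleted) - the 'splitting' of the realization."""
    kept, deleted = [], []
    for x in points:
        (kept if rng.random() < p else deleted).append(x)
    return kept, deleted

rng = random.Random(42)
pts = poisson_points(rate=50.0, length=100.0, rng=rng)
kept, deleted = thin(pts, p=0.3, rng=rng)
# Expected counts: ~5000 points, ~1500 retained, ~3500 deleted.
```

For the Poisson case, the conditional law of the deleted points given the retained ones is again Poisson; the thesis' splitting-kernel result describes the analogous conditional distribution for the much wider class considered there.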
This study follows the debate in comparative public administration research on the role of advisory arrangements in central governments. The aim of this study is to explain the mechanisms by which these actors gain their alleged role in government decision-making. Hence, it analyses advisory arrangements that are proactively involved in executive decision-making and may compete with the permanent bureaucracy by offering policy advice to political executives. The study argues that these advisory arrangements influence government policy-making by "institutional politics", i.e. by shaping the institutional underpinnings to govern or rather the "rules of the executive game" in order to strengthen their own position or that of their clients. The theoretical argument of this study follows the neo-institutionalist turn in organization theory and defines institutional politics as gradual institutionalization processes between institutions and organizational actors. It applies a broader definition of institutions as sets of regulative, normative and cognitive pillars. Following the "power-distributional approach" such gradual institutionalization processes are influenced by structure-oriented characteristics, i.e. the nature of the objects of institutional politics, in particular the freedom of interpretation in their application, as well as the distinct constraints of the institutional context. In addition, institutional politics are influenced by agency-oriented characteristics, i.e. the ambitions of actors to act as "would-be change agents". These two explanatory dimensions result in four ideal-typical mechanisms of institutional politics: layering, displacement, drift, and conversion, which correspond to four ideal-types of would-be change agents. 
The study examines the ambitions of advisory arrangements in institutional politics in an exploratory manner; the relevance of the institutional context is analyzed via expectation hypotheses on the effects of four institutional context features that are regarded as relevant in the scholarly debate: (1) the party composition of governments, (2) the structuring principles in cabinet, (3) the administrative tradition, and (4) the formal politicization of the ministerial bureaucracy. The study follows a "most similar systems design" and conducts qualitative case studies on the role of advisory arrangements at the center of German and British governments, i.e. the Prime Minister's Office and the Ministry of Finance, over a longer period (1969/1970-2005). Three time periods are scrutinized per country; the British case studies examine the role of advisory arrangements at the Cabinet Office, the Prime Minister's Office, and the Ministry of Finance under Prime Ministers Heath (1970-74), Thatcher (1979-87) and Blair (1997-2005). The German case studies examine the role of advisory arrangements at the Federal Chancellery and the Federal Ministry of Finance during the Brandt government (1969-74), the Kohl government (1982-1987) and the Schröder government (1998-2005). For the empirical analysis, the results of a document analysis and the findings of 75 semi-structured expert interviews were triangulated. The comparative analysis reveals different patterns of institutional politics. The German advisory arrangements engaged initially in displacement but soon turned towards layering and drift, i.e. after an initial displacement of the pre-existing institutional underpinnings to govern they increasingly layered new elements onto existing ones and took the non-deliberative decision to neglect the adaptation of existing rules of the executive game to changing environmental demands.
The British advisory arrangements were mostly involved in displacement and conversion, despite occasional layering, i.e. they displaced the pre-existing institutional underpinnings to govern with new rules of the executive game and transformed and realigned them, sometimes also layering new elements onto pre-existing ones. The structure- and agency-oriented characteristics explain these patterns of institutional politics. First, the study shows that the institutional context limits the institutional politics in Germany and facilitates the institutional politics in the UK. Second, the freedom of interpreting the application of institutional targets is relevant and could be observed via the different ambitions of advisory arrangements across countries and over time, confirming, third, that the interests of such would-be change agents are likewise important to understand the patterns of institutional politics. The study concludes that the role of advisory arrangements in government policy-making rests not only upon their policy-related, party-political or media-advisory role for political executives, but especially upon their activities in institutional politics, resulting in distinct institutional constraints on all actors in government policy-making – including their own role in these processes.
The present work is devoted to establishing a new generation of self-healing anti-corrosion coatings for the protection of metals. The concept of self-healing anticorrosion coatings is based on the combination of a passive part, represented by the matrix of a conventional coating, and an active part, represented by micron-sized capsules loaded with corrosion inhibitor. Polymers were chosen as the class of compounds most suitable for capsule preparation. The morphology of capsules made of crosslinked polymers, however, was found to depend on the nature of the encapsulated liquid. Therefore, a systematic analysis of the morphology of capsules consisting of a crosslinked polymer and a solvent was performed. Three classes of polymers - polyurethane, polyurea and polyamide - were chosen. Capsules made of these polymers and eight solvents of different polarity were synthesized via interfacial polymerization. It was shown that the morphology of the resulting capsules is specific for every polymer-solvent pair. Formation of capsules with three general types of morphology - core-shell, compact and multicompartment - was demonstrated by means of Scanning Electron Microscopy. Compact morphology was assumed to be a result of specific polymer-solvent interactions and to be analogous to the process of swelling. In order to verify this hypothesis, pure polyurethane, polyurea and polyamide were synthesized, and their swelling behavior in the solvents used as the encapsulated material was investigated. It was shown that the swelling behavior of the polymers in most cases correlates with the capsule morphology. The different morphologies (compact, core-shell and multicompartment) were therefore attributed to specific polymer-solvent interactions and discussed in terms of “good” and “poor” solvents.
Capsules with core-shell morphology are formed when the encapsulated liquid is a “poor” solvent for the chosen polymer while compact morphologies are formed when the solvent is “good”. Multicompartment morphology is explained by the formation of infinite networks or gelation of crosslinked polymers. If gelation occurs after the phase separation in the system is achieved, core-shell morphology is present. If gelation of the polymer occurs far before crosslinking is accomplished, further condensation of the polymer due to the crosslinking may lead to the formation of porous or multicompartment morphologies. It was concluded that in general, the morphology of capsules consisting of certain polymer-solvent pairs can be predicted on the basis of polymer-solvent behavior. In some cases, the swelling behavior and morphology may not match. The reasons for that are discussed in detail in the thesis. The discussed approach is only capable of predicting capsule morphology for certain polymer-solvent pairs. In practice, the design of the capsules assumes the trial of a great number of polymer-solvent combinations; more complex systems consisting of three, four or even more components are often used. Evaluation of the swelling behavior of each component pair of such systems becomes unreasonable. Therefore, exploitation of the solubility parameter approach was found to be more useful. The latter allows consideration of the properties of each single component instead of the pair of components. In such a manner, the Hansen Solubility Parameter (HSP) approach was used for further analysis. Solubility spheres were constructed for polyurethane, polyurea and polyamide. For this a three-dimensional graph is plotted with dispersion, polar and hydrogen bonding components of solubility parameter, obtained from literature, as the orthogonal axes. The HSP of the solvents are used as the coordinates for the points on the HSP graph. 
Then a sphere with a certain radius is located on the graph, such that the “good” solvents lie inside the sphere while the “poor” ones lie outside. Both the location of the sphere center and the sphere radius are fitted according to information on the polymer's swelling behavior in a number of solvents. According to the existing correlation between capsule morphology and the swelling behavior of polymers, the solvents located inside the solubility sphere of a polymer give capsules with compact morphologies. The solvents located outside the solubility sphere of the polymer give either core-shell or multicompartment capsules in combination with the chosen polymer. Once the solubility sphere of a polymer is found, its solubility/swelling behavior can be extrapolated to all possible substances. HSP theory therefore allows prediction of polymer solubility/swelling behavior, and consequently the capsule morphology, for any given substance with known HSP parameters on the basis of limited data. This makes the theory attractive for application in chemistry and technology, since the choice of system components is usually performed on the basis of a large number of different parameters that should mutually match. Even a slight change in the technology sometimes makes it necessary to find an analogue of a given solvent that matches its solvency but differs in chemistry. In such cases, the HSP approach is indispensable. In the second part of the work, examples of HSP application for the fabrication of capsules with on-demand morphology are presented. Capsules with compact or core-shell morphology containing corrosion inhibitors were synthesized. Thus, alkoxysilanes possessing a long hydrophobic tail, combining passivating and water-repelling properties, were encapsulated in a polyurethane shell. The mechanism of action of the active material required core-shell morphology of the capsules.
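The "good"/"poor" classification described above can be sketched numerically. The standard Hansen distance weights the dispersion term by a factor of 4, Ra² = 4(δD1-δD2)² + (δP1-δP2)² + (δH1-δH2)², and the relative energy difference RED = Ra/R0 is below 1 for solvents inside the sphere. In this sketch the polymer sphere (center and radius) is invented for illustration, and the solvent values are approximate literature numbers, not the fitted spheres of the thesis.

```python
def hsp_distance(a, b):
    """Hansen distance Ra between two (dD, dP, dH) triples in MPa^0.5:
    Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2."""
    return (4 * (a[0] - b[0]) ** 2
            + (a[1] - b[1]) ** 2
            + (a[2] - b[2]) ** 2) ** 0.5

def classify(solvent_hsp, center, radius):
    """RED = Ra/R0: < 1 -> 'good' solvent (inside the sphere; compact
    capsule morphology expected), otherwise 'poor' (core-shell or
    multicompartment morphology expected)."""
    red = hsp_distance(solvent_hsp, center) / radius
    return ("good" if red < 1.0 else "poor"), red

# Illustrative sphere for a hypothetical polyurethane; approximate
# literature HSP values for the solvents.
center, radius = (18.0, 10.0, 8.0), 8.0
solvents = {
    "toluene": (18.0, 1.4, 2.0),
    "acetone": (15.5, 10.4, 7.0),
    "water":   (15.5, 16.0, 42.3),
}
verdicts = {name: classify(h, center, radius) for name, h in solvents.items()}
```

With these illustrative numbers, acetone falls inside the sphere (compact morphology expected) while toluene and water fall outside (core-shell or multicompartment expected), mirroring the prediction logic described in the text.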
The new hybrid corrosion inhibitor, cerium diethylhexyl phosphate, was encapsulated in polyamide shells in order to facilitate the dispersion of the substance and improve its adhesion to the coating matrix. The encapsulation of commercially available antifouling agents in polyurethane shells was carried out in order to control their release behavior and colloidal stability. Capsules with compact morphology made of polyurea containing the liquid corrosion inhibitor 2-methyl benzothiazole were synthesized in order to improve the colloidal stability of the substance. Capsules with compact morphology allow slower release of the encapsulated liquid material compared to core-shell ones. If “in-situ” encapsulation is not possible due to the reaction of the oil-soluble monomer with the encapsulated material, a solution was proposed: capsules of the desired morphology should be preformed, and loading should be performed after the monomer has been deactivated by completion of the polymerization reaction. In this way, compact polyurea capsules containing the highly effective but chemically active corrosion inhibitors 8-hydroxyquinoline and benzotriazole were fabricated. All the resulting capsules were successfully introduced into model coatings. The efficiency of the resulting “smart” self-healing anticorrosion coatings on steel and on aluminium alloy of the AA-2024 series was evaluated using characterization techniques such as the Scanning Vibrating Electrode Technique, Electrochemical Impedance Spectroscopy and salt-spray chamber tests.