Refine
Has Fulltext
- yes (212)
Year of publication
- 2014 (212)
Document Type
- Postprint (97)
- Doctoral Thesis (73)
- Article (13)
- Preprint (13)
- Monograph/Edited Volume (10)
- Habilitation Thesis (2)
- Master's Thesis (2)
- Part of Periodical (2)
Language
- English (212)
Keywords
- anomalous diffusion (5)
- Gammastrahlungsastronomie (4)
- data analysis (4)
- gamma-ray astronomy (4)
- living cells (4)
- Crab Nebula (3)
- Datenanalyse (3)
- Krebsnebel (3)
- Systembiologie (3)
- cloud computing (3)
Institute
- Institut für Chemie (36)
- Institut für Physik und Astronomie (23)
- Mathematisch-Naturwissenschaftliche Fakultät (22)
- Institut für Biochemie und Biologie (20)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (16)
- Institut für Mathematik (16)
- Humanwissenschaftliche Fakultät (14)
- Institut für Geowissenschaften (12)
- Bürgerliches Recht (9)
- Wirtschaftswissenschaften (6)
One of the most significant current discussions in astrophysics concerns the origin of high-energy cosmic rays. According to our current knowledge, the abundance distribution of the elements in cosmic rays at their point of origin indicates, within plausible error limits, that they were initially formed by nuclear processes in the interiors of stars. It is also believed that their energy distribution up to 10^18 eV has Galactic origins. Knowledge about potential sources of cosmic rays is quite poor above ≈ 10^15 eV, the "knee" of the cosmic-ray spectrum, but up to the knee there is wide consensus that supernova remnants are the most likely candidates. Evidence for this comes from observations of non-thermal X-ray radiation, which requires synchrotron electrons with energies up to 10^14 eV, precisely in supernova remnants. To date, however, there is no conclusive evidence that these objects produce nuclei, the dominant component of cosmic rays, in addition to electrons. In light of this dearth of evidence, γ-ray observations of supernova remnants offer the most promising direct way to confirm whether or not these astrophysical objects are indeed the main source of cosmic-ray nuclei below the knee. Recent observations with space- and ground-based observatories have established shell-type supernova remnants as GeV-to-TeV γ-ray sources. The interpretation of these observations is, however, complicated by the different radiation processes, leptonic and hadronic, that can produce similar fluxes in this energy band, rendering the nature of the emission ambiguous. The aim of this work is to develop a deeper understanding of these radiation processes in a particular shell-type supernova remnant, RX J1713.7–3946, using observations with the LAT instrument onboard the Fermi Gamma-Ray Space Telescope.
Furthermore, to obtain accurate spectra and morphology maps of the emission associated with this supernova remnant, an improved model of the diffuse Galactic γ-ray emission background is developed. The analyses of RX J1713.7–3946 carried out with this improved background show that the hard Fermi-LAT spectrum cannot be ascribed to hadronic emission, leading to the conclusion that the leptonic scenario is instead the most natural picture for the high-energy γ-ray emission of RX J1713.7–3946. The leptonic scenario does not rule out the possibility that cosmic-ray nuclei are accelerated in this supernova remnant, but it suggests that the ambient density may not be high enough to produce significant hadronic γ-ray emission. Further investigations of other supernova remnants using the improved background developed in this work could enable compelling population studies, and hence confirm or refute the origin of Galactic cosmic-ray nuclei in these astrophysical objects. A breakthrough in the identification of the radiation mechanisms could ultimately be achieved with a new generation of instruments such as CTA.
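As a quick sanity check of the claim that the observed non-thermal X-rays require synchrotron electrons of about 10^14 eV, the standard characteristic synchrotron energy estimate can be evaluated numerically. This is only a back-of-the-envelope sketch; the 10 µG field strength is an assumed typical remnant value, not a number from this work:

```python
import math

# Characteristic synchrotron photon energy for an electron of energy E_e
# in a magnetic field B (Gaussian units):
#   nu_c ~ (3/2) * gamma^2 * nu_g,   nu_g = e * B / (2 * pi * m_e * c)
e = 4.803e-10        # electron charge (esu)
m_e = 9.109e-28      # electron mass (g)
c = 2.998e10         # speed of light (cm/s)
h = 4.136e-15        # Planck constant (eV s)

E_e = 1e14           # electron energy (eV), as quoted above
B = 10e-6            # magnetic field (gauss), assumed typical remnant value
gamma = E_e / 0.511e6                     # Lorentz factor
nu_g = e * B / (2 * math.pi * m_e * c)    # gyrofrequency (Hz)
nu_c = 1.5 * gamma**2 * nu_g              # characteristic frequency (Hz)
print(h * nu_c / 1e3)  # photon energy in keV: a few keV, i.e. X-rays
```

The result lands in the keV band, consistent with the statement that the non-thermal X-ray emission traces ~10^14 eV electrons.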
The Beruriah Incident
(2014)
The story known as the Beruriah Incident, which appears in Rashi's commentary on bAvodah Zarah 18b (related to ATU types 920A* and 823A*), describes the failure and tragic end of R. Meir and his wife Beruriah, two tannaitic role models. This article examines the authenticity of the story by tracing its mode of transmission in traditional Jewish society before the modern era, and by comparing the story's components with rabbinic literature and international folklore.
The development of the liturgical music currently used in the Belgrade synagogue has, in recent decades, been heavily influenced by foreign traditions (mostly Levantine) brought to Belgrade by modern communication systems. It is therefore nearly impossible to speak of a status quo, which might well be obsolete by tomorrow, at least with respect to the melodies. The great changes within the liturgical music occurred not through acculturation to the Serbian majority but through the personal preferences of the religious leaders of the Belgrade Jews. The alterations are a conscious process, precisely the consequence of the musical taste of the local rabbi and cantor, rather than an autonomous development. To understand the new nusah sepharadi-yerushalmi that took the place of the abandoned nusah after the downfall of the Communist regime, it is necessary to look to Israel, where the rite developed.
In processing and data storage, mainly ferromagnetic (FM) materials are used. As physical limits are approached, new concepts have to be found for faster and smaller switches, higher data densities, and greater energy efficiency. Some of the new concepts under discussion involve the material classes of correlated oxides and materials with antiferromagnetic coupling. Their applicability depends critically on their switching behavior, i.e., how fast and how energy-efficiently material properties can be manipulated. This thesis presents investigations of ultrafast non-equilibrium phase transitions in such new materials. In transition metal oxides (TMOs), the coupling of different degrees of freedom and the resulting low-energy excitation spectrum often produce spectacular changes of macroscopic properties (colossal magnetoresistance, superconductivity, metal-to-insulator transitions), often accompanied by nanoscale order of spins, charges, and orbital occupation and by lattice distortions, which make these materials attractive. Magnetite served as a prototype for functional TMOs, showing a metal-to-insulator transition (MIT) at T = 123 K. By probing the charge and orbital order as well as the structure after an optical excitation, we found that the electronic order and the structural distortion, characteristics of the insulating phase in thermal equilibrium, are destroyed within the experimental resolution of 300 fs. The MIT itself occurs on a 1.5 ps timescale. This shows that MITs in functional materials are several thousand times faster than switching processes in semiconductors. Recently, ferrimagnetic and antiferromagnetic (AFM) materials have attracted interest. It was shown in ferrimagnetic GdFeCo that the transfer of angular momentum between two opposed FM subsystems with different time constants leads to a switching of the magnetization after laser pulse excitation.
In addition, it was theoretically predicted that demagnetization dynamics in AFM materials should occur faster than in FM materials, as no net angular momentum has to be transferred out of the spin system. We investigated two different AFM materials in order to learn more about their ultrafast dynamics. In Ho, a metallic AFM below T ≈ 130 K, we found that AFM order can be destroyed not only faster but also ten times more energy-efficiently than FM order in comparable metals. In EuTe, an AFM semiconductor below T ≈ 10 K, we compared the loss of magnetization and the laser-induced structural distortion in one and the same experiment. Our experiment shows that they are effectively disentangled. An exception is an ultrafast release of lattice dynamics, which we assign to the release of magnetostriction. The results presented here were obtained with time-resolved resonant soft X-ray diffraction at the Femtoslicing source of the Helmholtz-Zentrum Berlin and at the free-electron laser at Stanford (LCLS). In addition, the development and setup of a new UHV diffractometer for these experiments is reported.
Wood is used for many applications because of its excellent mechanical properties, its relative abundance, and because it is a renewable resource. However, its wider utilization as an engineering material is limited because it swells and shrinks upon moisture changes and is susceptible to degradation by microorganisms and/or insects. Chemical modifications of wood have been shown to improve dimensional stability, water repellence, and/or durability, thus increasing the potential service life of wood materials. However, current treatments are limited because it is difficult to introduce and fix such modifications deep inside the tissue and the cell wall. Within the scope of this thesis, novel chemical modification methods for wood cell walls were developed to improve both the dimensional stability and the water repellence of wood material. These methods were partly inspired by heartwood formation in living trees, a process that in some species inserts hydrophobic chemical substances into the cell walls of already dead wood cells. In the first part of this thesis, a cell wall modification chemistry inspired by this natural process of heartwood formation was used. Commercially available hydrophobic flavonoid molecules were effectively inserted into the cell walls of spruce, a softwood species with low natural durability, after a tosylation treatment, to obtain "artificial heartwood". Flavonoid-inserted cell walls show reduced moisture absorption, resulting in better dimensional stability, water repellence, and increased hardness. This approach differs markedly from established modifications, which mainly address the hydroxyl groups of cell wall polymers with hydrophilic substances. In the second part of the work, in-situ styrene polymerization inside the tosylated cell walls was studied. It is known that adhesion between hydrophobic polymers and hydrophilic cell wall components is weak.
The hydrophobic styrene monomers were inserted into the tosylated wood cell walls and polymerized to form polystyrene in the cell walls, which increased the dimensional stability of the bulk wood material and reduced the water uptake of the cell walls considerably compared to controls. In the third part of the work, grafting of another hydrophobic and also biodegradable polymer, poly(ɛ-caprolactone), onto the wood cell walls by ring-opening polymerization of ɛ-caprolactone was studied at mild temperatures. Results indicated that polycaprolactone attached to the cell walls and caused permanent swelling of the cell walls of up to 5%. The dimensional stability of the bulk wood material increased by 40% and water absorption was reduced by more than 35%. A fully biodegradable and hydrophobized wood material was obtained with this method, which reduces the disposal problem of modified wood materials and has improved properties that extend the material's service life. Starting from a bio-inspired approach, which showed great promise as an alternative to standard cell wall modifications, we demonstrated the possibility of inserting hydrophobic molecules into the cell walls and supported this with in-situ styrene and ɛ-caprolactone polymerization in the cell walls. This thesis shows that, despite the extensive knowledge and long history of using wood as a material, there is still room for novel chemical modifications that could have a high impact on improving wood properties.
This work introduces concepts and corresponding tool support to enable a complementary approach to dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid such work, the literature recommends averting unexpected recovery demands by following a structured and disciplined approach, which consists of applying various best practices, including working on only one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying the recommended practices selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications: it is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools was accompanied by regular performance and usability tests.
In addition, this work investigates whether the proposed tools affect programmers' performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated measurement setup, the study examined the effect of providing CoExist on programming performance. The result of analyzing 88 hours of programming suggests that built-in recovery support as provided with CoExist has a positive effect on programming performance in explorative programming tasks.
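The repeated-measurement design described above can be illustrated with a paired comparison: each participant contributes a measurement under both conditions, and the within-participant differences are tested against zero. This is only a sketch of the design idea, not the study's actual analysis, and all numbers below are invented:

```python
import math

# Paired (repeated-measurement) comparison: each participant works once
# with and once without the tool; the per-participant differences are
# tested against zero. All numbers here are invented for illustration.
with_tool    = [14, 11, 16, 12, 15, 13, 17, 12]   # e.g. subtasks solved
without_tool = [11, 10, 13, 12, 12, 11, 14, 10]

diffs = [a - b for a, b in zip(with_tool, without_tool)]
n = len(diffs)
mean = sum(diffs) / n
sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
t = mean / (sd / math.sqrt(n))   # paired t statistic, df = n - 1
print(round(t, 2))
```

The advantage of this design is that between-participant variability cancels out in the differences, which is the main statistical benefit of a repeated measurement setup.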
The zero-noise limit of differential equations with singular coefficients is investigated for the first time in the case when the noise is a general alpha-stable process. It is proved that extremal solutions are selected, and the probability of selection is computed. A detailed analysis of the characteristic function of an exit time from the half-line is performed, with a suitable decomposition into small and large jumps adapted to the singular drift.
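The selection phenomenon can be illustrated numerically. The sketch below uses Brownian noise for simplicity (the result summarized above concerns general alpha-stable noise) and the model drift b(x) = sign(x)·|x|^(1/2), for which the deterministic equation started at zero has infinitely many solutions; small noise concentrates on the two extremal ones:

```python
import math, random

# Zero-noise selection for dX = sign(X)*sqrt(|X|) dt + eps dW, X(0) = 0.
# The noiseless ODE has many solutions (Peano phenomenon); small noise
# selects the two extremal ones, x(t) = +/-(t/2)^2, here with probability
# ~1/2 each by symmetry. Illustrative sketch with Brownian noise only.
random.seed(0)

def drift(x):
    return math.copysign(math.sqrt(abs(x)), x)

def simulate(eps=1e-3, dt=1e-3, T=1.0):
    x = 0.0
    for _ in range(int(T / dt)):  # Euler-Maruyama scheme
        x += drift(x) * dt + eps * math.sqrt(dt) * random.gauss(0, 1)
    return x

ends = [simulate() for _ in range(200)]
frac_positive = sum(e > 0 for e in ends) / len(ends)
print(frac_positive)  # close to 0.5: both extremal solutions are selected
```

For asymmetric singular drifts the two extremal solutions are selected with unequal probabilities; computing these probabilities in the alpha-stable case is the contribution described above.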
When azobenzene-modified photosensitive polymer films are irradiated with light interference patterns, topographic variations develop in the film that follow the electric field vector distribution, resulting in the formation of a surface relief grating (SRG). The exact correspondence between the orientation of the electric field vectors in the interference pattern and the local topographic minima or maxima of the SRG is in general difficult to determine. In my thesis, we established a systematic procedure to correlate different interference patterns with the resulting SRG topography. For this, we devised a new setup combining an atomic force microscope and a two-beam interferometer (IIAFM). With this setup, it is possible to track the topography change in situ while simultaneously changing the polarization and phase of the impinging interference pattern. To validate our results, we compared two photosensitive materials, referred to in short as PAZO and trimer. This is the first time that an absolute correspondence between the local distribution of the electric field vectors of the interference pattern and the local topography of the relief grating has been established exhaustively. In addition, using our IIAFM we found that for a certain polarization combination of two orthogonally polarized interfering beams, namely the SP (↕, ↔) interference pattern, the topography forms an SRG with only half the period of the interference pattern. Exploiting this phenomenon, we were able to fabricate surface relief structures below the diffraction limit, with characteristic features measuring only 140 nm, using far-field optics with a wavelength of 491 nm. We also probed the stresses induced during the polymer mass transport by placing an ultra-thin (5–30 nm) gold film on top. During irradiation, the metal film not only deforms along with the SRG formation, but ruptures in a regular and complex manner.
The morphology of the cracks differs strongly depending on the electric field distribution in the interference pattern, even when the magnitude and kinetics of the strain are kept constant. This implies a complex local distribution of the opto-mechanical stress along the topography grating. Neutron reflectivity measurements of the metal/polymer interface indicate penetration of the metal layer into the polymer, resulting in the formation of a bonding layer that confirms the transduction of light-induced stresses from the polymer layer to the metal film.
In Chapter 1 of the dissertation, the role of social networks is analyzed as an important determinant of the search behavior of the unemployed. Based on the hypothesis that the unemployed generate information on vacancies through their social networks, search theory predicts that individuals with large social networks should experience increased productivity of informal search and reduce their search through formal channels. Due to this higher search productivity, unemployed workers with a larger network are also expected to have a higher reservation wage than those with a small network. These model-theoretic predictions are tested and confirmed empirically. It is found that the search behavior of the unemployed is significantly affected by the presence of social contacts, with larger networks implying a stronger substitution away from formal search channels towards informal channels. The substitution is particularly pronounced for passive formal search methods, i.e., search methods that generate rather non-specific types of job offer information at low relative cost. We also find small but significant positive effects of an increase in network size on the reservation wage. These results have important implications for the analysis of job search monitoring or counseling measures, which are usually targeted at formal search only. Chapter 2 of the dissertation addresses the labor market effects of vacancy information during the early stages of unemployment. The outcomes considered are the speed of exit from unemployment, the quality of employment, and the short- and medium-term effects on participation in active labor market programs (ALMP). It is found that vacancy information significantly increases the speed of entry into employment; at the same time, the probability of participating in ALMP is significantly reduced.
Whereas the long-term reduction in ALMP participation arises as a consequence of the earlier exit from unemployment, we also observe a short-run decrease for some labor market groups, which suggests that caseworkers use high- and low-intensity activation measures interchangeably, which is clearly questionable from an efficiency point of view. For unemployed workers who find a job through vacancy information, we observe a small negative effect on the weekly number of hours worked. In Chapter 3, the long-term effects of participation in ALMP are assessed for unemployed youth under 25 years of age. Complementing the analysis in Chapter 2, the effects of participation in time- and cost-intensive active labor market measures are examined. In particular, we study the effects of job creation schemes, wage subsidies, short- and long-term training measures, and measures to promote participation in vocational training. The outcome variables of interest are the probability of being in regular employment and participation in further education during the 60 months following program entry. The analysis shows that all programs except job creation schemes have positive long-term effects on the employment probability of youth. In the short run, only short-term training measures generate positive effects, as long-term training programs and wage subsidies exhibit significant "locking-in" effects. Measures to promote vocational training are found to significantly increase the probability of attending education and training, whereas all other programs have either no effect or a negative effect on training participation. Effect heterogeneity with respect to pre-treatment education shows that young people with higher pre-treatment educational levels benefit more from participation in most programs. However, for longer-term wage subsidies we also find strong positive effects for young people with low initial education levels.
The relative benefit of training measures is higher in West Germany than in East Germany. In the evaluation studies of Chapters 2 and 3, the semi-parametric balancing methods of Propensity Score Matching (PSM) and Inverse Probability Weighting (IPW) are used to eliminate the effects of confounding factors that influence both treatment participation and the outcome variable of interest, and to establish a causal relation between program participation and outcome differences. While PSM and IPW are intuitive and methodologically attractive, as they do not require parametric assumptions, their practical implementation can become quite challenging due to their sensitivity to various data features. Given the importance of these methods in the evaluation literature and the vast number of recent methodological contributions in this field, Chapter 4 aims to reduce the knowledge gap between the methodological and the applied literature by summarizing new findings from the empirical and statistical literature and by providing practical guidelines for future applied research. In contrast to previous publications, this study does not focus only on the estimation of causal effects, but stresses that the balancing challenge can and should be discussed independently of the question of causal identification of treatment effects in most empirical applications. Following a brief outline of the practical implementation steps required for PSM and IPW, these steps are presented in detail chronologically, with practical advice for each step. Subsequently, the topics of effect estimation, inference, sensitivity analysis, and the combination with parametric estimation methods are discussed. Finally, new extensions of the methodology and avenues for future research are presented.
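The reweighting idea behind IPW can be stated in a few lines. The sketch below is a generic Horvitz-Thompson-style estimator with invented data and with propensity scores taken as known (in the applications above they are estimated, e.g. by logit models), so it illustrates the mechanics rather than the chapters' actual implementation:

```python
import numpy as np

# Synthetic example: binary treatment D, outcome Y, and propensity
# scores p(x) = P(D = 1 | X = x), here taken as known for simplicity.
D = np.array([1, 1, 0, 0, 1, 0, 0, 1])
Y = np.array([5.0, 6.0, 3.0, 2.0, 7.0, 4.0, 3.0, 6.0])
p = np.array([0.8, 0.6, 0.4, 0.2, 0.7, 0.5, 0.3, 0.6])

# IPW estimate of the average treatment effect: each treated unit is
# weighted by 1/p, each control by 1/(1-p), so both groups are
# reweighted to resemble the full sample.
ate = np.mean(D * Y / p - (1 - D) * Y / (1 - p))
print(round(ate, 3))
```

The sensitivity to data features mentioned above shows up directly here: propensity scores near 0 or 1 make the weights 1/p and 1/(1-p) blow up, which is why trimming and overlap diagnostics are standard practice.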
This work is devoted to the convergence analysis of a modified Runge-Kutta-type iterative regularization method for solving nonlinear ill-posed problems under a priori and a posteriori stopping rules. Convergence rate results for the proposed method can be obtained under a Hölder-type sourcewise condition if the Fréchet derivative is properly scaled and locally Lipschitz continuous. Numerical results are obtained using the Levenberg-Marquardt and Radau methods.
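For orientation, the basic Levenberg-Marquardt iteration mentioned above, x_{k+1} = x_k + (J^T J + µ I)^{-1} J^T (y − F(x_k)), can be sketched on a toy one-parameter problem. This is an illustrative sketch with an invented model function, not the regularized scheme analyzed in the work:

```python
import numpy as np

# Fit y = exp(a * t) to exact data by a Levenberg-Marquardt iteration:
#   a_{k+1} = a_k + (J^T J + mu)^{-1} J^T (y - F(a_k)),
# where mu acts as a damping/regularization parameter.
t = np.array([0.0, 0.5, 1.0, 1.5])
a_true = 0.7
y = np.exp(a_true * t)

a = 0.0          # initial guess
mu = 1e-2        # damping parameter (fixed here; usually adapted per step)
for _ in range(50):
    F = np.exp(a * t)          # model prediction
    J = t * np.exp(a * t)      # derivative of F with respect to a
    a += (J @ (y - F)) / (J @ J + mu)

print(round(a, 4))
```

Because the data are noise-free, the iteration converges to the true parameter regardless of µ; with noisy data and ill-posedness, the choice of µ and of the stopping index becomes the delicate point that the a priori and a posteriori rules address.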
Nowadays, software systems are becoming more and more complex. To tackle this challenge, diverse techniques, such as design patterns, service-oriented architectures (SOA), software development processes, and model-driven engineering (MDE), are used to improve productivity while time to market and product quality remain stable. Several of these techniques are used in parallel to profit from their benefits. While the use of sophisticated software development processes is standard today, MDE is only now being adopted in practice. However, research has shown that the application of MDE is not always successful. It is not fully understood when the advantages of MDE can be exploited and to what degree MDE can also be disadvantageous for productivity. Further, when combining different techniques that aim to affect the same factor (e.g. productivity), the question arises whether these techniques really complement each other or, on the contrary, cancel out each other's effects. This raises the concrete question of how MDE and other techniques, such as the software development process, are interrelated. Both aspects (advantages and disadvantages for productivity as well as the interrelation with other techniques) need to be understood to identify risks relating to the productivity impact of MDE. Before studying MDE's impact on productivity, it is necessary to investigate the range of validity that can be achieved for the results. This involves two questions. First, is MDE's impact on productivity similar for all approaches to adopting MDE in practice? Second, does MDE's impact on productivity for a given approach remain stable over time? The answers to both questions are crucial for handling the risks of MDE, but also for the design of future studies on MDE success. This thesis addresses these questions with the goal of supporting the adoption of MDE in the future.
To enable a differentiated discussion about MDE, the term "MDE setting" is introduced. An MDE setting refers to the applied technical setting, i.e. the employed manual and automated activities, artifacts, languages, and tools. An MDE setting's possible impact on productivity is studied with a focus on changeability and the interrelation with software development processes. This is done by introducing a taxonomy of changeability concerns that might be affected by an MDE setting. Further, three MDE traits are identified, and it is studied for which manifestations of these traits software development processes are impacted. To enable the assessment and evaluation of an MDE setting's impacts, the Software Manufacture Model language is introduced. This is a process modeling language that allows reasoning about how relations between (modeling) artifacts (e.g. models or code files) change during the application of manual or automated development activities. On that basis, risk analysis techniques are provided. These techniques allow identifying changeability risks and assessing the manifestations of the MDE traits (and with them an MDE setting's impact on software development processes). To address the range of validity, MDE settings from practice and their evolution histories were captured in the context of this thesis. First, this data is used to show that MDE settings cover the whole spectrum with respect to their impact on changeability and their interrelation with software development processes. It is neither rare for MDE settings to be neutral with respect to processes nor rare for them to affect processes. Similarly, the impact on changeability differs considerably. Second, a taxonomy of the evolution of MDE settings is introduced. In that context, it is discussed to what extent different types of changes to an MDE setting can influence that setting's impact on changeability and its interrelation with processes.
The category of structural evolution, which can change these characteristics of an MDE setting, is identified. The captured MDE settings from practice are used to show that structural evolution exists and is common. In addition, examples of structural evolution steps are collected that actually led to a change in the characteristics of the respective MDE settings. Two implications follow: First, the observed diversity of MDE settings validates the need for the analysis techniques presented in this thesis. Second, evolution is one explanation for the diversity of MDE settings in practice. To summarize, this thesis studies the nature and evolution of MDE settings in practice. As a result, support for the adoption of MDE settings is provided in the form of techniques for identifying risks relating to productivity impacts.
Previous studies on the acquisition of verb inflection in normally developing children have revealed an astonishing pattern: children use correctly inflected verbs in their own speech but fail to make use of verb inflections when comprehending sentences uttered by others. Thus, a three-year-old might well be able to say something like 'The cat sleeps on the bed', but fails to understand that the same sentence, when uttered by another person, refers to only one sleeping cat and not more than one. The previous studies that examined children's comprehension of verb inflections employed a variant of a picture selection task in which the child was asked to explicitly indicate (via pointing) what semantic meaning she had inferred from the test sentence. Recent research on other linguistic structures, such as pronouns or focus particles, has indicated that earlier comprehension abilities can be found when methods are used that do not require an explicit reaction, such as preferential looking tasks. This dissertation aimed to examine whether children are truly unable to understand the connection between the verb form and the meaning of the sentence subject until the age of five, or whether earlier comprehension can be found when a different measure, preferential looking, is used. Additionally, children's processing of subject-verb agreement violations was examined. The three experiments of this thesis that examined children's comprehension of verb inflections revealed the following: German-speaking three- to four-year-old children looked more at a picture showing one actor when hearing a sentence with a singular inflected verb, but only when their eye gaze was tracked and they did not have to perform a picture selection task. When they were asked to point to the matching picture, they performed at chance level. This pattern indicates asymmetries in children's language performance even within the receptive modality.
The fourth experiment examined sensitivity to subject-verb agreement violations. It did not reveal evidence for sensitivity to agreement violations in three- and four-year-old children; only at the age of five were children's looking patterns influenced by the grammatical violations. The results of these experiments are discussed in relation to the existence of a production-comprehension asymmetry in the use of verb inflections and to children's underlying grammatical knowledge.
Donor-acceptor (D-A) copolymers have revolutionized the field of organic electronics over the last decade. Composed of an electron-rich and an electron-deficient molecular unit, these copolymers facilitate the systematic modification of the material's optoelectronic properties. The ability to tune the optical band gap and to optimize the molecular frontier orbitals, as well as the manifold of structural sites that enable chemical modifications, has created a tremendous variety of copolymer structures. Today, these materials reach or even exceed the performance of amorphous inorganic semiconductors. Most impressively, the charge carrier mobility of D-A copolymers has been pushed to the technologically important value of 10 cm² V⁻¹ s⁻¹. Furthermore, owing to their enormous variability, they are the material of choice for the donor component in organic solar cells, which have recently surpassed the efficiency threshold of 10%. Because of the great number of available D-A copolymers and their fast chemical evolution, there is a significant lack of understanding of the fundamental physical properties of these materials. Furthermore, the complex chemical and electronic structure of D-A copolymers, in combination with their semi-crystalline morphology, impedes a straightforward identification of the microscopic origin of their superior performance. In this thesis, two aspects of prototypical D-A copolymers were analysed: electron transport in several copolymers, and the application of low-band-gap copolymers as the acceptor component in organic solar cells. In the first part, the investigation of a series of chemically modified fluorene-based copolymers is presented. The charge carrier mobility varies strongly between the different derivatives, although only moderate changes to the copolymer structure were made. Furthermore, rather unusual photocurrent transients were observed for one of the copolymers.
Numerical simulations of the experimental results reveal that this behavior arises from severe trapping of electrons in an exponential distribution of trap states. Based on the comparison of simulation and experiment, the general impact of charge carrier trapping on the shape of photo-CELIV and time-of-flight transients is discussed. In addition, the high-performance naphthalenediimide (NDI)-based copolymer P(NDI2OD-T2) was characterized. It is shown that the copolymer possesses one of the highest electron mobilities reported so far, which makes it attractive as the electron-accepting component in organic photovoltaic cells.

Solar cells were prepared from two NDI-containing copolymers, blended with the hole-transporting polymer P3HT. I demonstrate that the use of appropriate, high boiling point solvents can significantly increase the power conversion efficiency of these devices. Spectroscopic studies reveal that the pre-aggregation of the copolymers is suppressed in these solvents, which has a strong impact on the blend morphology. Finally, a systematic study of P3HT:P(NDI2OD-T2) blends is presented, which quantifies the processes that limit the efficiency of the devices. The major loss channel for excited states was determined by transient and steady-state spectroscopic investigations: the majority of initially generated electron-hole pairs is annihilated by an ultrafast geminate recombination process. Furthermore, exciton self-trapping in P(NDI2OD-T2) domains accounts for an additional reduction of the efficiency. The correlation of the photocurrent with microscopic morphology parameters was used to identify the factors that limit the charge generation efficiency. Our results suggest that the orientation of the donor and acceptor crystallites relative to each other is the main factor determining the free charge carrier yield in this material system.
This provides an explanation for the overall low efficiencies that are generally observed in all-polymer solar cells.
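The exponential distribution of trap states invoked above in the electron-trapping analysis is commonly parametrized in multiple-trapping models of disordered organic semiconductors as (generic symbols, not necessarily the thesis's notation):

```latex
g(E) \;=\; \frac{N_t}{E_0}\,\exp\!\left(-\frac{E}{E_0}\right), \qquad E \ge 0,
```

where N_t is the total trap density and E_0 the characteristic trap energy. For k_B T < E_0, carriers thermalize into ever-deeper traps over time, which is the standard mechanism behind dispersive, anomalously shaped photocurrent transients of the kind discussed above.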
Processes having the same bridges as a given reference Markov process constitute its reciprocal class. In this paper we study the reciprocal class of compound Poisson processes whose jumps belong to a finite set A in R^d. We propose a characterization of the reciprocal class as the unique set of probability measures on which a family of time and space transformations induces the same density, expressed in terms of the reciprocal invariants. The geometry of A plays a crucial role in the design of the transformations, and we use tools from discrete geometry to obtain an optimal characterization. We deduce explicit conditions for two Markov jump processes to belong to the same class. Finally, we provide a natural interpretation of the invariants as short-time asymptotics for the probability that the reference process makes a cycle around its current state.
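Schematically (in generic notation; a simplified rendering, not the paper's exact formulation), the reference process and its reciprocal class can be written as:

```latex
X_t = X_0 + \sum_{j=1}^{N_t} \xi_j, \qquad \xi_j \in A \subset \mathbb{R}^d,
\qquad
\mathcal{R}(P) = \bigl\{\, Q \;:\; Q(\,\cdot \mid X_0 = x,\, X_1 = y)
  = P(\,\cdot \mid X_0 = x,\, X_1 = y) \ \text{for all } x, y \,\bigr\},
```

where N is a Poisson counting process and the ξ_j are the jumps drawn from the finite set A. The reciprocal class R(P) collects all laws that share the bridges of the reference law P.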
The design and implementation of service-oriented architectures raises a host of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications. Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering, which proposes to design complex software solutions as the collaboration of contractually defined software services. Service-Oriented Systems Engineering represents a symbiosis of best practices in object orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns. The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Self-Adaptive Service-Oriented Systems, Operating System Support for Service-Oriented Systems, Architecture and Modeling of Service-Oriented Systems, Adaptive Process Management, Services Composition and Workflow Planning, Security Engineering of Service-Based IT Systems, Quantitative Analysis and Optimization of Service-Oriented Systems, Service-Oriented Systems in 3D Computer Graphics, and Service-Oriented Geoinformatics.
The contractile vacuole (CV) is an osmoregulatory organelle found exclusively in algae and protists. In addition to expelling excess water from the cell, it also expels ions and other metabolites and thereby contributes to the cell's metabolic homeostasis. The interest in the CV reaches beyond its immediate cellular roles. The CV's function is tightly related to basic cellular processes such as membrane dynamics and vesicle budding and fusion; several physiological processes in animals, such as synaptic neurotransmission and blood filtration in the kidney, are related to the CV's function; and several pathogens, such as the causative agents of sleeping sickness, possess CVs, which may serve as pharmacological targets. The green alga Chlamydomonas reinhardtii has two CVs. They are the smallest known CVs in nature, and they have remained relatively untouched in the CV-related literature. Many genes that have been shown to be related to the CV in other organisms have close homologues in C. reinhardtii. We attempted to silence some of these genes and observe the effect on the CV. One of our genes, VMP1, caused striking, severe phenotypes when silenced. Cells exhibited defective cytokinesis and aberrant morphologies. The CV, incidentally, remained unscathed. In addition, mutant cells showed some evidence of disrupted autophagy. Several important regulators of the cell cycle as well as of autophagy were found to be underexpressed in the mutant. Lipidomic analysis revealed many meaningful changes between wild-type and mutant cells, reinforcing the compromised-autophagy observation. VMP1 is a singular protein, with homologues in numerous eukaryotic organisms (aside from fungi), but usually with no relatives in each particular genome. Since its first characterization in 2002, it has been associated with several cellular processes and functions, namely autophagy, programmed cell death, secretion, cell adhesion, and organelle biogenesis.
It has been implicated in several human diseases: pancreatitis, diabetes, and several types of cancer. Our results reiterate some of the observations made for VMP1's six reported homologues but, importantly, show for the first time an involvement of this protein in cell division. The mechanisms underlying this involvement in Chlamydomonas, as well as other key aspects, such as VMP1's subcellular localization and interaction partners, still await elucidation.
Cloud security mechanisms
(2014)
Cloud computing has brought great benefits in cost and flexibility for provisioning services. The greatest challenge of cloud computing, however, remains the question of security. The current standard tools in access control mechanisms and cryptography can only partly solve the security challenges of cloud infrastructures. In recent years, research in security and cryptography has produced novel mechanisms, protocols and algorithms that offer new ways to create secure services atop cloud infrastructures. This report provides introductions to a selection of the security mechanisms that were covered in the "Cloud Security Mechanisms" seminar in the summer term of 2013 at HPI.
We consider a general class of finite-dimensional deterministic dynamical systems with finitely many local attractors, each of which supports a unique ergodic probability measure; this class includes in particular the Morse–Smale systems in any finite dimension. The dynamical system is perturbed by a multiplicative non-Gaussian heavy-tailed Lévy-type noise of small intensity ε > 0. Specifically, we consider perturbations leading to an Itô, Stratonovich or canonical (Marcus) stochastic differential equation. The respective asymptotic first exit time and location problem from each of the domains of attraction in the case of inward-pointing vector fields in the limit ε → 0 has been investigated by the authors. We extend these results to domains with characteristic boundaries and show that the perturbed system exhibits metastable behavior, in the sense that there exists a unique ε-dependent time scale on which the random system converges to a continuous-time Markov chain switching between the invariant measures. As examples we consider α-stable perturbations of the Duffing equation and a chemical system exhibiting birhythmic behavior.
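In schematic form (generic notation, not the authors' exact equations), the small-noise system and the first-exit problem read:

```latex
dX_t^{\varepsilon} = b\bigl(X_{t-}^{\varepsilon}\bigr)\,dt
  + \varepsilon\, G\bigl(X_{t-}^{\varepsilon}\bigr)\,dL_t,
\qquad
\tau^{\varepsilon}(D) = \inf\bigl\{\, t > 0 \;:\; X_t^{\varepsilon} \notin D \,\bigr\},
```

where L is a heavy-tailed (for instance α-stable) Lévy process and D a domain of attraction of the deterministic flow. The metastability result identifies the ε-dependent time scale on which X^ε, observed on that scale, converges to a continuous-time Markov chain hopping between the invariant measures of the attractors.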
The objective and motivation behind this research is to provide applications with easy-to-use interfaces for communities of deaf and functionally illiterate users, enabling them to work without human assistance. Although recent years have witnessed technological advancements, the availability of technology does not ensure accessibility to information and communication technologies (ICT). The extensive use of text, from menus to document contents, means that deaf and functionally illiterate users cannot access the services implemented in most computer software. Consequently, most existing computer applications pose an accessibility barrier to those who are unable to read fluently. Online technologies intended for such groups should be developed in continuous partnership with the primary users and include a thorough investigation into their limitations, requirements and usability barriers. In this research, I investigated existing tools in voice, web and other multimedia technologies to identify learning gaps, and explored ways to enhance information literacy for deaf and functionally illiterate users. I worked on the development of user-centered interfaces to increase the capabilities of deaf and low-literacy users by enhancing lexical resources and by evaluating several multimedia interfaces for them. The interface of the platform-independent Italian Sign Language (LIS) Dictionary was developed to enhance the lexical resources for deaf users. The Sign Language Dictionary accepts Italian lemmas as input and provides their representation in Italian Sign Language as output. The dictionary contains 3082 signs, realized as Avatar animations, each linked to a corresponding Italian lemma. I integrated the LIS lexical resources with the MultiWordNet (MWN) database to form the first LIS MultiWordNet (LMWN).
LMWN contains information about lexical relations between words, semantic relations between lexical concepts (synsets), correspondences between Italian and sign-language lexical concepts, and semantic fields (domains). The approach enhances deaf users' understanding of written Italian and shows that a relatively small lexicon can cover a significant portion of MWN. The integration of LIS signs with MWN makes it a useful tool for computational linguistics and natural language processing. The rule-based translation process from written Italian text to LIS was transformed into a service-oriented system. The translation process is composed of various modules, including a parser, a semantic interpreter, a generator, and a spatial allocation planner. This translation procedure was implemented in the Java Application Building Center (jABC), a framework for eXtreme Model-Driven Design (XMDD). The XMDD approach focuses on bringing software development closer to conceptual design, so that the functionality of a software solution can be understood by someone who is unfamiliar with programming concepts. The transformation addresses the heterogeneity challenge and enhances the reusability of the system. To enhance the e-participation of functionally illiterate users, two detailed studies were conducted in the Republic of Rwanda. In the first study, a traditional (textual) interface was compared with a virtual-character-based interactive interface. The study helped to identify usability barriers, and users evaluated the interfaces according to three fundamental areas of usability: effectiveness, efficiency and satisfaction. In the second study, we developed four different interfaces to analyze the usability and effects of online assistance (consistent help) for functionally illiterate users, and compared different help modes, including textual, vocal and virtual-character assistance, with respect to the performance of semi-literate users.
In our newly designed interfaces, the instructions were automatically translated into Swahili. All interfaces were evaluated on the basis of task accomplishment, time consumption, System Usability Scale (SUS) rating and the number of times help was requested. The results show that the performance of semi-literate users improved significantly when using the online assistance. The dissertation thus introduces a new development approach in which virtual characters are used as additional support for barely literate or naturally challenged users. Such components enhance the application's utility by offering a variety of services, such as translating contents into the local language, providing additional vocal information, and performing automatic translation from text to sign language. Obviously, there is no single design solution that fits all users in this domain. Context sensitivity, literacy and mental abilities are key factors on which I concentrated, and the results emphasize that computer interfaces must be based on a thoughtful definition of target groups, purposes and objectives.
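The modular translation process described above (parser, semantic interpreter, generator, spatial allocation planner) can be sketched as a simple sequential composition of stages. The stage implementations below are purely illustrative placeholders, not the thesis's actual jABC services; only the module names are taken from the abstract.

```python
# Sketch of a staged text-to-sign-language pipeline. The four stage names
# mirror the modules named in the abstract; their bodies are placeholders.

from typing import Callable, List, Tuple

def parser(text: str) -> dict:
    # Placeholder: tokenize the input sentence.
    return {"tokens": text.lower().split()}

def semantic_interpreter(parsed: dict) -> dict:
    # Placeholder: attach a trivial per-token "meaning".
    parsed["semantics"] = [{"lemma": t} for t in parsed["tokens"]]
    return parsed

def generator(interpreted: dict) -> dict:
    # Placeholder: map each lemma to a sign gloss (uppercase by convention).
    interpreted["glosses"] = [s["lemma"].upper() for s in interpreted["semantics"]]
    return interpreted

def spatial_allocation_planner(generated: dict) -> List[Tuple[str, str]]:
    # Placeholder: assign each sign a slot in signing space.
    return [(gloss, f"slot-{i}") for i, gloss in enumerate(generated["glosses"])]

def translate(text: str) -> List[Tuple[str, str]]:
    # Compose the stages as one sequential pipeline.
    stages: List[Callable] = [parser, semantic_interpreter, generator,
                              spatial_allocation_planner]
    result = text
    for stage in stages:
        result = stage(result)
    return result

print(translate("ciao mondo"))  # [('CIAO', 'slot-0'), ('MONDO', 'slot-1')]
```

The point of the sketch is the architecture, not the linguistics: because each stage only consumes the previous stage's output, individual modules can be replaced or wrapped as services, which is what the service-oriented transformation in jABC exploits.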
Surface displacement at volcanic edifices is related to subsurface processes associated with magma movement, fluid transfer within the volcanic edifice, and gravity-driven deformation. Understanding the associated ground displacements is important for the assessment of volcanic hazards. For example, volcanic unrest is often preceded by surface uplift, caused by magma intrusion, and followed by subsidence after the withdrawal of magma. Continuous monitoring of surface displacement at volcanoes might therefore allow upcoming eruptions to be forecast to some extent. In geophysics, the measured surface displacements allow the parameters of possible deformation sources to be estimated through analytical or numerical modeling. This is one way to improve the understanding of subsurface processes acting at volcanoes. Although the monitoring of volcanoes has improved significantly in recent decades (in terms of technical advancements and the number of monitored volcanoes), the forecasting of volcanic eruptions remains puzzling. In this work I contribute to the understanding of subsurface processes at volcanoes and thus to the improvement of volcanic eruption forecasting. I investigated the displacement fields of Llaima volcano in Chile and Tendürek volcano in eastern Turkey using synthetic aperture radar interferometry (InSAR). Through modeling of the deformation sources with the extracted displacement data, it was possible to gain insights into potential subsurface processes at these two volcanoes, which had barely been studied before. The two volcanoes, although very different in origin, composition and geometry, both show a complexity of interacting deformation sources. At Llaima volcano, the InSAR technique was difficult to apply due to the strong decorrelation of the radar signal between image acquisitions.
I developed a model-based unwrapping scheme that allows the production of reliable displacement maps of the volcano, which I used for deformation source modeling. The modeling results show significant differences between pre- and post-eruptive magmatic deformation source parameters. I therefore conjecture that two magma chambers exist below Llaima volcano: a deep post-eruptive one and a shallow one, possibly due to the pre-eruptive ascent of magma. Similar reservoir depths at Llaima have been confirmed by independent petrologic studies. These reservoirs are interpreted to be temporally coupled. At Tendürek volcano I found long-term subsidence of the volcanic edifice, which can be described by a large, magmatic, sill-like source subject to cooling contraction. The displacement data, in conjunction with high-resolution optical images, also reveal arcuate fractures on the eastern and western flanks of the volcano. These are most likely the surface expressions of concentric ring-faults around the volcanic edifice that accumulate small amounts of slip over a long time. This might be an alternative mechanism for the development of large caldera structures, which have so far been assumed to form during large catastrophic collapse events. To investigate the potential subsurface geometry and the relation of the two proposed interacting sources at Tendürek, a sill-like magmatic source and ring-faults, I performed a more sophisticated numerical modeling approach. The optimum source geometries show that the size of the sill-like source was overestimated in the simple models and that the dip angle of the ring-faults is difficult to determine from surface displacement data alone. However, considering physical and geological criteria, a combination of outward-dipping reverse faults in the west and inward-dipping normal faults in the east seems most likely.
Consequently, the underground structure at Tendürek volcano consists of a small, sill-like, contracting magmatic source below the western summit crater that causes trapdoor-like faulting along the ring-faults around the volcanic edifice. The magmatic source and the ring-faults are therefore also interpreted to be temporally coupled. In addition, a method for data reduction has been improved. The modeling of subsurface deformation sources requires only a relatively small number of well-distributed InSAR observations at the earth's surface, whereas satellite radar images consist of several million such observations. The large amount of data therefore needs to be reduced by several orders of magnitude for source modeling, to save computation time and increase model flexibility. I introduce a model-based subsampling approach suited in particular to heterogeneously distributed observations. It allows fast calculation of the data error variance-covariance matrix, also supports the modeling of time-dependent displacement data, and is therefore an alternative to existing methods.
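A widely used baseline for the InSAR data reduction discussed above is variance-threshold quadtree subsampling, which keeps dense samples where the displacement field varies strongly and coarse samples where it is smooth. The sketch below illustrates that general idea only; it is not the model-based scheme developed in the thesis, and all parameter values are arbitrary.

```python
# Generic variance-threshold quadtree subsampling of a displacement map.
# Illustration of a common InSAR reduction baseline, not the thesis's method.

import numpy as np

def quadtree(data, threshold, min_size=2):
    """Recursively split `data` into quadrants until each block's standard
    deviation falls below `threshold`; return (row, col, mean) per block."""
    samples = []

    def split(r0, r1, c0, c1):
        block = data[r0:r1, c0:c1]
        valid = block[np.isfinite(block)]
        if valid.size == 0:
            return  # fully decorrelated block: nothing to keep
        small = (r1 - r0) <= min_size or (c1 - c0) <= min_size
        if small or valid.std() < threshold:
            # Keep one sample: block centre and mean displacement.
            samples.append(((r0 + r1) / 2, (c0 + c1) / 2, valid.mean()))
            return
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        split(r0, rm, c0, cm); split(r0, rm, cm, c1)
        split(rm, r1, c0, cm); split(rm, r1, cm, c1)

    split(0, data.shape[0], 0, data.shape[1])
    return samples

# A smooth region collapses to few samples; a deforming patch is refined.
field = np.zeros((16, 16))
field[:8, :8] = np.linspace(0, 1, 64).reshape(8, 8)  # gradient quadrant
pts = quadtree(field, threshold=0.05)
print(len(pts), "samples instead of", field.size, "pixels")
```

The reduction factor directly controls the size of the data error variance-covariance matrix in the inversion, which is why subsampling choices matter for both computation time and model resolution.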