Extreme weather events are likely to occur more often under climate change, and the resulting effects on ecosystems could further accelerate climate change. But not all extreme weather events lead to an extreme ecosystem response. Here we focus on hazardous ecosystem behaviour and identify the coinciding weather conditions. We use a simple probabilistic risk assessment based on time series of ecosystem behaviour and climate conditions. Following risk-assessment terminology, vulnerability and risk for the previously defined hazard are estimated on the basis of observed hazardous ecosystem behaviour.
We apply this approach to extreme responses of terrestrial ecosystems to drought, defining the hazard as a negative net biome productivity over a 12-month period. We show an application for two selected sites using data for 1981-2010 and then apply the method to the pan-European scale for the same period, based on numerical modelling results (LPJmL for ecosystem behaviour; ERA-Interim data for climate).
Our site-specific results demonstrate the applicability of the proposed method, using the standardized precipitation evapotranspiration index (SPEI) to describe the climate condition. The site in Spain provides an example of vulnerability to drought, because the expected value of the SPEI is 0.4 lower for hazardous than for non-hazardous ecosystem behaviour. The site in northern Germany, by contrast, is not vulnerable to drought, because the SPEI expectation values imply wetter conditions in the hazard case than in the non-hazard case.
At the pan-European scale, ecosystem vulnerability to drought is found in the Mediterranean and temperate regions, whereas Scandinavian ecosystems are vulnerable under conditions without water shortage. These first model-based applications indicate the conceptual advantages of the proposed method, which focuses on identifying the critical weather conditions under which hazardous ecosystem behaviour is observed in the analysed data set. Applying the method to empirical time series and to future climate would be important next steps to test the approach.
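The core of the risk-assessment step can be sketched as follows: classify each 12-month period as hazardous when net biome productivity (NBP) is negative, then compare the expected value of the climate indicator (SPEI) between hazard and non-hazard cases. The paired NBP/SPEI series below is synthetic and purely illustrative, not data from the study.

```python
# Sketch of the probabilistic risk assessment described above (illustrative
# data, not from the study): hazard = negative net biome productivity (NBP)
# over a 12-month period; compare E[SPEI | hazard] with E[SPEI | no hazard].
from statistics import mean

# Hypothetical paired annual series: (NBP in gC/m^2, SPEI)
records = [
    (120.0,  0.3), (-45.0, -1.2), (80.0, 0.1), (-10.0, -0.9),
    (150.0,  0.5), (-70.0, -1.5), (60.0, -0.2), (200.0, 0.8),
]

hazard_spei     = [spei for nbp, spei in records if nbp < 0]   # hazardous years
non_hazard_spei = [spei for nbp, spei in records if nbp >= 0]

# A site is flagged as vulnerable to drought when the expected SPEI is
# markedly lower (drier) under hazardous ecosystem behaviour.
delta = mean(hazard_spei) - mean(non_hazard_spei)
print(f"E[SPEI | hazard] - E[SPEI | no hazard] = {delta:.2f}")
print("vulnerable to drought" if delta < 0 else "not vulnerable to drought")
```

In this toy example the hazardous years are on average 1.5 SPEI units drier, i.e. the site would be classified as vulnerable to drought, analogous to the Spanish site above.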
Modern 3D geovisualization systems (3DGeoVSs) are complex and evolving systems that are required to be adaptable and to leverage distributed resources, including massive geodata. This article focuses on 3DGeoVSs built on the principles of service-oriented architectures, standards and image-based representations (SSI) to address practically relevant challenges and potentials. Such systems facilitate resource sharing and agile and efficient system construction and change in an interoperable manner, while exploiting images as efficient, decoupled and interoperable representations. The software architecture of a 3DGeoVS and its underlying visualization model have strong effects on the system's quality attributes and support various system life cycle activities. This article contributes a software reference architecture (SRA) for 3DGeoVSs based on SSI that can be used to design, describe and analyze concrete software architectures, with the intended primary benefit of increased effectiveness and efficiency in such activities. The SRA integrates existing, proven technology and novel contributions in a unique manner. As the foundation for the SRA, we propose the generalized visualization pipeline model, which generalizes and overcomes expressiveness limitations of the prevalent visualization pipeline model. To facilitate exploiting image-based representations (IReps), the SRA integrates approaches for the representation, provisioning and styling of and interaction with IReps. Five applications of the SRA provide proofs of concept for its general applicability and utility. A qualitative evaluation indicates the overall suitability of the SRA, its applications and the general approach of building 3DGeoVSs based on SSI.
The zero-noise limit of differential equations with singular coefficients is investigated for the first time in the case when the noise is a general alpha-stable process. It is proved that extremal solutions are selected, and the probability of selection is computed. A detailed analysis of the characteristic function of an exit time from the half-line is performed, with a suitable decomposition into small and large jumps adapted to the singular drift.
Today it is well known that galaxies like the Milky Way consist not only of stars but also of gas and dust. The galactic halo, a sphere of gas that surrounds the stellar disk of a galaxy, is especially interesting: it provides a wealth of information about gaseous material flowing towards and away from galaxies and about their hierarchical evolution. For the Milky Way, the so-called high-velocity clouds (HVCs), fast-moving neutral gas complexes in the halo that can be traced by absorption-line measurements, are believed to play a crucial role in the overall matter cycle of our Galaxy. Over the last decades, the properties of these halo structures and their connection to the local circumgalactic and intergalactic medium (CGM and IGM, respectively) have been investigated in great detail by many different groups. So far it remains unclear, however, to what extent the results of these studies can be transferred to other galaxies in the local Universe. In this thesis, we study the absorption properties of Galactic HVCs and compare the HVC absorption characteristics with those of intervening QSO absorption-line systems at low redshift. The goal of this project is to improve our understanding of the spatial extent and physical conditions of gaseous galaxy halos in the local Universe. In the first part of the thesis we use HST/STIS ultraviolet spectra of more than 40 extragalactic background sources to statistically analyze the absorption properties of the HVCs in the Galactic halo. We determine fundamental absorption-line parameters, including covering fractions of different weakly, intermediately and highly ionized metals, with a particular focus on SiII and MgII. Owing to the similarity in the ionization properties of SiII and MgII, we are able to estimate the contribution of HVC-like halo structures to the cross section of intervening strong MgII absorbers at z = 0.
Our study implies that only the most massive HVCs would be regarded as strong MgII absorbers if the Milky Way halo were seen as a QSO absorption-line system from an exterior vantage point. Combining the observed absorption cross section of Galactic HVCs with the well-known number density of intervening strong MgII absorbers at z = 0, we conclude that the contribution of infalling gas clouds (i.e., HVC analogs) in the halos of Milky Way-type galaxies to the cross section of strong MgII absorbers is 34%. This result indicates that only about one third of the strong MgII absorption can be associated with HVC analogs around other galaxies, while the majority of the strong MgII systems is possibly related to galaxy outflows and winds. The second part of this thesis focuses on the properties of intervening metal absorbers at low redshift. The analysis of the frequency and physical conditions of intervening metal systems in QSO spectra and their relation to nearby galaxies offers new insights into the typical conditions of gaseous galaxy halos. One major aspect of our study was to regard intervening metal systems as possible HVC analogs. We perform a detailed analysis of absorption-line properties and line statistics for 57 metal absorbers along 78 QSO sightlines, using newly obtained ultraviolet spectra taken with HST/COS. We find clear evidence for a bimodal distribution of the HI column density in the absorbers, a trend that we interpret as a sign of two different classes of absorption systems (with HVC analogs at the high column density end). With the help of the strong transitions of SiII λ1260, SiIII λ1206, and CIII λ977, we have set up Cloudy photoionization models to estimate the local ionization conditions, gas densities, and metallicities.
We find that the intervening absorption systems studied here have, on average, physical conditions similar to those of Galactic HVC absorbers, providing evidence that many of them represent HVC analogs in the vicinity of other galaxies. We therefore determine typical halo sizes for SiII, SiIII, and CIII for L = 0.01L∗ and L = 0.05L∗ galaxies. Based on the covering fractions of the different ions in the Galactic halo, we find that, for example, the typical halo size for SiIII is ∼160 kpc for L = 0.05L∗ galaxies. We test the plausibility of this result by searching for known galaxies close to the QSO sightlines at redshifts similar to those of the absorbers. We find that more than 34% of the measured SiIII absorbers have galaxies associated with them, with the majority of the absorbers indeed lying at impact parameters ρ ≤ 160 kpc.
The Babylonian Talmud (BT) attributes the idea of committing a transgression for the sake of God to R. Nahman b. Isaac (RNBI). RNBI's statement appears in two parallel sugyot in the BT (Nazir 23a; Horayot 10a). Each sugya has four textual witnesses. By comparing these textual witnesses, this paper attempts to reconstruct the sugya's earlier (or, as some might term it, original) dialectical form, from which the two familiar versions of the text in Nazir and Horayot evolved. The article reveals the specific ways in which value-laden conceptualizations have had a major impact on the Talmud's formulation as we know it today.
A vector error correction model for the relationship between public debt and inflation in Germany
(2014)
This paper analyses the interaction between public debt and inflation, including their mutual impulse responses. The European sovereign debt crisis has once again brought into focus the consequences of public debt, combined with an expansive monetary policy, for the development of consumer prices. Public deficits can lead to inflation if the money supply expands. The high level of national debt, not only in the euro-crisis countries, and the strong increase in the total assets of the European Central Bank resulting from its unconventional monetary policy have raised fears of national debt being inflated away. The paper traces the transmission from public debt to inflation through the money supply and the long-term interest rate. Based on these theoretical considerations, the variables public debt, consumer price index, money supply M3 and long-term interest rate are analysed within a vector error correction model estimated by the Johansen approach. In the empirical part of the article, quarterly data for Germany from 1991 to 2010 are examined.
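The error-correction idea at the heart of such a model can be illustrated in miniature. The paper estimates a full VECM by the Johansen approach; the sketch below shows only the single-equation version of the mechanism, on simulated data: deviations from a long-run equilibrium between two integrated series (think of a price level tracking the money supply) feed back into short-run changes. All series and parameters here are synthetic.

```python
# Minimal error-correction sketch (illustrative simulation, not the paper's
# estimation): dy_t = alpha * (y_{t-1} - x_{t-1}) + noise, with alpha < 0
# pulling y back towards its long-run equilibrium with x.
import random

random.seed(42)
alpha_true = -0.5                      # speed of adjustment back to equilibrium
x, y = [0.0], [0.0]
for _ in range(500):
    ect_prev = y[-1] - x[-1]           # error-correction term (disequilibrium)
    x.append(x[-1] + random.gauss(0, 1))                        # I(1) driver
    y.append(y[-1] + alpha_true * ect_prev + random.gauss(0, 0.5))

dy  = [y[t] - y[t - 1] for t in range(1, len(y))]
ect = [y[t - 1] - x[t - 1] for t in range(1, len(y))]

# OLS through the origin: regress dy_t on the lagged disequilibrium
alpha_hat = sum(e * d for e, d in zip(ect, dy)) / sum(e * e for e in ect)
print(f"estimated adjustment coefficient: {alpha_hat:.2f}")  # negative: y adjusts
```

A significantly negative adjustment coefficient is the signature of cointegration that the Johansen procedure tests for in the multivariate setting.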
Under Prime Minister Abe, the Land of Smiles is showing increasingly grimace-like features. Japan's politics, "stimulating" only in the negative sense, irritate not just its closest neighbours China and Korea. The USA, too, is worried by the provocative behaviour of its treaty partner. Territorial disputes threaten to escalate, while any reckoning with the country's inglorious past is refused. Is Japan in danger of gambling away all remaining goodwill?
Graphitic carbon nitride, g-C₃N₄, is a promising organic photocatalyst for a variety of redox reactions. In order to improve its efficiency in a systematic manner, however, a fundamental understanding of the microscopic interaction between catalyst, reactants and products is crucial. Here we present a systematic study of water adsorption on g-C₃N₄ by means of density functional theory and the density-functional-based tight-binding method as a prerequisite for understanding photocatalytic water splitting. We then analyze this prototypical redox reaction on the basis of a thermodynamic model, providing an estimate of the overpotential for both water oxidation and H⁺ reduction. While the latter is found to occur readily upon irradiation with visible light, we derive a prohibitive overpotential of 1.56 eV for the water oxidation half reaction, in good agreement with the experimental finding that, in contrast to H₂ production, O₂ evolution is only possible in the presence of oxidation cocatalysts.
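The thermodynamic overpotential estimate follows a standard pattern: water oxidation proceeds via four proton-coupled electron-transfer steps, and the overpotential is set by the least favourable one. The step energies below are hypothetical, chosen only so that the largest step reproduces the 1.56 V overpotential reported above; they are not values from the study.

```python
# Thermodynamic-model sketch of an overpotential estimate. The four step
# free energies are illustrative placeholders (they must sum to 4 * 1.23 eV);
# only the resulting overpotential matches the number quoted in the abstract.
E_EQ = 1.23  # equilibrium potential of water oxidation per electron, V

# Free-energy changes of the four proton-coupled electron-transfer steps
# of 2 H2O -> O2 + 4 H+ + 4 e-  (eV, hypothetical values)
dG_steps = [0.85, 2.79, 0.70, 0.58]
assert abs(sum(dG_steps) - 4 * E_EQ) < 1e-9

# The overpotential is the extra bias, beyond 1.23 V per electron, needed
# to make even the least favourable step downhill in free energy.
overpotential = max(dG_steps) - E_EQ
print(f"water-oxidation overpotential: {overpotential:.2f} V")
```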
When azobenzene-modified photosensitive polymer films are irradiated with light interference patterns, topographic variations develop in the film that follow the electric field vector distribution, resulting in the formation of a surface relief grating (SRG). The exact correspondence between the electric field vector orientation in the interference pattern and the local topographic minima or maxima of the SRG is in general difficult to determine. In my thesis, we have established a systematic procedure to accomplish the correlation between different interference patterns and the topography of the SRG. For this, we devised a new setup combining an atomic force microscope and a two-beam interferometer (IIAFM). With this setup, it is possible to track the topography change in situ while at the same time changing the polarization and phase of the impinging interference pattern. To validate our results, we have compared two photosensitive materials, referred to in short as PAZO and trimer. This is the first time that an absolute correspondence between the local distribution of electric field vectors of the interference pattern and the local topography of the relief grating could be established exhaustively. In addition, using our IIAFM we found that for a certain polarization combination of two orthogonally polarized interfering beams, namely the SP (↕, ↔) interference pattern, the topography forms an SRG with only half the period of the interference pattern. Exploiting this phenomenon, we are able to fabricate surface relief structures below the diffraction limit, with characteristic features measuring only 140 nm, using far-field optics with a wavelength of 491 nm. We have also probed the stresses induced during the polymer mass transport by placing an ultra-thin gold film (5–30 nm) on top. During irradiation, the metal film not only deforms along with the SRG formation but also ruptures in a regular yet complex manner.
The morphology of the cracks differs strongly depending on the electric field distribution in the interference pattern, even when the magnitude and kinetics of the strain are kept constant. This implies a complex local distribution of the opto-mechanical stress along the topography grating. Neutron reflectivity measurements of the metal/polymer interface indicate penetration of the metal layer into the polymer, resulting in the formation of a bonding layer that confirms the transduction of light-induced stresses from the polymer layer to the metal film.
The atmosphere over the Arctic Ocean is strongly influenced by the distribution of sea ice and open water. Leads in the sea ice produce strong convective fluxes of sensible and latent heat and release aerosol particles into the atmosphere. They increase the occurrence of clouds and modify the structure and characteristics of the atmospheric boundary layer (ABL) and thereby influence the Arctic climate.
In the course of this study, aircraft measurements were performed over the western Arctic Ocean as part of the PAMARCMIP 2012 campaign of the Alfred Wegener Institute for Polar and Marine Research (AWI). Backscatter from aerosols and clouds within the lower troposphere and the ABL was measured with the nadir-pointing Airborne Mobile Aerosol Lidar (AMALi), and dropsondes were launched to obtain profiles of meteorological variables. Furthermore, in situ measurements of aerosol properties, meteorological variables and turbulence were part of the campaign. The measurements covered a broad range of atmospheric and sea ice conditions.
In this thesis, properties of the ABL over Arctic sea ice, with a focus on the influence of open leads, are studied based on the data from the PAMARCMIP campaign. The height of the ABL is determined by different methods applied to dropsonde and AMALi backscatter profiles. ABL heights are compared for different flights representing different conditions of the atmosphere and of sea ice and open-water influence. The different criteria for ABL height show large variation in their agreement with each other, depending on the characteristics of the ABL and its history. It is shown that ABL height determination from lidar backscatter by methods commonly used under mid-latitude conditions is applicable to the Arctic ABL only under certain conditions. Aerosol or clouds within the ABL are needed as a tracer for ABL height detection from backscatter. Hence an aerosol source close to the surface is necessary, which is typically present under the influence of open water and therefore under convective conditions. However, it is not always possible to distinguish residual layers from the actual ABL. Stable boundary layers are generally difficult to detect.
To illustrate the complexity of the Arctic ABL and processes therein, four case studies are analyzed each of which represents a snapshot of the interplay between atmosphere and underlying sea ice or water surface. Influences of leads and open water on the aerosol and clouds within the ABL are identified and discussed. Leads are observed to cause the formation of fog and cloud layers within the ABL by humidity emission. Furthermore they decrease the stability and increase the height of the ABL and consequently facilitate entrainment of air and aerosol layers from the free troposphere.
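One widely used lidar criterion of the kind discussed above is the gradient method, which places the ABL top at the steepest decrease of the backscatter profile (aerosol-laden ABL below, clean free troposphere above). The sketch below uses a synthetic profile, not AMALi data, and illustrates why the method needs an aerosol tracer in the ABL: without the backscatter step, there is no gradient to detect.

```python
# Gradient method for ABL height (illustrative synthetic profile, not AMALi
# data): the ABL top is taken at the most negative vertical gradient of the
# attenuated backscatter.

heights = [50 * i for i in range(1, 41)]            # m, 50 m bins up to 2 km
# Synthetic backscatter: well mixed and aerosol-rich up to ~600 m, a sharp
# drop across the ABL top, low values in the clean free troposphere.
backscatter = [2.0 if h < 600 else (1.2 if h < 700 else 0.2) for h in heights]

gradients = [
    (backscatter[i + 1] - backscatter[i]) / (heights[i + 1] - heights[i])
    for i in range(len(heights) - 1)
]
i_min = min(range(len(gradients)), key=lambda i: gradients[i])
abl_top = 0.5 * (heights[i_min] + heights[i_min + 1])  # midpoint of the step
print(f"ABL height estimate: {abl_top:.0f} m")
```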
Users of violent media often admit that they consume fictional violent media, yet claim at the same time that this does not influence their behaviour outside the media context. They argue that they can easily distinguish between things learned in a fictional context and things learned in reality. In contrast to these claims, meta-analyses show medium-sized effects for the relationship between violent media consumption and aggressive behaviour. These findings can only be explained if media users also apply violent learning experiences outside the media context. One process that links learning experiences within the media context to behaviour in the real world is desensitization, often defined as a reduction of negative affect towards violence. Four experiments were conducted to investigate the desensitization process. The first hypothesis examined in this thesis was that the more frequently people consume violent media, the less negative affect they show towards images of real violence. However, this evaluation was assumed to be restricted to depictions of real violence and not to extend to images without violent content that elicit negative affect. The second hypothesis concerned affect during the consumption of media violence. Here it was assumed that particularly people who enjoy violence in the media show less negative affect towards depictions of real violence. The final hypothesis dealt with cognitive desensitization and predicted that violent media consumption leads to a transfer of reactions normally shown towards violent stimuli to originally neutral stimuli.
The first experiment (N = 57) examined whether habitual use of violent media predicts self-reported affect (valence and arousal) towards depictions of real violence and towards non-violent images that elicit negative affect. Habitual use of violent media predicted less negative valence and less general arousal towards both the violent and the non-violent images. The second experiment (N = 103) also examined the relationship between habitual violent media consumption and affective reactions to images of real violence and to negative affect-eliciting images. Affect while viewing violent media was added as a further predictor. Affect towards the images was additionally assessed by psychophysiological measures (valence: corrugator supercilii; arousal: skin conductance response). As before, habitual violent media consumption predicted less self-reported arousal and less negative valence for the violent and the negative non-violent images. The physiological measures replicated this finding. However, a different pattern emerged for affect during the consumption of media violence. People who took greater pleasure in media violence showed reduced responsiveness towards violence on all four measures. Moreover, for three of these four measures (self-reported valence, corrugator supercilii activity and skin conductance response), this relationship was restricted to the violent images, with no or only a small effect on the negative but non-violent images. The third experiment (N = 73) examined affect while participants played a computer game. The game was programmed specifically for this experiment so that individual in-game actions could be related to corrugator supercilii activity, the indicator of negative affect.
The corrugator analysis showed that repeatedly performing aggressive moves led to a decline in the negative affect accompanying those moves. Negative affect during violent moves in turn predicted the affective reaction towards depictions of violent images, but not towards the negative images. The fourth experiment (N = 77) examined cognitive desensitization, involving the development of associations between neutral and aggressive cognitions. Participants played a first-person shooter in either a ship or a city level. The relationship between the neutral constructs (ship/city) and aggressive cognitions was measured with a lexical decision task. Playing the ship or city level led to shorter reaction times for aggressive words when they followed a ship or city prime, respectively. This showed that the neutral concepts contained in the game become linked to aggressive nodes. The results of these four experiments were discussed within a learning-theory framework for conceptualizing desensitization.
Afghanistan and the Region
(2014)
Since 2001, the Afghanistan conflict has had marked effects on the surrounding region: in Pakistan, Kashmir, Xinjiang and the Central Asian republics. These will intensify after the withdrawal of the ISAF troops. At issue are both the cross-border consequences of the two military interventions and the effects of Afghanistan's internal conflicts on the region as a whole. This problem carries considerable conflict potential and deserves greater attention.
Afghanistan and Central Asia
(2014)
In the current processes in Afghanistan, relations between Afghanistan and its neighbours in Central Asia are gaining importance. Their further development will depend on the transformation within Afghanistan on the one hand and on the policies of the Central Asian states on the other. While the drug problem is a complicating factor, there are some encouraging initiatives in the field of economic cooperation.
In the aftermath of the severe flooding in Central Europe in August 2002, a number of changes in flood policies were launched in Germany and other European countries, aiming at improved risk management. The question arises as to whether these changes have already had an impact on the residents' ability to cope with floods, and whether flood-affected private households are now better prepared than they were in 2002. Therefore, computer-aided telephone interviews with private households in Germany that suffered from property damage due to flooding in 2005, 2006, 2010 or 2011 were performed and analysed with respect to flood awareness, precaution, preparedness and recovery. The data were compared to a similar investigation conducted after the flood in 2002.
After the flood in 2002, the level of private precaution taken increased considerably. One contributing factor is that, in general, a larger proportion of people knew that they were at risk of flooding. The best level of precaution was found before the flood events in 2006 and 2011. The main reason for this might be that residents had more experience with flooding than those affected in 2005 or 2010. Yet, overall, flood experience and knowledge did not necessarily result in building retrofitting or flood-proofing measures, which are considered to mitigate damage most effectively. Hence, investments still need to be stimulated in order to reduce future damage more efficiently.
Early warning and emergency responses were substantially influenced by flood characteristics. In contrast to flood-affected people in 2006 or 2011, people affected by flooding in 2005 or 2010 had to deal with shorter lead times and therefore had less time to take emergency measures. Yet, the lower level of emergency measures taken also resulted from the people's lack of flood experience and insufficient knowledge of how to protect themselves. Overall, it was noticeable that these residents suffered from higher losses. Therefore, it is important to further improve early warning systems and communication channels, particularly in hilly areas with rapid-onset flooding.
Using examples from astronomy, Oliver Schwarz shows how the steadily advancing cataloguing of Alexander von Humboldt's manuscripts has changed our assessment of his scientific work, and which current questions in the history of science it has raised.
On his Russian-Siberian journey in 1829, Alexander von Humboldt twice passed through Ust'-Kamenogorsk (today Öskemen in Kazakhstan), the small town on the Irtyš at the southern border of the Russian Empire. Christian Suckow describes Humboldt's brief two-day stay in the town, which was the starting and end point of the excursion to the Zyrjanovsk silver mine in the southwestern Altai and to Baty on the Chinese border.
Pierre-Simon Marquis de Laplace played an eminent role in the scientific life of Alexander von Humboldt. Humboldt had made the acquaintance of the French scholar, twenty years his senior, in Paris in 1798. Eberhard Knobloch's article examines the relationship between these two giants of science, drawing for the first time, among other sources, on unpublished documents: Laplace's four letters to Humboldt, Humboldt's journal, and archival material held in the Archives of the Berlin-Brandenburg Academy of Sciences.
Boolean constraint solving technology has made tremendous progress over the last decade, leading to industrial-strength solvers, for example in the areas of answer set programming (ASP), the constraint satisfaction problem (CSP), propositional satisfiability (SAT) and satisfiability of quantified Boolean formulas (QBF). However, in all these areas there exist multiple solving strategies that work well on different applications; no strategy dominates all others. Therefore, no individual solver shows robust state-of-the-art performance across all kinds of applications. Additionally, the question arises how to choose a well-performing solving strategy for a given application; this is challenging even for solver and domain experts. One way to address this issue is the use of portfolio solvers, that is, sets of different solvers or solver configurations. We present three new automatic portfolio methods: (i) automatic construction of parallel portfolio solvers (ACPP) via algorithm configuration, (ii) solving the NP-hard problem of finding effective algorithm schedules with answer set programming (aspeed), and (iii) a flexible algorithm selection framework (claspfolio2) allowing for fair comparison of different selection approaches. All three methods show improved performance and robustness in comparison to individual solvers on heterogeneous instance sets from many different applications. Since parallel solvers are important to effectively solve hard problems on parallel computation systems (e.g., multi-core processors), we extend all three approaches to be effectively applicable in parallel settings. We conducted extensive experimental studies on different instance sets from ASP, CSP, MaxSAT, operations research (OR), SAT and QBF, which indicate an improvement in the state of the art in solving heterogeneous instance sets. Last but not least, from our experimental studies we deduce practical advice on when to apply which of our methods.
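The scheduling idea behind aspeed can be conveyed with a toy example. The real system encodes this NP-hard optimization in answer set programming; the greedy heuristic and the runtime matrix below are illustrative only. Given per-instance runtimes of three hypothetical solvers and a shared time budget, a schedule of time slices can solve instances that no single solver solves alone within the budget.

```python
# Toy algorithm-scheduling sketch (greedy heuristic; aspeed itself solves the
# exact NP-hard problem with ASP). runtimes[solver][instance] in seconds;
# None = timeout. All numbers are made up for illustration.
runtimes = {
    "A": [3, None, 200, 5, None],
    "B": [50, 10, None, None, 8],
    "C": [None, None, 4, 60, None],
}
BUDGET = 100          # per-instance time budget to split among solvers
instances = range(5)

def solved(schedule, inst):
    """An instance is solved if some solver finishes within its time slice."""
    return any(
        runtimes[s][inst] is not None and runtimes[s][inst] <= slice_
        for s, slice_ in schedule
    )

# Greedy: repeatedly add the (solver, slice) pair that solves the most
# not-yet-solved instances per unit of budget spent.
schedule, remaining = [], BUDGET
while remaining > 0:
    best = None
    for s, times in runtimes.items():
        for t in sorted({v for v in times if v is not None and v <= remaining}):
            gain = sum(
                1 for i in instances
                if not solved(schedule, i) and times[i] is not None and times[i] <= t
            )
            if gain and (best is None or gain / t > best[0]):
                best = (gain / t, s, t)
    if best is None:
        break
    _, s, t = best
    schedule.append((s, t))
    remaining -= t

print("schedule:", schedule)
print("solved:", sum(solved(schedule, i) for i in instances), "of 5")
```

Within the same 100 s budget, the single best solver here ("B") solves only three of the five instances, while the computed schedule solves all five, which is exactly the robustness argument for portfolio methods made above.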
Removal from office and other disciplinary measures against judges in the United States of America
(2014)
These lecture notes are intended as a short introduction to diffusion processes on a domain with a reflecting boundary for graduate students, researchers in stochastic analysis and interested readers. Specific results on stochastic differential equations with reflecting boundaries such as existence and uniqueness, continuity and Markov properties, relation to partial differential equations and submartingale problems are given. An extensive list of references to current literature is included. This book has its origins in a mini-course the author gave at the University of Potsdam and at the Technical University of Berlin in Winter 2013.
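A minimal numerical sketch of the kind of process treated in these notes, under stated assumptions: Euler-Maruyama simulation of Brownian motion reflected at 0 on the half-line, using the simple projection step X → |X| (for driftless Brownian motion this projection agrees in law with the Skorokhod reflection; it is an illustration, not a scheme advocated in the text).

```python
# Reflected Brownian motion on [0, inf): Euler-Maruyama with reflection at 0
# implemented as projection X -> |X|. Illustrative sketch only.
import random

random.seed(7)
dt, n_steps = 1e-3, 5000
x, path_min = 1.0, 1.0
for _ in range(n_steps):
    x += random.gauss(0, dt ** 0.5)   # driftless diffusion increment
    x = abs(x)                        # normal reflection at the boundary 0
    path_min = min(path_min, x)

print(f"X_T = {x:.3f}, minimum over the path = {path_min:.3f}")
```

By construction the simulated path never leaves the half-line, mirroring the defining property of a diffusion with a reflecting boundary.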
Surface displacement at volcanic edifices is related to subsurface processes associated with magma movements, fluid transfer within the volcanic edifice and gravity-driven deformation. Understanding the associated ground displacements is important for the assessment of volcanic hazards. For example, volcanic unrest is often preceded by surface uplift, caused by magma intrusion, and followed by subsidence after the withdrawal of magma. Continuous monitoring of the surface displacement at volcanoes might therefore allow upcoming eruptions to be forecast to some extent. In geophysics, the measured surface displacements allow the parameters of possible deformation sources to be estimated through analytical or numerical modeling. This is one way to improve the understanding of subsurface processes acting at volcanoes. Although the monitoring of volcanoes has improved significantly in recent decades (in terms of technical advancements and the number of monitored volcanoes), the forecasting of volcanic eruptions remains difficult. In this work I contribute towards the understanding of subsurface processes at volcanoes and thus to the improvement of volcanic eruption forecasting. I have investigated the displacement fields of Llaima volcano in Chile and of Tendürek volcano in eastern Turkey using synthetic aperture radar interferometry (InSAR). Through modeling of the deformation sources with the extracted displacement data, it was possible to gain insights into potential subsurface processes at these two volcanoes, which had barely been studied before. The two volcanoes, although very different in origin, composition and geometry, both show a complexity of interacting deformation sources. At Llaima volcano, the InSAR technique was difficult to apply due to the large decorrelation of the radar signal between image acquisitions.
I developed a model-based unwrapping scheme, which allows the production of reliable displacement maps at the volcano that I used for deformation source modeling. The modeling results show significant differences between pre- and post-eruptive magmatic deformation source parameters. I therefore conjecture that two magma chambers exist below Llaima volcano: a deep post-eruptive one and a shallow one, possibly due to the pre-eruptive ascent of magma. Similar reservoir depths at Llaima have been confirmed by independent petrologic studies. These reservoirs are interpreted to be temporally coupled. At Tendürek volcano I found long-term subsidence of the volcanic edifice, which can be described by a large, magmatic, sill-like source that is subject to cooling contraction. The displacement data in conjunction with high-resolution optical images, however, reveal arcuate fractures on the eastern and western flanks of the volcano. These are most likely the surface expressions of concentric ring-faults around the volcanic edifice that accumulate small amounts of slip over a long time. This might be an alternative mechanism for the development of large caldera structures, which have so far been assumed to be generated during large catastrophic collapse events. To investigate the potential subsurface geometry and relation of the two proposed interacting sources at Tendürek, a sill-like magmatic source and ring-faults, I performed a more sophisticated numerical modeling approach. The optimum source geometries show that the size of the sill-like source was overestimated in the simple models and that it is difficult to determine the dip angle of the ring-faults from surface displacement data alone. However, considering physical and geological criteria, a combination of outward-dipping reverse faults in the west and inward-dipping normal faults in the east seems the most likely.
Consequently, the underground structure at Tendürek volcano consists of a small, sill-like, contracting, magmatic source below the western summit crater that causes trapdoor-like faulting along the ring-faults around the volcanic edifice. The magmatic source and the ring-faults are therefore also interpreted to be temporally coupled. In addition, a method for data reduction has been improved. The modeling of subsurface deformation sources requires only a relatively small number of well-distributed InSAR observations at the earth's surface. Satellite radar images, however, consist of several million such observations. The large amount of data therefore needs to be reduced by several orders of magnitude for source modeling, to save computation time and increase model flexibility. I introduce a model-based subsampling approach suited in particular to heterogeneously distributed observations. It allows fast calculation of the data-error variance-covariance matrix, supports the modeling of time-dependent displacement data and is therefore an alternative to existing methods.
Metabolic systems tend to exhibit steady states that can be measured in terms of their concentrations and fluxes. These measurements can be regarded as a phenotypic representation of all the complex interactions and regulatory mechanisms taking place in the underlying metabolic network. Such interactions determine the system's response to external perturbations and are responsible, for example, for its asymptotic stability or for oscillatory trajectories around the steady state. However, determining these perturbation responses in the absence of fully specified kinetic models remains an important challenge of computational systems biology. Structural kinetic modeling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a parameterised representation of the system's Jacobian matrix in which the model parameters encode information about the enzyme-metabolite interactions. Stability criteria can be derived by generating a large number of structural kinetic models (SK-models) with randomly sampled parameter sets and evaluating the resulting Jacobian matrices. The parameter space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Because the sampled parameters are equivalent to the elasticities used in metabolic control analysis (MCA), the results are easy to interpret biologically. In this project, the SKM framework was extended by several novel methodological improvements. These improvements were evaluated in a simulation study using a set of small example pathways with simple Michaelis-Menten rate laws. Afterwards, a detailed analysis of the dynamic properties of the neuronal TCA cycle was performed in order to demonstrate how the new insights obtained in this work could be used for the study of complex metabolic systems.
The first improvement was achieved by examining the biological feasibility of the elasticity combinations created during Monte Carlo sampling. Using a set of small example systems, the findings showed that the majority of sampled SK-models would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion was formulated that excludes such infeasible models, and applying this criterion changed the conclusions of the SKM experiment. The second improvement of this work was the application of supervised machine-learning approaches to the analysis of SKM experiments. So far, SKM experiments have focused on the detection of individual enzymes in order to identify single reactions important for maintaining stability or oscillatory trajectories. In this work, this approach was extended by demonstrating how SKM enables the detection of ensembles of enzymes or metabolites that act together in an orchestrated manner to coordinate the pathway's response to perturbations. In doing so, stable and unstable states served as class labels, and classifiers were trained to detect elasticity regions associated with stability and instability. Classification was performed using decision trees and relevance vector machines (RVMs). The decision trees produced good classification accuracy in terms of model bias and generalizability. RVMs outperformed decision trees when applied to small models, but encountered severe problems when applied to larger systems because of their high runtime requirements. The decision tree rulesets were analysed statistically and individually in order to explore the role of individual enzymes or metabolites in controlling the system's trajectories around steady states. The third improvement of this work was the establishment of a relationship between the SKM framework and the related field of MCA.
In particular, it was shown how the sampled elasticities could be converted to flux control coefficients, which were then investigated for their predictive information content in classifier training. After evaluation on the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle with respect to the intrinsic mechanisms responsible for their stability or instability. The findings showed that several elasticities were jointly coordinated to control stability and that the main source of potential instabilities was mutations in the enzyme alpha-ketoglutarate dehydrogenase.
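The core SKM idea, sampling parameterised Jacobians and classifying each sample as stable or unstable, can be illustrated with a toy two-metabolite system. The Jacobian below and its feedback term are invented for illustration and are not the models analysed in this work; for a 2x2 matrix, asymptotic stability reduces to the trace/determinant criterion:

```python
import random

def sample_stability(n_samples=10_000, seed=1):
    """Toy SKM-style Monte Carlo for a hypothetical two-metabolite
    pathway with an activating feedback: sample normalised
    elasticities, build the 2x2 Jacobian J = [[-e1, f], [1, -e2]],
    and count the fraction of asymptotically stable models.
    A 2x2 matrix is stable iff trace < 0 and determinant > 0."""
    rng = random.Random(seed)
    stable = 0
    for _ in range(n_samples):
        e1, e2 = rng.random(), rng.random()  # sampled elasticities in (0, 1)
        f = 2.0 * rng.random()               # illustrative feedback strength
        trace = -e1 - e2
        det = e1 * e2 - f
        if trace < 0 and det > 0:
            stable += 1
    return stable / n_samples

print(sample_stability())  # fraction of sampled toy models that are stable
```

In a real SKM study the Jacobian comes from the stoichiometry and steady-state fluxes, its dimension is much larger, and stability is judged from the eigenvalue spectrum rather than a closed-form criterion.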
Driven by the ever-growing flood of digital information, more and more applications rely on inexpensive cloud storage services. The number of providers offering such services has increased considerably in recent years. To find the right provider for an application, various criteria must be weighed individually. This study presents and compares a selection of providers of established basic storage services. For the comparison, criteria are extracted that apply to each of the examined providers and thus allow an assessment that is as objective as possible. These include, among others, cost, legal aspects, security, performance and the interfaces provided. The presented criteria can be used to evaluate cloud storage providers with respect to a concrete use case.
ANG-2 for quantitative Na+ determination in living cells by time-resolved fluorescence microscopy
(2014)
Sodium ions (Na+) play an important role in a plethora of cellular processes, which are complex and partly still unexplored. To investigate these processes and quantify intracellular Na+ concentrations ([Na+]i), two-photon coupled fluorescence lifetime imaging microscopy (2P-FLIM) was performed in the salivary glands of the cockroach Periplaneta americana. For this, the novel Na+-sensitive fluorescent dye Asante NaTRIUM Green-2 (ANG-2) was evaluated, both in vitro and in situ. In this context, absorption coefficients, fluorescence quantum yields and 2P action cross-sections were determined for the first time. ANG-2 was 2P-excitable over a broad spectral range and displayed fluorescence in the visible spectral range. Although the fluorescence decay behaviour of ANG-2 was triexponential in vitro, its analysis indicates a Na+ sensitivity appropriate for recordings in living cells. The Na+ sensitivity was reduced in situ, but the biexponential fluorescence decay behaviour could be successfully analysed in terms of quantitative [Na+]i recordings. Physiological 2P-FLIM measurements thus revealed a dopamine-induced [Na+]i rise in cockroach salivary gland cells, which was dependent on Na+-K+-2Cl− cotransporter (NKCC) activity. It was concluded that ANG-2 is a promising new sodium indicator applicable to diverse biological systems.
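The multi-exponential decay analysis underlying such lifetime-based [Na+]i readouts can be sketched as follows. The biexponential model and the amplitude-weighted mean lifetime are standard FLIM quantities; all parameter values below are illustrative, not measured values from this study:

```python
import math

def biexp_decay(t, a1, tau1, a2, tau2):
    """Biexponential fluorescence decay model:
    I(t) = a1 * exp(-t / tau1) + a2 * exp(-t / tau2)."""
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

def amplitude_weighted_lifetime(a1, tau1, a2, tau2):
    """Amplitude-weighted mean lifetime
    tau_m = (a1*tau1 + a2*tau2) / (a1 + a2),
    the scalar that is typically calibrated against the ion concentration."""
    return (a1 * tau1 + a2 * tau2) / (a1 + a2)

# hypothetical decay components (lifetimes in nanoseconds)
tau_m = amplitude_weighted_lifetime(a1=0.6, tau1=1.2, a2=0.4, tau2=2.6)
print(round(tau_m, 2))  # → 1.76
```

In practice the model parameters are obtained by fitting the measured photon-arrival histogram (convolved with the instrument response function), and a calibration curve maps tau_m to [Na+]i.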
Modern microscopic techniques following the stochastic motion of labelled tracer particles have uncovered significant deviations from the laws of Brownian motion in a variety of animate and inanimate systems. Such anomalous diffusion can have different physical origins, which can be identified from careful data analysis. In particular, single particle tracking provides the entire trajectory of the traced particle, which allows one to evaluate different observables to quantify the dynamics of the system under observation. We here provide an extensive overview of different popular anomalous diffusion models and their properties. We pay special attention to their ergodic properties, highlighting the fact that in several of these models the long-time-averaged mean squared displacement shows a distinct disparity with the regular, ensemble-averaged mean squared displacement. In these cases, data obtained from time averages cannot be interpreted by the standard theoretical results for the ensemble averages. We therefore provide a comparison of the main properties of the time-averaged mean squared displacement and its statistical behaviour in terms of the scatter of the amplitudes between the time averages obtained from different trajectories. We especially demonstrate how anomalous dynamics may be identified for systems which, at first sight, appear to be Brownian. Moreover, we discuss the ergodicity breaking parameters for the different anomalous stochastic processes and showcase the physical origins of the various behaviours. This Perspective is intended as a guidebook for both experimentalists and theorists working on systems which exhibit anomalous diffusion.
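The two mean-squared-displacement estimators compared above can be probed directly on simulated trajectories. The sketch below uses an ordinary, ergodic random walk, for which time and ensemble averages agree; for an ageing process such as a heavy-tailed continuous-time random walk the two estimators would diverge:

```python
import random

def trajectory(n_steps, seed):
    """Unbiased 1-D random walk (a discrete stand-in for Brownian motion)."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_steps):
        x += rng.choice((-1, 1))
        path.append(x)
    return path

def time_averaged_msd(path, lag):
    """Time-averaged MSD at one lag, via a sliding window along one trajectory."""
    n = len(path)
    return sum((path[i + lag] - path[i]) ** 2 for i in range(n - lag)) / (n - lag)

def ensemble_msd(paths, lag):
    """Ensemble-averaged MSD at one lag, over many independent trajectories."""
    return sum((p[lag] - p[0]) ** 2 for p in paths) / len(paths)

paths = [trajectory(1000, seed) for seed in range(200)]
lag = 10
ta = sum(time_averaged_msd(p, lag) for p in paths) / len(paths)
ea = ensemble_msd(paths, lag)
print(round(ta, 2), round(ea, 2))  # both estimates should be close to lag = 10
```

For this walk the theoretical MSD at a given lag equals the lag itself, so the scatter of the individual time averages around that value is one simple diagnostic of ergodicity.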
The Jewish Reform movement changed not only the liturgical course of the service but also affected the synagogue building itself, into which organ, choir and pulpit were now integrated as new elements. After the Jacobstempel in Seesen (1810), the new ideas were adopted in Berlin and other cities, so that a distinct typology of Reform synagogues emerged. By the end of the 19th century, synagogues clearly represented the Jewish communities' will to integrate. The widely noted competition entries for new metropolitan synagogue buildings showed the various possibilities for incorporating organ and choir into the interior. The examples presented thus exemplify the general development of synagogue architecture and, in particular, the diverse forms of the "organ synagogue", and show how the musically through-composed liturgy corresponded with the new "composition" of the synagogue space.
Arsenic-containing hydrocarbons (AsHC) constitute one group of arsenolipids that have been identified in seafood. In this first in vivo toxicity study for AsHCs, we show that AsHCs exert toxic effects in Drosophila melanogaster in a concentration range similar to that of arsenite. In contrast to arsenite, however, AsHCs cause developmental toxicity in the late developmental stages of Drosophila melanogaster. This work illustrates the need for a full characterisation of the toxicity of AsHCs in experimental animals to finally assess the risk to human health related to the presence of arsenolipids in seafood.
Above all, empty. And yet the Negev is dotted with settlements and towns. Around 800,000 people live here from agriculture, heavy industry, tourism and the military. As they did millennia ago, Bedouins still settle there. Netanyahu's government wants to resettle 40,000 of them. In December 2013 this led to a "Day of Rage" of Bedouins, Arabs and Palestinians.
Ausprägungen räumlicher Identität in ehemaligen sudetendeutschen Gebieten der Tschechischen Republik
(2014)
The Czech border region is one of the regions in Europe most severely affected by the upheavals in its pre-existing population structure in the aftermath of the Second World War. The forced expulsion of the majority of the resident population was followed by resettlement through a wide variety of immigrant groups as well as, in part, long-lasting fluctuations of the inhabitants. The stabilisation of the population then took place under the sign of the socialist social and economic order, which lastingly shaped the way of life and spatial perception of the new inhabitants. The opening of the border in 1989, the political transformation and the integration of the Czech Republic into the European Union brought new demographic and socio-economic developments. However, they also created the conditions for a new and open engagement with the specific history of the former Sudetenland and with the state of present-day society in this area.
Using two example regions, this thesis examines which conceptions of and attachments to space exist among the population now living in the formerly Sudeten German areas, and how the differing spatial-structural conditions influence them. Particular attention is paid to the social component of the formation of spatial identity, that is, to the role of attributions of meaning to spatial elements in social communication and interaction. This appears particularly relevant in an area characterised by a certain heterogeneity of its inhabitants with regard to their ethnic, cultural or biographical background. Finally, the thesis determines what impulses a pronounced spatial identity may, under certain circumstances, provide for the development of the area.
This publication presents the results of a survey of 1,247 students at the University of Potsdam on their media usage habits. Of particular interest was the use of digital media in the context of their studies. The study is based on a series of similar research projects of the Karlsruhe Institute of Technology (KIT) conducted at several German universities.
Ausências Brasil
(2014)
Murdered by the military dictatorship and disappeared without a trace: this exhibition draws on photo albums of the family members of Brazilians who fell victim to the systematic repression, torture and abduction of the Brazilian military dictatorship (1964–1985): workers, urban guerrillas, students, academics, entire families.
Automated location of seismic events is a very important task in microseismic monitoring operations as well as for local and regional seismic monitoring. Since microseismic records are generally characterised by low signal-to-noise ratios, such methods are required to be noise robust and sufficiently accurate. Most standard automated location routines are based on the automated picking, identification and association of the first arrivals of P and S waves and on the minimization of the residuals between theoretical and observed arrival times of the considered seismic phases. Although current methods can accurately pick P onsets, the automatic picking of the S onset is still problematic, especially when the P coda overlaps the S-wave onset. In this thesis I developed a picking-free automated method based on the Short-Term-Average/Long-Term-Average (STA/LTA) traces at different stations as observed data. I used the STA/LTA of several characteristic functions in order to increase the sensitivity to the P and S waves. For the P phases we use the STA/LTA traces of the vertical energy function, while for the S phases we use the STA/LTA traces of the horizontal energy trace and, further, a more optimized characteristic function obtained using principal component analysis. The orientation of the horizontal components can be retrieved by a robust, linear approach of waveform comparison between stations within a network, using seismic sources outside the network (chapter 2). To locate a seismic event, we scan the space of possible hypocentral locations and origin times, and stack the STA/LTA traces along the theoretical arrival time surface for both P and S phases. Iterating this procedure on a three-dimensional grid, we retrieve a multidimensional matrix whose absolute maximum corresponds to the spatial and temporal coordinates of the seismic event.
Location uncertainties are then estimated by perturbing the STA/LTA parameters (i.e. the lengths of both the long and short time windows) and relocating each event several times. In order to test the location method, I first applied it to a set of 200 synthetic events and then to two different real datasets. The first is related to mining-induced microseismicity in a coal mine in northern Germany (chapter 3). In this case we successfully located 391 microseismic events with magnitudes between 0.5 and 2.0 Ml. To further validate the location method, I compared the retrieved locations with those obtained by a manual picking procedure. The second dataset consists of a pilot application performed in the Campania-Lucania region (southern Italy) using a 33-station seismic network (Irpinia Seismic Network) with an aperture of about 150 km (chapter 4). We located 196 crustal earthquakes (depth < 20 km) in the magnitude range 1.1 < Ml < 2.7. A subset of these locations was compared with accurate locations retrieved by a manual location procedure based on a double-difference technique. In both cases the results indicate good agreement with the manual locations. Moreover, the waveform stacking location method proves noise robust and performs better than classical location methods based on the automatic picking of the P- and S-wave first arrivals.
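The STA/LTA characteristic function at the heart of the method can be sketched as follows. Window lengths and the synthetic trace are illustrative; the thesis applies such functions to vertical and horizontal energy traces of real records:

```python
def sta_lta(trace, n_sta, n_lta):
    """Classic STA/LTA characteristic function on the signal energy:
    ratio of a short-term average to a long-term average.  Entries
    before the first full LTA window are left at zero."""
    energy = [v * v for v in trace]
    out = [0.0] * len(trace)
    for i in range(n_lta, len(trace)):
        sta = sum(energy[i - n_sta:i]) / n_sta
        lta = sum(energy[i - n_lta:i]) / n_lta
        out[i] = sta / lta if lta > 0 else 0.0
    return out

# synthetic record: unit-amplitude "noise", then a 5x stronger arrival at sample 500
trace = [1.0, -1.0] * 250 + [5.0, -5.0] * 50
cf = sta_lta(trace, n_sta=10, n_lta=100)
print(max(cf[:500]), round(max(cf), 2))  # → 1.0 7.35
```

During the quiet part the ratio stays at 1; at the onset the short window fills with high-energy samples before the long window does, so the ratio jumps, which is exactly the behaviour the stacking procedure exploits.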
This contribution deals with the design of a teaching/learning scenario for polyvalent introductory lectures in the natural sciences. The scenario combines classical lectures with virtual elements such as online courses, online forums and audience response systems, as well as small-group work with approaches of problem-based learning. The aim is to level students' foundational knowledge, to foster working in groups and to teach problem-based learning.
This text provides an inventory of the e-learning activities that have taken place so far at the University of Potsdam; it also serves to identify potentials and, in a next step, to derive ideas and proposals for a university-wide e-learning strategy. The aim of the inventory is to present the relevant information, to situate the University of Potsdam within the e-learning landscape of higher education and to assess the state of development.
Bewegunglesen.com
(2014)
bewegunglesen.com (awarded silver at the Best of Swiss Web Awards 2013) is an e-learning tool that offers physical education teachers and students a web-based, interactive opportunity to practise movement analysis and the criteria-guided improvement of skills. Movement sequences and their core movements are taught in a practice-oriented way appropriate to each school level. In addition, teaching videos can be uploaded, edited, enriched with graphics and facts, and shared within the community. From the clips, exercises and tests with assessment criteria for the movement sequence can be compiled, which are then evaluated automatically.
Revelations about large-scale NSA eavesdropping on the Federal Republic, which did not stop even at the Chancellor's mobile phone, have put not only the question of German-American relations back on the agenda with new intensity. Does this mass espionage mean that fundamental rights in Germany can simply be suspended by foreign intelligence services? Or is it the harbinger of an emerging hegemonic conflict between the EU and the USA?
Biological materials have always been used by humans because of their remarkable properties. This is surprising since these materials are formed under physiological conditions and from commonplace constituents. Nature thus not only provides us with inspiration for designing new materials but also teaches us how to use soft molecules to tune interparticle and external forces and to structure and assemble simple building blocks into functional entities. Magnetotactic bacteria and their chains of magnetosomes represent a striking example of such an accomplishment, where a very simple living organism controls the properties of inorganics via organics at the nanometer scale to form a single magnetic dipole that orients the cell along the Earth's magnetic field lines. My group has developed biological and bio-inspired research based on these bacteria. My research, at the interface between chemistry, materials science, physics and biology, focuses on how biological systems synthesize, organize and use minerals. We apply the design principles to sustainably form hierarchical materials with controlled properties that can be used, e.g., as magnetically directed nanodevices for applications in sensing, actuation and transport. In this thesis, I first present how magnetotactic bacteria intracellularly form magnetosomes and assemble them in chains. I developed an assay in which cells can be switched between magnetic and non-magnetic states. This made it possible to study the dynamics of magnetosome and magnetosome chain formation. We found that the magnetosomes nucleate within minutes, whereas chains assemble within hours. Magnetosome formation necessitates iron uptake as ferrous or ferric ions. The transport of the ions within the cell leads to the formation of a ferritin-like intermediate, which is subsequently transported to and transformed within the magnetosome organelle into a ferrihydrite-like precursor. Finally, magnetite crystals nucleate and grow to their mature dimensions.
In addition, I show that the magnetosome assembly displays hierarchically ordered nano- and microstructures over several levels, enabling the coordinated alignment and motility of entire populations of cells. The magnetosomes are indeed composed of structurally pure magnetite. The organelles are partly composed of proteins, whose role is crucial for the properties of the magnetosomes. As an example, we showed how the protein MmsF is involved in the control of magnetosome size and morphology. We further showed by 2D X-ray diffraction that the magnetosome particles are aligned along the same direction in the magnetosome chain. We then show how the magnetic properties of the nascent magnetosome influence the alignment of the particles, and how the proteins MamJ and MamK coordinate this assembly. We propose a theoretical approach which suggests that biological forces are more important than physical ones for chain formation. All these studies thus show how magnetosome formation and organization are under strict biological control, which is associated with unprecedented material properties. Finally, we show that the magnetosome chain enables the cells to find their preferred oxygen conditions when a magnetic field is present. The synthetic part of this work shows how understanding the design principles of magnetosome formation enabled me to perform biomimetic synthesis of magnetite particles within the highly desired size range of 25 to 100 nm. Nucleation and growth of such particles are based on the aggregation of iron colloids termed primary particles, as imaged by cryo-high-resolution TEM. I show how additives influence magnetite formation and properties. In particular, MamP, a so-called magnetochrome protein involved in magnetosome formation in vivo, enables the in vitro formation of magnetite nanoparticles exclusively from ferrous iron by controlling the redox state of the process.
Negatively charged additives, such as MamJ, retard magnetite nucleation in vitro, probably by interacting with the iron ions. Other additives, such as polyarginine, can be used to control the colloidal stability of stable single-domain-sized nanoparticles. Finally, I show how we can “glue” magnetic nanoparticles together to form propellers that can be actuated and swim with the help of external magnetic fields. We propose a simple theory to explain the observed movement. We can use this theoretical framework to design experimental conditions for sorting the propellers by size, and we effectively confirm this prediction experimentally. Thereby, we could image propellers with sizes down to 290 nm in their longer dimension, much smaller than anything realized so far.
In the field of disk-based parallel database management systems, there exists a great variety of solutions based on a shared-storage or a shared-nothing architecture. In contrast, main memory-based parallel database management systems are dominated solely by the shared-nothing approach, as it preserves the in-memory performance advantage by processing data locally on each server. We argue that this unilateral development is going to cease due to the combination of the following three trends: a) Today's network technology features remote direct memory access (RDMA) and narrows the performance gap between accessing main memory inside a server and accessing that of a remote server to a single order of magnitude, or even below. b) Modern storage systems scale gracefully, are elastic, and provide high availability. c) A modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory. Exploiting these characteristics in the context of a main-memory parallel database management system is desirable. The advent of RDMA-enabled network technology makes the creation of a parallel main memory DBMS based on a shared-storage approach feasible.
This thesis describes building a columnar database on shared main memory-based storage. The thesis discusses the resulting architecture (Part I), the implications on query processing (Part II), and presents an evaluation of the resulting solution in terms of performance, high-availability, and elasticity (Part III).
In our architecture, we use Stanford's RAMCloud as shared-storage, and the self-designed and developed in-memory AnalyticsDB as relational query processor on top. AnalyticsDB encapsulates data access and operator execution via an interface which allows seamless switching between local and remote main memory, while RAMCloud provides not only storage capacity, but also processing power. Combining both aspects allows pushing-down the execution of database operators into the storage system. We describe how the columnar data processed by AnalyticsDB is mapped to RAMCloud's key-value data model and how the performance advantages of columnar data storage can be preserved.
The combination of fast network technology and the possibility to execute database operators in the storage system opens the discussion of site selection. We construct a system model that allows the estimation of operator execution costs in terms of network transfer, data processed in memory, and wall time. This can be used for database operators that work on one relation at a time - such as a scan or materialize operation - to discuss the site selection problem (data pull vs. operator push). Since a database query translates to the execution of several database operators, it is possible that the optimal site selection varies per operator. For the execution of a database operator that works on two (or more) relations at a time, such as a join, the system model is enriched by additional factors such as the chosen algorithm (e.g. Grace vs. Distributed Block Nested Loop Join vs. Cyclo-Join), the data partitioning of the respective relations and their overlap, as well as the permitted resource allocation.
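For a single-relation operator such as a selective scan, the pull-vs-push trade-off can be sketched with a back-of-the-envelope cost model. All names, formulas and throughput numbers here are illustrative assumptions, not the calibrated system model of the thesis:

```python
def cheaper_site(rows, row_bytes, selectivity,
                 net_bytes_per_s, scan_bytes_per_s):
    """Toy cost model for a selective scan over one relation:
    'data pull' ships the full relation over the network and scans
    locally; 'operator push' scans inside the storage node and
    ships only the selected rows back."""
    total = rows * row_bytes
    pull = total / net_bytes_per_s + total / scan_bytes_per_s
    push = total / scan_bytes_per_s + selectivity * total / net_bytes_per_s
    return ("pull" if pull < push else "push"), pull, push

# hypothetical workload: 10M rows of 100 B, 1% selectivity,
# 5 GB/s network vs. 20 GB/s in-memory scan throughput
choice, pull_s, push_s = cheaper_site(
    rows=10_000_000, row_bytes=100, selectivity=0.01,
    net_bytes_per_s=5e9, scan_bytes_per_s=20e9)
print(choice)  # → push
```

With a highly selective predicate, pushing the operator avoids shipping the full relation; as the selectivity approaches 1, the network term dominates both plans and pulling the data becomes competitive again.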
We present an evaluation on a cluster with 60 nodes where all nodes are connected via RDMA-enabled network equipment. We show that query processing performance is about 2.4x slower if everything is done via the data pull operator execution strategy (i.e. RAMCloud is being used only for data access) and about 27% slower if operator execution is also supported inside RAMCloud (in comparison to operating only on main memory inside a server without any network communication at all). The fast-crash recovery feature of RAMCloud can be leveraged to provide high-availability, e.g. a server crash during query execution only delays the query response for about one second. Our solution is elastic in a way that it can adapt to changing workloads a) within seconds, b) without interruption of the ongoing query processing, and c) without manual intervention.
This work introduces concepts and corresponding tool support to enable a complementary approach to dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid such tedious work, the literature recommends steering clear of unexpected recovery demands by following a structured and disciplined approach, which consists of the application of various best practices, including working on only one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying the recommended practices selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications: it is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools has been accompanied by regular performance and usability tests.
In addition, this work investigates whether the proposed tools affect programmers' performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated-measures setup, the study examined the effect of providing CoExist on programming performance. The analysis of 88 hours of programming suggests that built-in recovery support as provided by CoExist has a positive effect on programming performance in exploratory programming tasks.
Entrepreneurship is known to be a main driver of economic growth; hence, governments have an interest in supporting and promoting entrepreneurial activities. Start-up subsidies, which have been analyzed extensively, aim only at mitigating a lack of financial capital. However, some entrepreneurs also lack human, social, and managerial capital. One way to address these shortcomings is to subsidize coaching programs for entrepreneurs. However, theoretical and empirical evidence on business coaching and on programs subsidizing it is scarce. This dissertation gives an extensive overview of coaching and is the first empirical study for Germany to analyze the effects of coaching programs on their participants. The theoretical part of the dissertation describes the process of a business start-up and discusses how, and at which stage of the company's development, coaching can influence entrepreneurial success. The concept of coaching is compared to other non-monetary types of support such as training, mentoring, consulting, and counseling. Furthermore, national and international support programs are described; most have either no effects or small positive ones, although there is little quantitative evidence in the international literature. The empirical part of the dissertation examines the effectiveness of coaching by evaluating two German coaching programs that support entrepreneurs via publicly subsidized coaching sessions. One program targets entrepreneurs who were employed before becoming self-employed, whereas the other targets formerly unemployed entrepreneurs. The analysis is based on a quantitative and a qualitative dataset. The qualitative data were gathered in intensive one-on-one interviews with coaches and entrepreneurs and give detailed insight into the coaching topics, duration, process, and effectiveness, as well as the thoughts of coaches and entrepreneurs.
The quantitative data include information on 2,936 German-based entrepreneurs. Using propensity score matching, the success of participants in the two coaching programs is compared with that of adequate groups of non-participants. In contrast to many other studies, personality traits are also observed and controlled for in the matching process. The results show that only the program for formerly unemployed entrepreneurs has small positive effects: participants have a higher probability of surviving in self-employment and of hiring employees than matched non-participants. In contrast, the program for formerly employed individuals has negative effects: compared to individuals who did not participate in the coaching program, participants have a lower probability of staying in self-employment, a lower net income, fewer employees, and lower life satisfaction. There are several reasons for these differing results. First, formerly unemployed individuals have more basic coaching needs, which coaches can satisfy, whereas formerly employed individuals have more complex business problems that cannot easily be solved by a coaching intervention. Second, the analysis reveals that formerly employed individuals are generally quite successful; it is easier to increase the success of formerly unemployed individuals, who start from a lower base level of success. An effect heterogeneity analysis shows that coaching effectiveness differs by region: coaching for previously unemployed entrepreneurs is especially useful in regions with poor labor market conditions. In summary, and in line with previous literature, coaching is found to have little effect on the success of entrepreneurs. The previous employment status, the characteristics of the entrepreneur, and the regional labor market conditions play a crucial role in the effectiveness of coaching.
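The matching step of propensity score matching can be sketched as follows. This is a toy illustration, not the dissertation's analysis: in the study, the propensity scores would be estimated from observed characteristics (including personality traits), whereas here they are given directly, and all numbers are made up.

```python
def match_and_compare(participants, controls):
    """participants/controls: lists of (propensity_score, outcome) pairs.

    Matches each participant to the non-participant with the closest
    propensity score (nearest neighbour, with replacement) and returns
    the average outcome difference, i.e. the average treatment effect
    on the treated (ATT).
    """
    diffs = []
    for p_score, p_outcome in participants:
        # Nearest-neighbour match on the propensity score.
        _, c_outcome = min(controls, key=lambda c: abs(c[0] - p_score))
        diffs.append(p_outcome - c_outcome)
    return sum(diffs) / len(diffs)

# Illustrative data: outcome 1 = survived in self-employment, 0 = did not.
participants = [(0.80, 1), (0.60, 1), (0.40, 0)]
controls     = [(0.81, 1), (0.59, 0), (0.35, 0), (0.10, 1)]
att = match_and_compare(participants, controls)
```

Comparing matched pairs rather than raw group means is what lets the study attribute outcome differences to program participation rather than to pre-existing differences between the groups.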
In conclusion, coaching needs to be well tailored to the individual and applied thoroughly. Therefore, governments should design and provide coaching programs only after due consideration.