The occurrence of earthquakes is characterized by a high degree of spatiotemporal complexity. Although numerous patterns, e.g. fore- and aftershock sequences, are well known, the underlying mechanisms are not observable and thus not understood. Because the recurrence times of large earthquakes are usually decades or centuries, the number of such events in corresponding data sets is too small to draw conclusions with reasonable statistical significance. Therefore, the present study combines numerical modeling and the analysis of real data in order to unveil the relationships between physical mechanisms and observational quantities. The key hypothesis is the validity of the so-called "critical point concept" for earthquakes, which assumes that large earthquakes occur as phase transitions in a spatially extended many-particle system, similar to percolation models. New concepts are developed to detect critical states in simulated and in natural data sets. The results indicate that important features of seismicity, like the frequency-size distribution and the temporal clustering of earthquakes, depend on frictional and structural fault parameters. In particular, the degree of quenched spatial disorder (the "roughness") of a fault zone determines whether large earthquakes occur quasiperiodically or in a more clustered fashion. This illustrates the power of numerical models to identify the regions of parameter space that are relevant for natural seismicity. The critical point concept is verified for both synthetic and natural seismicity in terms of a critical state that precedes a large earthquake: a gradual roughening of the (unobservable) stress field leads to a scale-free (observable) frequency-size distribution. Furthermore, a growth of the spatial correlation length and an acceleration of the seismic energy release prior to large events are found. The predictive power of these precursors is, however, limited. Rather than forecasting the time, location, and magnitude of individual events, their use as one contribution to a broad multiparameter approach is encouraging.
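The frequency-size distribution mentioned above is commonly described by the Gutenberg-Richter law, log10 N(>=M) = a - b*M. As a purely illustrative sketch (not code from the thesis), the b-value of a catalogue can be estimated with the classical Aki maximum-likelihood formula; the variable names and the synthetic catalogue below are assumptions for demonstration only.

import numpy as np

def gutenberg_richter_b(magnitudes, m_c):
    """Aki (1965) maximum-likelihood b-value of log10 N(>=M) = a - b*M."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_c]                               # keep only events above completeness
    return np.log10(np.e) / (m.mean() - m_c)

# Synthetic catalogue drawn from an exponential (b = 1) magnitude law:
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=1.0 / np.log(10), size=5000)
print(gutenberg_richter_b(mags, m_c=2.0))         # should be close to 1.0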
Prospective physics teachers are expected to acquire content knowledge and pedagogical content knowledge during their studies that enables them to design instruction conducive to learning. However, there is little empirical evidence on how this knowledge develops over the course of their studies and whether it contributes to the formation of professional competencies for teaching practice. To be able to make such statements about effects, instruments must be developed that allow a valid interpretation of test scores. Based on instruments developed in the Profile-P+ project, this article presents validity analyses of the longitudinal development of the professional knowledge of pre-service physics teachers over the course of the bachelor's programme, and of their abilities to plan and reflect on physics lessons and to explain physics content before and after the practical semester. In addition to knowledge tests, standardized performance tests were used. The available results suggest that the collected scores can be interpreted in the sense of statements about effects.
Water management and environmental protection are vulnerable to extreme low flows during streamflow droughts. During the last decades, summer runoff and low flows have decreased in most rivers of Central Europe. Discharge projections agree that a future decrease in runoff is likely for catchments in Brandenburg, Germany. Depending on the first-order controls on low flows, different adaptation measures are expected to be appropriate. Small catchments were analyzed because they are expected to be more vulnerable to a changing climate than larger rivers. They are mainly headwater catchments with smaller groundwater storage. Local characteristics are more important at this scale and can increase vulnerability. This thesis jointly evaluates potential adaptation measures to sustain minimum runoff in small catchments of Brandenburg, Germany, and the similarities of these catchments regarding low flows. The following guiding questions are addressed: (i) Which first-order controls on low flows and related time scales exist? (ii) What are the differences between small catchments regarding low flow vulnerability? (iii) Which adaptation measures to sustain minimum runoff in small catchments of Brandenburg are appropriate, considering regional low flow patterns? Potential adaptation measures to sustain minimum runoff during periods of low flows can be classified into three categories: (i) increasing groundwater recharge and subsequent baseflow by land use change, land management and artificial groundwater recharge, (ii) increasing water storage with regulated outflow by reservoirs, lakes and wetland water management, and (iii) measures with multiple purposes (urban water management, waste water recycling and inter-basin water transfer), for which regional low flow patterns have to be considered during planning. The question remained whether water management of areas with shallow groundwater tables can efficiently sustain minimum runoff. As an example, water management scenarios of a ditch-irrigated area were evaluated using the model Hydrus-2D. Increasing antecedent water levels and stopping ditch irrigation during periods of low flows increased fluxes from the pasture to the stream, but storage was depleted faster during the summer months due to higher evapotranspiration. Fluxes from this approx. 1 km long pasture with an area of approx. 13 ha ranged from 0.3 to 0.7 l/s, depending on the scenario. This demonstrates that numerous such small, decentralized measures are necessary to sustain minimum runoff in meso-scale catchments. Differences in the low flow risk of catchments and meteorological low flow predictors were analyzed. A principal component analysis was applied to the daily discharge of 37 catchments between 1991 and 2006. Flows decreased more strongly in southeast Brandenburg, in line with the meteorological forcing. Low flow risk was highest in a region east of Berlin, where a more continental climate intersects with the specific geohydrology. In these catchments, flows decreased faster during summer and the low flow period was prolonged. A non-linear support vector machine regression was applied to iteratively select meteorological predictors for the annual 30-day minimum runoff in 16 catchments between 1965 and 2006. The potential evapotranspiration sum of the previous 48 months was the most important predictor (r²=0.28). The potential evapotranspiration of the previous 3 months and the precipitation of the previous 3 months and of the last year increased model performance (r²=0.49, including all four predictors).
Model performance was higher for catchments with low yield and more damped runoff. In catchments with a high low flow risk, the explanatory power of long-term potential evapotranspiration was high. Catchments with a high low flow risk, as well as catchments with a considerable decrease in flows in southeast Brandenburg, have the highest demand for adaptation. Measures increasing groundwater recharge are to be preferred. Catchments with a high low flow risk showed relatively deep and decreasing groundwater heads, allowing increased groundwater recharge in recharge areas at higher altitude away from the streams. Low flows are expected to stay low or decrease even further, because long-term potential evapotranspiration was the most important low flow predictor and is projected to increase under climate change. Differences in low flow risk and runoff dynamics between catchments have to be considered for the management and planning of measures that do not only serve to sustain minimum runoff.
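As a hedged illustration of the iterative predictor selection described above (not the thesis code; the data frame X of candidate meteorological predictors and the target y of annual 30-day minimum runoff are hypothetical inputs), a greedy forward selection with a non-linear support vector regression could look as follows.

import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def forward_select(X, y, n_predictors=4):
    """Greedily add the predictor that most improves cross-validated r^2."""
    selected = []
    remaining = list(X.columns)
    for _ in range(n_predictors):
        scores = {}
        for col in remaining:
            model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
            cols = selected + [col]
            scores[col] = cross_val_score(model, X[cols], y, cv=5, scoring="r2").mean()
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
        print(f"added {best}: cross-validated r^2 = {scores[best]:.2f}")
    return selected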
Recurrence plots, a rather promising tool of data analysis, were introduced by Eckmann et al. in 1987. They visualise recurrences in phase space and give an overview of the system's dynamics. Two features have made the method rather popular. Firstly, they are rather simple to compute, and secondly, they are putatively easy to interpret. However, the straightforward interpretation of recurrence plots for some systems yields rather surprising results. For example, indications of low-dimensional chaos have been reported for stock market data, based on recurrence plots. In this work we exploit recurrences, or "naturally occurring analogues" as they were termed by E. Lorenz, to obtain three key results. The first is that the most striking structures found in recurrence plots are linked to the correlation entropy and the correlation dimension of the underlying system. Even though a possible embedding changes the structures in recurrence plots considerably, these dynamical invariants can be estimated independently of the particular parameters used for the computation. The second key result is that the attractor can be reconstructed from the recurrence plot. This means that it contains all topological information about the system in question in the limit of long time series. The graphical representation of the recurrences can also help to develop new algorithms and exploit specific structures. This feature has helped to obtain the third key result of this study. Based on recurrences to points which have the same "recurrence structure", it is possible to generate surrogates of the system which capture all relevant dynamical characteristics, such as entropies, dimensions and characteristic frequencies of the system. The surrogates generated in this way are shadowed by a trajectory of the system that starts at different initial conditions than the time series in question. They can then be used to test for complex synchronisation.
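For illustration (a minimal sketch, not code from the thesis), a recurrence plot marks a point (i, j) whenever the states x_i and x_j are closer than a threshold eps; the names and the sine-wave example below are placeholders.

import numpy as np
import matplotlib.pyplot as plt

def recurrence_matrix(x, eps):
    """Binary recurrence matrix R_ij = 1 if ||x_i - x_j|| < eps."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    if x.shape[0] == 1:                     # univariate series -> column of scalars
        x = x.T
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (dist < eps).astype(int)

# Example: recurrences of a sine wave show the familiar diagonal line structure.
t = np.linspace(0, 8 * np.pi, 400)
R = recurrence_matrix(np.sin(t), eps=0.1)
plt.imshow(R, cmap="binary", origin="lower")
plt.show()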
In a classical context, synchronization means the adjustment of rhythms of self-sustained periodic oscillators due to their weak interaction. The history of synchronization goes back to the 17th century, when the famous Dutch scientist Christiaan Huygens reported on his observation of the synchronization of pendulum clocks: when two such clocks were put on a common support, their pendula moved in perfect agreement. In rigorous terms, this means that due to the coupling the clocks started to oscillate with identical frequencies and tightly related phases. Being probably the oldest scientifically studied nonlinear effect, synchronization was understood only in the 1920s, when E. V. Appleton and B. Van der Pol systematically - theoretically and experimentally - studied the synchronization of triode generators. Since then, the theory has been well developed and has found many applications. Nowadays it is well known that certain systems, even rather simple ones, can exhibit chaotic behaviour. This means that their rhythms are irregular and cannot be characterized by only one frequency. However, as is shown in this Habilitation work, one can extend the notion of phase to systems of this class as well and observe their synchronization, i.e., the agreement of their (still irregular!) rhythms: due to a very weak interaction, relations appear between the phases and average frequencies. This effect, called phase synchronization, was later confirmed in laboratory experiments of other scientific groups. The understanding of synchronization of irregular oscillators allowed us to address an important problem of data analysis: how to reveal a weak interaction between systems if we cannot influence them, but can only passively observe them by measuring some signals. This situation is very often encountered in biology, where synchronization phenomena appear on every level - from cells to macroscopic physiological systems; in normal states as well as in severe pathologies. With our methods we found that the cardiovascular and respiratory systems in humans can adjust their rhythms; the strength of their interaction increases with maturation. Next, we used our algorithms to analyse the brain activity of Parkinsonian patients. The results of this collaborative work with neuroscientists show that different brain areas synchronize just before the onset of pathological tremor. Moreover, we succeeded in localizing the brain areas responsible for tremor generation.
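As an illustration of how phase synchronization can be quantified from passively measured signals (a generic sketch under the assumption of roughly narrow-band data, not the specific methods developed in the Habilitation; all names are placeholders), one can extract instantaneous phases via the Hilbert transform and compute the mean phase coherence of the phase difference.

import numpy as np
from scipy.signal import hilbert

def phase_synchronization_index(x, y):
    """Mean phase coherence R = |<exp(i*(phi_x - phi_y))>|, between 0 and 1."""
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# Two noisy signals sharing a common rhythm: R close to 1 indicates phase locking.
t = np.linspace(0, 100, 10000)
x = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 1.0 * t + 0.8) + 0.3 * np.random.randn(t.size)
print(phase_synchronization_index(x, y))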
One of the most exciting predictions of Einstein's theory of gravitation that has not yet been proven experimentally by direct detection is gravitational waves. These are tiny distortions of spacetime itself, and a world-wide effort to directly measure them for the first time with a network of large-scale laser interferometers is currently ongoing and expected to provide positive results within this decade. One potential source of measurable gravitational waves is the inspiral and merger of two compact objects, such as binary black holes. Successfully finding their signature in the noise-dominated data of the detectors crucially relies on accurate predictions of what we are looking for. In this thesis, we present a detailed study of how the most complete waveform templates can be constructed by combining the results from (A) analytical expansions within the post-Newtonian framework and (B) numerical simulations of the full relativistic dynamics. We analyze various strategies to construct complete hybrid waveforms that consist of a post-Newtonian inspiral part matched to numerical-relativity data. We elaborate on existing approaches for nonspinning systems by extending the accessible parameter space and introducing an alternative matching scheme formulated in the Fourier domain. Our methods can now be readily applied to multiple spherical-harmonic modes and precessing systems. In addition, we analyze in detail the accuracy of hybrid waveforms, with the goal of quantifying how the numerous sources of error in the approximation techniques affect the application of such templates in real gravitational-wave searches. This is of major importance for the future construction of improved models, but also for the correct interpretation of gravitational-wave observations that are made utilizing any complete waveform family. In particular, we comprehensively discuss how long the numerical-relativity contribution to the signal has to be in order to make the resulting hybrids accurate enough, and for currently feasible simulation lengths we assess the physics one can potentially do with template-based searches.
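A central quantity in such accuracy studies is the noise-weighted match between two waveforms. The following is a simplified sketch, not the thesis pipeline, under the assumption of frequency-domain waveforms h1 and h2 on a common grid with spacing df and a one-sided noise power spectral density psd (all hypothetical inputs); band limits and subtleties of real searches are ignored.

import numpy as np

def match(h1, h2, psd, df):
    """Overlap <h1,h2>/sqrt(<h1,h1><h2,h2>), maximized over time and phase shifts."""
    norm1 = np.sqrt(4.0 * np.sum(np.abs(h1) ** 2 / psd) * df)
    norm2 = np.sqrt(4.0 * np.sum(np.abs(h2) ** 2 / psd) * df)
    integrand = h1 * np.conj(h2) / psd
    # The inverse FFT evaluates the overlap for all relative time shifts at once;
    # taking the absolute value maximizes over a constant phase shift.
    overlaps = 4.0 * np.abs(np.fft.ifft(integrand)) * integrand.size * df
    return overlaps.max() / (norm1 * norm2)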
To foster a deeper understanding of data analysis in the social sciences, this work presents descriptive statistics and inferential statistics (the latter as an instrument for characterizing a population on the basis of a sample) in an integrated way. The presentation centres on the analysis of associations between variables in the sample and in the population, depending on the measurement level achieved. Drawing on various social-science research questions, it therefore discusses measurement levels and scaling procedures, the distribution of variables of interest in the sample and the population, the foundations of probability theory, sampling procedures for constructing samples, estimation and testing procedures, and, in particular, the different concepts for analysing associations between variables.
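To illustrate the point that the appropriate measure of association depends on the measurement level (a small sketch that is not taken from the book; the variables below are invented), one might compute Pearson's r for metric data, Spearman's rho for ordinal data, and Cramér's V for nominal data.

import numpy as np
import pandas as pd
from scipy import stats

def cramers_v(x, y):
    """Cramér's V for two nominal variables, based on the chi-square statistic."""
    table = pd.crosstab(pd.Series(x), pd.Series(y)).to_numpy()
    chi2 = stats.chi2_contingency(table)[0]
    return np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1)))

rng = np.random.default_rng(1)
age = rng.normal(45, 12, size=200)                          # metric variable
income = 1000 + 40 * age + rng.normal(0, 500, size=200)     # metric variable
print(stats.pearsonr(age, income)[0])                       # metric-metric association
print(stats.spearmanr(age, income)[0])                      # rank-based (ordinal) association
print(cramers_v(rng.choice(list("ab"), 200), rng.choice(list("xyz"), 200)))  # nominal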
In the presented thesis, the most advanced photon reconstruction technique of ground-based γ-ray astronomy is adapted to the H.E.S.S. 28 m telescope. The method is based on a semi-analytical model of electromagnetic particle showers in the atmosphere. The properties of cosmic γ-rays are reconstructed by comparing the camera image of the telescope with the Cherenkov emission expected from the shower model. To suppress the dominant background from charged cosmic rays, events are selected based on several criteria. The performance of the analysis is evaluated with simulated events. The method is then applied to two sources that are known to emit γ-rays. The first of these is the Crab Nebula, the standard candle of ground-based γ-ray astronomy. The results for this source confirm the expected performance of the reconstruction method, where the much lower energy threshold compared to H.E.S.S. I is of particular importance. A second analysis is performed on the region around the Galactic Centre. The analysis results emphasise the capabilities of the new telescope to measure γ-rays in an energy range that is interesting for both theoretical and experimental astrophysics. The presented analysis features the lowest energy threshold ever reached in ground-based γ-ray astronomy, opening a new window to the precise measurement of the physical properties of time-variable sources at energies of several tens of GeV.
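Purely schematically (the actual H.E.S.S. model analysis uses a dedicated per-pixel likelihood; the function predict_image and all parameters below are hypothetical placeholders), the reconstruction compares the measured camera image with the intensities predicted by the shower model and optimizes over the shower parameters.

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, measured, predict_image, sigma):
    """Gaussian stand-in for the pixel likelihood: sum of squared residuals."""
    predicted = predict_image(params)            # expected intensity per pixel for the
    resid = (measured - predicted) / sigma       # trial shower parameters (energy, ...)
    return 0.5 * np.sum(resid ** 2)

# best_fit = minimize(neg_log_likelihood, x0, args=(image, predict_image, sigma))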
Scientific inquiry requires that we formulate not only what we know, but also what we do not know and by how much. In climate data analysis, this involves an accurate specification of measured quantities and a consequent analysis that consciously propagates the measurement errors at each step. This dissertation presents a thorough analytical method to quantify the errors of measurement inherent in paleoclimate data. An additional focus is the uncertainty in assessing the coupling between different factors that influence the global mean temperature (GMT).
Paleoclimate studies critically rely on `proxy variables' that record climatic signals in natural archives. However, such proxy records inherently involve uncertainties in determining the age of the signal. We present a generic Bayesian approach to analytically determine the proxy record along with its associated uncertainty, resulting in a time-ordered sequence of correlated probability distributions rather than a precise time series. We further develop a recurrence-based method to detect dynamical events from the proxy probability distributions. The methods are validated with synthetic examples and demonstrated with real-world proxy records. The proxy estimation step reveals the interrelations between proxy variability and uncertainty. The recurrence analysis of the East Asian Summer Monsoon during the last 9000 years confirms the well-known `dry' events at 8200 and 4400 BP, plus an additional significantly dry event at 6900 BP.
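One generic way to obtain such time-ordered probability distributions (a hedged sketch of a common Monte Carlo approach, not the dissertation's Bayesian algorithm; all names are placeholders) is to repeatedly perturb the measured ages within their dating uncertainty and interpolate every realization onto a common time axis.

import numpy as np

def proxy_ensemble(ages, age_sigma, values, time_axis, n_samples=1000, seed=0):
    """Ensemble of proxy realizations that reflects the dating uncertainty."""
    rng = np.random.default_rng(seed)
    ages, values = np.asarray(ages, float), np.asarray(values, float)
    ensemble = np.empty((n_samples, np.asarray(time_axis).size))
    for k in range(n_samples):
        sampled = ages + rng.normal(0.0, age_sigma, size=ages.size)  # perturb ages
        order = np.argsort(sampled)                                  # keep age-value pairing
        ensemble[k] = np.interp(time_axis, sampled[order], values[order])
    return ensemble   # summarize e.g. via np.percentile(ensemble, [5, 50, 95], axis=0)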
We also analyze the network of dependencies surrounding the GMT. We find an intricate, directed network with multiple links between the different factors at multiple time delays. We further uncover a significant feedback from the GMT to the El Niño Southern Oscillation at quasi-biennial timescales. The analysis highlights the need for a more nuanced formulation of influences between different climatic factors, as well as the limitations in trying to estimate such dependencies.
In the last century, several astronomical measurements have indicated that a significant fraction (about 22%) of the total mass of the Universe, on galactic and extragalactic scales, is composed of a mysterious "dark" matter (DM). DM does not interact with the electromagnetic force; in other words, it does not reflect, absorb or emit light. It is possible that DM particles are weakly interacting massive particles (WIMPs) that can annihilate (or decay) into Standard Model (SM) particles, and modern very-high-energy (VHE; > 100 GeV) instruments such as imaging atmospheric Cherenkov telescopes (IACTs) can play an important role in constraining the main properties of such DM particles by detecting these products. Among the most favoured targets in which to look for a DM signal are dwarf spheroidal galaxies (dSphs), as they are expected to be highly DM-dominated objects with a clean, gas-free environment. Given the angular resolution of IACTs, some dSphs should be treated as extended sources, and this resolution is adequate to detect extended emission from them. For this reason, we performed an extended-source analysis that takes into account in the unbinned maximum likelihood estimation both the energy and the angular extension of the observed events. The goal was to set more constraining upper limits on the velocity-averaged annihilation cross-section of WIMPs with VERITAS data. VERITAS is an array of four IACTs, able to detect γ-ray photons with energies between 100 GeV and 30 TeV. The results of this extended analysis were compared against the traditional spectral analysis. We found that a 2D analysis may lead to more constraining results, depending on the DM mass, channel, and source. Moreover, this thesis also presents the results of a multi-instrument project, whose goal was to combine already published data on 20 dSphs from five different experiments, namely Fermi-LAT, MAGIC, H.E.S.S., VERITAS and HAWC, in order to set upper limits on the WIMP annihilation cross-section in the widest mass range ever reported.
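Schematically (not the VERITAS analysis code; signal_pdf, background_pdf and the event container below are hypothetical), an unbinned extended likelihood in which each event enters through both its energy and its angular offset from the source centre can be written as follows.

import numpy as np

def log_likelihood(n_sig, n_bkg, events, signal_pdf, background_pdf):
    """Extended unbinned log-likelihood for observed events (E_i, theta_i)."""
    e, theta = events["energy"], events["theta"]
    density = n_sig * signal_pdf(e, theta) + n_bkg * background_pdf(e, theta)
    return -(n_sig + n_bkg) + np.sum(np.log(density))

# Upper limits on n_sig (and hence on <sigma v>) would follow, for example, from a
# profile likelihood scan over n_sig, treating n_bkg as a nuisance parameter.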