
The separation of natural and anthropogenically caused climatic changes is an important task of contemporary climate research. For this purpose, a detailed knowledge of the natural variability of the climate during warm stages is a necessary prerequisite. Besides model simulations and historical documents, this knowledge is mostly derived from analyses of so-called climatic proxy data such as tree rings, sediment cores, and ice cores. In order to interpret such sources of palaeoclimatic information appropriately, suitable statistical modelling approaches and time series analysis methods are necessary that are applicable to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different climatic proxy data within one or more climatological archives contain significant information about climatic change on longer time scales. Based on an appropriate statistical decomposition of such multivariate time series, one may estimate dimensions in terms of the number of significant, linearly independent components of the considered data set. In the presented work, a corresponding approach is introduced, critically discussed, and extended with respect to the analysis of palaeoclimatic time series. Temporal variations of the resulting measures allow one to derive information about climatic changes. For an example of trace-element abundances and grain-size distributions obtained near Cape Roberts (East Antarctica), it is shown that the variability of the dimensions of the investigated data sets clearly correlates with the Oligocene/Miocene transition about 24 million years before present as well as with regional deglaciation events. Grain-size distributions in sediments give information about the predominance of different transport and deposition mechanisms. Finite mixture models may be used to approximate the corresponding distribution functions appropriately.
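As a rough illustration of how such a finite mixture fit works, the following sketch runs the EM algorithm for a two-component Gaussian mixture on synthetic log grain-size data; the component count, sample sizes, and all parameter values are invented for the example and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "grain-size" sample (log-size units, purely illustrative):
# a fine-grained and a coarse-grained population.
x = np.concatenate([rng.normal(2.0, 0.5, 600),   # fine fraction
                    rng.normal(5.0, 0.8, 400)])  # coarse fraction

# EM algorithm for a two-component 1-D Gaussian mixture.
w = np.array([0.5, 0.5])          # mixing weights
mu = np.array([1.0, 6.0])         # initial means
sig = np.array([1.0, 1.0])        # initial standard deviations

for _ in range(200):
    # E-step: posterior probability of each component for every grain.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and spreads from the responsibilities.
    n_k = resp.sum(axis=0)
    w = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print(w.round(2), mu.round(2), sig.round(2))
```

With well-separated components, the fitted means, spreads, and weights recover the two generating populations; the parameter uncertainty such estimates carry is exactly what the asymptotic uncertainty distributions discussed below are meant to quantify.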
In order to give a complete description of the statistical uncertainty of the parameter estimates in such models, the concept of asymptotic uncertainty distributions is introduced. Its relationship with the mutual component overlap, as well as with the information missing due to grouping and truncation of the measured data, is discussed for a particular geological example. An analysis of a sequence of grain-size distributions obtained in Lake Baikal reveals certain problems accompanying the application of finite mixture models, which prevent an extended climatological interpretation of the results. As an appropriate alternative, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates well with the variations of the average solar insolation on millennial to multi-millennial time scales. The abundance of coarse-grained material is evidently related to the annual snow cover, whereas a significant fraction of fine-grained sediments is likely transported from the Taklamakan desert via dust storms in the spring season.
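The dimension-counting idea, i.e. estimating the number of significant, linearly independent components via a linear principal component decomposition, can be sketched as follows. The data are synthetic, and the simple Kaiser rule (eigenvalue greater than one) stands in here for whatever significance criterion the actual analysis uses:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a multivariate proxy record: 500 samples of six
# measured variables driven by only two independent source signals plus
# noise (all sizes and values are illustrative, not from the thesis).
t = np.linspace(0, 50, 500)
sources = np.column_stack([np.sin(t), np.cos(0.3 * t)])
mixing = np.array([[1.0, 0.8, 0.6, 0.1, 0.2, 0.0],
                   [0.0, 0.2, 0.1, 0.9, 0.7, 1.0]])
data = sources @ mixing + 0.1 * rng.normal(size=(500, 6))

# Eigenvalues of the correlation matrix of the standardized record.
z = (data - data.mean(axis=0)) / data.std(axis=0)
eigvals = np.sort(np.linalg.eigvalsh(z.T @ z / len(z)))[::-1]

# Count "significant" components: here, eigenvalues exceeding the
# equal-variance expectation of 1 (Kaiser criterion, illustration only).
dimension = int(np.sum(eigvals > 1.0))
print(eigvals.round(2), dimension)
```

Running such a decomposition in a sliding window over a palaeoclimatic record yields the temporal variations of the dimension estimate that the abstract interprets climatologically.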

Towards a better understanding of laser beam melt ablation using methods of statistical analysis
(2002)

Laser beam melt ablation, as a contact-free machining process, offers several advantages compared to conventional processing mechanisms. Although the idea behind it is rather simple, the process has a major limitation: with increasing ablation rate, the surface quality of the processed workpiece declines rapidly. The structures observed show a clear dependence on the line energy. Depending on this parameter, several regimes of the process have been separated. These are clearly distinguishable both in the surfaces obtained and in the signals gained by measuring the process emissions, which is the observed quantity chosen.

Laser beam melt ablation - a contact-free machining process - offers several advantages compared to conventional processing mechanisms: there is no tool wear, and even extremely hard or brittle materials can be processed. During ablation the workpiece is melted by a CO2 laser beam, and the melt is then driven out by the impulse of a process gas. The idea behind laser ablation is rather simple, but it has a major limitation in practical applications: with increasing ablation rates, the surface quality of the processed workpiece declines rapidly. At high ablation rates, depending on the process parameters, different periodic-like structures can be observed on the ablated surface. These structures show a dependence on the line energy, which has been identified as a fundamental control parameter. Depending on this parameter, several regimes with different process behaviour have been separated. These regimes are distinguishable both in the surfaces obtained and in the signals gained by measuring the process emissions. A further aim is to identify the different modes of the system and to reach a deeper understanding of the dynamics of the molten material, in order to understand the formation of these surface structures. This should make it possible to steer the system towards avoiding structure formation even at high ablation rates. Building on these results, on-line monitoring and control of the process should be studied.
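As a minimal aside, the line energy used as a control parameter above is simply the energy deposited per unit length of the cut, i.e. laser power divided by feed rate. The numerical values below are hypothetical placeholders, not settings from the experiments described:

```python
def line_energy(power_w, feed_mm_per_s):
    """Line energy in J/mm: laser power divided by feed rate."""
    return power_w / feed_mm_per_s

# Hypothetical parameter scan: halving the feed rate doubles the
# line energy at constant laser power.
for feed in (25.0, 50.0, 100.0):
    print(feed, line_energy(1500.0, feed))
```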

Motivated by the successful Karlsruhe dynamo experiment, a relatively low-dimensional dynamo model is proposed. It is based on a strong truncation of the magnetohydrodynamic (MHD) equations with an external forcing of the Roberts type and the requirement that the model system satisfies the symmetries of the full MHD system, so that the first symmetry-breaking bifurcations can be captured. The backbone of the Roberts dynamo is formed by the Roberts flow, a helical mean magnetic field and another part of the magnetic field coupled to these two by triadic mode interactions. A minimum truncation model (MTM) containing only these energetically dominating primary mode triads is fully equivalent to the widely used first-order smoothing approximation. However, it is shown that this approach works only in the limit of small wave numbers of the excited magnetic field or small magnetic Reynolds numbers ($Rm \ll 1$). To obtain dynamo action under more general conditions, secondary mode triads have to be taken into account as well.
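For background, the first-order smoothing approximation referenced above comes from standard mean-field dynamo theory (textbook notation, not taken from this abstract). Splitting the magnetic field into mean and fluctuating parts, $\mathbf{B} = \langle\mathbf{B}\rangle + \mathbf{b}'$, the mean induction equation reads

$$\partial_t \langle\mathbf{B}\rangle = \nabla\times\left(\langle\mathbf{u}\rangle\times\langle\mathbf{B}\rangle + \boldsymbol{\mathcal{E}}\right) + \eta\,\nabla^2\langle\mathbf{B}\rangle, \qquad \boldsymbol{\mathcal{E}} = \langle\mathbf{u}'\times\mathbf{b}'\rangle \approx \alpha\,\langle\mathbf{B}\rangle,$$

where neglecting the higher-order fluctuation terms in the electromotive force $\boldsymbol{\mathcal{E}}$ is justified only for $Rm \ll 1$, consistent with the limitation of the minimum truncation model noted above.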

As a non-contact process, laser beam melt ablation offers several advantages compared to conventional processing mechanisms. During ablation the surface of the workpiece is melted by the energy of a CO2 laser beam, and this melt is then driven out by the impulse of an additional process gas. Although the idea behind laser beam melt ablation is rather simple, the process itself has a major limitation in practical applications: with increasing ablation rate, the surface quality of the processed workpiece declines rapidly. At different ablation rates, different surface structures can be distinguished, which can be characterised by suitable surface parameters. The corresponding regimes of pattern formation are found in linear and non-linear statistical properties of the recorded process emissions as well. While the ablation rate can be represented in terms of the line energy, this parameter does not provide sufficient information about the full behaviour of the system. The dynamics of the system is dominated by oscillations due to the laser cycle but includes some periodically driven non-linear processes as well. On the basis of the measured time series, a corresponding model is developed. The deeper understanding of the process can be used to develop strategies for process control.

We investigate the dynamo effect in a flow configuration introduced by G. O. Roberts in 1972. Based on a clear energetic hierarchy of Fourier components on the steady-state dynamo branch, an approximate model of interacting modes is constructed that covers all essential features of the complete system but allows simulations with a minimum of computation time. We use this model to study the excitation mechanism of the dynamo, the transition from stationary to time-dependent dynamo solutions, and the characteristic properties of the latter.

We apply linear and nonlinear methods to study the properties of surfaces generated by a laser beam melt ablation process. As a result, we present a characterization and ordering of the surfaces depending on the adjusted process parameters. Our findings give some insight into the performance of two widely applied multifractal analysis methods, the detrended fluctuation analysis and the wavelet transform modulus maxima method, on short real-world data.
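A bare-bones version of one of these two methods, detrended fluctuation analysis, illustrates the kind of scaling estimate involved. The input is synthetic uncorrelated noise, for which the scaling exponent should come out near 0.5; the window sizes are arbitrary demonstration choices:

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis: fluctuation function F(s)
    for each window size s, using polynomial detrending of the profile."""
    profile = np.cumsum(x - np.mean(x))
    fluct = []
    for s in scales:
        n_win = len(profile) // s
        f2 = []
        for i in range(n_win):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, order)        # local trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.array(fluct)

rng = np.random.default_rng(1)
x = rng.normal(size=4000)                  # uncorrelated noise
scales = np.array([8, 16, 32, 64, 128, 256])
F = dfa(x, scales)

# Scaling exponent alpha: slope of log F(s) versus log s.
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(alpha, 2))
```

The difficulty the abstract alludes to is visible here in miniature: for short records, the largest windows contain only a handful of segments, so the slope estimate becomes noisy.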

Nonlinear detection of paleoclimate-variability transitions possibly related to human evolution
(2011)

Potential paleoclimatic driving mechanisms acting on human evolution present an open problem of cross-disciplinary scientific interest. The analysis of paleoclimate archives encoding the environmental variability in East Africa during the past 5 Ma has triggered an ongoing debate about possible candidate processes and evolutionary mechanisms. In this work, we apply a nonlinear statistical technique, recurrence network analysis, to three distinct marine records of terrigenous dust flux. Our method enables us to identify three epochs with transitions between qualitatively different types of environmental variability in North and East Africa during the (i) Middle Pliocene (3.35-3.15 Ma B. P.), (ii) Early Pleistocene (2.25-1.6 Ma B. P.), and (iii) Middle Pleistocene (1.1-0.7 Ma B. P.). A deeper examination of these transition periods reveals potential climatic drivers, including (i) large-scale changes in ocean currents due to a spatial shift of the Indonesian throughflow in combination with an intensification of Northern Hemisphere glaciation, (ii) a global reorganization of the atmospheric Walker circulation induced in the tropical Pacific and Indian Ocean, and (iii) shifts in the dominating temporal variability pattern of glacial activity during the Middle Pleistocene, respectively. A reexamination of the available fossil record demonstrates statistically significant coincidences between the detected transition periods and major steps in hominin evolution. This result suggests that the observed shifts between more regular and more erratic environmental variability may have acted as a trigger for rapid change in the development of humankind in Africa.
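The coincidence test mentioned above can be illustrated by a toy version of event coincidence analysis, paired with a Monte Carlo significance estimate. The event times and tolerance below are hypothetical placeholders, not the transition periods or fossil dates from the study; only the 5 Ma interval is taken from the text:

```python
import numpy as np

def coincidence_rate(events_a, events_b, delta_t):
    """Fraction of events in A with at least one event in B within
    +/- delta_t (a simplified, symmetric coincidence rate)."""
    events_b = np.asarray(events_b)
    hits = [np.any(np.abs(events_b - a) <= delta_t) for a in events_a]
    return float(np.mean(hits))

# Hypothetical event times (Ma B.P.), purely for illustration.
transitions = [3.25, 1.9, 0.9]      # "detected transition periods"
fossil_steps = [3.3, 2.0, 1.8, 0.8] # "major steps in hominin evolution"

rate = coincidence_rate(transitions, fossil_steps, delta_t=0.15)

# Significance: reshuffle the transition times uniformly over the
# studied 5 Ma interval and compare coincidence rates.
rng = np.random.default_rng(3)
null = [coincidence_rate(rng.uniform(0, 5, len(transitions)), fossil_steps, 0.15)
        for _ in range(2000)]
p_value = float(np.mean(np.array(null) >= rate))
print(rate, round(p_value, 3))
```

A small p-value under such a surrogate test is what "statistically significant coincidences" refers to; the published analysis uses its own event definitions and null model.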

The analysis of palaeoclimate time series is usually affected by severe methodological problems, resulting primarily from non-equidistant sampling and uncertain age models. As an alternative to existing methods of time series analysis, in this paper we argue that the statistical properties of recurrence networks - a recently developed approach - are promising candidates for characterising the system's nonlinear dynamics and quantifying structural changes in its reconstructed phase space as time evolves. In a first-order approximation, the results of recurrence network analysis are invariant to changes in the age model and are not directly affected by non-equidistant sampling of the data. Specifically, we investigate the behaviour of recurrence network measures for both paradigmatic model systems with non-stationary parameters and four marine records of long-term palaeoclimate variations. We show that the obtained results are qualitatively robust under changes of the relevant parameters of our method, including detrending, the size of the running window used for analysis, and the embedding delay. We demonstrate that recurrence network analysis is able to detect relevant regime shifts in synthetic data as well as in problematic geoscientific time series. This suggests its application as a general exploratory tool of time series analysis complementing existing methods.
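A minimal sketch of the recurrence network construction, assuming a simple delay embedding and a fixed recurrence threshold (the embedding dimension, delay, and threshold below are illustrative choices, not the settings used in the paper):

```python
import numpy as np

def recurrence_network(x, dim=2, delay=1, eps=0.5):
    """Adjacency matrix of an eps-recurrence network for a scalar series:
    delay-embedded states are linked if their distance is below eps
    (self-loops removed)."""
    n = len(x) - (dim - 1) * delay
    states = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    dist = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    adj = (dist < eps).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def transitivity(a):
    """Fraction of closed triangles among connected triples."""
    triangles = np.trace(a @ a @ a)
    deg = a.sum(axis=1)
    triples = np.sum(deg * (deg - 1))
    return triangles / triples if triples else 0.0

# Illustrative comparison: a regular (periodic) signal versus
# uncorrelated noise, the kind of contrast a running-window analysis
# would track through time.
rng = np.random.default_rng(7)
t = np.linspace(0, 20 * np.pi, 400)
adj_periodic = recurrence_network(np.sin(t))
adj_noise = recurrence_network(rng.normal(size=400))

print(transitivity(adj_periodic), transitivity(adj_noise))
```

Because the network only encodes which states are close in the reconstructed phase space, not when they were sampled, such measures are largely insensitive to irregular sampling and age-model changes, which is the point made above.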