Institut für Mathematik — Bibliography, year of publication 2015 (47 entries)
Stress drop is a key factor in earthquake mechanics and engineering seismology. However, stress drop calculations based on fault slip can be significantly biased, particularly by the subjectively determined smoothing conditions in traditional least-squares slip inversions. In this study, we introduce a mechanically constrained Bayesian approach to simultaneously invert for fault slip and stress drop based on geodetic measurements. A Gaussian distribution for stress drop is implemented in the inversion as a prior. We carried out several synthetic tests to evaluate the stability and reliability of the inversion approach, considering different fault discretizations, fault geometries, utilized datasets, and variability of the slip direction. We finally apply the approach to the 2010 M8.8 Maule earthquake and invert for the coseismic slip and stress drop simultaneously. Two fault geometries from the literature are tested. Our results indicate that the derived slip models based on both fault geometries are similar, showing major slip north of the hypocenter and relatively weak slip in the south, consistent with the slip models of other studies. The derived mean stress drop is 5-6 MPa, close to the stress drop of ~7 MPa that was independently determined from force balance in this region by Luttrell et al. (J Geophys Res, 2011). These findings indicate that stress drop values can be consistently extracted from geodetic data.
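The core idea above — a least-squares slip inversion augmented by a Gaussian prior on stress drop — can be sketched as a stacked linear system. This is a minimal illustration, not the authors' code: the operators `G` (slip-to-displacement Green's functions) and `K` (slip-to-stress-change), the dimensions, and all numerical values are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n_obs geodetic observations, n_patch fault patches.
n_obs, n_patch = 60, 20

# G maps slip on each patch to surface displacement (Green's functions);
# K maps slip to stress change on each patch. Both are random stand-ins here.
G = rng.normal(size=(n_obs, n_patch))
K = rng.normal(size=(n_patch, n_patch))

true_slip = np.abs(rng.normal(1.0, 0.3, n_patch))
d = G @ true_slip + rng.normal(0.0, 0.05, n_obs)

# Gaussian prior on stress drop: K m ~ N(mu_stress, sigma_stress^2) per patch.
mu_stress, sigma_stress = 5.0, 2.0   # MPa, illustrative values
sigma_data = 0.05                    # observation noise level

# MAP slip estimate: minimize ||G m - d||^2 / sd^2 + ||K m - mu||^2 / ss^2,
# i.e. an ordinary least-squares solve on the noise-weighted stacked system.
A = np.vstack([G / sigma_data, K / sigma_stress])
b = np.concatenate([d / sigma_data,
                    np.full(n_patch, mu_stress) / sigma_stress])
slip_map, *_ = np.linalg.lstsq(A, b, rcond=None)

mean_stress_drop = (K @ slip_map).mean()
```

The stress-drop prior here replaces the subjective smoothing term of a traditional inversion; in a full Bayesian treatment one would sample the posterior rather than compute only the MAP point.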
Nonlinear data assimilation
(2015)
This book contains two review articles on nonlinear data assimilation that deal with closely related topics but were written and can be read independently. Both contributions focus on so-called particle filters.
The first contribution by Jan van Leeuwen focuses on the potential of proposal densities. It discusses the issues with present-day particle filters and explores new ideas for proposal densities to solve them, converging to particle filters that work well in systems of any dimension, and closes with a high-dimensional example. The second contribution by Cheng and Reich discusses a unified framework for ensemble-transform particle filters. This allows one to bridge successful ensemble Kalman filters with fully nonlinear particle filters, and allows a proper introduction of localization in particle filters, which has been lacking up to now.
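For readers new to the topic, the basic particle-filter cycle the two contributions build on (propagate, reweight by the likelihood, resample) can be sketched in a few lines. This is the elementary bootstrap filter on a toy one-dimensional model, not the proposal-density or ensemble-transform variants the book develops; all model settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, obs, obs_std, proc_std):
    """One bootstrap-filter cycle for x_t = x_{t-1} + noise, y_t = x_t + noise."""
    # Propagate each particle through the (here trivial) model dynamics.
    particles = particles + rng.normal(0.0, proc_std, particles.size)
    # Reweight by the Gaussian observation likelihood.
    weights = weights * np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective ensemble size collapses.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < 0.5 * particles.size:
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

particles = rng.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
for obs in [0.9, 1.1, 1.0]:
    particles, weights = particle_filter_step(particles, weights, obs, 0.2, 0.1)

estimate = np.sum(weights * particles)
```

The weight collapse this naive likelihood reweighting suffers in high dimensions is precisely the degeneracy problem that motivates the proposal-density and ensemble-transform constructions discussed in the book.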
In this work we extract the microphysical properties of aerosols for a collection of measurement cases with low volume depolarization ratio originating from fire sources, captured by the Raman lidar located at the National Institute of Optoelectronics (INOE) in Bucharest. Our algorithm was tested not only for pure smoke but also for mixed smoke and urban aerosols of variable age and growth. A sensitivity analysis on the initial parameter settings of our retrieval code proved vital for producing semi-automatized retrievals with a hybrid regularization method developed at the Institute of Mathematics of Potsdam University. A direct quantitative comparison of the retrieved microphysical properties with measurements from a Compact Time of Flight Aerosol Mass Spectrometer (CToF-AMS) is used to validate our algorithm. Microphysical retrievals performed with sun photometer data are also used to explore our results. Focusing on the fine mode, we observed remarkable similarities between the retrieved size distribution and the one measured by the AMS. For more complicated atmospheric structures, and in the presence of absorption, the retrievals appear more sensitive to variation in particle radius. A good correlation was found between the aerosol effective radius and particle age, using the ratio of lidar ratios (LR: aerosol extinction to backscatter ratios) as an indicator for the latter. Finally, the dependence on relative humidity of aerosol effective radii measured on the ground and within the layers aloft shows similar patterns.
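The retrieval problem described above — inferring a size distribution from a handful of optical data points — is a classic ill-posed inversion tamed by regularization. The following is a generic Tikhonov sketch with the discrepancy principle for choosing the regularization parameter; it is not the hybrid method developed at Potsdam, and the kernel `A` and noise level are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-posed linear model y = A x + noise: a generic stand-in for the lidar
# kernel relating the size distribution x (n_bins) to optical data y (n_data).
n_data, n_bins = 8, 30
radii = np.linspace(0.1, 2.0, n_bins)
A = np.exp(-np.linspace(0.0, 3.0, n_data)[:, None] * radii[None, :])
x_true = np.exp(-0.5 * ((radii - 0.8) / 0.2) ** 2)
noise_level = 1e-3
y = A @ x_true + rng.normal(0.0, noise_level, n_data)

def tikhonov(A, y, lam):
    """Regularized solution x = argmin ||A x - y||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Discrepancy principle: increase lambda until the residual reaches the
# noise level, so the solution does not fit the noise.
for lam in 10.0 ** np.arange(-8, 0):
    x_hat = tikhonov(A, y, lam)
    if np.linalg.norm(A @ x_hat - y) >= noise_level * np.sqrt(n_data):
        break
```

The sensitivity analysis mentioned in the abstract corresponds, in this simplified picture, to varying the initial parameter settings (grid, noise estimate, regularization strength) and checking the stability of `x_hat`.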
Boundary value problems on a smooth manifold X with boundary have the structure of edge problems. Operators A are described in terms of a principal symbolic hierarchy, namely, according to the stratification of X, with the interior and the boundary. We focus here on operators with and without the transmission property and establish a new relationship between boundary symbols and operators in the cone calculus transversal to the boundary.
The Net Reclassification Improvement (NRI) has become a popular metric for evaluating improvement in disease prediction models over the past years. The concept is relatively straightforward, but usage and interpretation have differed across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI should be avoided. We propose using confidence ellipses around the estimated values of event and non-event NRIs, which might provide the best measure of variability around the point estimates. Our developments are illustrated using practical examples from the EPIC-Potsdam study.
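The event and non-event NRI components the confidence ellipses are drawn around are straightforward to compute. A minimal sketch of the category-free version (any risk increase counts as "up", any decrease as "down"); the data are a toy example, not from the EPIC-Potsdam study:

```python
import numpy as np

def nri_components(risk_old, risk_new, outcome):
    """Event and non-event NRI from old/new predicted risks.

    Category-free NRI: each component is P(up) - P(down) among events,
    and P(down) - P(up) among non-events.
    """
    risk_old, risk_new, outcome = map(np.asarray, (risk_old, risk_new, outcome))
    up = risk_new > risk_old
    down = risk_new < risk_old
    events = outcome == 1
    nonevents = outcome == 0
    nri_event = up[events].mean() - down[events].mean()
    nri_nonevent = down[nonevents].mean() - up[nonevents].mean()
    return nri_event, nri_nonevent

# Toy example: the new model raises risk for 3 of 4 events (good) and
# lowers risk for 2 of 4 non-events (good for half of them).
old = [0.2, 0.3, 0.4, 0.5, 0.2, 0.3, 0.4, 0.5]
new = [0.3, 0.4, 0.5, 0.4, 0.1, 0.2, 0.5, 0.6]
y   = [1,   1,   1,   1,   0,   0,   0,   0]
e, ne = nri_components(old, new, y)   # e = 0.5, ne = 0.0
```

Reporting the pair `(e, ne)` with a joint confidence region, rather than testing their sum against zero, is the practice the abstract advocates.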
We present simulations of binary black-hole mergers in which, after the common outer horizon has formed, the marginally outer trapped surfaces (MOTSs) corresponding to the individual black holes continue to approach and eventually penetrate each other. This has very interesting consequences according to recent results in the theory of MOTSs. Uniqueness and stability theorems imply that two MOTSs which touch with a common outer normal must be identical. This suggests a possible dramatic consequence of the collision between a small and large black hole. If the penetration were to continue to completion, then the two MOTSs would have to coalesce, by some combination of the small one growing and the big one shrinking. Here we explore the relationship between theory and numerical simulations, in which a small black hole has halfway penetrated a large one.
The regularity of solutions to elliptic equations on a manifold with singularities, say, an edge, can be formulated in terms of asymptotics in the distance variable r > 0 to the singularity. In their simplest form, such asymptotics turn into meromorphic behaviour under the Mellin transform on the half-axis. Poles, multiplicities, and Laurent coefficients form a system of asymptotic data which depend on the specific operator. Moreover, these data may depend on the variable y along the edge. We then have y-dependent families of meromorphic functions with variable poles, jumping multiplicities, and a discontinuous dependence of Laurent coefficients on y. We study here basic phenomena connected with such variable branching asymptotics, formulated in terms of variable continuous asymptotics with a y-wise discrete behaviour.
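The correspondence between asymptotics and poles referred to above is the standard Mellin-transform dictionary, which in its simplest (y-independent) form reads:

```latex
% Mellin transform on the half-axis:
M u(z) = \int_0^{\infty} r^{z-1}\, u(r)\, dr .

% An asymptotic expansion near r = 0,
u(r) \sim \sum_{j} \sum_{k=0}^{m_j} c_{jk}\, r^{-p_j} \log^{k} r ,
% corresponds to poles of M u at the points z = p_j of multiplicity
% m_j + 1, with the coefficients c_{jk} determined by the Laurent
% coefficients at those poles.
```

The variable branching asymptotics studied in the paper arise when the pole positions p_j, the multiplicities m_j, and the coefficients c_{jk} all depend on the edge variable y.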
In this study, we analyze acoustic emission (AE) data recorded at the Morsleben salt mine, Germany, to assess the catalog completeness, which plays an important role in any seismicity analysis. We introduce the new concept of a magnitude completeness interval consisting of a maximum magnitude of completeness (M-c(max)) in addition to the well-known minimum magnitude of completeness. This is required to describe the completeness of the catalog, both for the smallest events (for which the detection performance may be low) and for the largest ones (which may be missed because of sensor saturation). We suggest a method to compute the maximum magnitude of completeness and calculate it for a spatial grid based on (1) the prior estimation of the saturation magnitude at each sensor, (2) the correction of the detection probability function at each sensor, including a drop in the detection performance when it saturates, and (3) the combination of the detection probabilities of all sensors to obtain the network detection performance. The method is tested using about 130,000 AE events recorded over a period of five weeks, with sources confined within a small depth interval, and an example of the spatial distribution of M-c(max) is derived. The comparison between the spatial distribution of M-c(max) and of the maximum possible magnitude (M-max), which is here derived using a recently introduced Bayesian approach, indicates that M-max exceeds M-c(max) in some parts of the mine. This suggests that some large and important events may be missed in the catalog, which could lead to a bias in the hazard evaluation.
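Step (3) above, combining per-sensor detection probabilities into a network detection performance, can be illustrated with a standard k-out-of-n combination under an independence assumption. This is a generic sketch, not the paper's actual formulation; the probabilities and trigger threshold are invented.

```python
from itertools import combinations

def network_detection_prob(p_sensors, k_min):
    """Probability that at least k_min of the sensors detect an event,
    given independent per-sensor detection probabilities p_sensors.

    Enumerates all subsets of triggering sensors; fine for small networks.
    """
    n = len(p_sensors)
    prob = 0.0
    for k in range(k_min, n + 1):
        for on in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= p_sensors[i] if i in on else (1.0 - p_sensors[i])
            prob += term
    return prob

# Toy network of three sensors, requiring two triggers for a catalog entry.
p_net = network_detection_prob([0.9, 0.8, 0.5], 2)   # = 0.85
```

In the paper's setting the per-sensor probabilities are magnitude-dependent and drop above each sensor's saturation magnitude, which is what produces a finite M-c(max): above it, too few sensors report the event reliably.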
By edge algebra we understand a pseudo-differential calculus on a manifold with edge. The operators have a two-component principal symbolic hierarchy which determines operators up to lower order terms. Those belong to a filtration of the corresponding operator spaces. We give a new characterisation of this structure, based on an alternative representation of edge amplitude functions only containing holomorphic edge-degenerate Mellin symbols.
We consider the semiclassical asymptotic expansion of the heat kernel coming from Witten's perturbation of the de Rham complex by a given function. For the index, one obtains a time-dependent integral formula which is evaluated by the method of stationary phase to derive the Poincaré-Hopf theorem. We show how this method is related to approaches using the Thom form of Mathai and Quillen. Afterwards, we use a more general version of the stationary phase approximation in the case that the perturbing function has critical submanifolds to derive a degenerate version of the Poincaré-Hopf theorem.
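For orientation, the classical statement being rederived is the Poincaré-Hopf theorem, which for a Morse function f (nondegenerate, isolated critical points) reads:

```latex
% Poincaré-Hopf for a Morse function f on a closed manifold M:
\chi(M) \;=\; \sum_{p \,:\, df(p) = 0} (-1)^{\operatorname{ind}(p)} ,
% where ind(p) is the Morse index of the critical point p.
```

The degenerate version treated in the paper replaces the sum over isolated critical points by contributions from critical submanifolds, extracted via the more general stationary phase approximation.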
We consider the volume-normalized Ricci flow close to compact shrinking Ricci solitons. We show that if a compact Ricci soliton (M, g) is a local maximum of Perelman's shrinker entropy, any normalized Ricci flow starting close to it exists for all time and converges towards a Ricci soliton. If g is not a local maximum of the shrinker entropy, we show that there exists a nontrivial normalized Ricci flow emerging from it. These theorems are analogues of results in the Ricci-flat and in the Einstein case (Haslhofer and Müller, arXiv:1301.3219, 2013; Kröncke, arXiv:1312.2224, 2013).
We study infinitesimal Einstein deformations on compact flat manifolds and on product manifolds. Moreover, we prove refinements of results by Koiso and Bourguignon which yield obstructions to the existence of infinitesimal Einstein deformations under certain curvature conditions.
Certain curvature conditions for the stability of Einstein manifolds with respect to the Einstein-Hilbert action are given, in terms of quantities involving the Weyl tensor and the Bochner tensor. In dimension six, a stability criterion involving the Euler characteristic is given.
In 2015 the second conference „Cloud Storage Deployment in Academics“ took place. Interest in this topic was again high, and themes established in 2014, such as data security and scalability, were complemented by new ones such as federations and technical integration into existing infrastructures. This reflects the advances in the establishment of cloud-based storage systems. This publication contains the contributions of the conference „Cloud Storage Deployment in Academics 2015“, which took place in May 2015 at TU Berlin.