Refine: 48 results (Article, 2016, English, Institut für Mathematik; no fulltext; part of the bibliography)
The Groningen gas field serves as a natural laboratory for production-induced earthquakes, because no earthquakes were observed before the beginning of gas production. Increasing gas production rates resulted in growing earthquake activity and eventually in the occurrence of the 2012 M_w 3.6 Huizinge earthquake. At least since this event, a detailed seismic hazard and risk assessment, including estimation of the maximum earthquake magnitude, has been considered necessary for deciding on future gas production. In this short note, we first apply state-of-the-art methods of mathematical statistics to derive confidence intervals for the maximum possible earthquake magnitude m_max. Second, we calculate the maximum expected magnitude M_T in the time between 2016 and 2024 for three assumed gas-production scenarios. Using broadly accepted physical assumptions and a 90% confidence level, we suggest a value of m_max = 4.4, whereas M_T varies between 3.9 and 4.3, depending on the production scenario.
In the present study, we summarize and evaluate the endeavors of recent years to estimate the maximum possible earthquake magnitude m_max from observed data. In particular, we use basic and physically motivated assumptions to identify best and worst cases in terms of the lowest and highest degree of uncertainty of m_max. In a general framework, we demonstrate that earthquake data and earthquake proxy data recorded in a fault zone provide almost no information about m_max unless reliable and homogeneous data from a long time interval, including several earthquakes with magnitude close to m_max, are available. Even if detailed earthquake information from several centuries, including historic and paleoearthquakes, is given, only very few events, namely the largest ones, will contribute at all to the estimation of m_max, and this results in unacceptably high uncertainties. As a consequence, estimators of m_max in a fault zone that are based solely on earthquake-related information from this region have to be dismissed.
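To make the statistical issue concrete, the following sketch (illustrative only, not the method used in these studies) computes a frequentist upper confidence bound for m_max under an assumed truncated Gutenberg-Richter magnitude distribution with known b-value and completeness magnitude m0. Only the sample size and the largest observed magnitude enter the bound, which illustrates why so few events carry information about m_max:

```python
import numpy as np

def gr_cdf(m, m0, mmax, b_value):
    """CDF of a Gutenberg-Richter law truncated to [m0, mmax]."""
    return (1.0 - 10.0 ** (-b_value * (m - m0))) / (1.0 - 10.0 ** (-b_value * (mmax - m0)))

def mmax_upper_bound(mags, m0, b_value, alpha=0.10):
    """Largest mmax with P(sample maximum <= observed maximum) >= alpha,
    i.e. a (1 - alpha) upper confidence bound. Only n and max(mags) enter."""
    n, mn = len(mags), max(mags)
    lo, hi = mn + 1e-6, mn + 10.0        # search cap at mn + 10
    for _ in range(200):                 # bisection on a monotone function
        mid = 0.5 * (lo + hi)
        if gr_cdf(mn, m0, mid, b_value) ** n >= alpha:
            lo = mid                     # bound lies above mid
        else:
            hi = mid
    return lo
```

If the sample is too small, the bisection runs into its search cap, signalling an effectively unbounded estimate, which mirrors the uncertainty problem discussed above.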
Estimability in Cox models
(2016)
Our estimation procedure is the maximum partial likelihood estimator (MPLE), which is the appropriate estimator in the Cox model with a general censoring distribution, covariates, and an unknown baseline hazard rate. We derive conditions for estimability and asymptotic estimability. The asymptotic variance matrix of the MPLE is represented and its properties are discussed.
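As an illustration of the MPLE (a minimal sketch, far narrower than the paper's setting: a single covariate and no tied event times), the partial log-likelihood is concave and can be maximized by Newton's method:

```python
import numpy as np

def cox_mple(times, events, x, iters=50):
    """Maximum partial likelihood estimate for a single-covariate Cox model,
    assuming no tied event times; `events` is 1 for an event, 0 for censoring."""
    order = np.argsort(times)
    t = np.asarray(times, float)[order]
    d = np.asarray(events)[order]
    z = np.asarray(x, float)[order]
    beta = 0.0
    for _ in range(iters):               # Newton's method on the concave
        score, info = 0.0, 0.0           # partial log-likelihood
        for i in range(len(t)):
            if d[i]:
                risk = z[i:]             # subjects still at risk at t[i]
                w = np.exp(beta * risk)
                zbar = np.sum(w * risk) / np.sum(w)
                zvar = np.sum(w * risk ** 2) / np.sum(w) - zbar ** 2
                score += z[i] - zbar     # score contribution of event i
                info += zvar             # observed information contribution
        if info == 0.0:
            break
        beta += score / info
    return beta
```

For data in which subjects with the larger covariate value tend to fail earlier, the estimate is positive, as expected for a hazard-increasing covariate.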
We describe a natural construction of deformation quantization on a compact symplectic manifold with boundary. On the algebra of quantum observables, a trace functional is defined which, as usual, annihilates commutators. This gives rise to an index as the trace of the unity element. We formulate the index theorem as a conjecture and examine it for the classical harmonic oscillator.
Low Earth orbiting geomagnetic satellite missions, such as the Swarm satellite mission, are the only means to monitor and investigate ionospheric currents on a global scale and to make in situ measurements of F region currents. High-precision geomagnetic satellite missions are also able to detect ionospheric currents during quiet-time geomagnetic conditions that have amplitudes of only a few nanotesla in the magnetic field. An efficient method to isolate the ionospheric signals from satellite magnetic field measurements has been the use of residuals between the observations and predictions from empirical geomagnetic models for other geomagnetic sources, such as the core and lithospheric field or signals from the quiet-time magnetospheric currents. This study aims at highlighting the importance of high-resolution magnetic field models that are able to predict the lithospheric field and that consider the quiet-time magnetosphere for reliably isolating signatures from ionospheric currents during geomagnetically quiet times. The effects on the detection of ionospheric currents arising from neglecting the lithospheric and magnetospheric sources are discussed using the example of four Swarm orbits during very quiet times. The respective orbits show a broad range of typical scenarios, such as strong and weak ionospheric signal (during day- and nighttime, respectively) superimposed on strong and weak lithospheric signals. If predictions from the lithosphere or magnetosphere are not properly considered, the amplitude of the ionospheric currents, such as the midlatitude Sq currents or the equatorial electrojet (EEJ), is modulated by 10-15 % in the examples shown. An analysis of several orbits above the African sector, where the lithospheric field is significant, showed that the peak value of the signatures of the EEJ is in error by 5 % on average when lithospheric contributions are not considered, which is in the range of uncertainties of present empirical models of the EEJ.
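The residual approach, and the size of the effect of neglecting a source, can be illustrated with a toy along-track profile. All numbers below are synthetic and chosen purely for illustration (real corrections use empirical field models, not analytic shapes):

```python
import numpy as np

# Synthetic along-track magnetic signal (nT) as a function of latitude,
# composed of an EEJ-like ionospheric dip plus a shorter-wavelength
# lithospheric anomaly. Purely illustrative amplitudes.
lat = np.linspace(-30.0, 30.0, 601)
b_iono = -40.0 * np.exp(-(lat / 4.0) ** 2)      # EEJ-like dip, peak -40 nT
b_lith = 5.0 * np.sin(2.0 * np.pi * lat / 7.0)  # lithospheric wiggle, 5 nT
b_obs = b_iono + b_lith                         # what the satellite measures

peak_full = np.min(b_obs - b_lith)   # lithosphere removed: true EEJ peak
peak_raw = np.min(b_obs)             # lithosphere neglected
error_pct = abs(peak_raw - peak_full) / abs(peak_full) * 100.0
```

In this toy profile, ignoring the 5 nT lithospheric wiggle already shifts the apparent EEJ peak by a few percent; over regions with stronger lithospheric anomalies, as in the African sector discussed above, the modulation grows accordingly.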
This survey on the theme of Geometry Education (including new technologies) focuses chiefly on the time span since 2008. Based on our review of the research literature published during this time span (in refereed journal articles, conference proceedings and edited books), we have jointly identified seven major threads of contributions that span from the early years of learning (pre-school and primary school) through to post-compulsory education and to the issue of mathematics teacher education for geometry. These threads are as follows: developments and trends in the use of theories; advances in the understanding of visuospatial reasoning; the use and role of diagrams and gestures; advances in the understanding of the role of digital technologies; advances in the understanding of the teaching and learning of definitions; advances in the understanding of the teaching and learning of the proving process; and moving beyond traditional Euclidean approaches. Within each theme, we identify relevant research and also offer commentary on future directions.
The paper deals with Σ-composition and Σ-essential composition of terms, which lead to stable and s-stable varieties of algebras. A full description of all stable varieties of semigroups, commutative and idempotent groupoids is obtained. We use an abstract reduction system, which simplifies the presentation of terms of type τ = (2), to study the variety of idempotent groupoids and s-stable varieties of groupoids. S-stable varieties are a variation of stable varieties, used to highlight the replacement of subterms of a term in a deductive system instead of the usual replacement of variables by terms.
We use a dynamic scanning electron microscope (DySEM) to map the spatial distribution of the vibration of a cantilever beam. The DySEM measurements are based on variations of the local secondary electron signal within the imaging electron beam diameter during an oscillation period of the cantilever. For this reason, the surface of a cantilever without topography or material variation does not allow any conclusions about the spatial distribution of vibration, due to a lack of dynamic contrast. In order to overcome this limitation, artificial structures were added at defined positions on the cantilever surface using focused ion beam lithography patterning. The DySEM signal of such high-contrast structures is strongly improved; hence, information about the surface vibration becomes accessible. Simulations of images of the vibrating cantilever have also been performed. The results of the simulation are in good agreement with the experimental images.
This work is devoted to the convergence analysis of a modified Runge-Kutta-type iterative regularization method for solving nonlinear ill-posed problems under a priori and a posteriori stopping rules. Convergence rates for the proposed method can be obtained under a Hölder-type sourcewise condition if the Fréchet derivative is properly scaled and locally Lipschitz continuous. Numerical results are achieved by using the Levenberg-Marquardt, Lobatto, and Radau methods.
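One of the methods used in the numerical experiments, the Levenberg-Marquardt iteration, can be sketched in its textbook form together with the a posteriori stopping rule (discrepancy principle). This is a generic sketch, not the paper's modified Runge-Kutta scheme:

```python
import numpy as np

def levenberg_marquardt(F, J, y_delta, x0, lam=1.0, delta=1e-3, tau=1.5, max_iter=100):
    """Levenberg-Marquardt iteration for F(x) = y_delta with a posteriori
    stopping by the discrepancy principle: stop once ||F(x) - y|| <= tau * delta,
    where delta bounds the data noise and tau > 1."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        r = F(x) - y_delta
        if np.linalg.norm(r) <= tau * delta:
            break                        # discrepancy principle satisfied
        Jx = J(x)
        # damped Gauss-Newton step: (J^T J + lam I) s = -J^T r
        step = np.linalg.solve(Jx.T @ Jx + lam * np.eye(len(x)), -Jx.T @ r)
        x = x + step
    return x
```

Stopping as soon as the residual reaches the noise level prevents the iteration from fitting the noise, which is the essential regularizing effect for ill-posed problems.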
For point processes we establish a link between integration-by-parts and splitting formulas, which can also be considered integration-by-parts formulas of a new type. First we characterize finite Papangelou processes in terms of their splitting kernels. The main part then consists in extending these results to the case of infinitely extended Papangelou and, in particular, Pólya and Gibbs processes.
We present a summary of the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with accuracy better than 0.005-0.1 or +/- 50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used to test how well microphysical parameters can be derived if the real part of the complex refractive index is known to at least 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types.
We also tested aerosol scenarios that are considered highly unlikely, e.g. cases in which the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test the robustness of the algorithms with respect to their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of aerosol physics. We computed the optical data from monomodal logarithmic particle size distributions, i.e. we explicitly excluded the more complicated case of bimodal particle size distributions, which is a topic of ongoing research work. Another constraint is that we only considered particles of spherical shape in our simulations. We considered particle radii as large as 7-10 μm in our simulations, where the Potsdam algorithm is limited to the lower value. We considered optical-data errors of 15% in the simulation studies. We target 50% uncertainty as a reasonable threshold for our data products, though we attempt to obtain data products with less uncertainty in future work.
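Two of the optical quantities mentioned above are directly computable from the measured coefficients. A small sketch (generic textbook formulas, not tied to the EARLINET inversion algorithms):

```python
import numpy as np

def angstrom_exponent(tau1, tau2, lam1, lam2):
    """Angstrom exponent from an optical quantity (e.g. backscatter or
    extinction coefficient) measured at two wavelengths lam1, lam2 (nm)."""
    return -np.log(tau1 / tau2) / np.log(lam1 / lam2)

def lidar_ratio(extinction, backscatter):
    """Extinction-to-backscatter (lidar) ratio, in sr, at one wavelength."""
    return extinction / backscatter
```

For a wavelength dependence tau ~ lam**(-1) between 355 and 532 nm, the Angstrom exponent is exactly 1, which is a quick sanity check for an implementation.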
The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation into account in a mathematically well-controlled manner. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008-2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0.
The present paper is intended to provide the basis for the study of weakly differentiable functions on rectifiable varifolds with locally bounded first variation. The concept proposed here is defined by means of integration-by-parts identities for certain compositions with smooth functions. In this class, the idea of zero boundary values is realised using the relative perimeter of superlevel sets. Results include a variety of Sobolev-Poincaré-type embeddings, embeddings into spaces of continuous and sometimes Hölder-continuous functions, and pointwise differentiability results both of approximate and integral type, as well as coarea formulae. As a prerequisite for this study, decomposition properties of such varifolds and a relative isoperimetric inequality are established. Both involve a concept of distributional boundary of a set introduced for this purpose. As applications, the finiteness of the geodesic distance associated with varifolds with suitable summability of the mean curvature and a characterisation of curvature varifolds are obtained.
This paper introduces first-order Sobolev spaces on certain rectifiable varifolds. These complete locally convex spaces are contained in the generally non-linear class of generalised weakly differentiable functions and share key functional analytic properties with their Euclidean counterparts. Assuming the varifold to satisfy a uniform lower density bound and a dimensionally critical summability condition on its mean curvature, the following statements hold. Firstly, continuous and compact embeddings of Sobolev spaces into Lebesgue spaces and spaces of continuous functions are available. Secondly, the geodesic distance associated to the varifold is a continuous, not necessarily Hölder continuous Sobolev function with bounded derivative. Thirdly, if the varifold additionally has bounded mean curvature and finite measure, then the present Sobolev spaces are isomorphic to those previously available for finite Radon measures, yielding many new results for those classes as well. Suitable versions of the embedding results obtained for Sobolev functions hold in the larger class of generalised weakly differentiable functions.
A manifold M with smooth edge Y is locally near Y modelled on X^∆ × Ω for a cone X^∆ := (R̄_+ × X)/({0} × X), where X is a smooth manifold and Ω ⊂ R^q an open set corresponding to a chart on Y. Compared with pseudo-differential algebras based on other quantizations of edge-degenerate symbols, we extend the approach with Mellin representations on the r half-axis up to r = ∞, the conical exit of X^∧ = R_+ × X ∋ (r, x) at infinity. The alternative description of the edge calculus is useful for pseudo-differential structures on manifolds with higher singularities.
Let A be a nonlinear differential operator on an open set X ⊂ R^n and S a closed subset of X. Given a class F of functions in X, the set S is said to be removable for F relative to A if any weak solution of A(u) = 0 in X \ S of class F satisfies this equation weakly in all of X. For the most extensively studied classes F, we show conditions on S which guarantee that S is removable for F relative to A.
Using a global symbol calculus for pseudodifferential operators on tori, we build a canonical trace on classical pseudodifferential operators on noncommutative tori in terms of a canonical discrete sum on the underlying toroidal symbols. We characterise the canonical trace on operators on the noncommutative torus as well as its underlying canonical discrete sum on symbols of fixed (resp. any) noninteger order. On the grounds of this uniqueness result, we prove that in the commutative setup, this canonical trace on the noncommutative torus reduces to Kontsevich and Vishik's canonical trace, which is thereby identified with a discrete sum. A similar characterisation for the noncommutative residue on noncommutative tori as the unique trace which vanishes on trace-class operators generalises Fathizadeh and Wong's characterisation insofar as it includes the case of operators of fixed integer order. By means of the canonical trace, we derive defect formulae for regularized traces. The conformal invariance of the ζ-function at zero of the Laplacian on the noncommutative torus is then a straightforward consequence.
Using Causal Effect Networks to Analyze Different Arctic Drivers of Midlatitude Winter Circulation
(2016)
In recent years, the Northern Hemisphere midlatitudes have suffered from severe winters like the extreme 2012/13 winter in the eastern United States. These cold spells were linked to a meandering upper-tropospheric jet stream pattern and a negative Arctic Oscillation (AO) index. However, the nature of the drivers behind these circulation patterns remains controversial. Various studies have proposed different mechanisms related to changes in the Arctic, most of them related to a reduction in sea ice concentrations or increasing Eurasian snow cover. Here, a novel type of time series analysis, called causal effect networks (CEN), based on graphical models is introduced to assess causal relationships and their time delays between different processes. The effect of different Arctic actors on winter circulation on weekly to monthly time scales is studied, and robust network patterns are found. Barents and Kara sea ice concentrations are detected to be important external drivers of the midlatitude circulation, influencing the winter AO via tropospheric mechanisms and through processes involving the stratosphere. Eurasian snow cover is also detected to have a causal effect on sea level pressure in Asia, but its exact role in the AO remains unclear. The CEN approach presented in this study overcomes some difficulties in interpreting correlation analyses, complements model experiments for testing hypotheses involving teleconnections, and can be used to assess their validity. The findings confirm that sea ice concentrations in autumn in the Barents and Kara Seas are an important driver of winter circulation in the midlatitudes.
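A minimal ingredient of a CEN-style analysis is a lagged partial correlation that controls for the target's own persistence. The sketch below is illustrative and far simpler than the graphical-model machinery used in the study; it tests whether x at lag tau adds information about y beyond y's own history:

```python
import numpy as np

def lagged_partial_corr(x, y, tau):
    """Partial correlation of x[t - tau] and y[t], conditioning on y[t - 1],
    i.e. association of the lagged driver with the target beyond the
    target's own autocorrelation. x and y are equal-length 1-D arrays."""
    t0 = max(tau, 1)
    n = len(y)
    yt = y[t0:]                      # target y[t]
    xlag = x[t0 - tau:n - tau]       # driver x[t - tau]
    cond = y[t0 - 1:n - 1]           # conditioning variable y[t - 1]

    def residual(v, c):
        # residual of v after linear regression on (1, c)
        design = np.column_stack([np.ones_like(c), c])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta

    rx = residual(xlag, cond)
    ry = residual(yt, cond)
    return np.corrcoef(rx, ry)[0, 1]
```

For a process in which y is driven by x at lag 2 on top of its own persistence, the partial correlation at lag 2 is large while the one at lag 1 stays near zero, which is the kind of lag-resolved causal fingerprint the CEN method builds its networks from.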