### Refine

#### Year of publication

- 2020 (13)
- 2019 (40)
- 2018 (7)
- 2017 (27)
- 2016 (70)
- 2015 (63)
- 2014 (53)
- 2013 (53)
- 2012 (60)
- 2011 (37)
- 2010 (35)
- 2009 (33)
- 2008 (17)
- 2007 (15)
- 2006 (56)
- 2005 (76)
- 2004 (59)
- 2003 (50)
- 2002 (51)
- 2001 (74)
- 2000 (71)
- 1999 (96)
- 1998 (104)
- 1997 (104)
- 1996 (73)
- 1995 (100)
- 1994 (73)
- 1993 (16)
- 1992 (18)
- 1991 (4)

#### Document Type

- Article (826)
- Monograph/edited volume (423)
- Doctoral Thesis (132)
- Preprint (95)
- Other (38)
- Review (13)
- Postprint (9)
- Conference Proceeding (7)
- Part of a Book (3)
- Master's Thesis (3)

#### Is part of the Bibliography

- yes (1549)

#### Keywords

- cluster expansion (6)
- reciprocal class (6)
- Cauchy problem (5)
- Fredholm property (5)
- index (5)
- Earthquake interaction (4)
- Elliptic complexes (4)
- Pseudo-differential operators (4)
- Statistical seismology (4)
- Toeplitz operators (4)

#### Institute

- Institut für Mathematik (1549)

Entdeckendes Lernen
(2017)

Despite the demonstrable popularity of discovery learning ("Entdeckendes Lernen") in German-language mathematics education, there are currently no critical contributions that could help to question and sharpen this fundamental teaching concept. This discussion paper first works out the theory and some implementation examples of discovery learning in order to show that discovery learning amounts to a vague umbrella term under which questionable learning environments are often legitimized. Subsequently, problems of discovery learning in mathematics teaching, and possibilities for overcoming them, are discussed on the basis of epistemological, learning-theoretical, didactic and sociocultural considerations. It emerges that the conception of discovery learning falls behind the current state of research in mathematics education and confronts teachers and students with impossible demands; that the learning-theoretical advantages of discovery learning are often not verifiable; that the idea of discovery rests on a problematic Platonist understanding of knowledge; and that discovery learning threatens to disadvantage students from educationally disadvantaged backgrounds. Finally, research desiderata are derived whose treatment could help to overcome the problem areas identified.

High-precision observations of the present-day geomagnetic field by ground-based observatories and satellites provide unprecedented conditions for unveiling the dynamics of the Earth’s core. Combining geomagnetic observations with dynamo simulations in a data assimilation (DA) framework allows the reconstruction of past and present states of the internal core dynamics. The essential information that couples the internal state to the observations is provided by the statistical correlations from a numerical dynamo model in the form of a model covariance matrix. Here we test a sequential DA framework, working through a succession of forecast and analysis steps, that extracts the correlations from an ensemble of dynamo models. The primary correlations couple variables of the same azimuthal wave number, reflecting the predominant axial symmetry of the magnetic field. Synthetic tests show that the scheme becomes unstable when confronted with high-precision geomagnetic observations. Our study has identified spurious secondary correlations as the origin of the problem. Keeping only the primary correlations by localizing the covariance matrix with respect to the azimuthal wave number suffices to stabilize the assimilation. While the first analysis step is fundamental in constraining the large-scale interior state, further assimilation steps refine the smaller and more dynamical scales. This refinement turns out to be critical for long-term geomagnetic predictions. Increasing the number of assimilation steps from one to 18 roughly doubles the prediction horizon for the dipole, from about three to six centuries, and from 30 to about 60 years for smaller observable scales. This improvement is also reflected in the predictability of surface intensity features such as the South Atlantic Anomaly. Intensity prediction errors are roughly halved when assimilating long observation sequences.
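
The wave-number localization described above can be sketched in a few lines of NumPy. This is a hypothetical toy (the state layout, names, and sizes are invented, not taken from the study): an ensemble covariance is estimated and then masked so that only entries coupling state variables with the same azimuthal wave number m survive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state vector: coefficients indexed by their azimuthal wave number m.
m_index = np.array([0, 0, 1, 1, 2, 2, 3, 3])
n_state, n_ens = len(m_index), 50

# Ensemble of model states (here just random numbers for illustration).
ensemble = rng.standard_normal((n_state, n_ens))

# Sample covariance estimated from the ensemble.
anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
cov = anomalies @ anomalies.T / (n_ens - 1)

# Localization: keep only correlations between equal wave numbers m.
mask = (m_index[:, None] == m_index[None, :]).astype(float)
cov_localized = cov * mask
```

Multiplying by the 0/1 mask removes exactly the spurious cross-wave-number correlations that the abstract identifies as the source of the instability.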

The success of the ensemble Kalman filter has triggered a strong interest in expanding its scope beyond classical state estimation problems. In this paper, we focus on continuous-time data assimilation where the model and measurement errors are correlated and both states and parameters need to be identified. Such scenarios arise from noisy and partial observations of Lagrangian particles which move under a stochastic velocity field involving unknown parameters. We take an appropriate class of McKean-Vlasov equations as the starting point to derive ensemble Kalman-Bucy filter algorithms for combined state and parameter estimation. We demonstrate their performance through a series of increasingly complex multi-scale model systems.
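
A minimal sketch of a continuous-time ensemble Kalman-Bucy update for combined state and parameter estimation, in the spirit described above. The scalar model, all names, and all parameter values are hypothetical; this is not the paper's McKean-Vlasov derivation, only the basic augmented-state idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: dx/dt = -theta * x, observed continuously with noise intensity r.
theta_true, dt, n_steps, n_ens = 1.0, 0.01, 500, 100
r = 0.1

x_true = 1.0
# Augmented ensemble: row 0 holds states x, row 1 holds parameter samples theta.
ens = np.stack([np.full(n_ens, 1.0), 0.5 + 0.5 * rng.standard_normal(n_ens)])

for _ in range(n_steps):
    # Propagate truth and ensemble (the parameter theta is constant in time).
    x_true += -theta_true * x_true * dt
    ens[0] += -ens[1] * ens[0] * dt

    # Observed increment dz = x dt + sqrt(r) dW.
    dz = x_true * dt + np.sqrt(r * dt) * rng.standard_normal()

    # Deterministic ensemble Kalman-Bucy update for observation h(x) = x.
    mean = ens.mean(axis=1, keepdims=True)
    anom = ens - mean
    cov_xh = anom @ anom[0] / (n_ens - 1)        # covariance with observed part
    innovation = dz - 0.5 * (ens[0] + mean[0]) * dt
    ens += np.outer(cov_xh / r, innovation)

theta_hat = ens[1].mean()
```

The unknown parameter is simply appended to the state vector; the ensemble cross-covariance between the parameter and the observed component is what drives the parameter update.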

A term, also called a tree, is said to be linear, if each variable occurs in the term only once. The linear terms and sets of linear terms, the so-called linear tree languages, play some role in automata theory and in the theory of formal languages in connection with recognizability. We define a partial superposition operation on sets of linear trees of a given type and study the properties of some many-sorted partial clones that have sets of linear trees as elements and partial superposition operations as fundamental operations. The endomorphisms of those algebras correspond to nondeterministic linear hypersubstitutions.

An efficient immunosurveillance of CD8(+) T cells in the periphery depends on positive/negative selection of thymocytes and thus on the dynamics of antigen degradation and epitope production by thymoproteasome and immunoproteasome in the thymus. Although studies in mouse systems have shown how thymoproteasome activity differs from that of immunoproteasome and strongly impacts the T cell repertoire, the proteolytic dynamics and the regulation of human thymoproteasome are unknown. By combining biochemical and computational modeling approaches, we show here that human 20S thymoproteasome and immunoproteasome differ not only in the proteolytic activity of the catalytic sites but also in peptide transport. These differences impinge upon the quantity of peptide products rather than where the substrates are cleaved. The comparison of the two human 20S proteasome isoforms depicts different processing of antigens that are associated with tumors and autoimmune diseases.

We study corner-degenerate pseudo-differential operators of any singularity order and develop ellipticity based on the principal symbolic hierarchy, associated with the stratification of the underlying space. We construct parametrices within the calculus and discuss the aspect of additional trace and potential conditions along lower-dimensional strata.

For n∈N , let Xn={a1,a2,…,an} be an n-element set and let F=(Xn;<f) be a fence, also called a zigzag poset. As usual, we denote by In the symmetric inverse semigroup on Xn. We say that a transformation α∈In is fence-preserving if x<fy implies that xα<fyα, for all x,y in the domain of α. In this paper, we study the semigroup PFIn of all partial fence-preserving injections of Xn and its subsemigroup IFn={α∈PFIn:α−1∈PFIn}. Clearly, IFn is an inverse semigroup and contains all regular elements of PFIn. We characterize the Green’s relations for the semigroup IFn. Further, we prove that the semigroup IFn is generated by its elements with rank ≥n−2. Moreover, for n∈2N, we find the least generating set and calculate the rank of IFn.

Fractures serve as highly conductive preferential flow paths for fluids in rocks, and they are difficult to reconstruct exactly in numerical models. Especially in low-conductivity rocks, fractures are often the only pathways for advection of solutes and heat. The presented study compares the results of hydraulic and tracer tomography applied to invert a theoretical discrete fracture network (DFN) based on data from synthetic cross-well testing. For hydraulic tomography, pressure pulses in various injection intervals are induced and the pressure responses in the monitoring intervals of a nearby observation well are recorded. For tracer tomography, a conservative tracer is injected at different well levels and the depth-dependent breakthrough of the tracer is monitored. A recently introduced transdimensional Bayesian inversion procedure is applied for both tomographic methods, which adjusts the fracture positions, orientations, and numbers based on given geometrical fracture statistics. The Metropolis-Hastings-Green algorithm used is refined by the simultaneous estimation of the variance of the measurement error, that is, the measurement noise. Based on the presented application to invert the two-dimensional cross-section between the source and the receiver well, hydraulic tomography proves more suitable for reconstructing the original DFN. This assessment is based on a probabilistic representation of the inverted results by means of fracture probabilities.
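
The refinement mentioned above, sampling the measurement noise level alongside the model parameters, can be illustrated with an ordinary (non-transdimensional) Metropolis-Hastings sketch. The toy model and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: a constant signal plus Gaussian noise of unknown variance.
theta_true, sigma_true = 2.0, 0.5
y = theta_true + sigma_true * rng.standard_normal(200)

def log_posterior(theta, log_sigma):
    """Gaussian log-likelihood with flat priors on theta and log_sigma."""
    sigma = np.exp(log_sigma)
    return -len(y) * log_sigma - 0.5 * np.sum((y - theta) ** 2) / sigma**2

theta, log_sigma = 0.0, 0.0
samples = []
for _ in range(5000):
    # Random-walk proposal on the joint (theta, log_sigma) space.
    th_p = theta + 0.1 * rng.standard_normal()
    ls_p = log_sigma + 0.1 * rng.standard_normal()
    # Metropolis accept/reject step.
    if np.log(rng.uniform()) < log_posterior(th_p, ls_p) - log_posterior(theta, log_sigma):
        theta, log_sigma = th_p, ls_p
    samples.append((theta, log_sigma))

theta_hat = np.mean([s[0] for s in samples[1000:]])
```

The noise standard deviation is sampled on a log scale, so the random-walk proposal can never make it negative.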

We study the mathematical structure underlying the concept of locality which lies at the heart of classical and quantum field theory, and develop a machinery used to preserve locality during the renormalisation procedure. Viewing renormalisation in the framework of Connes and Kreimer as the algebraic Birkhoff factorisation of characters on a Hopf algebra with values in a Rota-Baxter algebra, we build locality variants of these algebraic structures, leading to a locality variant of the algebraic Birkhoff factorisation. This provides an algebraic formulation of the conservation of locality while renormalising. As an application in the context of the Euler-Maclaurin formula on lattice cones, we renormalise the exponential generating function which sums over the lattice points in a lattice cone. As a consequence, for a suitable multivariate regularisation, renormalisation from the algebraic Birkhoff factorisation amounts to composition by a projection onto holomorphic multivariate germs.

A term t is linear if no variable occurs more than once in t. An identity s ≈ t is said to be linear if s and t are linear terms. Identities are particular formulas. As for terms, superposition operations can be defined for formulas, too. We define arbitrary linear formulas and seek a condition for the set of all linear formulas to be closed under superposition. This is used to define partial superposition operations on the set of linear formulas and a partial many-sorted algebra Formclonelin(τ, τ′). This algebra has properties similar to those of the partial many-sorted clone of all linear terms. We extend the concept of a hypersubstitution of type τ to linear hypersubstitutions of type (τ, τ′) for algebraic systems. The extensions of linear hypersubstitutions of type (τ, τ′) send linear formulas to linear formulas, presenting weak endomorphisms of Formclonelin(τ, τ′).

Europa Universalis IV
(2020)

Particle filters contain the promise of fully nonlinear data assimilation. They have been applied in numerous science areas, including the geosciences, but their application to high-dimensional geoscience systems has been limited due to their inefficiency in high-dimensional systems in standard settings. However, huge progress has been made, and this limitation is disappearing fast due to recent developments in proposal densities, the use of ideas from (optimal) transportation, the use of localization and intelligent adaptive resampling strategies. Furthermore, powerful hybrids between particle filters and ensemble Kalman filters and variational methods have been developed. We present a state-of-the-art discussion of present efforts of developing particle filters for high-dimensional nonlinear geoscience state-estimation problems, with an emphasis on atmospheric and oceanic applications, including many new ideas, derivations and unifications, highlighting hidden connections, including pseudo-code, and generating a valuable tool and guide for the community. Initial experiments show that particle filters can be competitive with present-day methods for numerical weather prediction, suggesting that they will become mainstream soon.
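
A minimal bootstrap particle filter with adaptive resampling, the baseline on which the developments surveyed above build. The toy random-walk model and all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

n_particles, n_steps = 500, 30
x_true = 0.0
particles = rng.standard_normal(n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(n_steps):
    # Truth evolves as a random walk; we see a noisy observation of it.
    x_true += 0.1 * rng.standard_normal()
    y = x_true + 0.5 * rng.standard_normal()

    # Propagate each particle through the (same) stochastic model.
    particles += 0.1 * rng.standard_normal(n_particles)

    # Reweight by the observation likelihood and renormalize.
    weights *= np.exp(-0.5 * (y - particles) ** 2 / 0.5**2)
    weights /= weights.sum()

    # Resample only when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

x_est = np.sum(weights * particles)
```

Resampling only when the effective sample size drops is one of the standard remedies for weight degeneracy; the localization and proposal-density ideas discussed above address the same degeneracy in high dimensions.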

On a smooth complete Riemannian spin manifold with smooth compact boundary, we demonstrate that the Atiyah-Singer Dirac operator depends Riesz continuously on perturbations of local boundary conditions. The Lipschitz bound for the map depends on the Lipschitz smoothness and ellipticity of the boundary conditions and on bounds on the Ricci curvature and its first derivatives, as well as a lower bound on the injectivity radius away from a compact neighbourhood of the boundary. More generally, we prove perturbation estimates for functional calculi of elliptic operators on manifolds with local boundary conditions.

This thesis aims at presenting in an organized fashion the required basics to understand the Glauber dynamics as a way of simulating configurations according to the Gibbs distribution of the Curie-Weiss Potts model. Therefore, essential aspects of discrete-time Markov chains on a finite state space are examined, especially their convergence behavior and related mixing times. Furthermore, special emphasis is placed on a consistent and comprehensive presentation of the Curie-Weiss Potts model and its analysis. Finally, the Glauber dynamics is studied in general and applied afterwards in an exemplary way to the Curie-Weiss model as well as the Curie-Weiss Potts model. The associated considerations are supplemented with two computer simulations aiming to show the cutoff phenomenon and the temperature dependence of the convergence behavior.
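
A minimal sketch of the Glauber (heat-bath) dynamics for the Curie-Weiss model, assuming the mean-field Hamiltonian H(s) = -(1/2n)(Σ_i s_i)²; the system size and temperature below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

n, beta = 100, 0.5        # system size and inverse temperature (hypothetical)
spins = rng.choice([-1, 1], size=n)

def glauber_step(spins):
    """One heat-bath (Glauber) update for the Curie-Weiss model."""
    i = rng.integers(n)
    # Mean-field "local field": average of all the other spins.
    h = (spins.sum() - spins[i]) / n
    # Heat-bath probability of setting spin i to +1.
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    spins[i] = 1 if rng.uniform() < p_plus else -1
    return spins

for _ in range(10_000):
    spins = glauber_step(spins)

magnetization = spins.mean()
```

Tracking the magnetization along such runs at different values of beta is how the temperature dependence of the convergence behavior mentioned above can be visualized.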

In 1960, Yamabe claimed to have proven the following statement: on every compact Riemannian manifold (M,g) of dimension n ≥ 3 there exists a metric, conformally equivalent to g, with constant scalar curvature. This statement is equivalent to the existence of a solution of a certain semilinear elliptic differential equation, the Yamabe equation. In 1968, Trudinger found an error in the proof, and as a consequence many mathematicians worked on this problem, which became known as the Yamabe problem. In the 1980s, the work of Trudinger, Aubin and Schoen showed that the statement is indeed true. This has many benefits; for example, when analyzing conformally invariant partial differential equations on compact Riemannian manifolds, the scalar curvature may be assumed to be constant.
The question then arises whether the corresponding statement also holds on Lorentzian manifolds. The Lorentzian Yamabe problem thus reads: given a spatially compact globally hyperbolic Lorentzian manifold (M,g), does there exist a metric, conformally equivalent to g, with constant scalar curvature? The goal of this thesis is to investigate this problem.
The Yamabe equation arising from this question is a semilinear wave equation whose solution is a positive smooth function from which the conformal factor is obtained. To keep the prerequisites for treating the Yamabe problem as general as possible, the first part of this thesis develops the local existence theory for arbitrary semilinear wave equations for sections of vector bundles in the setting of a Cauchy problem. For this, the inverse function theorem for Banach spaces is applied, so that existence results for semilinear wave equations can be derived from known existence results for linear wave equations. It is proved that, if the nonlinearity satisfies certain conditions, an almost global-in-time solution of the Cauchy problem exists for small initial data, as well as a local-in-time solution for arbitrary initial data.
The second part of the thesis deals with the Yamabe equation on globally hyperbolic Lorentzian manifolds. First it is shown that the nonlinearity of the Yamabe equation satisfies the conditions required in the first part, so that, if the scalar curvature of the given metric is close to a constant, small initial data exist for which the Yamabe equation has an almost global-in-time solution. Using energy estimates, it is then shown for 4-dimensional globally hyperbolic Lorentzian manifolds that, under the assumption that the constant scalar curvature of the conformally equivalent metric is nonpositive, a global-in-time solution of the Yamabe equation exists, which, however, is not necessarily positive. Moreover, it is shown that if the H2-norm of the scalar curvature with respect to the given metric is bounded in a certain way on a compact time interval, the solution is positive on this time interval; here, too, the constant scalar curvature of the conformally equivalent metric is assumed to be nonpositive. If, in addition, the scalar curvature of the given metric is negative and the metric satisfies certain conditions, then the solution is positive for all times in a compact time interval on which the gradient of the scalar curvature is bounded in a certain way. In both cases, under the stated conditions, the existence of a global-in-time positive solution follows if M = I x Σ for a bounded open interval I. Finally, for M = R x Σ, an example of the nonexistence of a global positive solution is given.

In this paper, we present the convergence rate analysis of the modified Landweber method under logarithmic source conditions for nonlinear ill-posed problems. The regularization parameter is chosen according to the discrepancy principle. Reconstructions of the shape of an unknown domain for an inverse potential problem using the modified Landweber method are exhibited.
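
The classical (linear) Landweber iteration with the discrepancy principle can be sketched as follows; the paper's modified nonlinear variant and logarithmic source conditions are beyond this toy, and the operator and noise level here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear ill-posed problem A x = y_delta (an integration-like operator).
n = 50
A = np.tril(np.ones((n, n))) / n
x_exact = np.sin(np.linspace(0, np.pi, n))
delta = 1e-3                                  # assumed noise level per component
y = A @ x_exact + delta * rng.standard_normal(n)

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # step size ensuring convergence
tau = 2.0                                     # discrepancy-principle constant
for _ in range(100_000):
    residual = A @ x - y
    # Discrepancy principle: stop once the residual hits the noise level.
    if np.linalg.norm(residual) <= tau * delta * np.sqrt(n):
        break
    x -= step * A.T @ residual
```

The iteration is stopped as soon as the residual reaches the noise level times a constant τ > 1, which is the role of the discrepancy principle in the abstract.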

Tomographic Reservoir Imaging with DNA-Labeled Silica Nanotracers: The First Field Validation
(2018)

This study presents the first field validation of using DNA-labeled silica nanoparticles as tracers to image subsurface reservoirs by travel time based tomography. During a field campaign in Switzerland, we performed short-pulse tracer tests under a forced hydraulic head gradient to conduct a multisource-multireceiver tracer test and tomographic inversion, determining the two-dimensional hydraulic conductivity field between two vertical wells. Together with three traditional solute dye tracers, we injected spherical silica nanotracers, encoded with synthetic DNA molecules, which are protected by a silica layer against damage due to chemicals, microorganisms, and enzymes. Temporal moment analyses of the recorded tracer concentration breakthrough curves (BTCs) indicate higher mass recovery, less mean residence time, and smaller dispersion of the DNA-labeled nanotracers, compared to solute dye tracers. Importantly, travel time based tomography, using nanotracer BTCs, yields a satisfactory hydraulic conductivity tomogram, validated by the dye tracer results and previous field investigations. These advantages of DNA-labeled nanotracers, in comparison to traditional solute dye tracers, make them well-suited for tomographic reservoir characterizations in fields such as hydrogeology, petroleum engineering, and geothermal energy, particularly with respect to resolving preferential flow paths or the heterogeneity of contact surfaces or by enabling source zone characterizations of dense nonaqueous phase liquids.

Low thermal conductivity boulder with high porosity identified on C-type asteroid (162173) Ryugu
(2019)

C-type asteroids are among the most pristine objects in the Solar System, but little is known about their interior structure and surface properties. Telescopic thermal infrared observations have so far been interpreted in terms of a regolith-covered surface with low thermal conductivity and particle sizes in the centimetre range. This includes observations of C-type asteroid (162173) Ryugu [1-3]. However, on arrival of the Hayabusa2 spacecraft at Ryugu, a regolith cover of sand- to pebble-sized particles was found to be absent [4,5] (R.J. et al., manuscript in preparation). Rather, the surface is largely covered by cobbles and boulders, seemingly incompatible with the remote-sensing infrared observations. Here we report on in situ thermal infrared observations of a boulder on the C-type asteroid Ryugu. We found that the boulder’s thermal inertia was much lower than anticipated based on laboratory measurements of meteorites, and that a surface covered by such low-conductivity boulders would be consistent with remote-sensing observations. Our results furthermore indicate high boulder porosities as well as a low tensile strength in the few hundred kilopascal range. The predicted low tensile strength confirms the suspected observational bias [6] in our meteorite collections, as such asteroidal material would be too frail to survive atmospheric entry [7].

This paper concerns the problem of predicting the maximum expected earthquake magnitude μ in a future time interval Tf, given a catalog covering a time period T in the past. Different studies show the divergence of the confidence interval of the maximum possible earthquake magnitude m_max for high levels of confidence (Salamat et al. 2017); therefore, m_max is better replaced by μ (Holschneider et al. 2011). In a previous study (Salamat et al. 2018), μ was estimated for an instrumental earthquake catalog of Iran from 1900 onwards with a constant level of completeness (m0 = 5.5). In the current study, the Bayesian methodology developed by Zöller et al. (2014, 2015) is applied to predict μ based on a catalog consisting of both historical and instrumental parts. The catalog is first subdivided into six subcatalogs corresponding to six seismotectonic zones, and each zone catalog is subsequently subdivided according to changes in completeness level and magnitude uncertainty; broad and narrow error distributions are used for historical and instrumental earthquakes, respectively. We assume that earthquakes follow a Poisson process in time and the Gutenberg-Richter law in the magnitude domain, with a priori unknown a- and b-values that are first estimated by Bayes' theorem and subsequently used to estimate μ. Imposing different values of m_max for the seismotectonic zones Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh and Makran, the results show considerable probabilities for the occurrence of earthquakes with Mw ≥ 7.5 for short Tf, whereas for long Tf, μ is almost equal to m_max.
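
The final step, turning estimated Gutenberg-Richter parameters into occurrence probabilities, can be sketched under the Poisson-in-time assumption stated above. All parameter values below are hypothetical placeholders, not the study's Bayesian estimates:

```python
import math

# Truncated Gutenberg-Richter magnitude law with Poisson occurrence in time.
a_rate = 2.0      # rate of events per year with magnitude >= m0 (hypothetical)
m0 = 5.5          # completeness magnitude of the catalog
b = 1.0           # Gutenberg-Richter b-value
m_max = 8.5       # assumed maximum possible magnitude

def rate_above(m):
    """Annual rate of events with magnitude >= m (truncated G-R law)."""
    if m >= m_max:
        return 0.0
    num = 10.0 ** (-b * (m - m0)) - 10.0 ** (-b * (m_max - m0))
    den = 1.0 - 10.0 ** (-b * (m_max - m0))
    return a_rate * num / den

def prob_exceed(m, t_f):
    """Probability of at least one event with magnitude >= m within t_f years."""
    return 1.0 - math.exp(-rate_above(m) * t_f)
```

For example, prob_exceed(7.5, 30) gives the chance of at least one Mw ≥ 7.5 event in 30 years under these toy parameters.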

This paper presents a scalable E-band radar platform based on single-channel fully integrated transceivers (TRX) manufactured in a 130-nm silicon-germanium (SiGe) BiCMOS technology. The TRX is suitable for flexible radar systems exploiting massive multiple-input multiple-output (MIMO) techniques for multidimensional sensing. A fully integrated fractional-N phase-locked loop (PLL) comprising a 39.5-GHz voltage-controlled oscillator is used to generate a wideband frequency-modulated continuous-wave (FMCW) chirp for E-band radar front ends. The TRX is equipped with a vector modulator (VM) for high-speed carrier modulation and beam-forming techniques. A single TRX achieves 19.2-dBm maximum output power and 27.5-dB total conversion gain with an input-referred 1-dB compression point of -10 dBm. It consumes 220 mA from a 3.3-V supply and occupies 3.96 mm² of silicon area. A two-channel radar platform based on the full-custom TRXs and PLL was fabricated to demonstrate high-precision and high-resolution FMCW sensing. The radar enables up to 10-GHz frequency ramp generation in the 74-84-GHz range, which results in 1.5-cm spatial resolution. Due to the high output power, and thus high signal-to-noise ratio (SNR), a ranging precision of 7.5 µm for a target at 2 m was achieved. The proposed architecture supports scalable multichannel applications for automotive FMCW radar using a single local oscillator (LO).

A zig-zag (or fence) order is a special partial order on a (finite) set. In this paper, we consider the semigroup TFn of all order-preserving transformations on an n-element zig-zag-ordered set. We determine the rank of TFn and provide a minimal generating set for TFn. Moreover, a formula for the number of idempotents in TFn is given.

We prove a version of the Hopf-Rinow theorem with respect to path metrics on discrete spaces. The novel aspect is that we do not a priori assume local finiteness but isolate a local finiteness type condition, called essentially locally finite, that is indeed necessary. As a side product we identify the maximal weight, called the geodesic weight, generating the path metric in the situation when the space is complete with respect to any of the equivalent notions of completeness proven in the Hopf-Rinow theorem. As an application we characterize the graphs for which the resistance metric is a path metric induced by the graph structure.
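
A path metric of the kind discussed above is the infimum of summed edge weights over paths; on a locally finite weighted graph it can be computed with Dijkstra's algorithm. A small sketch (the graph and weights are hypothetical):

```python
import heapq

def path_metric(graph, source):
    """Distances of the path metric induced by positive edge weights.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# A small locally finite example graph.
graph = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("a", 1.0), ("c", 1.5)],
    "c": [("a", 4.0), ("b", 1.5)],
}
d = path_metric(graph, "a")
```

Here the two-edge path a-b-c (length 2.5) beats the direct edge (length 4.0), illustrating the infimum-over-paths character of the metric.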

The Willmore functional is a function that maps an immersed Riemannian manifold to its total mean curvature. Finding closed surfaces that minimize the Willmore energy, or more generally finding critical surfaces, is a classic problem of differential geometry.
In this thesis we will develop the concept of generalized Willmore functionals for surfaces in Riemannian manifolds. We are guided by models in mathematical physics, such as the Hawking energy of general relativity and the bending energies for thin membranes.
We prove the existence of minimizers under area constraint for these generalized Willmore functionals in a suitable class of generalized surfaces. In particular, we construct minimizers of the bending energy mentioned above for prescribed area and enclosed volume.
Furthermore, we prove that critical surfaces of generalized Willmore functionals with prescribed area are smooth, away from finitely many points. These results and the following are based on the existing theory for the Willmore functional.
This general discussion is succeeded by a detailed analysis of the Hawking energy. In the context of general relativity the surrounding manifold describes the space at a given time, hence we strive to understand the interplay between the Hawking energy and the ambient space. We characterize points in the surrounding manifold for which there are small critical spheres with prescribed area in any neighborhood. These points are interpreted as concentration points of the Hawking energy.
Additionally, we calculate an expansion of the Hawking energy on small, round spheres. This allows us to identify a kind of energy density of the Hawking energy.
It needs to be mentioned that our results stand in contrast to previous expansions of the Hawking energy. However, these expansions are obtained on spheres along the light cone at a given point. At this point it is not clear how to explain the discrepancy.
Finally, we consider asymptotically Schwarzschild manifolds. They are a special case of asymptotically flat manifolds, which serve as models for isolated systems. The Schwarzschild spacetime itself is a classical solution to the Einstein equations and yields a simple description of a black hole.
In these asymptotically Schwarzschild manifolds we construct a foliation of the exterior region by critical spheres of the Hawking energy with prescribed large area. This foliation can be seen as a generalized notion of the center of mass of the isolated system. Additionally, the Hawking energy grows along the foliation as the area of the surfaces grows.

We study elements of the calculus of boundary value problems in a variant of Boutet de Monvel’s algebra (Acta Math 126:11–51, 1971) on a manifold N with edge and boundary. If the boundary is empty, then the approach corresponds to Schulze (Symposium on partial differential equations (Holzhau, 1988), BSB Teubner, Leipzig, 1989) and other papers from the subsequent development. For non-trivial boundary we study Mellin-edge quantizations and compositions within the structure in terms of a new Mellin-edge quantization, compared with a more traditional technique. Similar structures in the closed case have been studied in Gil et al.

The majority of earthquakes occur unexpectedly and can trigger subsequent sequences of events that can culminate in more powerful earthquakes. This self-exciting nature of seismicity generates complex clustering of earthquakes in space and time. Therefore, the problem of constraining the magnitude of the largest expected earthquake during a future time interval is of critical importance in mitigating earthquake hazard. We address this problem by developing a methodology to compute the probabilities for such extreme earthquakes to be above certain magnitudes. We combine the Bayesian methods with the extreme value theory and assume that the occurrence of earthquakes can be described by the Epidemic Type Aftershock Sequence process. We analyze in detail the application of this methodology to the 2016 Kumamoto, Japan, earthquake sequence. We are able to estimate retrospectively the probabilities of having large subsequent earthquakes during several stages of the evolution of this sequence.

We show that the Dirac operator on a compact globally hyperbolic Lorentzian spacetime with spacelike Cauchy boundary is a Fredholm operator if appropriate boundary conditions are imposed. We prove that the index of this operator is given by the same expression as in the index formula of Atiyah-Patodi-Singer for Riemannian manifolds with boundary. The index is also shown to equal that of a certain operator constructed from the evolution operator and a spectral projection on the boundary. In case the metric is of product type near the boundary a Feynman parametrix is constructed.

Tasking machine learning to predict segments of a time series requires estimating the parameters of an ML model with input/output pairs from the time series. We borrow two techniques used in statistical data assimilation to accomplish this task: time-delay embedding to prepare the input data and precision annealing as a training method. The precision annealing approach identifies the global minimum of the action (−log[P]). In this way, we are able to identify the number of training pairs required to produce good generalizations (predictions) for the time series. We proceed from a scalar time series s(t_n), t_n = t_0 + nΔt, and, using methods of nonlinear time series analysis, show how to produce a D_E > 1-dimensional time-delay embedding space in which the time series has no false neighbors, unlike the observed s(t_n) series. In that D_E-dimensional space, we explore the use of feedforward multilayer perceptrons as network models operating on D_E-dimensional inputs and producing D_E-dimensional outputs.
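
Time-delay embedding as used above can be sketched directly. The lag and dimension below are hypothetical; in practice the embedding dimension is chosen by a false-nearest-neighbors test:

```python
import numpy as np

def delay_embed(s, dim, tau):
    """Time-delay embedding: rows are the vectors
    [s(t_n), s(t_{n+tau}), ..., s(t_{n+(dim-1)*tau})]."""
    n = len(s) - (dim - 1) * tau
    return np.stack([s[i * tau : i * tau + n] for i in range(dim)], axis=1)

# Example: embed a sampled sine wave in 3 dimensions with lag 2.
t = np.linspace(0, 8 * np.pi, 400)
s = np.sin(t)
X = delay_embed(s, dim=3, tau=2)
```

Each row of X is one embedding-space vector of the kind fed to the multilayer perceptrons described above.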

We generalise disagreement percolation to Gibbs point processes of balls with varying radii. This allows us to establish the uniqueness of the Gibbs measure and exponential decay of pair correlations in the low-activity regime by comparison with a sub-critical Boolean model. Applications to the Continuum Random Cluster model and the Quermass-interaction model are presented. At the core of our proof lies an explicit dependent thinning from a Poisson point process to a dominated Gibbs point process.
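To illustrate the general idea of a dependent thinning, the sketch below samples a dominating Poisson process on the unit square and keeps a point only if it respects a hard-core distance to previously kept points. This is a generic textbook-style illustration, not the paper's explicit construction for balls with varying radii.

```python
import math
import random

def dependent_thinning(lam, radius, seed=0):
    """Sample a dominating Poisson process on [0,1]^2, then thin it
    dependently: keep a point only if it lies at distance >= radius
    from every previously kept point (a hard-core interaction).

    Generic illustration of thinning a Poisson point process toward a
    dominated point process; not the paper's specific construction.
    """
    rng = random.Random(seed)
    # Poisson(lam) number of proposal points (Knuth's method).
    n, p, limit = 0, 1.0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p <= limit:
            break
        n += 1
    kept = []
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if all(math.dist((x, y), q) >= radius for q in kept):
            kept.append((x, y))
    return kept
```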

We obtain a Bernstein-type inequality for sums of Banach-valued random variables satisfying a weak dependence assumption of general type and under certain smoothness assumptions on the underlying Banach norm. We use this inequality to investigate, in the asymptotic regime, upper bounds on the error of the broad family of spectral regularization methods for reproducing kernel decision rules when trained on a sample coming from a τ-mixing process.

We study the spectral location of strongly pattern-equivariant Hamiltonians arising through configurations on a colored lattice. Roughly speaking, two configurations are "close to each other" if, up to a translation, they "almost coincide" on a large fixed ball. The larger this ball, the more similar they are, and this induces a metric on the space of the corresponding dynamical systems. Our main result states that the map which sends a given configuration to the spectrum of its associated Hamiltonian is Hölder (even Lipschitz) continuous in the usual Hausdorff metric. Specifically, the spectral distance of two Hamiltonians is estimated by the distance of the corresponding dynamical systems.

Probabilistic integration of a continuous dynamical system is a way of systematically introducing discretisation error, at scales no larger than errors introduced by standard numerical discretisation, in order to enable thorough exploration of possible responses of the system to inputs. It is thus a potentially useful approach in a number of applications such as forward uncertainty quantification, inverse problems, and data assimilation. We extend the convergence analysis of probabilistic integrators for deterministic ordinary differential equations, as proposed by Conrad et al. (Stat Comput 27(4):1065-1082, 2017), to establish mean-square convergence in the uniform norm on discrete- or continuous-time solutions under relaxed regularity assumptions on the driving vector fields and their induced flows. Specifically, we show that randomised high-order integrators for globally Lipschitz flows and randomised Euler integrators for dissipative vector fields with polynomially bounded local Lipschitz constants all have the same mean-square convergence rate as their deterministic counterparts, provided that the variance of the integration noise is not of higher order than the corresponding deterministic integrator. These and similar results are proven for probabilistic integrators where the random perturbations may be state-dependent, non-Gaussian, or non-centred random variables.
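A randomised Euler scheme of the kind analysed here can be sketched in a few lines: each deterministic Euler step is followed by a centred Gaussian perturbation whose variance scales like h^3, so the noise is not of higher order than Euler's local error. This is a scalar toy sketch of the idea from Conrad et al., with `f`, `x0`, and `sigma` as placeholder inputs.

```python
import math
import random

def probabilistic_euler(f, x0, t0, t1, n, sigma=1.0, seed=0):
    """Randomised Euler integrator for x' = f(t, x): after each
    deterministic Euler step, add a centred Gaussian perturbation with
    standard deviation sigma * h**1.5 (variance of order h^3).

    Minimal scalar sketch of the class of integrators analysed above;
    setting sigma=0 recovers the deterministic Euler method.
    """
    rng = random.Random(seed)
    h = (t1 - t0) / n
    x, t = x0, t0
    for _ in range(n):
        x = x + h * f(t, x) + sigma * h ** 1.5 * rng.gauss(0.0, 1.0)
        t += h
    return x
```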

We construct eta- and rho-invariants for Dirac operators, on the universal covering of a closed manifold, that are invariant under the projective action associated to a 2-cocycle of the fundamental group. We prove an Atiyah-Patodi-Singer index theorem in this setting, as well as its higher generalisation. Applications concern the classification of positive scalar curvature metrics on closed spin manifolds. We also investigate the properties of these twisted invariants for the signature operator and the relation to the higher invariants.

We present new conditions for semigroups of positive operators to converge strongly as time tends to infinity. Our proofs are based on a novel approach combining the well-known splitting theorem by Jacobs, de Leeuw, and Glicksberg with a purely algebraic result about positive group representations. Thus, we obtain convergence theorems not only for one-parameter semigroups but also for a much larger class of semigroup representations. Our results allow for a unified treatment of various theorems from the literature stating that, under technical assumptions, a bounded positive C_0-semigroup containing or dominating a kernel operator converges strongly as t → ∞. We gain new insights into the structure-theoretical background of those theorems and generalize them in several respects; especially, we drop any kind of continuity or regularity assumption with respect to the time parameter.

We prove the Fréchet differentiability with respect to the drift of Perron–Frobenius and Koopman operators associated to time-inhomogeneous ordinary stochastic differential equations. This result relies on a similar differentiability result for pathwise expectations of path functionals of the solution of the stochastic differential equation, which we establish using Girsanov's formula. We demonstrate the significance of our result in the context of dynamical systems and operator theory, by proving continuously differentiable drift dependence of the simple eigen- and singular values and the corresponding eigen- and singular functions of the stochastic Perron–Frobenius and Koopman operators.

Our first result concerns a characterization by means of a functional equation of Poisson point processes conditioned by the value of their first moment. It leads to a generalized version of Mecke's formula. En passant, it also allows us to gain quantitative results about stochastic domination for Poisson point processes under linear constraints. Since bridges of a pure jump Lévy process in R^d with height a can be interpreted as a Poisson point process on space–time conditioned by pinning its first moment to a, our approach allows us to characterize bridges of Lévy processes by means of a functional equation. The latter result has two direct applications: First, we obtain a constructive and simple way to sample Lévy bridge dynamics; second, it allows us to estimate the number of jumps for such bridges. We finally show that our method remains valid for linearly perturbed Lévy processes like periodic Ornstein–Uhlenbeck processes driven by Lévy noise.

The accepted idea that there exists an inherent finite-time barrier in deterministically predicting atmospheric flows originates from Edward N. Lorenz’s 1969 work based on two-dimensional (2D) turbulence. Yet, known analytic results on the 2D Navier–Stokes (N-S) equations suggest that one can skillfully predict the 2D N-S system indefinitely far ahead should the initial-condition error become sufficiently small, thereby presenting a potential conflict with Lorenz’s theory. Aided by numerical simulations, the present work reexamines Lorenz’s model and reviews both sides of the argument, paying particular attention to the roles played by the slope of the kinetic energy spectrum. It is found that when this slope is shallower than −3, the Lipschitz continuity of analytic solutions (with respect to initial conditions) breaks down as the model resolution increases, unless the viscous range of the real system is resolved—which remains practically impossible. This breakdown leads to the inherent finite-time limit. If, on the other hand, the spectral slope is steeper than −3, then the breakdown does not occur. In this way, the apparent contradiction between the analytic results and Lorenz’s theory is reconciled.

We develop a technique for the multivariate data analysis of perturbed self-sustained oscillators. The approach is based on the reconstruction of the phase dynamics model from observations and on a subsequent exploration of this model. For the system, driven by several inputs, we suggest a dynamical disentanglement procedure, allowing us to reconstruct the variability of the system's output that is due to a particular observed input, or, alternatively, to reconstruct the variability which is caused by all the inputs except for the observed one. We focus on the application of the method to the vagal component of the heart rate variability caused by a respiratory influence. We develop an algorithm that extracts purely respiratory-related variability, using a respiratory trace and times of R-peaks in the electrocardiogram. The algorithm can be applied to other systems where the observed bivariate data can be represented as a point process and a slow continuous signal, e.g. for the analysis of neuronal spiking. This article is part of the theme issue 'Coupling functions: dynamical interaction mechanisms in the physical, biological and social sciences'.

This thesis is concerned with Data Assimilation, the process of combining model predictions with observations. So-called filters are of special interest. One is interested in computing the probability distribution of the state of a physical process in the future, given (possibly) imperfect measurements. This is done using Bayes' rule. The first part focuses on hybrid filters, which bridge between the two main groups of filters: ensemble Kalman filters (EnKF) and particle filters. The former are a group of very stable and computationally cheap algorithms, but they rest on certain strong assumptions. Particle filters, on the other hand, are more generally applicable but computationally expensive and as such not always suitable for high-dimensional systems. There is therefore a need to combine both groups in order to benefit from the advantages of each. This can be achieved by splitting the likelihood function when assimilating a new observation and treating one part of it with an EnKF and the other part with a particle filter.
The second part of this thesis deals with the application of Data Assimilation to multi-scale models and the problems that arise from that. One of the main areas of application for Data Assimilation techniques is predicting the development of oceans and the atmosphere. These processes involve several scales and often balance relations between the state variables. The use of Data Assimilation procedures most often violates relations of that kind, which eventually leads to unrealistic and non-physical predictions of the future development of the process. This work discusses the inclusion of a post-processing step after each assimilation step, in which a minimisation problem penalising the imbalance is solved. This method is tested on four different models: two Hamiltonian systems and two spatially extended models, which add even more difficulties.
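The likelihood-splitting idea can be illustrated on the particle-filter side: a Gaussian likelihood L(x) is factored as L(x)^α · L(x)^(1−α), with one factor assigned to the EnKF and the other to the particle weights. The toy scalar sketch below computes only the particle-weight factor, assumes an identity observation map, and is not the thesis' full hybrid algorithm.

```python
import math

def split_likelihood_weights(particles, y, obs_var, alpha):
    """Weight scalar particles with the fraction (1 - alpha) of a
    Gaussian likelihood, i.e. L(x)**(1 - alpha); the remaining factor
    L(x)**alpha would be assimilated by an EnKF in a hybrid scheme.

    Toy illustration with an identity observation operator; alpha=1
    leaves uniform weights (everything handled by the EnKF part).
    """
    logw = [-(1.0 - alpha) * (y - x) ** 2 / (2.0 * obs_var)
            for x in particles]
    m = max(logw)                      # subtract max for stability
    w = [math.exp(l - m) for l in logw]
    s = sum(w)
    return [wi / s for wi in w]
```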

The XI international conference Stochastic and Analytic Methods in Mathematical Physics was held in Yerevan, 2–7 September 2019, and was dedicated to the memory of the great mathematician Robert Adol'fovich Minlos, who passed away in January 2018.
The present volume collects a large majority of the contributions presented at the conference on the following domains of contemporary interest: classical and quantum statistical physics, mathematical methods in quantum mechanics, stochastic analysis, applications of point processes in statistical mechanics. The authors are specialists from Armenia, Czech Republic, Denmark, France, Germany, Italy, Japan, Lithuania, Russia, UK and Uzbekistan.
A particular aim of this volume is to offer young scientists basic material in order to inspire their future research in the wide fields presented here.

Hypersubstitutions are mappings which map operation symbols to terms. Terms can be visualized by trees. Hypersubstitutions can be extended to mappings defined on sets of trees. The nodes of the trees, describing terms, are labelled by operation symbols and by colors, i.e. certain positive integers. We are interested in mappings which map differently-colored operation symbols to different terms. In this paper we extend the theory of hypersubstitutions and solid varieties to multi-hypersubstitutions and colored solid varieties. We develop the interconnections between such colored terms and multihypersubstitutions and the equational theory of Universal Algebra. The collection of all varieties of a given type forms a complete lattice which is very complex and difficult to study; multi-hypersubstitutions and colored solid varieties offer a new method to study complete sublattices of this lattice.

The efficient time integration of the dynamic core equations for numerical weather prediction (NWP) remains a key challenge. One of the most popular methods is currently provided by implementations of the semi-implicit semi-Lagrangian (SISL) method, originally proposed by Robert (J. Meteorol. Soc. Jpn., 1982). Practical implementations of the SISL method are, however, not without certain shortcomings with regard to accuracy, conservation properties and stability. Based on recent work by Gottwald, Frank and Reich (LNCSE, Springer, 2002), Frank, Reich, Staniforth, White and Wood (Atm. Sci. Lett., 2005) and Wood, Staniforth and Reich (Atm. Sci. Lett., 2006) we propose an alternative semi-Lagrangian implementation based on a set of regularized equations and the popular Störmer-Verlet time stepping method in the context of the shallow-water equations (SWEs). Ultimately, the goal is to develop practical implementations for the 3D Euler equations that overcome some or all shortcomings of current SISL implementations.
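The Störmer-Verlet stepping named above is, in its simplest form, a kick-drift-kick scheme for a separable Hamiltonian system. The sketch below is a generic one-degree-of-freedom version, not the regularized shallow-water implementation of the paper.

```python
def stormer_verlet(grad_V, q, p, h, steps):
    """Störmer-Verlet (leapfrog) time stepping for the separable system
    q' = p, p' = -grad_V(q), written in kick-drift-kick form.

    Generic single-degree-of-freedom sketch of the time stepper; the
    paper applies it to regularized shallow-water equations.
    """
    for _ in range(steps):
        p = p - 0.5 * h * grad_V(q)   # half kick
        q = q + h * p                 # full drift
        p = p - 0.5 * h * grad_V(q)   # half kick
    return q, p
```

For the harmonic oscillator (grad_V(q) = q) the scheme conserves energy up to an O(h^2) oscillation, which is the symplecticity property that motivates its use here.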

In this study we present iterative regularization methods using rational approximations, in particular Padé approximants, which work well for ill-posed problems. We prove that the (k,j)-Padé method is a convergent and order-optimal iterative regularization method when using the discrepancy principle of Morozov. Furthermore, we present a hybrid Padé method, compare it with other well-known methods and find that it is faster than the Landweber method. It is worth mentioning that this study is a completion of the paper [A. Kirsche, C. Böckmann, Rational approximations for ill-conditioned equation systems, Appl. Math. Comput. 171 (2005) 385-397], where this method was treated to solve ill-conditioned equation systems.
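For reference, the classical Landweber iteration used as the comparison baseline above is x_{k+1} = x_k + ω Aᵀ(b − A x_k), convergent for 0 < ω < 2/‖A‖². A dependency-free sketch for a small dense system (the Padé-based methods themselves are not reproduced here):

```python
def landweber(A, b, omega, iters):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k),
    the classical baseline method; converges for 0 < omega < 2/||A||^2.

    Plain-list linear algebra on a small dense system, for
    illustration only (no regularizing stopping rule is applied).
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # gradient step x += omega * A^T r
        for j in range(n):
            x[j] += omega * sum(A[i][j] * r[i] for i in range(m))
    return x
```

In the ill-posed setting the iteration count itself acts as the regularization parameter (stopping via Morozov's discrepancy principle), which is precisely where the faster hybrid Padé method gains its advantage.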