The homotopy classification and the index of boundary value problems for general elliptic operators
(1999)
We give the homotopy classification and compute the index of boundary value problems for elliptic equations. The classical case of operators that satisfy the Atiyah-Bott condition is studied first. We also consider the general case of boundary value problems for operators that do not necessarily satisfy the Atiyah-Bott condition.
The paper contains the proof of the index formula for manifolds with conical points. For operators subject to an additional condition of spectral symmetry, the index is expressed as the sum of the multiplicities of spectral points of the conormal symbol (indicial family) and the integral of the Atiyah-Singer form over the smooth part of the manifold. The formula is illustrated by the example of the Euler operator on a two-dimensional manifold with a conical singular point.
We construct a theory of general boundary value problems for differential operators whose symbols do not necessarily satisfy the Atiyah-Bott condition [3] of vanishing of the corresponding obstruction. A condition for these problems to be Fredholm is introduced and the corresponding finiteness theorems are proved.
On a compact closed manifold with edges there live pseudodifferential operators which are block matrices of operators with additional edge conditions, analogous to boundary conditions in boundary value problems. They include Green, trace and potential operators along the edges, act in a kind of Sobolev spaces, and form an algebra with a rich symbolic structure. We consider complexes of Fréchet spaces whose differentials are given by operators in this algebra. Since the algebra in question is a microlocalization of the Lie algebra of typical vector fields on a manifold with edges, such complexes are of great geometric interest. In particular, the de Rham and Dolbeault complexes on manifolds with edges fit into this framework. To each complex there correspond two sequences of symbols: one controls the interior ellipticity while the other controls the ellipticity at the edges. The elliptic complexes prove to be Fredholm, i.e., they have finite-dimensional cohomology. Using specific tools in the algebra of pseudodifferential operators we develop a Hodge theory for elliptic complexes and outline a few applications thereof.
We describe a new algebra of boundary value problems which contains Lopatinskii elliptic as well as Toeplitz type conditions. The latter are necessary if an analogue of the Atiyah-Bott obstruction does not vanish. Every elliptic operator is proved to admit, up to stabilisation, elliptic conditions of this kind. The corresponding boundary value problems are then Fredholm in adequate scales of spaces. The crucial novelty is the new type of weighted Sobolev spaces which serve as domains of the pseudodifferential operators and which fit well to the nature of the operators.
We consider a homogeneous pseudodifferential equation on a cylinder C = ℝ × X over a smooth compact closed manifold X whose symbol extends to a meromorphic function on the complex plane with values in the algebra of pseudodifferential operators over X. Assuming the symbol to be independent of the variable t ∈ ℝ, we derive an explicit formula for solutions of the equation. Namely, to each non-bijectivity point of the symbol in the complex plane there corresponds a finite-dimensional space of solutions, every solution being the residue of a meromorphic form manufactured from the inverse symbol. In particular, for differential equations we recover Euler's theorem on exponential solutions. Our setting is a model for the analysis on manifolds with conical points, since C can be thought of as a 'stretched' manifold with conical points at t = −∞ and t = +∞.
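A hedged worked instance of the residue construction in the simplest constant-coefficient case (our notation, not taken from the paper): for an ordinary differential operator P(d/dt) whose symbol P(p) has a simple zero at p₀, the residue of the inverse symbol applied to e^{pt} is an exponential solution, which is exactly Euler's theorem.

```latex
% Illustration only: constant-coefficient model case on the line.
% Suppose the symbol P(p) has a simple zero at p = p_0. Then
\[
u(t) \;=\; \operatorname*{res}_{p=p_0} \frac{e^{pt}}{P(p)}
     \;=\; \frac{e^{p_0 t}}{P'(p_0)},
\qquad
P\!\left(\tfrac{d}{dt}\right)u(t)
     \;=\; \operatorname*{res}_{p=p_0} e^{pt} \;=\; 0,
\]
% since e^{pt} is entire in p. A zero of multiplicity m yields the solutions
% t^j e^{p_0 t}, j = 0, ..., m-1, i.e. Euler's exponential solutions.
```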
The aim of this book is to develop the Lefschetz fixed point theory for elliptic complexes of pseudodifferential operators on manifolds with edges. The general Lefschetz theory contains the index theory as a special case, while the case studied here is much easier than the index problem. The main topics are: the calculus of pseudodifferential operators on manifolds with edges, especially symbol structures (interior as well as edge symbols); the concept of ellipticity, parametrix constructions, elliptic regularity in Sobolev spaces; Hodge theory for elliptic complexes of pseudodifferential operators on manifolds with edges; development of the algebraic constructions for these complexes, such as homotopy, tensor products, duality; a generalization of the fixed point formula of Atiyah and Bott for the case of simple fixed points; development of the fixed point formula also in the case of non-simple fixed points, provided that the complex consists of differential operators only; and investigation of geometric complexes (such as, for instance, the de Rham complex and the Dolbeault complex). Results in this direction are desirable for both purely mathematical reasons and applications in the natural sciences.
Green operators on manifolds with edges are known to be an ingredient of parametrices of elliptic (edge-degenerate) operators. They play a similar role as corresponding operators in boundary value problems. Close to edge singularities the Green operators have a very complex asymptotic behaviour. We give a new characterisation of Green edge symbols in terms of kernels with discrete and continuous asymptotics in the axial variable of local model cones.
The ellipticity of boundary value problems on a smooth manifold with boundary relies on a two-component principal symbolic structure (σψ, σ∂), consisting of interior and boundary symbols. In the case of a smooth edge on manifolds with boundary we have a third symbolic component, namely the edge symbol σ∧, referring to extra conditions on the edge, analogous to boundary conditions. Apart from such conditions 'in integral form' there may exist singular trace conditions, investigated in [6] (Kapanadze et al., Integral Equations and Operator Theory 61, 241-279, 2008) on 'closed' manifolds with edge. Here we concentrate on the phenomena in combination with boundary conditions and edge problems.
We establish a quantisation of corner-degenerate symbols, here called Mellin-edge quantisation, on a manifold with second order singularities. The typical ingredients come from the "most singular" stratum, a second order edge where the infinite transversal cone has a base that is itself a manifold with smooth edge. The resulting operator-valued amplitude functions on the second order edge are formulated purely in terms of Mellin symbols taking values in the edge algebra over that base. In this respect our result is formally analogous to a quantisation rule of (Osaka J. Math. 37:221-260, 2000) for the simpler case of edge-degenerate symbols, which corresponds to singularity order 1. However, from singularity order 2 on there appear new substantial difficulties for the first time, partly caused by the edge singularities of the transversal cone that tend to infinity.
Process-oriented theories of cognition must be evaluated against time-ordered observations. Here we present a representative example for data assimilation of the SWIFT model, a dynamical model of the control of fixation positions and fixation durations during natural reading of single sentences. First, we develop and test an approximate likelihood function of the model, which is a combination of a spatial, pseudo-marginal likelihood and a temporal likelihood obtained by probability density approximation. Second, we implement a Bayesian approach to parameter inference using an adaptive Markov chain Monte Carlo procedure. Our results indicate that model parameters can be estimated reliably for individual subjects. We conclude that approximate Bayesian inference represents a considerable step forward for computational models of eye-movement control, where modeling of individual data on the basis of process-based dynamic models has not been possible so far.
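As an illustration of the kind of adaptive Markov chain Monte Carlo procedure mentioned above, here is a minimal Haario-style adaptive Metropolis sketch; `log_likelihood` is a hypothetical stand-in for the approximate SWIFT likelihood, not the authors' implementation.

```python
import numpy as np

# Minimal adaptive Metropolis sketch (Haario-style): the proposal covariance
# is adapted from the sample history of the chain itself.
def adaptive_metropolis(log_likelihood, theta0, n_iter=5000, adapt_start=500):
    d = len(theta0)
    chain = np.empty((n_iter, d))
    theta, logp = np.asarray(theta0, float), log_likelihood(theta0)
    cov = 0.1 * np.eye(d)                              # initial proposal covariance
    for k in range(n_iter):
        prop = np.random.multivariate_normal(theta, cov)
        logp_prop = log_likelihood(prop)
        if np.log(np.random.rand()) < logp_prop - logp:  # Metropolis accept/reject
            theta, logp = prop, logp_prop
        chain[k] = theta
        if k >= adapt_start:                           # adapt from chain history
            cov = 2.38**2 / d * np.cov(chain[:k + 1].T) + 1e-8 * np.eye(d)
    return chain
```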
In this paper, we define a variant of Roe algebras for spaces with cylindrical ends and use this to study questions regarding existence and classification of metrics of positive scalar curvature on such manifolds which are collared on the cylindrical end. We discuss how our constructions are related to relative higher index theory as developed by Chang, Weinberger, and Yu and use this relationship to define higher rho-invariants for positive scalar curvature metrics on manifolds with boundary. This paves the way for the classification of these metrics. Finally, we use the machinery developed here to give a concise proof of a result of Schick and the author, which relates the relative higher index with indices defined in the presence of positive scalar curvature on the boundary.
Mental arithmetic is characterised by a tendency to overestimate addition and to underestimate subtraction results: the operational momentum (OM) effect. Here, motivated by contentious explanations of this effect, we developed and tested an arithmetic heuristics and biases model that predicts reverse OM due to cognitive anchoring effects. Participants produced bi-directional lines with lengths corresponding to the results of arithmetic problems. In two experiments, we found regular OM with zero problems (e.g., 3+0, 3-0) but reverse OM with non-zero problems (e.g., 2+1, 4-1). In a third experiment, we tested the prediction of our model. Our results suggest the presence of at least three competing biases in mental arithmetic: a more-or-less heuristic, a sign-space association and an anchoring bias. We conclude that mental arithmetic exhibits shortcuts for decision-making similar to traditional domains of reasoning and problem-solving.
The Gutenberg-Richter (GR) and the Omori-Utsu (OU) law describe the earthquakes' energy release and temporal clustering and are thus of great importance for seismic hazard assessment. Motivated by experimental results, which indicate stress-dependent parameters, we consider a combined global data set of 127 main shock-aftershock sequences and perform a systematic study of the relationship between main shock-induced stress changes and associated seismicity patterns. For this purpose, we calculate space-dependent Coulomb stress changes (ΔCFS) and alternative receiver-independent stress metrics in the surrounding of the main shocks. Our results indicate a clear positive correlation between the GR b-value and the induced stress, contrasting expectations from laboratory experiments and suggesting a crucial role of structural heterogeneity and strength variations. Furthermore, we demonstrate that the aftershock productivity increases nonlinearly with stress, while the OU parameters c and p systematically decrease for increasing stress changes. Our partly unexpected findings can have an important impact on future estimations of the aftershock hazard.
The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting spatial distributions of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress change. The uncertainties mainly arise due to nonunique slip inversions and unknown receiver faults; especially for the latter, results are highly dependent on the choice of the assumed receiver mechanism. Based on binary tests (aftershocks yes/no), recent studies suggest that alternative stress quantities, a distance-slip probabilistic model as well as deep neural network (DNN) approaches, all are superior to CFS with predefined receiver mechanism. To challenge this conclusion, which might have large implications, we use 289 slip inversions from SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. We also analyze the effect of the magnitude cutoff, grid size variation, and aftershock duration to verify the use of receiver operating characteristic (ROC) analysis for the ranking of stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps and ROC curves. However, results significantly improve for larger aftershocks and shorter time periods but without changing the ranking. We also go beyond binary testing and apply alternative statistics to test the ability to estimate aftershock numbers, which confirm that simple stress metrics perform better than the classic Coulomb failure stress calculations and are also better than the distance-slip probabilistic model.
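For orientation, a minimal sketch of the ROC machinery used to rank stress metrics (our own illustration with hypothetical variable names, not the paper's code): cells with observed aftershocks carry label 1, cells without carry label 0, and each cell is scored by a stress metric.

```python
import numpy as np

# Sweep the decision threshold from high to low and accumulate true/false
# positive rates; the area under the resulting curve ranks the metrics.
def roc_curve(scores, labels):
    order = np.argsort(-np.asarray(scores))           # descending threshold sweep
    labels = np.asarray(labels)[order]
    tpr = np.cumsum(labels) / labels.sum()            # true positive rate
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()  # false positive rate
    return fpr, tpr

def auc(fpr, tpr):
    return np.trapz(tpr, fpr)                         # area under the ROC curve
```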
The majority of earthquakes occur unexpectedly and can trigger subsequent sequences of events that can culminate in more powerful earthquakes. This self-exciting nature of seismicity generates complex clustering of earthquakes in space and time. Therefore, the problem of constraining the magnitude of the largest expected earthquake during a future time interval is of critical importance in mitigating earthquake hazard. We address this problem by developing a methodology to compute the probabilities for such extreme earthquakes to be above certain magnitudes. We combine the Bayesian methods with the extreme value theory and assume that the occurrence of earthquakes can be described by the Epidemic Type Aftershock Sequence process. We analyze in detail the application of this methodology to the 2016 Kumamoto, Japan, earthquake sequence. We are able to estimate retrospectively the probabilities of having large subsequent earthquakes during several stages of the evolution of this sequence.
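A simplified version of the kind of extreme-event probability involved, under a plain Poisson occurrence model with Gutenberg-Richter magnitudes (a sketch only; the paper's Bayesian ETAS treatment is more elaborate):

```latex
% With expected number \Lambda(T, T+\Delta T) of events above a completeness
% magnitude m_c, and magnitudes following the Gutenberg-Richter law,
\[
P\bigl(M_{\max} > m \;\bigm|\; [T,\, T+\Delta T]\bigr)
  \;=\; 1 \;-\; \exp\!\bigl(-\Lambda(T,\, T+\Delta T)\; 10^{-b\,(m - m_c)}\bigr).
\]
```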
We describe an iterative method to combine seismicity forecasts. With this method, we produce the next generation of a starting forecast by incorporating predictive skill from one or more input forecasts. For a single iteration, we use the differential probability gain of an input forecast relative to the starting forecast. At each point in space and time, the rate in the next-generation forecast is the product of the starting rate and the local differential probability gain. The main advantage of this method is that it can produce high forecast rates using all types of numerical forecast models, even those that are not rate-based. Naturally, a limitation of this method is that the input forecast must have some information not already contained in the starting forecast. We illustrate this method using the Every Earthquake a Precursor According to Scale (EEPAS) and Early Aftershocks Statistics (EAST) models, which are currently being evaluated at the US testing center of the Collaboratory for the Study of Earthquake Predictability. During a testing period from July 2009 to December 2011 (with 19 target earthquakes), the combined model we produce has better predictive performance - in terms of Molchan diagrams and likelihood - than the starting model (EEPAS) and the input model (EAST). Many of the target earthquakes occur in regions where the combined model has high forecast rates. Most importantly, the rates in these regions are substantially higher than if we had simply averaged the models.
We propose a conversion method from alarm-based to rate-based earthquake forecast models. A differential probability gain g(alarm)(ref) is the absolute value of the local slope of the Molchan trajectory that evaluates the performance of the alarm-based model with respect to the chosen reference model. We consider that this differential probability gain is constant over time. Its value at each point of the testing region depends only on the alarm function value. The rate-based model is the product of the event rate of the reference model at this point multiplied by the corresponding differential probability gain. Thus, we increase or decrease the initial rates of the reference model according to the additional amount of information contained in the alarm-based model. Here, we apply this method to the Early Aftershock STatistics (EAST) model, an alarm-based model in which early aftershocks are used to identify space-time regions with a higher level of stress and, consequently, a higher seismogenic potential. The resulting rate-based model shows similar performance to the original alarm-based model for all ranges of earthquake magnitude in both retrospective and prospective tests. This conversion method offers the opportunity to perform all the standard evaluation tests of the earthquake testing centers on alarm-based models. In addition, we infer that it can also be used to consecutively combine independent forecast models and, with small modifications, seismic hazard maps with short- and medium-term forecasts.
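The conversion and combination rule described in the two abstracts above reduces to a pointwise multiplication of rates by the differential probability gain; here is a minimal sketch (the gain table and all names are hypothetical placeholders):

```python
import numpy as np

# Next-generation (rate-based) forecast: reference rate at each space-time
# cell multiplied by the differential probability gain evaluated from the
# alarm-function value at that cell.
def to_rate_based(ref_rates, alarm_values, gain_of_alarm):
    gains = np.array([gain_of_alarm(a) for a in alarm_values])
    return ref_rates * gains

# Toy usage with a piecewise-constant gain (in practice the gain is the
# absolute local slope of the Molchan trajectory vs. the reference model):
gain_of_alarm = lambda a: 0.5 if a < 0.3 else (1.0 if a < 0.7 else 4.0)
print(to_rate_based(np.array([0.01, 0.02]), np.array([0.2, 0.9]), gain_of_alarm))
```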
We develop a hydrostatic Hamiltonian particle-mesh (HPM) method for efficient long-term numerical integration of the atmosphere. In the HPM method, the hydrostatic approximation is interpreted as a holonomic constraint for the vertical position of particles. This can be viewed as defining a set of vertically buoyant horizontal meshes, with the altitude of each mesh point determined so as to satisfy the hydrostatic balance condition and with particles modelling horizontal advection between the moving meshes. We implement the method in a vertical-slice model and evaluate its performance for the simulation of idealized linear and nonlinear orographic flow in both dry and moist environments. The HPM method is able to capture the basic features of the gravity wave to a degree of accuracy comparable with that reported in the literature. The numerical solution in the moist experiment indicates that the influence of moisture on wave characteristics is represented reasonably well and the reduction of momentum flux is in good agreement with theoretical analysis.
We evaluate the Hamiltonian particle-mesh (HPM) method and the Nambu discretization applied to the shallow-water equations on the sphere using the test suggested by Galewsky et al. (2004). Both simulations show excellent conservation of energy and are stable in long-term simulation. We repeat the test also using the ICOSWP scheme to compare with the two conservative spatial discretization schemes. The HPM simulation captures the main features of the reference solution, but a wavenumber-5 pattern is dominant in the simulations applied on the ICON grid with relatively low spatial resolutions. Nevertheless, agreement in statistics between the three schemes indicates their qualitatively similar behaviour in the long-term integration.
We develop a multigrid, multiple time stepping scheme to reduce computational efforts for calculating complex stress interactions in a strike-slip 2D planar fault for the simulation of seismicity. The key elements of the multilevel solver are separation of length scale, grid-coarsening, and hierarchy. In this study the complex stress interactions are split into two parts: the first, with a small contribution, is computed on a coarse level, and the rest, for strong interactions, on a fine level. This partition leads to a significant reduction of the number of computations. The reduction of complexity is further enhanced by combining the multigrid with multiple time stepping. Computational efficiency is enhanced by a factor of 10 while retaining a reasonable accuracy, compared to the original full matrix-vector multiplication. The accuracy of the solution and the computational efficiency depend on a given cut-off radius that splits the multiplications into the two parts. The multigrid scheme is constructed in such a way that it conserves stress in the entire half-space.
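A generic sketch of the near/far splitting combined with multiple time stepping (hypothetical 1-D geometry and names; the paper's multigrid hierarchy is more elaborate):

```python
import numpy as np

# Split the interaction kernel by a cut-off radius: strong near-field terms
# are evaluated every fine step, the weak far-field remainder only every
# M-th (coarse) step.
def split_kernel(K, x, r_cut):
    dist = np.abs(x[:, None] - x[None, :])
    return np.where(dist <= r_cut, K, 0.0), np.where(dist > r_cut, K, 0.0)

def stress_rate_series(K, x, slip_rate, n_steps, r_cut, M=10):
    K_near, K_far = split_kernel(K, x, r_cut)
    rates = []
    for step in range(n_steps):
        if step % M == 0:                    # coarse level: refresh far field
            far = K_far @ slip_rate
        rates.append(K_near @ slip_rate + far)   # fine level: near field
    return rates
```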
We prove the existence of the H^p(D)-limit of iterations of double layer potentials constructed with the use of the Hodge parametrix on a smooth compact manifold X, D being an open connected subset of X. This limit gives an orthogonal projection from the Sobolev space H^p(D) onto a closed subspace of H^p(D)-solutions of an elliptic operator P of order p ≥ 1. Using this result we obtain formulae for Sobolev solutions to the equation Pu = f in D whenever these solutions exist. The representation involves the sum of a series whose terms are iterations of double layer potentials. A similar regularization is constructed also for a P-Neumann problem in D.
Let H_0, H_1 be Hilbert spaces and L : H_0 → H_1 be a linear bounded operator with ‖L‖ ≤ 1. Then L*L is a bounded linear self-adjoint non-negative operator in the Hilbert space H_0 and one can use the Neumann series ∑_{ν=0}^∞ (I − L*L)^ν L*f in order to study solvability of the operator equation Lu = f. In particular, applying this method to the ill-posed Cauchy problem for solutions to an elliptic system Pu = 0 of linear PDEs of order p with smooth coefficients, we obtain solvability conditions and representation formulae for solutions of the problem in Hardy spaces whenever these solutions exist. For the Cauchy-Riemann system in ℂ the summands of the Neumann series are iterations of the Cauchy type integral. We also obtain similar results 1) for the equation Pu = f in Sobolev spaces, 2) for the Dirichlet problem and 3) for the Neumann problem related to the operator P*P if P is a homogeneous first order operator and its coefficients are constant. In these cases the representations involve sums of series whose terms are iterations of integro-differential operators, while the solvability conditions consist of convergence of the series together with trivial necessary conditions.
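A minimal numerical sketch of the Neumann-series construction above: the partial sums ∑_{ν=0}^{N} (I − L*L)^ν L*f can be accumulated by the iteration u_{k+1} = u_k + L*(f − L u_k), valid for ‖L‖ ≤ 1 (this is a finite-dimensional toy, not the operator-theoretic setting of the paper).

```python
import numpy as np

# Accumulate the partial sums of the Neumann series sum (I - L*L)^v L* f.
# For real matrices the adjoint L* is the transpose L.T.
def neumann_series_solve(L, f, n_iter=1000):
    u = L.T @ f                        # v = 0 term, L* f
    for _ in range(n_iter):
        u = u + L.T @ (f - L @ u)      # adds the next series term each step
    return u

# Toy usage: a random L rescaled so that its spectral norm is below 1.
rng = np.random.default_rng(0)
L = rng.standard_normal((20, 20))
L /= np.linalg.norm(L, 2) * 1.01
f = L @ rng.standard_normal(20)        # consistent right-hand side
u = neumann_series_solve(L, f)
```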
We consider the initial value problem for the Navier-Stokes equations over ℝ³ × [0, T] with time T > 0 in the spatially periodic setting. We prove that it induces open injective mappings A_s : B_1^s → B_2^{s−1}, where B_1^s, B_2^{s−1} are elements of scales of specially constructed function spaces of Bochner-Sobolev type parametrized with the smoothness index s ∈ ℕ. Finally, we prove that the map A_s is surjective if and only if the inverse image A_s^{−1}(K) of any precompact set K from the range of the map A_s is bounded in the Bochner space L^s([0, T], L^r(𝕋³)) with the Ladyzhenskaya-Prodi-Serrin numbers s, r.
We consider an initial problem for the Navier-Stokes type equations associated with the de Rham complex over ℝⁿ × [0, T], n ≥ 3, with a positive time T. We prove that the problem induces open injective mappings on the scales of specially constructed function spaces of Bochner-Sobolev type. In particular, the corresponding statement on the intersection of these classes gives an open mapping theorem for smooth solutions to the Navier-Stokes equations.
We consider the Navier-Stokes equations in the layer R^n x [0,T] over R^n with finite T > 0. Using the standard fundamental solutions of the Laplace operator and the heat operator, we reduce the Navier-Stokes equations to a nonlinear Fredholm equation of the form (I+K) u = f, where K is a compact continuous operator in anisotropic normed Hölder spaces weighted at the point at infinity with respect to the space variables. Actually, the weight function is included to provide a finite energy estimate for solutions to the Navier-Stokes equations for all t in [0,T]. On using the particular properties of the de Rham complex we conclude that the Fréchet derivative (I+K)' is continuously invertible at each point of the Banach space under consideration and the map I+K is open and injective in the space. In this way the Navier-Stokes equations prove to induce an open one-to-one mapping in the scale of Hölder spaces.
This is a brief survey of a constructive technique of analytic continuation related to an explicit integral formula of Golusin and Krylov (1933). It goes far beyond complex analysis and applies to the Cauchy problem for elliptic partial differential equations as well. As started in the classical papers, the technique is elaborated in generalised Hardy spaces also called Hardy-Smirnov spaces.
Let X be a smooth n-dimensional manifold and D be an open connected set in X with smooth boundary ∂D. Perturbing the Cauchy problem for an elliptic system Au = f in D with data on a closed set Γ ⊂ ∂D, we obtain a family of mixed problems depending on a small parameter ε > 0. Although the mixed problems are subject to a non-coercive boundary condition on ∂D\Γ in general, each of them is uniquely solvable in an appropriate Hilbert space D_T and the corresponding family {u_ε} of solutions approximates the solution of the Cauchy problem in D_T whenever the solution exists. We also prove that the existence of a solution to the Cauchy problem in D_T is equivalent to the boundedness of the family {u_ε}. We thus derive a solvability condition for the Cauchy problem and an effective method of constructing its solution. Examples for Dirac operators in the Euclidean space ℝⁿ are considered. In the latter case we obtain a family of mixed boundary problems for the Helmholtz equation.
Let A be a determined or overdetermined elliptic differential operator on a smooth compact manifold X. Write S_A(D) for the space of solutions to the system Au = 0 in a domain D ⊂ X. Using reproducing kernels related to various Hilbert structures on subspaces of S_A(D), we show explicit identifications of the dual spaces. To prove the 'regularity' of the reproducing kernels up to the boundary of D, we specify them as resolution operators of abstract Neumann problems. The matter thus reduces to a regularity theorem for the Neumann problem, a well-known example being the ∂-Neumann problem. The duality itself takes place only for those domains D which possess certain convexity properties with respect to A.
Formal Poincaré lemma
(2007)
We show how the multiple application of the formal Cauchy-Kovalevskaya theorem leads to the main result of the formal theory of overdetermined systems of partial differential equations. Namely, any sufficiently regular system Au = f with smooth coefficients on an open set U ⊂ ℝⁿ admits a solution in smooth sections of a bundle of formal power series, provided that f satisfies a compatibility condition in U.
On completeness of root functions of Sturm-Liouville problems with discontinuous boundary operators
(2013)
We consider a Sturm-Liouville boundary value problem in a bounded domain D of ℝⁿ. By this is meant that the differential equation is given by a second order elliptic operator of divergence form in D and the boundary conditions are of Robin type on ∂D. The first order term of the boundary operator is the oblique derivative whose coefficients bear discontinuities of the first kind. Applying the method of weak perturbation of compact self-adjoint operators and the method of rays of minimal growth, we prove the completeness of root functions related to the boundary value problem in Lebesgue and Sobolev spaces of various types.
We consider a (generally non-coercive) mixed boundary value problem in a bounded domain for a second order elliptic differential operator A. The differential operator is assumed to be of divergence form and the boundary operator B is of Robin type. The boundary is assumed to be a Lipschitz surface. In addition, we distinguish a closed subset of the boundary and control the growth of solutions near this set. We prove that the pair (A, B) induces a Fredholm operator L in suitable weighted spaces of Sobolev type, the weight function being a power of the distance to the singular set. Moreover, we prove the completeness of the root functions related to L.
We consider Dyson-Schwinger Equations (DSEs) in the context of Connes-Kreimer renormalization Hopf algebra of Feynman diagrams and Connes-Marcolli universal Tannakian formalism. This study leads us to formulate a family of Picard-Fuchs equations and a category of Feynman motivic sheaves with respect to each combinatorial DSE.
The paper deals with Σ-composition and Σ-essential composition of terms, which lead to stable and s-stable varieties of algebras. A full description of all stable varieties of semigroups, commutative and idempotent groupoids is obtained. We use an abstract reduction system which simplifies the presentation of terms of type τ = (2) to study the variety of idempotent groupoids and s-stable varieties of groupoids. S-stable varieties are a variation of stable varieties, used to highlight the replacement of subterms of a term in a deductive system instead of the usual replacement of variables by terms.
The variabilities of the semidiurnal solar and lunar tides of the equatorial electrojet (EEJ) are investigated during the 2003, 2006, 2009 and 2013 major sudden stratospheric warming (SSW) events in this study. For this purpose, ground-magnetometer recordings at the equatorial observatories in Huancayo and Fuquene are utilized. Results show a major enhancement in the amplitude of the EEJ semidiurnal lunar tide in each of the four warming events. The EEJ semidiurnal solar tidal amplitude shows an amplification prior to the onset of warmings, a reduction during the deceleration of the zonal mean zonal wind at 60 degrees N and 10 hPa, and a second enhancement a few days after the peak reversal of the zonal mean zonal wind during all four SSWs. Results also reveal that the amplitude of the EEJ semidiurnal lunar tide becomes comparable to or even greater than the amplitude of the EEJ semidiurnal solar tide during all these warming events. The present study also compares the EEJ semidiurnal solar and lunar tidal changes with the variability of the migrating semidiurnal solar (SW2) and lunar (M2) tides in neutral temperature and zonal wind obtained from numerical simulations at E-region heights. A better agreement is found between the enhancements of the EEJ semidiurnal lunar tide and the M2 tide than between the enhancements of the EEJ semidiurnal solar tide and the SW2 tide in both the neutral temperature and zonal wind at E-region altitudes.
This survey on the theme of Geometry Education (including new technologies) focuses chiefly on the time span since 2008. Based on our review of the research literature published during this time span (in refereed journal articles, conference proceedings and edited books), we have jointly identified seven major threads of contributions that span from the early years of learning (pre-school and primary school) through to post-compulsory education and to the issue of mathematics teacher education for geometry. These threads are as follows: developments and trends in the use of theories; advances in the understanding of visuospatial reasoning; the use and role of diagrams and gestures; advances in the understanding of the role of digital technologies; advances in the understanding of the teaching and learning of definitions; advances in the understanding of the teaching and learning of the proving process; and moving beyond traditional Euclidean approaches. Within each theme, we identify relevant research and also offer commentary on future directions.
Ancient genomes have revolutionized our understanding of Holocene prehistory and, particularly, the Neolithic transition in western Eurasia. In contrast, East Asia has so far received little attention, despite representing a core region at which the Neolithic transition took place independently ~3 millennia after its onset in the Near East. We report genome-wide data from two hunter-gatherers from Devil’s Gate, an early Neolithic cave site (dated to ~7.7 thousand years ago) located in East Asia, on the border between Russia and Korea. Both of these individuals are genetically most similar to geographically close modern populations from the Amur Basin, all speaking Tungusic languages, and, in particular, to the Ulchi. The similarity to nearby modern populations and the low levels of additional genetic material in the Ulchi imply a high level of genetic continuity in this region during the Holocene, a pattern that markedly contrasts with that reported for Europe.
Atomic oscillations present in classical molecular dynamics restrict the step size that can be used. Multiple time stepping schemes offer only modest improvements, and implicit integrators are costly and inaccurate. The best approach may be to actually remove the highest frequency oscillations by constraining bond lengths and bond angles, thus permitting perhaps a 4-fold increase in the step size. However, omitting degrees of freedom produces errors in statistical averages, and rigid angles do not bend for strong excluded volume forces. These difficulties can be addressed by an enhanced treatment of holonomic constrained dynamics using ideas from papers of Fixman (1974) and Reich (1995, 1999). In particular, the 1995 paper proposes the use of "flexible" constraints, and the 1999 paper uses a modified potential energy function with rigid constraints to emulate flexible constraints. Presented here is a more direct and rigorous derivation of the latter approach, together with justification for the use of constraints in molecular modeling. With rigor comes limitations, so practical compromises are proposed: simplifications of the equations and their judicious application when assumptions are violated. Included are suggestions for new approaches.
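As background, here is a minimal SHAKE-type iteration for a single holonomic bond-length constraint (a standard textbook scheme shown for orientation only; it is not the flexible-constraint method of the cited papers):

```python
import numpy as np

# SHAKE-style correction enforcing |r1 - r2| = d after an unconstrained
# position update; r1_old, r2_old are the positions before the update.
def shake_bond(r1, r2, r1_old, r2_old, d, m1=1.0, m2=1.0, tol=1e-10):
    for _ in range(50):                    # iterate the constraint correction
        diff = r1 - r2
        c = diff @ diff - d * d            # constraint violation
        if abs(c) < tol:
            break
        diff_old = r1_old - r2_old         # correction along the old bond vector
        g = c / (2.0 * (diff @ diff_old) * (1.0 / m1 + 1.0 / m2))
        r1 = r1 - g / m1 * diff_old        # mass-weighted position corrections
        r2 = r2 + g / m2 * diff_old
    return r1, r2
```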
During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate the safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time-course and the associated variability between subjects. Furthermore, underlying sources significantly contributing to this variability, e.g. the use of comedication, should be identified. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and has been accumulating during the discovery & development process: before any drug is tested in humans, detailed knowledge about the PK in different animal species has to be collected. This drug-specific knowledge and general knowledge about the species' physiology is exploited in mechanistic physiologically based PK (PBPK) modeling approaches; it is, however, ignored in the classical NLME modeling approach.
Mechanistic physiologically based models aim to incorporate relevant and known physiological processes which contribute to the overlying process of interest. In comparison to data-driven models they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters exceeds the number of measurements, so that reliable parameter estimation becomes more complex and partly impossible. As a consequence, the integration of powerful statistical estimation approaches like the NLME modeling approach, which is widely used in data-driven modeling, with the mechanistic modeling approach is not well established; the observed data are rather used as a confirming instead of a model-informing and model-building input.
A further obstacle to an integrated approach is the inaccessibility of the details of the NLME methodology, which prevents these approaches from being adapted to the specifics and needs of mechanistic modeling. Despite the fact that the NLME modeling approach has existed for several decades, the details of the mathematical methodology are scattered over a wide range of literature and a comprehensive, rigorous derivation is lacking. Available literature usually covers only selected parts of the mathematical methodology. Sometimes important steps are not described or are only heuristically motivated, e.g. the iterative algorithm that finally determines the parameter estimates.
Thus, in the present thesis the mathematical methodology of NLME modeling is systematically described and complemented into a comprehensive account, comprising the common thread from ideas and motivation to the final parameter estimation. Therein, new insights into the interpretation of different approximation methods used in the context of the NLME modeling approach are given and illustrated; furthermore, similarities and differences between them are outlined. Based on these findings, an expectation-maximization (EM) algorithm to determine estimates of an NLME model is described.
Using the EM algorithm and the lumping methodology of Pilari (2010), a new approach to combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. Therein, the lumping identifies which processes are informed by the available data, and the respective model reduction improves the robustness of parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability and a priori known unexplained variability are incorporated to further mechanistically drive the model development. Finally, correlations between parameters and between covariates are automatically accounted for due to the mechanistic derivation of the lumping and the covariate relationships.
A useful feature of PBPK models compared to classical data-driven PK models is the possibility to predict drug concentrations within all organs and tissues in the body. Thus, the resulting PBPK model for levofloxacin is used to predict drug concentrations and their variability within soft tissues, which are the site of action of levofloxacin. These predictions are compared with data from muscle and adipose tissue obtained by microdialysis, an invasive technique to measure a proportion of the drug in the tissue, allowing one to approximate the concentrations in the interstitial fluid of tissues. Because a framework for comparing human in vivo tissue PK with PBPK predictions has not been established so far, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows an adequate agreement and reveals further strengths of the presented new approach.
We demonstrated how mechanistic PBPK models, which are usually developed in the early stages of drug development, can be used as a basis for model building in the analysis of later stages, i.e. in clinical studies. As a consequence, the extensively collected and accumulated knowledge about species and drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn & confirm paradigm across the different stages of drug development.
Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of quantitative psycholinguistics, analysing repeated eye-movement data. Our approach gives new insight into the interpretation of these experiments and the processes behind them.
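To make the EM machinery described above concrete, here is a minimal sketch for the simplest linear mixed-effects model y_ij = θ + b_i + ε_ij with a random intercept (our illustration; an NLME EM additionally has to handle the nonlinear structural model, e.g. by linearization or sampling):

```python
import numpy as np

def em_random_intercept(y, n_iter=200):
    """y: list of 1-D arrays, one per subject.
    Model: y_ij = theta + b_i + eps_ij, b_i ~ N(0, omega2), eps_ij ~ N(0, sigma2)."""
    theta, omega2, sigma2 = np.mean(np.concatenate(y)), 1.0, 1.0
    N = sum(len(yi) for yi in y)
    for _ in range(n_iter):
        # E-step: Gaussian posterior of each random effect b_i given the data
        v = np.array([1.0 / (len(yi) / sigma2 + 1.0 / omega2) for yi in y])
        m = np.array([vi * np.sum(yi - theta) / sigma2 for vi, yi in zip(v, y)])
        # M-step: maximize the expected complete-data log-likelihood
        theta = np.mean(np.concatenate([yi - mi for yi, mi in zip(y, m)]))
        omega2 = np.mean(m ** 2 + v)
        sigma2 = sum(np.sum((yi - theta - mi) ** 2) + len(yi) * vi
                     for yi, mi, vi in zip(y, m, v)) / N
    return theta, omega2, sigma2
```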
Classic inversion methods adjust a model with a predefined number of parameters to the observed data. With transdimensional inversion algorithms such as the reversible-jump Markov chain Monte Carlo (rjMCMC), it is possible to vary this number during the inversion and to interpret the observations in a more flexible way. Geoscience imaging applications use this behaviour to automatically adjust model resolution to the inhomogeneities of the investigated system, while keeping the model parameters on an optimal level. The rjMCMC algorithm produces an ensemble as result, a set of model realizations, which together represent the posterior probability distribution of the investigated problem. The realizations are evolved via sequential updates from a randomly chosen initial solution and converge toward the target posterior distribution of the inverse problem. Up to a point in the chain, the realizations may be strongly biased by the initial model, and must be discarded from the final ensemble. With convergence assessment techniques, this point in the chain can be identified. Transdimensional MCMC methods produce ensembles that are not suitable for classic convergence assessment techniques because of the changes in parameter numbers. To overcome this hurdle, three solutions are introduced to convert model realizations to a common dimensionality while maintaining the statistical characteristics of the ensemble. A scalar, a vector and a matrix representation for models is presented, inferred from tomographic subsurface investigations, and three classic convergence assessment techniques are applied on them. It is shown that appropriately chosen scalar conversions of the models could retain similar statistical ensemble properties as geologic projections created by rasterization.
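A minimal sketch of one classic convergence diagnostic (the Gelman-Rubin R-hat) applied to scalar conversions of the transdimensional ensemble (our illustration; the choice of scalar summary, e.g. a data misfit or a rasterized model average, is problem-specific):

```python
import numpy as np

def gelman_rubin(chains):
    """chains: array of shape (m, n): m chains of n scalar summaries each."""
    chains = np.asarray(chains, float)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    V_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.sqrt(V_hat / W)                  # values near 1 suggest convergence
```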
Congenital adrenal hyperplasia (CAH) is the most common form of adrenal insufficiency in childhood; it requires cortisol replacement therapy with hydrocortisone (HC, synthetic cortisol) from birth and therapy monitoring for successful treatment. In children, the less invasive dried blood spot (DBS) sampling with whole blood including red blood cells (RBCs) provides an advantageous alternative to plasma sampling.
Potential differences in binding/association processes between plasma and DBS however need to be considered to correctly interpret DBS measurements for therapy monitoring. While capillary DBS samples would be used in clinical practice, venous cortisol DBS samples from children with adrenal insufficiency were analyzed due to data availability and to directly compare and thus understand potential differences between venous DBS and plasma. A previously published HC plasma pharmacokinetic (PK) model was extended by leveraging these DBS concentrations.
In addition to the previously characterized binding of cortisol to albumin (a linear process) and corticosteroid-binding globulin (CBG; a saturable process), the DBS data enabled the characterization of a linear cortisol association with RBCs, thereby providing a quantitative link between DBS and plasma cortisol concentrations. The ratio between the observed cortisol plasma and DBS concentrations varies widely, from 2 to 8. Deterministic simulations of the different cortisol binding/association fractions demonstrated that with higher blood cortisol concentrations, saturation of cortisol binding to CBG occurs, leading to an increase in all other cortisol binding fractions.
In conclusion, a mathematical PK model was developed which links DBS measurements to plasma exposure and thus allows for quantitative interpretation of measurements of DBS samples.
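A minimal sketch of the binding/association structure described above, with purely hypothetical parameter values (the paper's model is fitted to data):

```python
import numpy as np

# Fractions of total cortisol as functions of the unbound (free) concentration:
# linear albumin binding, saturable CBG binding, linear RBC association.
def cortisol_fractions(free, k_alb=2.0, bmax_cbg=500.0, kd_cbg=30.0, k_rbc=0.5):
    bound_alb = k_alb * free                        # linear albumin binding
    bound_cbg = bmax_cbg * free / (kd_cbg + free)   # saturable CBG binding
    assoc_rbc = k_rbc * free                        # linear RBC association (DBS link)
    total = free + bound_alb + bound_cbg + assoc_rbc
    parts = {"free": free, "albumin": bound_alb, "CBG": bound_cbg, "RBC": assoc_rbc}
    return {name: value / total for name, value in parts.items()}

# With rising exposure the CBG fraction saturates and all other fractions grow,
# mirroring the deterministic simulations described above.
for c in (5.0, 50.0, 500.0):
    print(c, cortisol_fractions(c))
```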
A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations is proposed and analysed. Application of regularization to the geopotential field used in the momentum equations leads to an unconditionally stable scheme. The analysis, together with a fully nonlinear example application, suggests that this approach is a promising, efficient, and accurate alternative to traditional schemes.
We discuss the Cauchy problem for the so-called Chaplygin system, which often appears in gas, aero- and hydrodynamics. This system can be thought of as a nonlinear analogue of the Cauchy-Riemann system in the plane. We pose Cauchy data on a part of the boundary and apply a variational approach to construct a solution to this ill-posed problem. The problem actually gives insight into fundamental questions related to unstable problems for nonlinear equations.
In this paper we will implement the inverse seesaw mechanism into the noncommutative framework on the basis of the AC extension of the standard model. The main difference from the classical AC model is the chiral nature of the AC fermions with respect to a U(1)_X extension of the standard model gauge group. It is this extension which allows us to couple the right-handed neutrinos via a gauge invariant mass term to left-handed A particles. The natural scale of these gauge invariant masses is of the order of 10^17 GeV, while the Dirac masses of the neutrino and the AC particles are generated dynamically and are therefore much smaller (~1 to ~10^6 GeV). From this configuration, a working inverse seesaw mechanism for the neutrinos is obtained.
This paper provides a complete list of Krajewski diagrams representing the standard model of particle physics. We give the possible representations of the algebra and the anomaly-free lifts which provide the representation of the standard model gauge group on the fermionic Hilbert space. The algebra representations following from the Krajewski diagrams are not complete in the sense that the corresponding spectral triples do not necessarily obey the axiom of Poincaré duality. This defect may be repaired by adding new particles to the model, i.e., by building models beyond the standard model. The aim of this list of finite spectral triples (up to Poincaré duality) is therefore to provide a basis for model building beyond the standard model.
In this publication we present an extension of the standard model within the framework of Connes' noncommutative geometry. The model presented here is based on a minimal spectral triple which contains the standard model particles, new vectorlike fermions, and a new U(1) gauge subgroup. Additionally, a new complex scalar field appears that couples to the right-handed neutrino, the new fermions, and the standard Higgs particle. The bosonic part of the action is given by the spectral action, which also determines relations among the gauge couplings, the quartic scalar couplings, and the Yukawa couplings at a cutoff energy of ~10^17 GeV. We investigate the renormalization group flow of these relations. The low energy behavior allows us to constrain the Higgs mass, the mass of the new scalar, and the mixing between these two scalar fields.
Low Earth orbiting geomagnetic satellite missions, such as the Swarm satellite mission, are the only means to monitor and investigate ionospheric currents on a global scale and to make in situ measurements of F region currents. High-precision geomagnetic satellite missions are also able to detect ionospheric currents during quiet-time geomagnetic conditions that only have few nanotesla amplitudes in the magnetic field. An efficient method to isolate the ionospheric signals from satellite magnetic field measurements has been the use of residuals between the observations and predictions from empirical geomagnetic models for other geomagnetic sources, such as the core and lithospheric field or signals from the quiet-time magnetospheric currents. This study aims at highlighting the importance of high-resolution magnetic field models that are able to predict the lithospheric field and that consider the quiet-time magnetosphere for reliably isolating signatures from ionospheric currents during geomagnetically quiet times. The effects on the detection of ionospheric currents arising from neglecting the lithospheric and magnetospheric sources are discussed using the example of four Swarm orbits during very quiet times. The respective orbits show a broad range of typical scenarios, such as strong and weak ionospheric signal (during day- and nighttime, respectively) superimposed over strong and weak lithospheric signals. If predictions from the lithosphere or magnetosphere are not properly considered, the amplitude of the ionospheric currents, such as the midlatitude Sq currents or the equatorial electrojet (EEJ), is modulated by 10-15 % in the examples shown. An analysis of several orbits above the African sector, where the lithospheric field is significant, showed that the peak value of the signatures of the EEJ is in error by 5 % on average when lithospheric contributions are not considered, which is in the range of uncertainties of present empirical models of the EEJ.
Prospective and retrospective evaluation of five-year earthquake forecast models for California
(2017)
[Figure caption] S-test results for the USGS and RELM forecasts. The differences between the simulated log-likelihoods and the observed log-likelihood are labelled on the horizontal axes, with scaling adjustments for the 40year.retro experiment. The horizontal lines represent the confidence intervals, at the 0.05 significance level, for each forecast and experiment. If this range contains a log-likelihood difference of zero, the forecasted log-likelihoods are consistent with the observed and the forecast passes the S-test (thin lines); otherwise the forecast fails the S-test for that particular experiment (thick lines). Colours distinguish between experiments (see Table 2 for explanation of experiment durations). Due to anomalously large likelihood differences, S-test results for Wiemer-Schorlemmer.ALM during the 10year.retro and 40year.retro experiments are not displayed. The range of log-likelihoods for the Holliday-et-al.PI forecast is lower than for the other forecasts due to relatively homogeneous forecasted seismicity rates and use of a small fraction of the RELM testing region.
Cell-level systems biology model to study inflammatory bowel diseases and their treatment options
(2023)
To help understand the complex and therapeutically challenging inflammatory bowel diseases (IBDs), we developed a systems biology model of the intestinal immune system that is able to describe the main aspects of IBD and different treatment modalities thereof. The model, including key cell types and processes of the mucosal immune response, compiles a large amount of isolated experimental findings from the literature into a larger context and allows for simulations of different inflammation scenarios based on the underlying data and assumptions. In the context of a large and diverse virtual IBD population, we characterized the patients based on their phenotype (in contrast to healthy individuals, they developed persistent inflammation after a trigger event) rather than on a priori assumptions about parameter differences from a healthy individual. This allowed us to reproduce the enormous diversity of predispositions known to lead to IBD. Analyzing different treatment effects, the model provides insight into the characteristics of individual drug therapies. We illustrate for anti-TNF-alpha therapy how the model can be used (i) to decide on alternative treatments with the best prospects in the case of nonresponse, and (ii) to identify promising combination therapies with other available treatment options.
The paper is devoted to asymptotic analysis of the Dirichlet problem for a second order partial differential equation containing a small parameter multiplying the highest order derivatives. It corresponds to a small perturbation of a dynamical system having a stationary solution in the domain. We focus on the case where the trajectories of the system go into the domain and the stationary solution is a proper node.
On Particular n-Clones
(2013)
This paper is concerned with the filtering problem in continuous time. Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter, which provides an exact solution for the linear Gaussian problem; (ii) the ensemble Kalman-Bucy filter (EnKBF), which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems; and (iii) the feedback particle filter (FPF), which represents an extension of the EnKBF and furthermore provides for a consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain times error formula to implement the update step (to account for conditioning due to the observations) in the filter. In contrast to the commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid the resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides for error correction potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the issue of nonuniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. Performance of this and other algorithms is illustrated for a numerical example.
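To illustrate the common "gain times error" structure of these filters, here is a hedged Euler-discretized sketch of one EnKBF step for a linear-Gaussian toy problem (our notation and discretization, not the paper's algorithm):

```python
import numpy as np

# One Euler step of the ensemble Kalman-Bucy filter (deterministic form):
# each particle is nudged by the gain K times its innovation, with the
# innovation built from the average of the particle and the ensemble mean.
def enkbf_step(X, dz, A, H, Q_sqrt, R_inv, dt, rng):
    """X: (N, d) ensemble; dz: observation increment over the interval dt."""
    N, d = X.shape
    x_bar = X.mean(axis=0)
    P = (X - x_bar).T @ (X - x_bar) / (N - 1)        # ensemble covariance
    K = P @ H.T @ R_inv                              # Kalman-Bucy gain
    drift = X @ A.T * dt                             # linear signal-model drift
    noise = rng.standard_normal((N, Q_sqrt.shape[1])) @ Q_sqrt.T * np.sqrt(dt)
    innov = dz - 0.5 * (X + x_bar) @ H.T * dt        # gain-times-error term
    return X + drift + noise + innov @ K.T
```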