This is a brief survey of a constructive technique of analytic continuation related to an explicit integral formula of Golusin and Krylov (1933). It goes far beyond complex analysis and applies to the Cauchy problem for elliptic partial differential equations as well. As initiated in the classical papers, the technique is elaborated in generalised Hardy spaces, also called Hardy-Smirnov spaces.
Let X be a smooth n -dimensional manifold and D be an open connected set in X with smooth boundary ∂D. Perturbing the Cauchy problem for an elliptic system Au = f in D with data on a closed set Γ ⊂ ∂D we obtain a family of mixed problems depending on a small parameter ε > 0. Although the mixed problems are subject to a non-coercive boundary condition on ∂D\Γ in general, each of them is uniquely solvable in an appropriate Hilbert space DT and the corresponding family {uε} of solutions approximates the solution of the Cauchy problem in DT whenever the solution exists. We also prove that the existence of a solution to the Cauchy problem in DT is equivalent to the boundedness of the family {uε}. We thus derive a solvability condition for the Cauchy problem and an effective method of constructing its solution. Examples for Dirac operators in the Euclidean space Rn are considered. In the latter case we obtain a family of mixed boundary problems for the Helmholtz equation.
Let A be a determined or overdetermined elliptic differential operator on a smooth compact manifold X. Write S_A(D) for the space of solutions to the system Au = 0 in a domain D ⊂ X. Using reproducing kernels related to various Hilbert structures on subspaces of S_A(D), we show explicit identifications of the dual spaces. To prove the "regularity" of reproducing kernels up to the boundary of D, we specify them as resolution operators of abstract Neumann problems. The matter thus reduces to a regularity theorem for the Neumann problem, a well-known example being the ∂̄-Neumann problem. The duality itself takes place only for those domains D which possess certain convexity properties with respect to A.
Formal Poincaré lemma
(2007)
We show how the multiple application of the formal Cauchy-Kovalevskaya theorem leads to the main result of the formal theory of overdetermined systems of partial differential equations. Namely, any sufficiently regular system Au = f with smooth coefficients on an open set U ⊂ Rn admits a solution in smooth sections of a bundle of formal power series, provided that f satisfies a compatibility condition in U.
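The recursive determination of coefficients can be seen already in the simplest one-dimensional instance, an ODE solved in formal power series; a toy sketch (not the overdetermined setting of the paper):

```python
from fractions import Fraction

def formal_ode_solution(n_terms):
    """Formal power series solution of u' = u, u(0) = 1.

    The Cauchy-Kovalevskaya recursion expresses each Taylor
    coefficient through the previous ones; for this equation it
    reduces to a_{k+1} = a_k / (k + 1), so a_k = 1/k!.
    """
    coeffs = [Fraction(1)]          # a_0 = u(0) = 1
    for k in range(n_terms - 1):
        coeffs.append(coeffs[k] / (k + 1))
    return coeffs

series = formal_ode_solution(6)     # exponential series in Q[[x]]
```

The same principle, iterated over several independent variables, underlies the formal theory sketched in the abstract.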
On completeness of root functions of Sturm-Liouville problems with discontinuous boundary operators
(2013)
We consider a Sturm-Liouville boundary value problem in a bounded domain D of R^n. By this is meant that the differential equation is given by a second order elliptic operator of divergence form in D and the boundary conditions are of Robin type on ∂D. The first order term of the boundary operator is the oblique derivative whose coefficients bear discontinuities of the first kind. Applying the method of weak perturbation of compact self-adjoint operators and the method of rays of minimal growth, we prove the completeness of root functions related to the boundary value problem in Lebesgue and Sobolev spaces of various types.
We consider a (generally non-coercive) mixed boundary value problem in a bounded domain for a second order elliptic differential operator A. The differential operator is assumed to be of divergence form, and the boundary operator B is of Robin type. The boundary is assumed to be a Lipschitz surface. In addition, we distinguish a closed subset of the boundary and control the growth of solutions near this set. We prove that the pair (A,B) induces a Fredholm operator L in suitable weighted spaces of Sobolev type, the weight function being a power of the distance to the singular set. Moreover, we prove the completeness of root functions related to L.
On completeness of root functions of Sturm-Liouville problems with discontinuous boundary operators
(2012)
We consider a Sturm-Liouville boundary value problem in a bounded domain D of R^n. By this is meant that the differential equation is given by a second order elliptic operator of divergence form in D and the boundary conditions are of Robin type on ∂D. The first order term of the boundary operator is the oblique derivative whose coefficients bear discontinuities of the first kind. Applying the method of weak perturbation of compact self-adjoint operators and the method of rays of minimal growth, we prove the completeness of root functions related to the boundary value problem in Lebesgue and Sobolev spaces of various types.
We consider Dyson-Schwinger equations (DSEs) in the context of the Connes-Kreimer renormalization Hopf algebra of Feynman diagrams and the Connes-Marcolli universal Tannakian formalism. This study leads us to formulate a family of Picard-Fuchs equations and a category of Feynman motivic sheaves with respect to each combinatorial DSE.
The paper deals with Σ-composition and Σ-essential composition of terms, which lead to stable and s-stable varieties of algebras. A full description of all stable varieties of semigroups, commutative and idempotent groupoids is obtained. We use an abstract reduction system, which simplifies the presentation of terms of type τ = (2), to study the variety of idempotent groupoids and s-stable varieties of groupoids. S-stable varieties are a variation of stable varieties, used to highlight the replacement of subterms of a term in a deductive system instead of the usual replacement of variables by terms.
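The role of an abstract reduction system here can be illustrated with the single rewrite rule for idempotency; a toy normalizer for terms of type τ = (2), assuming a hypothetical nested-pair encoding chosen only for illustration:

```python
def normalize(term):
    """Normalize a groupoid term under the idempotency rewrite rule
    x * x -> x, applied bottom-up until no redex remains.

    Terms of type (2) are encoded as variables (strings) or pairs
    (left, right) representing the one binary operation.
    """
    if isinstance(term, str):       # a variable is already in normal form
        return term
    left, right = normalize(term[0]), normalize(term[1])
    return left if left == right else (left, right)

# ((x*x)*(x*x)) reduces to x; (x*y) is already normal
reduced = normalize((('x', 'x'), ('x', 'x')))
```

Because children are normalized before the rule is tried at the root, every term reaches a unique normal form.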
In this study, the variabilities of the semidiurnal solar and lunar tides of the equatorial electrojet (EEJ) are investigated during the 2003, 2006, 2009 and 2013 major sudden stratospheric warming (SSW) events. For this purpose, ground-magnetometer recordings at the equatorial observatories in Huancayo and Fuquene are utilized. Results show a major enhancement in the amplitude of the EEJ semidiurnal lunar tide in each of the four warming events. The EEJ semidiurnal solar tidal amplitude shows an amplification prior to the onset of warmings, a reduction during the deceleration of the zonal mean zonal wind at 60° N and 10 hPa, and a second enhancement a few days after the peak reversal of the zonal mean zonal wind during all four SSWs. Results also reveal that the amplitude of the EEJ semidiurnal lunar tide becomes comparable to or even greater than the amplitude of the EEJ semidiurnal solar tide during all these warming events. The present study also compares the EEJ semidiurnal solar and lunar tidal changes with the variability of the migrating semidiurnal solar (SW2) and lunar (M2) tides in neutral temperature and zonal wind obtained from numerical simulations at E-region heights. A better agreement is found between the enhancements of the EEJ semidiurnal lunar tide and the M2 tide than between the enhancements of the EEJ semidiurnal solar tide and the SW2 tide, in both the neutral temperature and the zonal wind at E-region altitudes.
This survey on the theme of Geometry Education (including new technologies) focuses chiefly on the time span since 2008. Based on our review of the research literature published during this time span (in refereed journal articles, conference proceedings and edited books), we have jointly identified seven major threads of contributions that span from the early years of learning (pre-school and primary school) through to post-compulsory education and to the issue of mathematics teacher education for geometry. These threads are as follows: developments and trends in the use of theories; advances in the understanding of visuo-spatial reasoning; the use and role of diagrams and gestures; advances in the understanding of the role of digital technologies; advances in the understanding of the teaching and learning of definitions; advances in the understanding of the teaching and learning of the proving process; and moving beyond traditional Euclidean approaches. Within each theme, we identify relevant research and also offer commentary on future directions.
Ancient genomes have revolutionized our understanding of Holocene prehistory and, particularly, the Neolithic transition in western Eurasia. In contrast, East Asia has so far received little attention, despite representing a core region at which the Neolithic transition took place independently ~3 millennia after its onset in the Near East. We report genome-wide data from two hunter-gatherers from Devil’s Gate, an early Neolithic cave site (dated to ~7.7 thousand years ago) located in East Asia, on the border between Russia and Korea. Both of these individuals are genetically most similar to geographically close modern populations from the Amur Basin, all speaking Tungusic languages, and, in particular, to the Ulchi. The similarity to nearby modern populations and the low levels of additional genetic material in the Ulchi imply a high level of genetic continuity in this region during the Holocene, a pattern that markedly contrasts with that reported for Europe.
Atomic oscillations present in classical molecular dynamics restrict the step size that can be used. Multiple time stepping schemes offer only modest improvements, and implicit integrators are costly and inaccurate. The best approach may be to actually remove the highest frequency oscillations by constraining bond lengths and bond angles, thus permitting perhaps a 4-fold increase in the step size. However, omitting degrees of freedom produces errors in statistical averages, and rigid angles do not bend for strong excluded volume forces. These difficulties can be addressed by an enhanced treatment of holonomic constrained dynamics using ideas from papers of Fixman (1974) and Reich (1995, 1999). In particular, the 1995 paper proposes the use of "flexible" constraints, and the 1999 paper uses a modified potential energy function with rigid constraints to emulate flexible constraints. Presented here is a more direct and rigorous derivation of the latter approach, together with justification for the use of constraints in molecular modeling. With rigor comes limitations, so practical compromises are proposed: simplifications of the equations and their judicious application when assumptions are violated. Included are suggestions for new approaches.
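Constrained dynamics of this kind is commonly implemented with the SHAKE family of algorithms. As an illustration, a single bond-length constraint (a minimal sketch of the standard rigid-constraint projection, not the flexible-constraint scheme discussed above) can be imposed as follows:

```python
import math

def shake_bond(r1, r2, r1_ref, r2_ref, m1, m2, d, tol=1e-12, max_iter=50):
    """Project updated positions r1, r2 back onto the bond-length
    constraint |r1 - r2| = d (SHAKE iteration for a single bond).

    r1_ref, r2_ref are the positions before the unconstrained update;
    the correction is applied along the old bond vector, weighted by
    inverse masses so that momentum is conserved.
    """
    r1, r2 = list(r1), list(r2)
    s0 = [a - b for a, b in zip(r1_ref, r2_ref)]   # old bond vector
    inv_m = 1.0 / m1 + 1.0 / m2
    for _ in range(max_iter):
        s = [a - b for a, b in zip(r1, r2)]
        diff = sum(x * x for x in s) - d * d       # constraint violation
        if abs(diff) < tol:
            break
        g = diff / (2.0 * inv_m * sum(x * y for x, y in zip(s, s0)))
        r1 = [a - g / m1 * b for a, b in zip(r1, s0)]
        r2 = [a + g / m2 * b for a, b in zip(r2, s0)]
    return r1, r2

# An unconstrained step stretched the bond; SHAKE restores |r1 - r2| = 1.
r1, r2 = shake_bond([0.0, 0.1], [1.2, 0.0], [0.0, 0.0], [1.0, 0.0],
                    1.0, 16.0, 1.0)
bond = math.dist(r1, r2)
```

The heavier particle (mass 16 here) moves less, as expected from the inverse-mass weighting.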
During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate the safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time-course and the associated variability between subjects. Furthermore, underlying sources significantly contributing to this variability, e.g. the use of comedication, should be identified. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed-effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and has been accumulating during the discovery & development process: before any drug is tested in humans, detailed knowledge about the PK in different animal species has to be collected. This drug-specific knowledge and general knowledge about the species' physiology is exploited in mechanistic physiologically based PK (PBPK) modeling approaches; it is, however, ignored in the classical NLME modeling approach.
Mechanistic physiologically based models aim to incorporate relevant and known physiological processes that contribute to the overall process of interest. In comparison to data-driven models, they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters exceeds the number of measurements, and thus reliable parameter estimation becomes more complex and partly impossible. As a consequence, the integration of powerful estimation approaches like NLME modeling, which is widely used in data-driven modeling, with the mechanistic modeling approach is not well established; the observed data are rather used as a confirming instead of a model-informing and model-building input.
Another aggravating circumstance for an integrated approach is the inaccessibility of the details of the NLME methodology, which would be needed to adapt these approaches to the specifics and needs of mechanistic modeling. Although the NLME modeling approach has existed for several decades, the details of its mathematical methodology are scattered across a wide range of literature, and a comprehensive, rigorous derivation is lacking. Available literature usually covers only selected parts of the mathematical methodology. Sometimes important steps are not described or are only heuristically motivated, e.g. the iterative algorithm used to finally determine the parameter estimates.
Thus, in the present thesis the mathematical methodology of NLME modeling is systematically described and complemented into a comprehensive account, comprising the common theme from ideas and motivation to the final parameter estimation. Therein, new insights into the interpretation of the different approximation methods used in the context of the NLME modeling approach are given and illustrated; furthermore, similarities and differences between them are outlined. Based on these findings, an expectation-maximization (EM) algorithm to determine the estimates of an NLME model is described.
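The structure of such an EM algorithm is easiest to see in the simplest special case, a linear random-intercept model rather than a full NLME model; the closed-form E- and M-steps below are a sketch of that toy setting only, not the thesis's algorithm:

```python
import random
import statistics

def em_random_intercept(y, n_iter=200):
    """EM estimation for the linear random-intercept model
    y[i][j] = mu + b_i + eps_ij,  b_i ~ N(0, omega2), eps ~ N(0, sigma2).

    E-step: posterior mean/variance of each random effect b_i.
    M-step: closed-form updates of the population parameters.
    """
    N, n = len(y), len(y[0])
    mu = statistics.mean(v for row in y for v in row)
    omega2, sigma2 = 1.0, 1.0
    for _ in range(n_iter):
        # E-step: b_i | y_i is Gaussian with moments (m[i], v)
        v = sigma2 * omega2 / (n * omega2 + sigma2)
        m = [n * omega2 * (statistics.mean(row) - mu) / (n * omega2 + sigma2)
             for row in y]
        # M-step
        mu = statistics.mean(y[i][j] - m[i] for i in range(N) for j in range(n))
        omega2 = statistics.mean(mi * mi + v for mi in m)
        sigma2 = statistics.mean((y[i][j] - mu - m[i]) ** 2 + v
                                 for i in range(N) for j in range(n))
    return mu, omega2, sigma2

# Synthetic data with mu = 2, omega2 = 1, sigma2 = 0.25
rng = random.Random(1)
data = []
for _ in range(200):
    b = rng.gauss(0.0, 1.0)
    data.append([2.0 + b + rng.gauss(0.0, 0.5) for _ in range(5)])
mu_hat, omega2_hat, sigma2_hat = em_random_intercept(data)
```

In the nonlinear case the E-step has no closed form, which is where the approximation methods discussed above enter.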
Using the EM algorithm and the lumping methodology of Pilari (2010), a new approach for combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. Therein, the lumping identifies which processes are informed by the available data, and the respective model reduction improves the robustness of parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability and a priori known unexplained variability are incorporated to further mechanistically drive the model development. In conclusion, correlation between parameters and between covariates is automatically accounted for due to the mechanistic derivation of the lumping and the covariate relationships.
A useful feature of PBPK models compared to classical data-driven PK models is the possibility to predict drug concentrations within all organs and tissues of the body. Thus, the resulting PBPK model for levofloxacin is used to predict drug concentrations and their variability within soft tissues, which are the site of action of levofloxacin. These predictions are compared with data from muscle and adipose tissue obtained by microdialysis, an invasive technique that measures a proportion of the drug in the tissue and thereby allows approximating the concentrations in the interstitial fluid of tissues. Because comparing human in vivo tissue PK with PBPK predictions is not yet established, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows an adequate agreement and reveals further strengths of the presented new approach.
We demonstrated how mechanistic PBPK models, which are usually developed in the early stages of drug development, can be used as a basis for model building in the analysis of later stages, i.e. in clinical studies. As a consequence, the extensively collected and accumulated knowledge about the species and the drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn & confirm paradigm across the different stages of drug development.
Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of Quantitative Psycholinguistics, analysing repeated eye movement data. Our approach gives new insight into the interpretation of these experiments and the processes behind them.
Classic inversion methods adjust a model with a predefined number of parameters to the observed data. With transdimensional inversion algorithms such as reversible-jump Markov chain Monte Carlo (rjMCMC), it is possible to vary this number during the inversion and to interpret the observations in a more flexible way. Geoscience imaging applications use this behaviour to automatically adjust model resolution to the inhomogeneities of the investigated system, while keeping the number of model parameters at an optimal level. The rjMCMC algorithm produces an ensemble as a result, i.e. a set of model realizations which together represent the posterior probability distribution of the investigated problem. The realizations are evolved via sequential updates from a randomly chosen initial solution and converge toward the target posterior distribution of the inverse problem. Up to a point in the chain, the realizations may be strongly biased by the initial model and must be discarded from the final ensemble. With convergence assessment techniques, this point in the chain can be identified. Transdimensional MCMC methods produce ensembles that are not suitable for classic convergence assessment techniques because of the changes in parameter numbers. To overcome this hurdle, three solutions are introduced to convert model realizations to a common dimensionality while maintaining the statistical characteristics of the ensemble. A scalar, a vector and a matrix representation of the models, inferred from tomographic subsurface investigations, are presented, and three classic convergence assessment techniques are applied to them. It is shown that appropriately chosen scalar conversions of the models can retain statistical ensemble properties similar to those of geologic projections created by rasterization.
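One classic diagnostic applicable to such fixed-dimensional scalar conversions is the Gelman-Rubin potential scale reduction factor; a minimal sketch for scalar chains (illustrative only, not the specific conversions of the study):

```python
import random

def gelman_rubin(chains):
    """Gelman-Rubin R-hat for several equally long scalar chains,
    e.g. scalar conversions of transdimensional rjMCMC realizations.
    Values near 1 indicate that the chains have converged to a
    common distribution."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    B = n * sum((mu - grand) ** 2 for mu in means) / (m - 1)    # between-chain
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m                # within-chain
    var_plus = (n - 1) / n * W + B / n
    return (var_plus / W) ** 0.5

rng = random.Random(0)
converged = [[rng.gauss(0, 1) for _ in range(1000)] for _ in range(2)]
biased = [[rng.gauss(0, 1) for _ in range(1000)],
          [rng.gauss(5, 1) for _ in range(1000)]]
# The pair drawn from the same distribution gives R-hat near 1;
# the pair with shifted means is flagged as non-converged.
```

The burn-in point discussed above is then the chain position after which R-hat stays close to 1.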
Congenital adrenal hyperplasia (CAH) is the most common form of adrenal insufficiency in childhood; it requires cortisol replacement therapy with hydrocortisone (HC, synthetic cortisol) from birth and therapy monitoring for successful treatment. In children, the less invasive dried blood spot (DBS) sampling with whole blood including red blood cells (RBCs) provides an advantageous alternative to plasma sampling.
Potential differences in binding/association processes between plasma and DBS, however, need to be considered to correctly interpret DBS measurements for therapy monitoring. While capillary DBS samples would be used in clinical practice, venous cortisol DBS samples from children with adrenal insufficiency were analyzed here, due to data availability and to directly compare, and thus understand, potential differences between venous DBS and plasma. A previously published HC plasma pharmacokinetic (PK) model was extended by leveraging these DBS concentrations.
In addition to the previously characterized binding of cortisol to albumin (linear process) and corticosteroid-binding globulin (CBG; saturable process), the DBS data enabled the characterization of a linear cortisol association with RBCs, thereby providing a quantitative link between DBS and plasma cortisol concentrations. The ratio between the observed cortisol plasma and DBS concentrations varies widely, from 2 to 8. Deterministic simulations of the different cortisol binding/association fractions demonstrated that at higher blood cortisol concentrations saturation of cortisol binding to CBG occurs, leading to an increase in all other cortisol binding fractions.
In conclusion, a mathematical PK model was developed which links DBS measurements to plasma exposure and thus allows for a quantitative interpretation of DBS measurements.
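The interplay of the binding/association processes can be sketched numerically. The following toy model solves for free cortisol given a total blood concentration, with one saturable (CBG-like) and two linear (albumin- and RBC-like) processes; all parameter values are hypothetical, not the fitted values of the published model:

```python
def free_cortisol(total, bmax=700.0, kd=50.0, k_alb=0.6, k_rbc=0.3):
    """Solve total = free + bmax*free/(kd + free) + (k_alb + k_rbc)*free
    for the free concentration by bisection (the right-hand side is
    strictly increasing in free). Illustrative parameter values only.
    """
    lo, hi = 0.0, total
    for _ in range(100):
        mid = (lo + hi) / 2.0
        bound = bmax * mid / (kd + mid) + (k_alb + k_rbc) * mid
        if mid + bound > total:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# As CBG saturates at high total concentrations, the free fraction rises,
# and with it all other (linear) binding fractions.
low_frac = free_cortisol(200.0) / 200.0
high_frac = free_cortisol(2000.0) / 2000.0
```

This reproduces qualitatively the saturation effect described in the abstract.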
A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations is proposed and analysed. Application of regularization to the geopotential field used in the momentum equations leads to an unconditionally stable scheme. The analysis, together with a fully nonlinear example application, suggests that this approach is a promising, efficient, and accurate alternative to traditional schemes.
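For orientation, the rotating shallow-water equations in advective form read as follows, with the regularization understood schematically as a Helmholtz-type smoothing of the geopotential used in the momentum equation (a sketch of the idea with α a smoothing length scale, not necessarily the paper's exact formulation):

```latex
\frac{D\mathbf{u}}{Dt} + f\,\mathbf{u}^{\perp} = -\nabla\bar{\phi},
\qquad
\frac{D\phi}{Dt} + \phi\,\nabla\cdot\mathbf{u} = 0,
\qquad
(1 - \alpha^{2}\Delta)\,\bar{\phi} = \phi,
```

so that the limit α → 0 recovers the unregularized scheme.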
We discuss the Cauchy problem for the so-called Chaplygin system, which often appears in gas, aero- and hydrodynamics. This system can be thought of as a nonlinear analogue of the Cauchy-Riemann system in the plane. We pose Cauchy data on a part of the boundary and apply a variational approach to construct a solution to this ill-posed problem. The problem actually gives insight into fundamental questions related to unstable problems for nonlinear equations.
In this paper we implement the inverse seesaw mechanism into the noncommutative framework on the basis of the AC extension of the standard model. The main difference from the classical AC model is the chiral nature of the AC fermions with respect to a U(1)_X extension of the standard model gauge group. It is this extension which allows us to couple the right-handed neutrinos via a gauge invariant mass term to the left-handed A particles. The natural scale of these gauge invariant masses is of the order of 10^17 GeV, while the Dirac masses of the neutrino and the AC particles are generated dynamically and are therefore much smaller (∼1 to ∼10^6 GeV). From this configuration, a working inverse seesaw mechanism for the neutrinos is obtained.
This paper provides a complete list of Krajewski diagrams representing the standard model of particle physics. We give the possible representations of the algebra and the anomaly-free lifts which provide the representation of the standard model gauge group on the fermionic Hilbert space. The algebra representations following from the Krajewski diagrams are not complete in the sense that the corresponding spectral triples do not necessarily obey the axiom of Poincaré duality. This defect may be repaired by adding new particles to the model, i.e. by building models beyond the standard model. The aim of this list of finite spectral triples (up to Poincaré duality) is therefore to provide a basis for model building beyond the standard model.
In this publication we present an extension of the standard model within the framework of Connes' noncommutative geometry. The model presented here is based on a minimal spectral triple which contains the standard model particles, new vectorlike fermions, and a new U(1) gauge subgroup. Additionally, a new complex scalar field appears that couples to the right-handed neutrino, the new fermions, and the standard Higgs particle. The bosonic part of the action is given by the spectral action, which also determines relations among the gauge couplings, the quartic scalar couplings, and the Yukawa couplings at a cutoff energy of ∼10^17 GeV. We investigate the renormalization group flow of these relations. The low energy behavior allows one to constrain the Higgs mass, the mass of the new scalar, and the mixing between these two scalar fields.
Low Earth orbiting geomagnetic satellite missions, such as the Swarm satellite mission, are the only means to monitor and investigate ionospheric currents on a global scale and to make in situ measurements of F region currents. High-precision geomagnetic satellite missions are also able to detect ionospheric currents during quiet-time geomagnetic conditions that only have few nanotesla amplitudes in the magnetic field. An efficient method to isolate the ionospheric signals from satellite magnetic field measurements has been the use of residuals between the observations and predictions from empirical geomagnetic models for other geomagnetic sources, such as the core and lithospheric field or signals from the quiet-time magnetospheric currents. This study aims at highlighting the importance of high-resolution magnetic field models that are able to predict the lithospheric field and that consider the quiet-time magnetosphere for reliably isolating signatures from ionospheric currents during geomagnetically quiet times. The effects on the detection of ionospheric currents arising from neglecting the lithospheric and magnetospheric sources are discussed using the example of four Swarm orbits during very quiet times. The respective orbits show a broad range of typical scenarios, such as strong and weak ionospheric signal (during day- and nighttime, respectively) superimposed over strong and weak lithospheric signals. If predictions from the lithosphere or magnetosphere are not properly considered, the amplitude of the ionospheric currents, such as the midlatitude Sq currents or the equatorial electrojet (EEJ), is modulated by 10-15 % in the examples shown. An analysis of several orbits above the African sector, where the lithospheric field is significant, showed that the peak value of the signatures of the EEJ is in error by 5 % on average when lithospheric contributions are not considered, which is in the range of uncertainties of present empirical models of the EEJ.
Prospective and retrospective evaluation of five-year earthquake forecast models for California
(2017)
S-test results for the USGS and RELM forecasts. The differences between the simulated log-likelihoods and the observed log-likelihood are labelled on the horizontal axes, with scaling adjustments for the 40year.retro experiment. The horizontal lines represent the confidence intervals, at the 0.05 significance level, for each forecast and experiment. If this range contains a log-likelihood difference of zero, the forecasted log-likelihoods are consistent with the observed, and the forecast passes the S-test (denoted by thin lines). If the range does not contain zero, the forecast fails the S-test for that particular experiment (denoted by thick lines). Colours distinguish between experiments (see Table 2 for explanation of experiment durations). Due to anomalously large likelihood differences, S-test results for Wiemer-Schorlemmer.ALM during the 10year.retro and 40year.retro experiments are not displayed. The range of log-likelihoods for the Holliday-et-al.PI forecast is lower than for the other forecasts due to relatively homogeneous forecasted seismicity rates and use of a small fraction of the RELM testing region.
Cell-level systems biology model to study inflammatory bowel diseases and their treatment options
(2023)
To help understand the complex and therapeutically challenging inflammatory bowel diseases (IBDs), we developed a systems biology model of the intestinal immune system that is able to describe main aspects of IBD and different treatment modalities thereof. The model, including key cell types and processes of the mucosal immune response, compiles a large amount of isolated experimental findings from literature into a larger context and allows for simulations of different inflammation scenarios based on the underlying data and assumptions. In the context of a large and diverse virtual IBD population, we characterized the patients based on their phenotype (in contrast to healthy individuals, they developed persistent inflammation after a trigger event) rather than on a priori assumptions on parameter differences to a healthy individual. This allowed us to reproduce the enormous diversity of predispositions known to lead to IBD. Analyzing different treatment effects, the model provides insight into characteristics of individual drug therapy. We illustrate for anti-TNF-alpha therapy how the model can be used (i) to decide on alternative treatments with the best prospects in the case of nonresponse, and (ii) to identify promising combination therapies with other available treatment options.
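The phenotype-based characterization, persistent inflammation after a sufficiently large trigger, is the hallmark of a bistable system. A hypothetical one-variable caricature (not the published multi-cell-type model) illustrates the principle:

```python
def simulate_inflammation(x0, t_end=50.0, dt=0.01, k=3.0, d=1.0):
    """Toy bistable inflammation model:

        dx/dt = k * x^2 / (1 + x^2) - d * x

    A hypothetical minimal caricature: a self-amplifying immune
    response with saturation, plus first-order resolution. Depending
    on the trigger size x0, the system either returns to the healthy
    state x = 0 or settles into a persistent inflamed state.
    """
    x = x0
    for _ in range(int(t_end / dt)):        # explicit Euler integration
        x += dt * (k * x * x / (1.0 + x * x) - d * x)
    return x

small_trigger = simulate_inflammation(0.1)  # resolves towards 0
large_trigger = simulate_inflammation(1.0)  # persistent inflammation
```

With k = 3 and d = 1 the stable states are x = 0 and x = (3 + √5)/2 ≈ 2.62, separated by an unstable threshold, mirroring the trigger-dependent phenotypes in the virtual population.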
The paper is devoted to asymptotic analysis of the Dirichlet problem for a second order partial differential equation containing a small parameter multiplying the highest order derivatives. It corresponds to a small perturbation of a dynamical system having a stationary solution in the domain. We focus on the case where the trajectories of the system go into the domain and the stationary solution is a proper node.
On Particular n-Clones
(2013)
This paper is concerned with the filtering problem in continuous time. Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter, which provides an exact solution for the linear Gaussian problem; (ii) the ensemble Kalman-Bucy filter (EnKBF), which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems; and (iii) the feedback particle filter (FPF), which represents an extension of the EnKBF and furthermore provides for a consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain times error formula to implement the update step (to account for conditioning due to the observations) in the filter. In contrast to the commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid the resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides for error correction potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the issue of nonuniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. Performance of this and other algorithms is illustrated for a numerical example.
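The gain-times-error structure common to the three filters is easiest to see in the scalar Kalman-Bucy case (i); a minimal Euler-discretized sketch (illustrative only, not the paper's numerical example):

```python
import math
import random

def kalman_bucy(a=-0.5, sig=1.0, h=1.0, R=1.0, dt=1e-3, T=10.0, seed=0):
    """Euler discretization of the scalar Kalman-Bucy filter for
        dX = a X dt + sig dW,   dZ = h X dt + dV,
    illustrating the gain-times-error update
        dm = a m dt + K (dZ - h m dt),  K = P h / R.
    """
    rng = random.Random(seed)
    x, m, P = 1.0, 0.0, 1.0
    sdt = math.sqrt(dt)
    for _ in range(int(T / dt)):
        dz = h * x * dt + sdt * rng.gauss(0.0, 1.0)          # observation increment
        x += a * x * dt + sig * sdt * rng.gauss(0.0, 1.0)    # hidden signal
        K = P * h / R                                        # Kalman gain
        m += a * m * dt + K * (dz - h * m * dt)              # gain times error
        P += dt * (2 * a * P + sig ** 2 - (h * P) ** 2 / R)  # Riccati equation
    return m, P

m, P = kalman_bucy()
# P converges to the positive root of 2aP + sig^2 - h^2 P^2 / R = 0.
```

The EnKBF and FPF replace the exact covariance P by ensemble quantities while keeping exactly this update structure.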
We show a Lefschetz fixed point formula for holomorphic functions in a bounded domain D with smooth boundary in the complex plane. To introduce the Lefschetz number for a holomorphic map of D, we make use of the Bergman kernel of this domain. The Lefschetz number is proved to be the sum of the usual contributions of fixed points of the map in D and contributions of boundary fixed points, the latter being different for attracting and repelling fixed points.
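For the unit disk, the Bergman kernel entering such constructions is explicit, K(z, w) = 1/(π(1 − z w̄)²); a quick numerical check of this closed form against its orthonormal-basis series:

```python
import cmath

def bergman_disk(z, w):
    """Bergman kernel of the unit disk: K(z, w) = 1 / (pi (1 - z conj(w))^2)."""
    return 1.0 / (cmath.pi * (1.0 - z * w.conjugate()) ** 2)

def bergman_series(z, w, n_terms=200):
    """Series form sum_{n>=0} (n + 1) (z conj(w))^n / pi, built from the
    orthonormal basis sqrt((n + 1) / pi) z^n of the Bergman space."""
    q = z * w.conjugate()
    return sum((n + 1) * q ** n for n in range(n_terms)) / cmath.pi

z, w = 0.3 + 0.2j, -0.1 + 0.4j
# Closed form and truncated series agree for points inside the disk.
```

For a general smooth domain no such closed form exists, which is why the kernel has to be handled abstractly as in the abstract above.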
Anisotropic edge problems
(2002)
We investigate elliptic pseudodifferential operators which degenerate in an anisotropic way on a submanifold of arbitrary codimension. To find Fredholm problems for such operators we adjoin boundary and coboundary conditions on the submanifold. The algebra obtained this way is a far-reaching generalisation of Boutet de Monvel's algebra of boundary value problems with the transmission property. We construct left and right regularisers and prove theorems on hypoellipticity and local solvability.
We study the Neumann problem for the de Rham complex in a bounded domain of Rn with singularities on the boundary. The singularities may be general enough, varying from Lipschitz domains to domains with cuspidal edges on the boundary. Following Lopatinskii we reduce the Neumann problem to a singular integral equation on the boundary. The Fredholm solvability of this equation is then equivalent to the Fredholm property of the Neumann problem in suitable function spaces. The boundary integral equation is written explicitly and may be treated by diverse methods. In this way we obtain, in particular, asymptotic expansions of harmonic forms near the singularities of the boundary.
By quasicomplexes are usually meant perturbations of complexes that are small in some sense. Of interest are not only perturbations within the category of complexes but also those going beyond this category. A sequence perturbed in this way is no longer a complex, and so it bears no cohomology. We show how to introduce the Euler characteristic for small perturbations of Fredholm complexes. The paper is to appear in Funct. Anal. and its Appl., 2006.
We define the Dirichlet to Neumann operator for an elliptic complex of first order differential operators on a compact Riemannian manifold with boundary. Under reasonable conditions the Betti numbers of the complex prove to be completely determined by the Dirichlet to Neumann operator on the boundary.
The Riemann hypothesis is equivalent to the fact that the reciprocal function 1/zeta(s) extends from the interval (1/2,1) to an analytic function in the quarter-strip 1/2 < Re s < 1, Im s > 0. Function theory allows one to rewrite the condition of analytic continuability in an elegant form amenable to numerical experiments.
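As a hedged sketch of such a numerical experiment, zeta (and hence 1/zeta) can be evaluated for Re s > 0, s ≠ 1 through the alternating Dirichlet eta series; the truncation length below is an arbitrary illustrative choice:

```python
def zeta(s, terms=100000):
    """Riemann zeta via the alternating Dirichlet eta series:
    zeta(s) = eta(s) / (1 - 2**(1-s)), with eta(s) = sum (-1)^(n+1) n^(-s).
    The series converges for Re(s) > 0, s != 1, so it reaches the
    quarter-strip discussed above; complex s is supported as well."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))
```

On the real interval (1/2, 1) the function is negative, so 1/zeta(s) is finite there, consistent with the continuation statement above.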
We describe a natural construction of deformation quantization on a compact symplectic manifold with boundary. On the algebra of quantum observables a trace functional is defined which, as usual, annihilates commutators. This gives rise to an index as the trace of the unity element. We formulate the index theorem as a conjecture and test it on the classical harmonic oscillator.
We consider a boundary value problem for an elliptic differential operator of order 2m in a domain D ⊂ R^n. The boundary of D is smooth outside a finite number of conical points, and the Lopatinskii condition is fulfilled on the smooth part of ∂D. The corresponding spaces are weighted Sobolev spaces H^{s,Υ}(D), and this allows one to define ellipticity of weight Υ for the problem. The resolvent of the problem is assumed to possess rays of minimal growth. The main result says that if there are rays of minimal growth with angles between neighbouring rays not exceeding π(Υ + 2m)/n, then the root functions of the problem are complete in L²(D). In the case of second order elliptic equations the results remain true for all domains with Lipschitz boundary.
In order to characterise the C*-algebra generated by the singular Bochner-Martinelli integral over a smooth closed hypersurface in Cn, we compute its principal symbol. We then show that the Szegö projection belongs to the strong closure of the algebra generated by the singular Bochner-Martinelli integral.
For a sequence of Hilbert spaces and continuous linear operators the curvature is defined to be the composition of any two consecutive operators. This is modeled on the de Rham resolution of a connection on a module over an algebra. Of particular interest are those sequences for which the curvature is "small" at each step, e.g., belongs to a fixed operator ideal. In this context we elaborate the theory of Fredholm sequences and show how to introduce the Lefschetz number.
We consider a mixed problem for a degenerate differential operator equation of higher order. We establish some embedding theorems in weighted Sobolev spaces and show existence and uniqueness of the generalized solution of this problem. We also give a description of the spectrum of the corresponding operator.
Students enter university computer science programs with very different competencies, experience and knowledge. 145 datasets on freshman computer science students, collected by learning management systems in relation to exam outcomes and learning dispositions data (e.g., student dispositions, previous experiences and attitudes measured through self-reported surveys), were exploited to identify indicators that predict academic success and hence to make effective interventions for an extremely heterogeneous group of students.
The overall program "arborescent numbers" is to similarly perform the constructions from the natural numbers (N) to the positive fractional numbers (Q+) to positive real numbers (R+) beginning with (specific) binary trees instead of natural numbers. N can be regarded as the associative binary trees. The binary trees B and the left-commutative binary trees P allow the hassle-free definition of arbitrarily high arithmetic operations (hyper ... hyperpowers). To construct the division trees the algebraic structure "coppice" is introduced, which is a group with an addition over which the multiplication is right-distributive. Q+ is the initial associative coppice. The present work accomplishes one step in the program "arborescent numbers", namely the construction of the arborescent equivalent(s) of the positive fractional numbers. These equivalents are the "division binary trees" and the "fractional trees". A representation with decidable word problem for each of them is given. The set of functions f:R1->R1 generated from identity by taking powers is isomorphic to P and can be embedded into a coppice by taking inverses.
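The hierarchy of higher arithmetic operations mentioned above has a familiar scalar shadow on the natural numbers, which can be sketched as follows (this is the classical hyperoperation recursion, not the tree-based construction of the thesis):

```python
def hyper(n, a, b):
    """Hyperoperation hierarchy on positive integers:
    hyper(1) is addition, hyper(2) multiplication, hyper(3) exponentiation,
    and each higher level iterates the one below."""
    if n == 1:
        return a + b
    if b == 1:
        return a          # base case: one application yields a itself
    return hyper(n - 1, a, hyper(n, a, b - 1))
```

For example, hyper(4, 2, 3) is the power tower 2^(2^2) = 16.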
This thesis aims at presenting in an organized fashion the required basics to understand the Glauber dynamics as a way of simulating configurations according to the Gibbs distribution of the Curie-Weiss Potts model. Therefore, essential aspects of discrete-time Markov chains on a finite state space are examined, especially their convergence behavior and related mixing times. Furthermore, special emphasis is placed on a consistent and comprehensive presentation of the Curie-Weiss Potts model and its analysis. Finally, the Glauber dynamics is studied in general and applied afterwards in an exemplary way to the Curie-Weiss model as well as the Curie-Weiss Potts model. The associated considerations are supplemented with two computer simulations aiming to show the cutoff phenomenon and the temperature dependence of the convergence behavior.
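For the two-state (Ising) special case of the Curie-Weiss dynamics above, one heat-bath Glauber update can be sketched as follows; the Potts case would replace the two-point conditional law by a q-point one, and the function name and parameters are illustrative:

```python
import math
import random

def glauber_step(spins, beta):
    """One Glauber (heat-bath) update for the Curie-Weiss Ising model:
    pick a uniform site and resample its spin from the conditional Gibbs
    distribution, which depends only on the mean field of the other spins."""
    n = len(spins)
    i = random.randrange(n)
    h = (sum(spins) - spins[i]) / n                  # mean field felt by site i
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * h)) # P(spin_i = +1 | rest)
    spins[i] = 1 if random.random() < p_plus else -1
    return spins
```

Iterating this step simulates the chain whose stationary law is the Gibbs distribution; at low temperature (large beta) the magnetization stays close to its starting well for long times, which is the metastability behind the temperature-dependent mixing discussed in the thesis.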
Tasking machine learning to predict segments of a time series requires estimating the parameters of an ML model with input/output pairs from the time series. We borrow two techniques used in statistical data assimilation to accomplish this task: time-delay embedding to prepare our input data and precision annealing as a training method. The precision annealing approach identifies the global minimum of the action (-log[P]). In this way, we are able to identify the number of training pairs required to produce good generalizations (predictions) for the time series. We proceed from a scalar time series s(t_n), t_n = t_0 + n Δt, and, using methods of nonlinear time series analysis, show how to produce a DE > 1 dimensional time-delay embedding space in which the time series has no false neighbors, unlike the observed s(t_n) series. In that DE-dimensional space, we explore the use of feedforward multilayer perceptrons as network models operating on DE-dimensional input and producing DE-dimensional outputs.
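The time-delay embedding step can be sketched as follows; the helper name and uniform lag are illustrative, and the false-neighbor check used to choose the dimension is omitted:

```python
def delay_embed(series, dim, tau=1):
    """Time-delay embedding: map a scalar series s(t_n) to the vectors
    [s(t_n), s(t_n - tau), ..., s(t_n - (dim-1)*tau)], one vector per
    time index n for which all lagged samples exist."""
    start = (dim - 1) * tau
    return [[series[n - k * tau] for k in range(dim)]
            for n in range(start, len(series))]
```

These dim-dimensional vectors are exactly the kind of input a feedforward network would consume in the setting described above.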
The interdisciplinary workshop STOCHASTIC PROCESSES WITH APPLICATIONS IN THE NATURAL SCIENCES was held in Bogotá, at Universidad de los Andes, from December 5 to December 9, 2016. It brought together researchers from Colombia, Germany, France, Italy, and Ukraine, who communicated recent progress in mathematical research related to stochastic processes with applications in biophysics.
The present volume collects three of the four courses held at this meeting by Angelo Valleriani, Sylvie Rœlly and Alexei Kulik.
A particular aim of this collection is to inspire young scientists in setting up research goals within the wide scope of fields represented in this volume.
Angelo Valleriani, PhD in high energy physics, is group leader of the team "Stochastic processes in complex and biological systems" from the Max-Planck-Institute of Colloids and Interfaces, Potsdam.
Sylvie Rœlly, Docteur en Mathématiques, is the head of the chair of Probability at the University of Potsdam.
Alexei Kulik, Doctor of Sciences, is a leading researcher at the Institute of Mathematics of the Ukrainian National Academy of Sciences.
Nonlinear data assimilation
(2015)
This book contains two review articles on nonlinear data assimilation that deal with closely related topics but were written and can be read independently. Both contributions focus on so-called particle filters.
The first contribution, by Jan van Leeuwen, focuses on the potential of proposal densities. It discusses the issues with present-day particle filters and explores new ideas for proposal densities to solve them, converging to particle filters that work well in systems of any dimension, and closes with a high-dimensional example. The second contribution, by Cheng and Reich, discusses a unified framework for ensemble-transform particle filters. This allows one to bridge successful ensemble Kalman filters with fully nonlinear particle filters, and allows a proper introduction of localization in particle filters, which has been lacking up to now.
Particle filters contain the promise of fully nonlinear data assimilation. They have been applied in numerous science areas, including the geosciences, but their application to high-dimensional geoscience systems has been limited due to their inefficiency in high-dimensional systems in standard settings. However, huge progress has been made, and this limitation is disappearing fast due to recent developments in proposal densities, the use of ideas from (optimal) transportation, the use of localization and intelligent adaptive resampling strategies. Furthermore, powerful hybrids between particle filters and ensemble Kalman filters and variational methods have been developed. We present a state-of-the-art discussion of present efforts of developing particle filters for high-dimensional nonlinear geoscience state-estimation problems, with an emphasis on atmospheric and oceanic applications, including many new ideas, derivations and unifications, highlighting hidden connections, including pseudo-code, and generating a valuable tool and guide for the community. Initial experiments show that particle filters can be competitive with present-day methods for numerical weather prediction, suggesting that they will become mainstream soon.
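One common low-variance resampling scheme of the kind discussed here is systematic resampling, sketched below; the implementation details are generic and not tied to any particular filter in the text:

```python
import random

def systematic_resample(particles, weights):
    """Systematic resampling: draw one uniform offset, then take n evenly
    spaced points through the normalized cumulative weights.  This has
    lower variance than n independent multinomial draws."""
    n = len(particles)
    total = sum(weights)
    cum, c = 0.0, []
    for w in weights:
        cum += w / total
        c.append(cum)                    # normalized cumulative weights
    u = random.random() / n              # single uniform offset in [0, 1/n)
    out, j = [], 0
    for i in range(n):
        p = u + i / n                    # evenly spaced query points
        while j < n - 1 and c[j] < p:
            j += 1
        out.append(particles[j])
    return out
```

Particles with negligible weight are dropped and heavy particles are duplicated roughly in proportion to their weight, which is the degeneracy control step that adaptive strategies refine.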
ShapeRotator
(2018)
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
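The underlying geometric idea, remove translation by centering on the centroid and remove rotation by a fixed rigid rotation, can be sketched in Python (ShapeRotator itself is an R tool; the helper names and the convention of sending a chosen landmark to the positive x-axis are illustrative):

```python
import math

def center(points):
    """Remove translation: shift a 3D landmark configuration to its centroid."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return [(p[0] - cx, p[1] - cy, p[2] - cz) for p in points]

def align_to_x(points, idx=0):
    """Remove rotation: rotate about z, then about y, so that landmark
    `idx` lands on the positive x-axis (a fixed rigid rotation)."""
    x, y, z = points[idx]
    a = math.atan2(y, x)                       # angle in the xy-plane
    def rz(p):                                 # rotate by -a about z
        return (p[0] * math.cos(-a) - p[1] * math.sin(-a),
                p[0] * math.sin(-a) + p[1] * math.cos(-a), p[2])
    pts = [rz(p) for p in points]
    x, _, z = pts[idx]
    b = math.atan2(z, x)                       # remaining angle in xz-plane
    def ry(p):                                 # rotate by b about y
        return (p[0] * math.cos(b) + p[2] * math.sin(b), p[1],
                -p[0] * math.sin(b) + p[2] * math.cos(b))
    return [ry(p) for p in pts]
```

Applying the same two steps to each articulated module puts all configurations in a common frame, after which standard landmark-based analyses can proceed.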
The human immunodeficiency virus (HIV) can be suppressed by highly active anti-retroviral therapy (HAART) in the majority of infected patients. Nevertheless, treatment interruptions inevitably result in viral rebounds from persistent, latently infected cells, necessitating lifelong treatment. Virological failure due to resistance development is a frequent event and the major threat to treatment success. Currently, it is recommended to change treatment after the confirmation of virological failure. However, at the moment virological failure is detected, drug resistant mutants already replicate in great numbers. They infect numerous cells, many of which will turn into latently infected cells. This pool of cells represents an archive of resistance, which has the potential of limiting future treatment options. The objective of this study was to design a treatment strategy for treatment-naive patients that decreases the likelihood of early treatment failure and preserves future treatment options. We propose to apply a single, pro-active treatment switch, following a period of treatment with an induction regimen. The main goal of the induction regimen is to decrease the abundance of randomly generated mutants that confer resistance to the maintenance regimen, thereby increasing subsequent treatment success. Treatment is switched before the overgrowth and archiving of mutant strains that carry resistance against the induction regimen and would limit its future re-use. In silico modelling shows that an optimal trade-off is achieved by switching treatment at ~80 days after the initiation of antiviral therapy. Evaluation of the proposed treatment strategy demonstrated significant improvements in terms of resistance archiving and virological response, as compared to conventional HAART.
While continuous pro-active treatment alternation improved the clinical outcome in a randomized trial, our results indicate that a similar improvement might also be reached after a single pro-active treatment switch. The clinical validity of this finding, however, remains to be shown by a corresponding trial.
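For context, the kind of in silico model used in such studies can be illustrated with the standard target-cell-limited viral dynamics equations; this is a generic textbook model with illustrative parameter values, not the authors' resistance model:

```python
def simulate(days, dt=0.01, lam=100.0, d=0.1, beta=5e-5,
             delta=0.5, p=100.0, c=5.0, eff=0.0):
    """Euler integration of the basic viral dynamics ODEs:
    T' = lam - d*T - (1-eff)*beta*T*V   (uninfected target cells)
    I' = (1-eff)*beta*T*V - delta*I     (productively infected cells)
    V' = p*I - c*V                      (free virus),
    with eff in [0, 1] the drug efficacy.  All values are illustrative."""
    T, I, V = lam / d, 0.0, 1e-3        # start at uninfected steady state
    b = (1.0 - eff) * beta
    for _ in range(int(days / dt)):
        dT = lam - d * T - b * T * V
        dI = b * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    return T, I, V
```

With full efficacy the virus is cleared, while without treatment a small inoculum grows toward the infected equilibrium; resistance models extend this scheme with competing viral strains and latently infected compartments.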
The International Project for the Evaluation of Educational Achievement (IEA) was formed in the 1950s (Postlethwaite, 1967). Since that time, the IEA has conducted many studies in the area of mathematics, such as the First International Mathematics Study (FIMS) in 1964, the Second International Mathematics Study (SIMS) in 1980-1982, and a series of studies beginning with the Third International Mathematics and Science Study (TIMSS) which has been conducted every 4 years since 1995. According to Stigler et al. (1999), in the FIMS and the SIMS, U.S. students achieved low scores in comparison with students in other countries (p. 1). The TIMSS 1995 “Videotape Classroom Study” was therefore a complement to the earlier studies conducted to learn “more about the instructional and cultural processes that are associated with achievement” (Stigler et al., 1999, p. 1). The TIMSS Videotape Classroom Study is known today as the TIMSS Video Study. From the findings of the TIMSS 1995 Video Study, Stigler and Hiebert (1999) likened teaching to “mountain ranges poking above the surface of the water,” whereby they implied that we might see the mountaintops, but we do not see the hidden parts underneath these mountain ranges (pp. 73-78). By watching the videotaped lessons from Germany, Japan, and the United States again and again, they discovered that “the systems of teaching within each country look similar from lesson to lesson. At least, there are certain recurring features [or patterns] that typify many of the lessons within a country and distinguish the lessons among countries” (pp. 77-78). They also discovered that “teaching is a cultural activity,” so the systems of teaching “must be understood in relation to the cultural beliefs and assumptions that surround them” (pp. 85, 88). From this viewpoint, one of the purposes of this dissertation was to study some cultural aspects of mathematics teaching and relate the results to mathematics teaching and learning in Vietnam. 
Another research purpose was to carry out a video study in Vietnam to find out the characteristics of Vietnamese mathematics teaching and compare these characteristics with those of other countries. In particular, this dissertation carried out the following research tasks:
- Studying the characteristics of teaching and learning in different cultures and relating the results to mathematics teaching and learning in Vietnam
- Introducing the TIMSS, the TIMSS Video Study and the advantages of using video study in investigating mathematics teaching and learning
- Carrying out the video study in Vietnam to identify the image, scripts and patterns, and the lesson signature of eighth-grade mathematics teaching in Vietnam
- Comparing some aspects of mathematics teaching in Vietnam and other countries and identifying the similarities and differences across countries
- Studying the demands and challenges of innovating mathematics teaching methods in Vietnam – lessons from the video studies

Hopefully, this dissertation will be a useful reference material for pre-service teachers at education universities to understand the nature of teaching and develop their teaching career.
We consider quasicomplexes of pseudodifferential operators on a smooth compact manifold without boundary. To each quasicomplex we associate a complex of symbols. The quasicomplex is elliptic if this symbol complex is exact away from the zero section. We prove that elliptic quasicomplexes are Fredholm. Moreover, we introduce the Euler characteristic for elliptic quasicomplexes and prove a generalisation of the Atiyah-Singer index theorem.
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
By perturbing the differential of a (cochain) complex by "small" operators, one obtains what is referred to as a quasicomplex, i.e. a sequence whose curvature is not equal to zero in general. In this situation the cohomology is no longer defined. Note that it depends on the structure of the underlying spaces whether or not an operator is "small"; this leads to a magical mix of perturbation and regularisation theory. In the general setting of Hilbert spaces, compact operators are "small". In order to develop this theory, many elements of diverse mathematical disciplines, such as functional analysis, differential geometry, partial differential equations, homological algebra and topology, have to be combined.

All essential basics are summarised in the first chapter of this thesis. This contains classical elements of index theory, such as Fredholm operators, elliptic pseudodifferential operators and characteristic classes. Moreover, we study the de Rham complex and introduce Sobolev spaces of arbitrary order as well as the concept of operator ideals.

In the second chapter, the abstract theory of (Fredholm) quasicomplexes of Hilbert spaces is developed. From the very beginning we consider quasicomplexes with curvature in an ideal class. We introduce the Euler characteristic, the cone of a quasiendomorphism and the Lefschetz number. In particular, we generalise Euler's identity, which allows us to develop the Lefschetz theory on nonseparable Hilbert spaces.

Finally, in the third chapter the abstract theory is applied to elliptic quasicomplexes with pseudodifferential operators of arbitrary order. We show that the Atiyah-Singer index formula holds true for these objects and, as an example, we compute the Euler characteristic of the connection quasicomplex. In addition, we introduce geometric quasiendomorphisms and prove a generalisation of the Lefschetz fixed point theorem of Atiyah and Bott.
In a recent paper with N. Tarkhanov, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
Background/Aims: Angiogenesis plays a key role during embryonic development. The vascular endothelin (ET) system is involved in the regulation of angiogenesis, and lipopolysaccharides (LPS) can induce angiogenesis. The effects of ET blockers on baseline and LPS-stimulated angiogenesis during embryonic development have remained unknown so far. Methods: The blood vessel density (BVD) of chorioallantoic membranes (CAMs), which were treated with saline (control), LPS, and/or the ETA blocker BQ123 and the ETB blocker BQ788, was quantified and analyzed using the IPP 6.0 image analysis program. Moreover, the expressions of ET-1, ET-2, ET-3, ET receptor A (ETRA), ET receptor B (ETRB) and VEGFR2 mRNA during embryogenesis were analyzed by semi-quantitative RT-PCR. Results: All components of the ET system are detectable during chicken embryogenesis. LPS increased angiogenesis substantially. This process was completely blocked by treatment with a combination of the ETA receptor blocker BQ123 and the ETB receptor blocker BQ788. This effect was accompanied by a decrease in ETRA, ETRB, and VEGFR2 gene expression. However, baseline angiogenesis was not affected by combined ETA/ETB receptor blockade. Conclusion: During chicken embryogenesis, LPS-stimulated angiogenesis, but not baseline angiogenesis, is sensitive to combined ETA/ETB receptor blockade.
Both aftershocks and geodetically measured postseismic displacements are important markers of the stress relaxation process following large earthquakes. Postseismic displacements can be related to creep-like relaxation in the vicinity of the coseismic rupture by means of inversion methods. However, the results of slip inversions are typically non-unique and subject to large uncertainties. Therefore, we explore the possibility of improving inversions by mechanical constraints. In particular, we take into account the physical understanding that postseismic deformation is stress-driven and occurs in the coseismically stressed zone. We perform joint inversions for coseismic and postseismic slip in a Bayesian framework for the 2004 M6.0 Parkfield earthquake. We perform a number of inversions with different constraints and calculate their statistical significance. According to information criteria, the best result is obtained with a physically reasonable model constrained by the stress condition (namely, that postseismic creep is driven by coseismic stress) and the condition that coseismic slip and large aftershocks are disjunct. This model explains 97% of the coseismic displacements and 91% of the postseismic displacements during days 1-5 following the Parkfield event. It indicates that the major postseismic deformation can be generally explained by a stress relaxation process in the Parkfield case. This result also indicates that the data constraining the coseismic slip model can be enriched postseismically. For the 2004 Parkfield event, we additionally observe an asymmetric relaxation process at the two sides of the fault, which can be explained by a material contrast across the fault of ~1.15 in seismic velocity.
Stress drop is a key factor in earthquake mechanics and engineering seismology. However, stress drop calculations based on fault slip can be significantly biased, particularly due to subjectively determined smoothing conditions in traditional least-squares slip inversion. In this study, we introduce a mechanically constrained Bayesian approach to simultaneously invert for fault slip and stress drop based on geodetic measurements. A Gaussian distribution for stress drop is implemented in the inversion as a prior. We have performed several synthetic tests to evaluate the stability and reliability of the inversion approach, considering different fault discretizations, fault geometries, utilized datasets, and variability of the slip direction. We finally apply the approach to the 2010 M8.8 Maule earthquake and invert for the coseismic slip and stress drop simultaneously. Two fault geometries from the literature are tested. Our results indicate that the derived slip models based on both fault geometries are similar, showing major slip north of the hypocenter and relatively weak slip in the south, as indicated in the slip models of other studies. The derived mean stress drop is 5-6 MPa, which is close to the stress drop of ~7 MPa that was independently determined according to force balance in this region by Luttrell et al. (J Geophys Res, 2011). These findings indicate that stress drop values can be consistently extracted from geodetic data.
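The effect of a Gaussian prior in such an inversion can be seen in a single-parameter caricature, where the MAP estimate has closed form (a ridge-regularized least square); the scalar setting and all names are illustrative only:

```python
def map_estimate(G, d, sigma2, m0, tau2):
    """MAP estimate for a one-parameter linear model d_i = G_i * m + noise,
    noise variance sigma2, and Gaussian prior m ~ N(m0, tau2).  The prior
    pulls the estimate toward m0, standing in for the stress-drop constraint."""
    num = sum(g * y for g, y in zip(G, d)) / sigma2 + m0 / tau2
    den = sum(g * g for g in G) / sigma2 + 1.0 / tau2
    return num / den
```

With a weak prior (large tau2) the estimate reduces to ordinary least squares; with a strong prior it collapses to the prior mean, illustrating how the Gaussian stress-drop prior regularizes the slip inversion.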
The drug concentrations targeted in meropenem and piperacillin/tazobactam therapy also depend on the susceptibility of the pathogen. Yet, the pathogen is often unknown, and antibiotic therapy is guided by empirical targets. To reliably achieve the targeted concentrations, dosing needs to be adjusted for renal function. We aimed to evaluate a meropenem and piperacillin/tazobactam monitoring program in intensive care unit (ICU) patients by assessing (i) the adequacy of locally selected empirical targets, (ii) if dosing is adequately adjusted for renal function and individual target, and (iii) if dosing is adjusted in target attainment (TA) failure. In a prospective, observational clinical trial of drug concentrations, relevant patient characteristics and microbiological data (pathogen, minimum inhibitory concentration (MIC)) for patients receiving meropenem or piperacillin/tazobactam treatment were collected. If the MIC value was available, a target range of 1-5 x MIC was selected for minimum drug concentrations of both drugs. If the MIC value was not available, 8-40 mg/L and 16-80 mg/L were selected as empirical target ranges for meropenem and piperacillin, respectively. A total of 356 meropenem and 216 piperacillin samples were collected from 108 and 96 ICU patients, respectively. The vast majority of observed MIC values was lower than the empirical target (meropenem: 90.0%, piperacillin: 93.9%), suggesting empirical target value reductions. TA was found to be low (meropenem: 35.7%, piperacillin 50.5%) with the lowest TA for severely impaired renal function (meropenem: 13.9%, piperacillin: 29.2%), and observed drug concentrations did not significantly differ between patients with different targets, indicating dosing was not adequately adjusted for renal function or target. 
Dosing adjustments were rare for both drugs (meropenem: 6.13%, piperacillin: 4.78%) and for meropenem irrespective of TA, revealing that concentration monitoring alone was insufficient to guide dosing adjustment. Empirical targets should regularly be assessed and adjusted based on local susceptibility data. To improve TA, scientific knowledge should be translated into easy-to-use dosing strategies guiding antibiotic dosing.
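The target-attainment logic described above reduces to a range check; a sketch with the thresholds stated in the abstract (function name hypothetical, defaults are the meropenem values):

```python
def in_target(conc, mic=None, empirical=(8.0, 40.0), factor=(1.0, 5.0)):
    """Target attainment check for a minimum drug concentration `conc`:
    use the 1-5 x MIC range when the MIC is known, otherwise the
    empirical range (8-40 mg/L for meropenem, 16-80 mg/L for piperacillin)."""
    if mic is not None:
        lo, hi = factor[0] * mic, factor[1] * mic
    else:
        lo, hi = empirical
    return lo <= conc <= hi
```

The abstract's observation that most MIC values fall below the empirical targets corresponds to the MIC-based range sitting far below the empirical one for susceptible pathogens.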
We propose a novel strategy for global sensitivity analysis of ordinary differential equations. It is based on an error-controlled solution of the partial differential equation (PDE) that describes the evolution of the probability density function associated with the input uncertainty/variability. The density yields a more accurate estimate of the output uncertainty/variability, where not only some observables (such as mean and variance) but also structural properties (e.g., skewness, heavy tails, bi-modality) can be resolved up to a selected accuracy. For the adaptive solution of the PDE Cauchy problem we use the Rothe method with multiplicative error correction, which was originally developed for the solution of parabolic PDEs. We show that, unlike in parabolic problems, conservation properties necessitate a coupling of temporal and spatial accuracy to avoid accumulation of spatial approximation errors over time. We provide convergence conditions for the numerical scheme and suggest an implementation using approximate approximations for spatial discretization to efficiently resolve the coupling of temporal and spatial accuracy. The performance of the method is studied by means of low-dimensional case studies. The favorable properties of the spatial discretization technique suggest that this may be the starting point for an error-controlled sensitivity analysis in higher dimensions.
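As a worked one-dimensional analogue of the density-evolution PDE, for a linear drift x' = a·x the Liouville equation has an exact solution along characteristics; the sketch below illustrates mass conservation, not the paper's adaptive Rothe scheme:

```python
import math

def liouville_density(rho0, t, x, a=-1.0):
    """Exact solution of the Liouville equation rho_t + (a*x*rho)_x = 0
    for the linear ODE x' = a*x:
    rho(t, x) = rho0(x * exp(-a*t)) * exp(-a*t),
    i.e. transport along characteristics with the Jacobian correction
    factor exp(-a*t), so total probability mass is conserved."""
    return rho0(x * math.exp(-a * t)) * math.exp(-a * t)
```

For a contracting drift (a < 0) the density sharpens around the stationary point, exactly the kind of structural feature (beyond mean and variance) that the density-based approach resolves.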
As a potentially toxic agent on nervous system and bone, the safety of aluminium exposure from adjuvants in vaccines and subcutaneous immune therapy (SCIT) products has to be continuously reevaluated, especially regarding concomitant administrations. For this purpose, knowledge on absorption and disposition of aluminium in plasma and tissues is essential. Pharmacokinetic data after vaccination in humans, however, are not available, and for methodological and ethical reasons difficult to obtain. To overcome these limitations, we discuss the possibility of an in vitro-in silico approach combining a toxicokinetic model for aluminium disposition with biorelevant kinetic absorption parameters from adjuvants. We critically review available kinetic aluminium-26 data for model building and, on the basis of a reparameterized toxicokinetic model (Nolte et al., 2001), we identify main modelling gaps. The potential of in vitro dissolution experiments for the prediction of intramuscular absorption kinetics of aluminium after vaccination is explored. It becomes apparent that there is need for detailed in vitro dissolution and in vivo absorption data to establish an in vitro-in vivo correlation (IVIVC) for aluminium adjuvants. We conclude that a combination of new experimental data and further refinement of the Nolte model has the potential to fill a gap in aluminium risk assessment.