ShapeRotator
(2018)
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
Particle filters contain the promise of fully nonlinear data assimilation. They have been applied in numerous science areas, including the geosciences, but their application to high-dimensional geoscience systems has been limited due to their inefficiency in standard settings. However, huge progress has been made, and this limitation is disappearing fast due to recent developments in proposal densities, the use of ideas from (optimal) transportation, the use of localization and intelligent adaptive resampling strategies. Furthermore, powerful hybrids between particle filters and ensemble Kalman filters and variational methods have been developed. We present a state-of-the-art discussion of present efforts of developing particle filters for high-dimensional nonlinear geoscience state-estimation problems, with an emphasis on atmospheric and oceanic applications, including many new ideas, derivations and unifications, highlighting hidden connections, including pseudo-code, and generating a valuable tool and guide for the community. Initial experiments show that particle filters can be competitive with present-day methods for numerical weather prediction, suggesting that they will become mainstream soon.
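As a point of reference, the following is a minimal bootstrap particle filter for a scalar toy model: forecast, weight by the observation likelihood, resample. It sketches only the basic scheme, not the proposal-density, transport, localization or hybrid variants surveyed above; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_steps, obs_std = 500, 50, 0.5
x_true = 0.0
particles = rng.standard_normal(N)
for _ in range(n_steps):
    x_true = 0.9 * x_true + rng.standard_normal()            # true state
    y = x_true + obs_std * rng.standard_normal()             # observation
    particles = 0.9 * particles + rng.standard_normal(N)     # forecast step
    w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)      # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]        # resample
print(abs(particles.mean() - x_true))    # posterior mean tracks the truth
```

In high dimensions this plain scheme degenerates (weight collapse), which is exactly the inefficiency the developments discussed above address.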
Nonlinear data assimilation
(2015)
This book contains two review articles on nonlinear data assimilation that deal with closely related topics but were written and can be read independently. Both contributions focus on so-called particle filters.
The first contribution by Jan van Leeuwen focuses on the potential of proposal densities. It discusses the issues with present-day particle filters and explores new ideas for proposal densities to solve them, converging to particle filters that work well in systems of any dimension, closing the contribution with a high-dimensional example. The second contribution by Cheng and Reich discusses a unified framework for ensemble-transform particle filters. This allows one to bridge successful ensemble Kalman filters with fully nonlinear particle filters, and allows a proper introduction of localization in particle filters, which has been lacking up to now.
The interdisciplinary workshop STOCHASTIC PROCESSES WITH APPLICATIONS IN THE NATURAL SCIENCES was held at Universidad de los Andes in Bogotá from December 5 to December 9, 2016. It brought together researchers from Colombia, Germany, France, Italy and Ukraine, who presented recent progress in mathematical research related to stochastic processes with applications in biophysics.
The present volume collects three of the four courses held at this meeting by Angelo Valleriani, Sylvie Rœlly and Alexei Kulik.
A particular aim of this collection is to inspire young scientists to set up research goals within the wide scope of fields represented in this volume.
Angelo Valleriani, PhD in high energy physics, is leader of the group "Stochastic processes in complex and biological systems" at the Max Planck Institute of Colloids and Interfaces, Potsdam.
Sylvie Rœlly, Docteur en Mathématiques, is the head of the chair of Probability at the University of Potsdam.
Alexei Kulik, Doctor of Sciences, is a leading researcher at the Institute of Mathematics of the Ukrainian National Academy of Sciences.
Tasking machine learning to predict segments of a time series requires estimating the parameters of an ML model with input/output pairs from the time series. We borrow two techniques used in statistical data assimilation in order to accomplish this task: time-delay embedding to prepare our input data and precision annealing as a training method. The precision annealing approach identifies the global minimum of the action ($-\log P$). In this way, we are able to identify the number of training pairs required to produce good generalizations (predictions) for the time series. We proceed from a scalar time series $s(t_n)$, $t_n = t_0 + n\,\Delta t$, and, using methods of nonlinear time series analysis, show how to produce a $D_E > 1$-dimensional time-delay embedding space in which the time series has no false neighbors, unlike the observed scalar series $s(t_n)$. In that $D_E$-dimensional space, we explore the use of feedforward multilayer perceptrons as network models operating on $D_E$-dimensional inputs and producing $D_E$-dimensional outputs.
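To make the embedding step concrete, here is a minimal sketch of time-delay embedding. The series, dimension and delay are illustrative choices of ours; the false-nearest-neighbors criterion for choosing $D_E$ is not shown.

```python
import numpy as np

# Map the scalar series s(t_n) to D_E-dimensional vectors
# (s_n, s_{n+tau}, ..., s_{n+(D_E-1)tau}).
def delay_embed(s, dim, tau):
    n = len(s) - (dim - 1) * tau
    return np.column_stack([s[i * tau : i * tau + n] for i in range(dim)])

t = np.arange(0, 60, 0.05)
s = np.sin(t) + 0.5 * np.sin(2.2 * t)        # stand-in scalar time series
Y = delay_embed(s, dim=4, tau=10)            # rows are 4-dim state proxies
X_in, X_out = Y[:-1], Y[1:]                  # input/output training pairs
print(X_in.shape, X_out.shape)               # (1169, 4) (1169, 4)
```

The rows of `X_in` and `X_out` are exactly the kind of input/output pairs on which the multilayer perceptrons described above would be trained.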
This thesis aims at presenting in an organized fashion the required basics to understand the Glauber dynamics as a way of simulating configurations according to the Gibbs distribution of the Curie-Weiss Potts model. Therefore, essential aspects of discrete-time Markov chains on a finite state space are examined, especially their convergence behavior and related mixing times. Furthermore, special emphasis is placed on a consistent and comprehensive presentation of the Curie-Weiss Potts model and its analysis. Finally, the Glauber dynamics is studied in general and applied afterwards in an exemplary way to the Curie-Weiss model as well as the Curie-Weiss Potts model. The associated considerations are supplemented with two computer simulations aiming to show the cutoff phenomenon and the temperature dependence of the convergence behavior.
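For illustration, a minimal sketch of single-site Glauber dynamics for the Curie-Weiss Potts model with q colors follows. The Hamiltonian normalization H(sigma) = -(1/n) sum over i<j of 1{sigma_i = sigma_j} and the parameter values are our assumptions (conventions vary); the cutoff and temperature-dependence experiments of the thesis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q, beta = 200, 3, 1.0
sigma = rng.integers(0, q, size=n)                 # random initial state
counts = np.bincount(sigma, minlength=q).astype(float)
for _ in range(50 * n):                            # 50 sweeps
    i = rng.integers(n)                            # pick a site uniformly
    counts[sigma[i]] -= 1                          # drop its current color
    p = np.exp(beta * counts / n)                  # conditional Gibbs weights
    p /= p.sum()
    sigma[i] = rng.choice(q, p=p)                  # resample from conditional
    counts[sigma[i]] += 1
print(counts / n)    # empirical color proportions after mixing
```

Because the Curie-Weiss Potts energy depends on the configuration only through the color counts, the update costs O(q) per step, which is what makes such simulations of mixing behavior practical.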
The overall program "arborescent numbers" is to similarly perform the constructions from the natural numbers (N) to the positive fractional numbers (Q+) to the positive real numbers (R+), beginning with (specific) binary trees instead of natural numbers. N can be regarded as the associative binary trees. The binary trees B and the left-commutative binary trees P allow the hassle-free definition of arbitrarily high arithmetic operations (hyper ... hyperpowers). To construct the division trees, the algebraic structure "coppice" is introduced, which is a group with an addition over which the multiplication is right-distributive. Q+ is the initial associative coppice. The present work accomplishes one step in the program "arborescent numbers", namely the construction of the arborescent equivalent(s) of the positive fractional numbers. These equivalents are the "division binary trees" and the "fractional trees". A representation with decidable word problem is given for each of them. The set of functions f: R^1 -> R^1 generated from the identity by taking powers is isomorphic to P and can be embedded into a coppice by taking inverses.
Students enter university computer science programs with very different competencies, experience and knowledge. 145 datasets on freshman computer science students, collected by learning management systems in relation to exam outcomes and learning-dispositions data (e.g. student dispositions, previous experiences and attitudes measured through self-reported surveys), have been exploited to identify indicators that predict academic success and hence to make effective interventions for an extremely heterogeneous group of students.
We consider a mixed problem for a degenerate differential-operator equation of higher order. We establish some embedding theorems in weighted Sobolev spaces and show existence and uniqueness of the generalized solution of this problem. We also give a description of the spectrum of the corresponding operator.
For a sequence of Hilbert spaces and continuous linear operators the curvature is defined to be the composition of any two consecutive operators. This is modeled on the de Rham resolution of a connection on a module over an algebra. Of particular interest are those sequences for which the curvature is "small" at each step, e.g., belongs to a fixed operator ideal. In this context we elaborate the theory of Fredholm sequences and show how to introduce the Lefschetz number.
In order to characterise the C*-algebra generated by the singular Bochner-Martinelli integral over a smooth closed hypersurface in C^n, we compute its principal symbol. We then show that the Szegö projection belongs to the strong closure of the algebra generated by the singular Bochner-Martinelli integral.
We show a Lefschetz fixed point formula for holomorphic functions in a bounded domain D with smooth boundary in the complex plane. To introduce the Lefschetz number for a holomorphic map of D, we make use of the Bergman kernel of this domain. The Lefschetz number is proved to be the sum of the usual contributions of fixed points of the map in D and contributions of boundary fixed points, these latter being different for attracting and repelling fixed points.
We define the Dirichlet to Neumann operator for an elliptic complex of first order differential operators on a compact Riemannian manifold with boundary. Under reasonable conditions the Betti numbers of the complex prove to be completely determined by the Dirichlet to Neumann operator on the boundary.
The Riemann hypothesis is equivalent to the fact that the reciprocal function 1/ζ(s) extends from the interval (1/2,1) to an analytic function in the quarter-strip 1/2 < Re s < 1, Im s > 0. Function theory allows one to rewrite the condition of analytic continuability in an elegant form amenable to numerical experiments.
We describe a natural construction of deformation quantization on a compact symplectic manifold with boundary. On the algebra of quantum observables a trace functional is defined which, as usual, annihilates the commutators. This gives rise to an index as the trace of the unity element. We formulate the index theorem as a conjecture and test it on the classical harmonic oscillator.
We consider a boundary value problem for an elliptic differential operator of order 2m in a domain D ⊂ R^n. The boundary of D is smooth outside a finite number of conical points, and the Lopatinskii condition is fulfilled on the smooth part of ∂D. The corresponding spaces are weighted Sobolev spaces H^{s,Υ}(D), and this allows one to define ellipticity of weight Υ for the problem. The resolvent of the problem is assumed to possess rays of minimal growth. The main result says that if there are rays of minimal growth with angles between neighbouring rays not exceeding π(Υ + 2m)/n, then the root functions of the problem are complete in L²(D). In the case of second order elliptic equations the results remain true for all domains with Lipschitz boundary.
Anisotropic edge problems
(2002)
We investigate elliptic pseudodifferential operators which degenerate in an anisotropic way on a submanifold of arbitrary codimension. To find Fredholm problems for such operators we adjoin to them boundary and coboundary conditions on the submanifold. The algebra obtained this way is a far reaching generalisation of Boutet de Monvel's algebra of boundary value problems with transmission property. We construct left and right regularisers and prove theorems on hypoellipticity and local solvability.
We study the Neumann problem for the de Rham complex in a bounded domain of R^n with singularities on the boundary. The singularities may be general enough, varying from Lipschitz domains to domains with cuspidal edges on the boundary. Following Lopatinskii, we reduce the Neumann problem to a singular integral equation on the boundary. The Fredholm solvability of this equation is then equivalent to the Fredholm property of the Neumann problem in suitable function spaces. The boundary integral equation is written explicitly and may be treated by diverse methods. This way we obtain, in particular, asymptotic expansions of harmonic forms near singularities of the boundary.
By quasicomplexes are usually meant perturbations of complexes that are small in some sense. Of interest are not only perturbations within the category of complexes but also those going beyond this category. A sequence perturbed in this way is no longer a complex, and so it bears no cohomology. We show how to introduce the Euler characteristic for small perturbations of Fredholm complexes. The paper is to appear in Funct. Anal. and its Appl., 2006.
This paper is concerned with the filtering problem in continuous time. Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter, which provides an exact solution for the linear Gaussian problem; (ii) the ensemble Kalman-Bucy filter (EnKBF), which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems; and (iii) the feedback particle filter (FPF), which represents an extension of the EnKBF and furthermore provides for a consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain times error formula to implement the update step (to account for conditioning due to the observations) in the filter. In contrast to the commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid the resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides for error correction potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the issue of nonuniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. Performance of this and other algorithms is illustrated for a numerical example.
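As an illustration of the common "gain times error" structure, here is a minimal Euler-discretized ensemble Kalman-Bucy filter for the scalar linear model dX = -X dt + dB with observations dZ = X dt + sqrt(r2) dW. The deterministic update with the averaged observation term follows one standard EnKBF variant, and all model choices and numbers are our own, not the paper's numerical example.

```python
import numpy as np

rng = np.random.default_rng(3)
N, dt, n_steps, r2 = 100, 0.01, 1000, 0.1
x_true = 1.0
X = rng.standard_normal(N)              # initial ensemble
for _ in range(n_steps):
    x_true += -x_true * dt + np.sqrt(dt) * rng.standard_normal()
    dZ = x_true * dt + np.sqrt(r2 * dt) * rng.standard_normal()
    X += -X * dt + np.sqrt(dt) * rng.standard_normal(N)   # forecast step
    K = X.var(ddof=1) / r2                                # gain (h = 1)
    X += K * (dZ - 0.5 * (X + X.mean()) * dt)             # gain times error
print(abs(X.mean() - x_true))           # ensemble mean tracks the signal
```

Note that no importance weights or resampling appear anywhere: the ensemble is steered entirely by the feedback gain, which is the point made above about the EnKBF and FPF.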
On Particular n-Clones
(2013)
The paper is devoted to asymptotic analysis of the Dirichlet problem for a second order partial differential equation containing a small parameter multiplying the highest order derivatives. It corresponds to a small perturbation of a dynamical system having a stationary solution in the domain. We focus on the case where the trajectories of the system go into the domain and the stationary solution is a proper node.
Empirical studies of cloze items for testing mastery of the syntax of a programming language
(2018)
Cloze items based on program code can be used to test knowledge of the syntax of a programming language without setting complex programming tasks, whose completion requires further competencies. This contribution documents the use of ten such items in a first-semester university lecture on programming with Java. Both experiences with the construction of the items and empirical data from their deployment are discussed. In doing so, the contribution highlights in particular the challenges in constructing valid instruments for measuring competence in programming education. The limited and partly preliminary results on the quality of the generated items nevertheless suggest that creating and deploying such items is feasible and can contribute to the measurement of competence.
Prospective and retrospective evaluation of five-year earthquake forecast models for California
(2017)
Figure caption: S-test results for the USGS and RELM forecasts. The differences between the simulated log-likelihoods and the observed log-likelihood are labelled on the horizontal axes, with scaling adjustments for the 40year.retro experiment. The horizontal lines represent the confidence intervals, at the 0.05 significance level, for each forecast and experiment. If this range contains a log-likelihood difference of zero, the forecasted log-likelihoods are consistent with the observed ones and the forecast passes the S-test (denoted by thin lines); if the range does not contain zero, the forecast fails the S-test for that particular experiment (denoted by thick lines). Colours distinguish between experiments (see Table 2 for explanation of experiment durations). Due to anomalously large likelihood differences, S-test results for Wiemer-Schorlemmer.ALM during the 10year.retro and 40year.retro experiments are not displayed. The range of log-likelihoods for the Holliday-et-al.PI forecast is lower than for the other forecasts due to relatively homogeneous forecasted seismicity rates and use of a small fraction of the RELM testing region.
Low Earth orbiting geomagnetic satellite missions, such as the Swarm satellite mission, are the only means to monitor and investigate ionospheric currents on a global scale and to make in situ measurements of F region currents. High-precision geomagnetic satellite missions are also able to detect ionospheric currents during quiet-time geomagnetic conditions that have amplitudes of only a few nanotesla in the magnetic field. An efficient method to isolate the ionospheric signals from satellite magnetic field measurements has been the use of residuals between the observations and predictions from empirical geomagnetic models for other geomagnetic sources, such as the core and lithospheric field or signals from the quiet-time magnetospheric currents. This study aims at highlighting the importance of high-resolution magnetic field models that are able to predict the lithospheric field and that consider the quiet-time magnetosphere for reliably isolating signatures from ionospheric currents during geomagnetically quiet times. The effects on the detection of ionospheric currents arising from neglecting the lithospheric and magnetospheric sources are discussed using the example of four Swarm orbits during very quiet times. The respective orbits show a broad range of typical scenarios, such as strong and weak ionospheric signal (during day- and nighttime, respectively) superimposed over strong and weak lithospheric signals. If predictions from the lithosphere or magnetosphere are not properly considered, the amplitude of the ionospheric currents, such as the midlatitude Sq currents or the equatorial electrojet (EEJ), is modulated by 10-15 % in the examples shown. An analysis of several orbits above the African sector, where the lithospheric field is significant, showed that the peak value of the signatures of the EEJ is in error by 5 % on average when lithospheric contributions are not considered, which is in the range of uncertainties of present empirical models of the EEJ.
In this paper we will implement the inverse seesaw mechanism into the noncommutative framework on the basis of the AC extension of the standard model. The main difference from the classical AC model is the chiral nature of the AC fermions with respect to a U(1)_X extension of the standard model gauge group. It is this extension which allows us to couple the right-handed neutrinos via a gauge invariant mass term to left-handed A particles. The natural scale of these gauge invariant masses is of the order of 10^17 GeV, while the Dirac masses of the neutrino and the AC particles are generated dynamically and are therefore much smaller (~1 to ~10^6 GeV). From this configuration, a working inverse seesaw mechanism for the neutrinos is obtained.
This paper provides a complete list of Krajewski diagrams representing the standard model of particle physics. We will give the possible representations of the algebra and the anomaly free lifts which provide the representation of the standard model gauge group on the fermionic Hilbert space. The algebra representations following from the Krajewski diagrams are not complete in the sense that the corresponding spectral triples do not necessarily obey the axiom of Poincaré duality. This defect may be repaired by adding new particles to the model, i.e., by building models beyond the standard model. The aim of this list of finite spectral triples (up to Poincaré duality) is therefore to provide a basis for model building beyond the standard model.
In this publication we present an extension of the standard model within the framework of Connes' noncommutative geometry. The model presented here is based on a minimal spectral triple which contains the standard model particles, new vectorlike fermions, and a new U(1) gauge subgroup. Additionally, a new complex scalar field appears that couples to the right-handed neutrino, the new fermions, and the standard Higgs particle. The bosonic part of the action is given by the spectral action, which also determines relations among the gauge couplings, the quartic scalar couplings, and the Yukawa couplings at a cutoff energy of ~10^17 GeV. We investigate the renormalization group flow of these relations. The low energy behavior allows one to constrain the Higgs mass, the mass of the new scalar, and the mixing between these two scalar fields.
We discuss the Cauchy problem for the so-called Chaplygin system, which often appears in gas, aero- and hydrodynamics. This system can be thought of as a nonlinear analogue of the Cauchy-Riemann system in the plane. We pose Cauchy data on a part of the boundary and apply a variational approach to construct a solution to this ill-posed problem. The problem actually gives insight into fundamental questions related to unstable problems for nonlinear equations.
A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations is proposed and analysed. Application of regularization to the geopotential field used in the momentum equations leads to an unconditionally stable scheme. The analysis, together with a fully nonlinear example application, suggests that this approach is a promising, efficient, and accurate alternative to traditional schemes.
Classic inversion methods adjust a model with a predefined number of parameters to the observed data. With transdimensional inversion algorithms such as the reversible-jump Markov chain Monte Carlo (rjMCMC), it is possible to vary this number during the inversion and to interpret the observations in a more flexible way. Geoscience imaging applications use this behaviour to automatically adjust model resolution to the inhomogeneities of the investigated system, while keeping the number of model parameters at an optimal level. The rjMCMC algorithm produces an ensemble as result, i.e. a set of model realizations, which together represent the posterior probability distribution of the investigated problem. The realizations are evolved via sequential updates from a randomly chosen initial solution and converge toward the target posterior distribution of the inverse problem. Up to a point in the chain, the realizations may be strongly biased by the initial model and must be discarded from the final ensemble. With convergence assessment techniques, this point in the chain can be identified. Transdimensional MCMC methods produce ensembles that are not suitable for classic convergence assessment techniques because of the changes in parameter numbers. To overcome this hurdle, three solutions are introduced that convert model realizations to a common dimensionality while maintaining the statistical characteristics of the ensemble. Scalar, vector and matrix representations of the models, inferred from tomographic subsurface investigations, are presented, and three classic convergence assessment techniques are applied to them. It is shown that appropriately chosen scalar conversions of the models can retain statistical ensemble properties similar to those of geologic projections created by rasterization.
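The following toy sketch illustrates the scalar-conversion idea; the construction is ours and far simpler than the representations studied in the paper. Each variable-dimension realization is rasterized onto a fixed grid, reduced to a scalar, and a classic Gelman-Rubin diagnostic is applied across chains.

```python
import numpy as np

rng = np.random.default_rng(4)

def to_grid(nodes, values, grid):
    # nearest-node (1-D Voronoi) rasterization onto the common grid
    return values[np.abs(grid[:, None] - nodes[None, :]).argmin(axis=1)]

grid = np.linspace(0.0, 1.0, 50)
x = np.empty((4, 1000))                      # 4 chains, 1000 realizations
for c in range(4):
    for s in range(1000):
        k = rng.integers(1, 8)               # trans-dimensional: k varies
        m = to_grid(rng.random(k), rng.normal(2.0, 0.3, k), grid)
        x[c, s] = m.mean()                   # scalar conversion of the model
n = x.shape[1]
W = x.var(axis=1, ddof=1).mean()             # mean within-chain variance
B = n * x.mean(axis=1).var(ddof=1)           # between-chain variance
R_hat = np.sqrt(((1 - 1 / n) * W + B / n) / W)
print(R_hat)                                 # values near 1 indicate convergence
```

The key point is that the diagnostic never sees the varying parameter count, only the fixed-length (here scalar) summary of each realization.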
During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate the safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time-course and the associated variability between subjects, and to identify the underlying sources that contribute significantly to this variability, e.g. the use of comedication. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and has been accumulating during the discovery & development process: before any drug is tested in humans, detailed knowledge about the PK in different animal species has to be collected. This drug-specific knowledge and general knowledge about the species' physiology is exploited in mechanistic, physiologically based PK (PBPK) modeling approaches; it is, however, ignored in the classical NLME modeling approach.
Mechanistic, physiologically based models aim to incorporate the relevant known physiological processes which contribute to the overlying process of interest. In comparison to data-driven models they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters exceeds the number of measurements, and thus reliable parameter estimation becomes more complex and partly impossible. As a consequence, the integration of powerful mathematical estimation approaches like the NLME modeling approach, which is widely used in data-driven modeling, with the mechanistic modeling approach is not well established; the observed data is rather used as a confirming instead of a model-informing and model-building input.
Another aggravating circumstance for an integrated approach is the inaccessibility of the details of the NLME methodology, which would be needed to adapt these approaches to the specifics and needs of mechanistic modeling. Despite the fact that the NLME modeling approach has existed for several decades, the details of the mathematical methodology are scattered across a wide range of literature and a comprehensive, rigorous derivation is lacking. Available literature usually covers only selected parts of the mathematical methodology. Sometimes, important steps are not described or are only heuristically motivated, e.g. the iterative algorithm to finally determine the parameter estimates.
Thus, in the present thesis the mathematical methodology of NLME modeling is systematically described and complemented into a comprehensive account, comprising the common theme from ideas and motivation to the final parameter estimation. Therein, new insights for the interpretation of different approximation methods used in the context of the NLME modeling approach are given and illustrated; furthermore, similarities and differences between them are outlined. Based on these findings, an expectation-maximization (EM) algorithm to determine estimates of an NLME model is described.
Using the EM algorithm and the lumping methodology of Pilari (2010), a new approach to combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. Therein, the lumping identifies which processes are informed by the available data, and the respective model reduction improves the robustness of parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability and a priori known unexplained variability are incorporated to further mechanistically drive the model development. In conclusion, correlation between parameters and between covariates is automatically accounted for due to the mechanistic derivation of the lumping and the covariate relationships.
A useful feature of PBPK models compared to classical data-driven PK models is the possibility to predict drug concentrations within all organs and tissues in the body. Thus, the resulting PBPK model for levofloxacin is used to predict drug concentrations and their variability within soft tissues, which are the site of action of levofloxacin. These predictions are compared with data from muscle and adipose tissue obtained by microdialysis, an invasive technique measuring a proportion of drug in the tissue that allows one to approximate the concentrations in the interstitial fluid of tissues. Because comparisons of human in vivo tissue PK with PBPK predictions are not yet established, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows an adequate agreement and reveals further strengths of the presented new approach.
We demonstrated how mechanistic PBPK models, which are usually developed in the early stages of drug development, can be used as a basis for model building in the analysis of later stages, i.e. in clinical studies. As a consequence, the extensively collected and accumulated knowledge about species and drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn & confirm paradigm across different stages of drug development.
Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of quantitative psycholinguistics, analysing repeated eye movement data. Our approach gives new insight into the interpretation of these experiments and the processes behind them.
Atomic oscillations present in classical molecular dynamics restrict the step size that can be used. Multiple time stepping schemes offer only modest improvements, and implicit integrators are costly and inaccurate. The best approach may be to actually remove the highest frequency oscillations by constraining bond lengths and bond angles, thus permitting perhaps a 4-fold increase in the step size. However, omitting degrees of freedom produces errors in statistical averages, and rigid angles do not bend for strong excluded volume forces. These difficulties can be addressed by an enhanced treatment of holonomically constrained dynamics using ideas from papers of Fixman (1974) and Reich (1995, 1999). In particular, the 1995 paper proposes the use of "flexible" constraints, and the 1999 paper uses a modified potential energy function with rigid constraints to emulate flexible constraints. Presented here is a more direct and rigorous derivation of the latter approach, together with justification for the use of constraints in molecular modeling. With rigor come limitations, so practical compromises are proposed: simplifications of the equations and their judicious application when assumptions are violated. Included are suggestions for new approaches.
Ancient genomes have revolutionized our understanding of Holocene prehistory and, particularly, the Neolithic transition in western Eurasia. In contrast, East Asia has so far received little attention, despite representing a core region at which the Neolithic transition took place independently ~3 millennia after its onset in the Near East. We report genome-wide data from two hunter-gatherers from Devil’s Gate, an early Neolithic cave site (dated to ~7.7 thousand years ago) located in East Asia, on the border between Russia and Korea. Both of these individuals are genetically most similar to geographically close modern populations from the Amur Basin, all speaking Tungusic languages, and, in particular, to the Ulchi. The similarity to nearby modern populations and the low levels of additional genetic material in the Ulchi imply a high level of genetic continuity in this region during the Holocene, a pattern that markedly contrasts with that reported for Europe.
This survey on the theme of Geometry Education (including new technologies) focuses chiefly on the time span since 2008. Based on our review of the research literature published during this time span (in refereed journal articles, conference proceedings and edited books), we have jointly identified seven major threads of contributions that span from the early years of learning (pre-school and primary school) through to post-compulsory education and to the issue of mathematics teacher education for geometry. These threads are as follows: developments and trends in the use of theories; advances in the understanding of visuospatial reasoning; the use and role of diagrams and gestures; advances in the understanding of the role of digital technologies; advances in the understanding of the teaching and learning of definitions; advances in the understanding of the teaching and learning of the proving process; and moving beyond traditional Euclidean approaches. Within each thread, we identify relevant research and also offer commentary on future directions.
The variabilities of the semidiurnal solar and lunar tides of the equatorial electrojet (EEJ) are investigated during the 2003, 2006, 2009 and 2013 major sudden stratospheric warming (SSW) events in this study. For this purpose, ground-magnetometer recordings at the equatorial observatories in Huancayo and Fuquene are utilized. Results show a major enhancement in the amplitude of the EEJ semidiurnal lunar tide in each of the four warming events. The EEJ semidiurnal solar tidal amplitude shows an amplification prior to the onset of warmings, a reduction during the deceleration of the zonal mean zonal wind at 60 degrees N and 10 hPa, and a second enhancement a few days after the peak reversal of the zonal mean zonal wind during all four SSWs. Results also reveal that the amplitude of the EEJ semidiurnal lunar tide becomes comparable to or even greater than the amplitude of the EEJ semidiurnal solar tide during all these warming events. The present study also compares the EEJ semidiurnal solar and lunar tidal changes with the variability of the migrating semidiurnal solar (SW2) and lunar (M2) tides in neutral temperature and zonal wind obtained from numerical simulations at E-region heights. A better agreement between the enhancements of the EEJ semidiurnal lunar tide and the M2 tide is found in comparison with the enhancements of the EEJ semidiurnal solar tide and the SW2 tide in both the neutral temperature and zonal wind at the E-region altitudes.
The paper deals with Σ-composition and Σ-essential composition of terms, which lead to stable and s-stable varieties of algebras. A full description of all stable varieties of semigroups, commutative and idempotent groupoids is obtained. We use an abstract reduction system which simplifies the presentation of terms of type τ = (2) to study the variety of idempotent groupoids and s-stable varieties of groupoids. S-stable varieties are a variation of stable varieties, used to highlight the replacement of subterms of a term in a deductive system instead of the usual replacement of variables by terms.
We consider Dyson-Schwinger Equations (DSEs) in the context of Connes-Kreimer renormalization Hopf algebra of Feynman diagrams and Connes-Marcolli universal Tannakian formalism. This study leads us to formulate a family of Picard-Fuchs equations and a category of Feynman motivic sheaves with respect to each combinatorial DSE.
We consider the Navier-Stokes equations in the layer R^n x [0,T] over R^n with finite T > 0. Using the standard fundamental solutions of the Laplace operator and the heat operator, we reduce the Navier-Stokes equations to a nonlinear Fredholm equation of the form (I+K) u = f, where K is a compact continuous operator in anisotropic normed Hölder spaces weighted at the point at infinity with respect to the space variables. Actually, the weight function is included to provide a finite energy estimate for solutions to the Navier-Stokes equations for all t in [0,T]. On using the particular properties of the de Rham complex we conclude that the Fréchet derivative (I+K)' is continuously invertible at each point of the Banach space under consideration and the map I+K is open and injective in the space. In this way the Navier-Stokes equations prove to induce an open one-to-one mapping in the scale of Hölder spaces.
This is a brief survey of a constructive technique of analytic continuation related to an explicit integral formula of Golusin and Krylov (1933). It goes far beyond complex analysis and applies to the Cauchy problem for elliptic partial differential equations as well. As started in the classical papers, the technique is elaborated in generalised Hardy spaces also called Hardy-Smirnov spaces.
Let X be a smooth n-dimensional manifold and D an open connected set in X with smooth boundary ∂D. Perturbing the Cauchy problem for an elliptic system Au = f in D with data on a closed set Γ ⊂ ∂D, we obtain a family of mixed problems depending on a small parameter ε > 0. Although the mixed problems are subject to a non-coercive boundary condition on ∂D\Γ in general, each of them is uniquely solvable in an appropriate Hilbert space D_T, and the corresponding family {u_ε} of solutions approximates the solution of the Cauchy problem in D_T whenever the solution exists. We also prove that the existence of a solution to the Cauchy problem in D_T is equivalent to the boundedness of the family {u_ε}. We thus derive a solvability condition for the Cauchy problem and an effective method of constructing its solution. Examples for Dirac operators in the Euclidean space R^n are considered. In the latter case we obtain a family of mixed boundary problems for the Helmholtz equation.
On completeness of root functions of Sturm-Liouville problems with discontinuous boundary operators
(2013)
We consider a Sturm-Liouville boundary value problem in a bounded domain D of R^n. By this is meant that the differential equation is given by a second order elliptic operator of divergent form in D and the boundary conditions are of Robin type on ∂D. The first order term of the boundary operator is the oblique derivative, whose coefficients bear discontinuities of the first kind. Applying the method of weak perturbation of compact self-adjoint operators and the method of rays of minimal growth, we prove the completeness of root functions related to the boundary value problem in Lebesgue and Sobolev spaces of various types.
Let A be a determined or overdetermined elliptic differential operator on a smooth compact manifold X. Write S_A(D) for the space of solutions to the system Au = 0 in a domain D ⊂ X. Using reproducing kernels related to various Hilbert structures on subspaces of S_A(D), we show explicit identifications of the dual spaces. To prove the "regularity" of reproducing kernels up to the boundary of D, we specify them as resolution operators of abstract Neumann problems. The matter thus reduces to a regularity theorem for the Neumann problem, a well-known example being the ∂-Neumann problem. The duality itself takes place only for those domains D which possess certain convexity properties with respect to A.
Formal Poincaré lemma
(2007)
We show how the multiple application of the formal Cauchy-Kovalevskaya theorem leads to the main result of the formal theory of overdetermined systems of partial differential equations. Namely, any sufficiently regular system Au = f with smooth coefficients on an open set U ⊂ R^n admits a solution in smooth sections of a bundle of formal power series, provided that f satisfies a compatibility condition in U.
We consider a (generally, non-coercive) mixed boundary value problem in a bounded domain for a second order elliptic differential operator A. The differential operator is assumed to be of divergent form and the boundary operator B is of Robin type. The boundary is assumed to be a Lipschitz surface. Besides, we distinguish a closed subset of the boundary and control the growth of solutions near this set. We prove that the pair (A,B) induces a Fredholm operator L in suitable weighted spaces of Sobolev type, the weight function being a power of the distance to the singular set. Moreover, we prove the completeness of root functions related to L.
We consider an initial problem for the Navier-Stokes type equations associated with the de Rham complex over R^n × [0, T], n ≥ 3, with a positive time T. We prove that the problem induces an open injective mapping on the scales of specially constructed function spaces of Bochner-Sobolev type. In particular, the corresponding statement on the intersection of these classes gives an open mapping theorem for smooth solutions to the Navier-Stokes equations.
Let H_0, H_1 be Hilbert spaces and L: H_0 -> H_1 a bounded linear operator with ||L|| ≤ 1. Then L*L is a bounded linear self-adjoint non-negative operator in the Hilbert space H_0, and one can use the Neumann series ∑_{ν=0}^∞ (I - L*L)^ν L*f in order to study the solvability of the operator equation Lu = f. In particular, applying this method to the ill-posed Cauchy problem for solutions to an elliptic system Pu = 0 of linear PDEs of order p with smooth coefficients, we obtain solvability conditions and representation formulae for solutions of the problem in Hardy spaces whenever these solutions exist. For the Cauchy-Riemann system in C the summands of the Neumann series are iterations of the Cauchy type integral. We also obtain similar results 1) for the equation Pu = f in Sobolev spaces, 2) for the Dirichlet problem and 3) for the Neumann problem related to the operator P*P if P is a homogeneous first order operator with constant coefficients. In these cases the representations involve sums of series whose terms are iterations of integro-differential operators, while the solvability conditions consist of convergence of the series together with trivial necessary conditions.
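A small numerical sketch of this Neumann series, with our own toy matrix operator rescaled so that ||L|| < 1, shows the partial sums converging to a solution of Lu = f when f lies in the range of L:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((8, 5))
L /= 1.1 * np.linalg.norm(L, 2)          # rescale so that ||L|| < 1
u_true = rng.standard_normal(5)
f = L @ u_true                           # consistent right-hand side

u = np.zeros(5)
term = L.T @ f                           # v = 0 term: L* f
for _ in range(5000):
    u += term                            # add (I - L*L)^v L* f
    term -= L.T @ (L @ term)             # apply (I - L*L) for the next term
print(np.allclose(u, u_true, atol=1e-6)) # True: the series recovers u
```

For the ill-posed Cauchy problems described above, the role of L is played by boundary integral operators, and convergence of the series itself becomes the solvability criterion.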
We prove the existence of the H^p(D)-limit of iterations of double layer potentials constructed with the use of the Hodge parametrix on a smooth compact manifold X, D being an open connected subset of X. This limit gives us an orthogonal projection from the Sobolev space H^p(D) onto a closed subspace of H^p(D)-solutions of an elliptic operator P of order p ≥ 1. Using this result we obtain formulae for Sobolev solutions to the equation Pu = f in D whenever these solutions exist. This representation involves the sum of a series whose terms are iterations of double layer potentials. A similar regularization is constructed also for a P-Neumann problem in D.
We develop a multigrid, multiple time stepping scheme to reduce computational efforts for calculating complex stress interactions in a strike-slip 2D planar fault for the simulation of seismicity. The key elements of the multilevel solver are separation of length scale, grid-coarsening, and hierarchy. In this study the complex stress interactions are split into two parts: the first, with a small contribution, is computed on a coarse level; the rest, for strong interactions, on a fine level. This partition leads to a significant reduction of the number of computations. The reduction of complexity is further enhanced by combining the multigrid with multiple time stepping. Computational efficiency is enhanced by a factor of 10 while retaining a reasonable accuracy, compared to the original full matrix-vector multiplication. The accuracy of the solution and the computational efficiency depend on a given cut-off radius that splits the multiplications into the two parts. The multigrid scheme is constructed in such a way that it conserves stress in the entire half-space.
We evaluate the Hamiltonian particle-mesh (HPM) method and the Nambu discretization applied to the shallow-water equations on the sphere using the test suggested by Galewsky et al. (2004). Both simulations show excellent conservation of energy and are stable in long-term simulation. We repeat the test also using the ICOSWP scheme to compare with the two conservative spatial discretization schemes. The HPM simulation captures the main features of the reference solution, but a wavenumber-5 pattern is dominant in the simulations on the ICON grid at relatively low spatial resolutions. Nevertheless, agreement in statistics between the three schemes indicates their qualitatively similar behaviour in the long-term integration.
We develop a hydrostatic Hamiltonian particle-mesh (HPM) method for efficient long-term numerical integration of the atmosphere. In the HPM method, the hydrostatic approximation is interpreted as a holonomic constraint for the vertical position of particles. This can be viewed as defining a set of vertically buoyant horizontal meshes, with the altitude of each mesh point determined so as to satisfy the hydrostatic balance condition and with particles modelling horizontal advection between the moving meshes. We implement the method in a vertical-slice model and evaluate its performance for the simulation of idealized linear and nonlinear orographic flow in both dry and moist environments. The HPM method is able to capture the basic features of the gravity wave to a degree of accuracy comparable with that reported in the literature. The numerical solution in the moist experiment indicates that the influence of moisture on wave characteristics is represented reasonably well and the reduction of momentum flux is in good agreement with theoretical analysis.
We propose a conversion method from alarm-based to rate-based earthquake forecast models. A differential probability gain g^alarm_ref is the absolute value of the local slope of the Molchan trajectory that evaluates the performance of the alarm-based model with respect to the chosen reference model. We consider this differential probability gain to be constant over time; its value at each point of the testing region depends only on the alarm function value. The rate-based model is the product of the event rate of the reference model at this point and the corresponding differential probability gain. Thus, we increase or decrease the initial rates of the reference model according to the additional amount of information contained in the alarm-based model. Here, we apply this method to the Early Aftershock STatistics (EAST) model, an alarm-based model in which early aftershocks are used to identify space-time regions with a higher level of stress and, consequently, a higher seismogenic potential. The resulting rate-based model shows similar performance to the original alarm-based model for all ranges of earthquake magnitude in both retrospective and prospective tests. This conversion method offers the opportunity to perform all the standard evaluation tests of the earthquake testing centers on alarm-based models. In addition, we infer that it can also be used to consecutively combine independent forecast models and, with small modifications, seismic hazard maps with short- and medium-term forecasts.
We describe an iterative method to combine seismicity forecasts. With this method, we produce the next generation of a starting forecast by incorporating predictive skill from one or more input forecasts. For a single iteration, we use the differential probability gain of an input forecast relative to the starting forecast. At each point in space and time, the rate in the next-generation forecast is the product of the starting rate and the local differential probability gain. The main advantage of this method is that it can produce high forecast rates using all types of numerical forecast models, even those that are not rate-based. Naturally, a limitation of this method is that the input forecast must have some information not already contained in the starting forecast. We illustrate this method using the Every Earthquake a Precursor According to Scale (EEPAS) and Early Aftershocks Statistics (EAST) models, which are currently being evaluated at the US testing center of the Collaboratory for the Study of Earthquake Predictability. During a testing period from July 2009 to December 2011 (with 19 target earthquakes), the combined model we produce has better predictive performance - in terms of Molchan diagrams and likelihood - than the starting model (EEPAS) and the input model (EAST). Many of the target earthquakes occur in regions where the combined model has high forecast rates. Most importantly, the rates in these regions are substantially higher than if we had simply averaged the models.
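A toy sketch of this combination step follows: next-generation rate = starting rate times local differential probability gain. The rates, alarm values and binned gains below are invented for illustration; in practice the gain would be estimated from the Molchan trajectory as described above.

```python
import numpy as np

starting_rate = np.array([0.02, 0.10, 0.05, 0.01])  # starting-model rates per cell
alarm_value = np.array([0.1, 0.9, 0.5, 0.2])        # input-model alarm function
bins = np.array([0.0, 0.25, 0.5, 0.75, 1.0])        # alarm-value bins
gains = np.array([0.5, 0.8, 1.5, 3.0])              # hypothetical gain per bin
G = gains[np.clip(np.digitize(alarm_value, bins) - 1, 0, len(gains) - 1)]
next_rate = starting_rate * G                        # rate times local gain
# optionally rescale so the total expected number of events is preserved
next_rate *= starting_rate.sum() / next_rate.sum()
print(next_rate)
```

Because the update only multiplies rates by a gain read off the input model's alarm function, the input forecast never needs to be rate-based itself, which is the main advantage stated above.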
The majority of earthquakes occur unexpectedly and can trigger subsequent sequences of events that can culminate in more powerful earthquakes. This self-exciting nature of seismicity generates complex clustering of earthquakes in space and time. Therefore, the problem of constraining the magnitude of the largest expected earthquake during a future time interval is of critical importance in mitigating earthquake hazard. We address this problem by developing a methodology to compute the probabilities for such extreme earthquakes to be above certain magnitudes. We combine the Bayesian methods with the extreme value theory and assume that the occurrence of earthquakes can be described by the Epidemic Type Aftershock Sequence process. We analyze in detail the application of this methodology to the 2016 Kumamoto, Japan, earthquake sequence. We are able to estimate retrospectively the probabilities of having large subsequent earthquakes during several stages of the evolution of this sequence.
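As a simple point of reference (not the paper's Bayesian-ETAS treatment), if event counts above a reference magnitude $m_0$ were Poisson with expected number $\Lambda(T)$ and magnitudes followed a Gutenberg-Richter law, the probability of an extreme event would reduce to

$$P(M_{\max} > m) = 1 - \exp\bigl(-\Lambda(T)\,(1 - F(m))\bigr), \qquad F(m) = 1 - 10^{-b\,(m - m_0)};$$

for instance, $\Lambda(T) = 100$ expected events above $m_0 = 3$ with $b = 1$ give $P(M_{\max} > 6) = 1 - e^{-0.1} \approx 0.10$. The methodology described above replaces the Poisson count with the self-exciting ETAS cluster process and places Bayesian posteriors on its parameters.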
The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting spatial distributions of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress change. The uncertainties mainly arise due to nonunique slip inversions and unknown receiver faults; especially for the latter, results are highly dependent on the choice of the assumed receiver mechanism. Based on binary tests (aftershocks yes/no), recent studies suggest that alternative stress quantities, a distance-slip probabilistic model as well as deep neural network (DNN) approaches, all are superior to CFS with predefined receiver mechanism. To challenge this conclusion, which might have large implications, we use 289 slip inversions from SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. We also analyze the effect of the magnitude cutoff, grid size variation, and aftershock duration to verify the use of receiver operating characteristic (ROC) analysis for the ranking of stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps and ROC curves. However, results significantly improve for larger aftershocks and shorter time periods but without changing the ranking. We also go beyond binary testing and apply alternative statistics to test the ability to estimate aftershock numbers, which confirm that simple stress metrics perform better than the classic Coulomb failure stress calculations and are also better than the distance-slip probabilistic model.
Mental arithmetic is characterised by a tendency to overestimate addition and to underestimate subtraction results: the operational momentum (OM) effect. Here, motivated by contentious explanations of this effect, we developed and tested an arithmetic heuristics and biases model that predicts reverse OM due to cognitive anchoring effects. Participants produced bi-directional lines with lengths corresponding to the results of arithmetic problems. In two experiments, we found regular OM with zero problems (e.g., 3+0, 3-0) but reverse OM with non-zero problems (e.g., 2+1, 4-1). In a third experiment, we tested the prediction of our model. Our results suggest the presence of at least three competing biases in mental arithmetic: a more-or-less heuristic, a sign-space association and an anchoring bias. We conclude that mental arithmetic exhibits shortcuts for decision-making similar to traditional domains of reasoning and problem-solving.
Process-oriented theories of cognition must be evaluated against time-ordered observations. Here we present a representative example for data assimilation of the SWIFT model, a dynamical model of the control of fixation positions and fixation durations during natural reading of single sentences. First, we develop and test an approximate likelihood function of the model, which is a combination of a spatial, pseudo-marginal likelihood and a temporal likelihood obtained by probability density approximation. Second, we implement a Bayesian approach to parameter inference using an adaptive Markov chain Monte Carlo procedure. Our results indicate that model parameters can be estimated reliably for individual subjects. We conclude that approximate Bayesian inference represents a considerable step forward for computational models of eye-movement control, where modeling of individual data on the basis of process-based dynamic models has not been possible so far.
To be prepared for life in the digital society, everyone today needs comprehensive foundations in computer science for a variety of situations. The importance of computer science is growing not only in more and more areas of our daily lives, but also in more and more fields of education. To prepare young people for their future lives and/or their future professional activities, various universities offer computer science modules for students of other disciplines. The materials of those courses form an extensive data pool for identifying, via an empirical approach, the aspects of computer science that matter to students of other subjects. In the following, 70 modules on computer science education for students of other disciplines are analysed. The materials (publications, syllabi and timetables) are first examined with a qualitative content analysis following Mayring and then evaluated quantitatively. Based on the analysis, goals, central topics and types of employed tools are identified.
The ellipticity of boundary value problems on a smooth manifold with boundary relies on a two-component principal symbolic structure (σ_ψ, σ_∂), consisting of interior and boundary symbols. In the case of a smooth edge on manifolds with boundary, we have a third symbolic component, namely the edge symbol σ_∧, referring to extra conditions on the edge, analogous to boundary conditions. Apart from such conditions 'in integral form' there may exist singular trace conditions, investigated in Kapanadze et al., Integral Equations and Operator Theory, 61, 241-279, 2008, on 'closed' manifolds with edge. Here, we concentrate on these phenomena in combination with boundary conditions and edge problems.
We establish a quantisation of corner-degenerate symbols, here called Mellin-edge quantisation, on a manifold with second order singularities. The typical ingredients come from the "most singular" stratum, a second order edge where the infinite transversal cone has a base that is itself a manifold with smooth edge. The resulting operator-valued amplitude functions on the second order edge are formulated purely in terms of Mellin symbols taking values in the edge algebra over the base. In this respect our result is formally analogous to a quantisation rule of (Osaka J. Math. 37:221-260, 2000) for the simpler case of edge-degenerate symbols, corresponding to singularity order 1. However, from singularity order 2 on there appear new substantial difficulties for the first time, partly caused by the edge singularities of the cone over the base that tend to infinity.