
Starting from the typical IT infrastructure for e-learning at universities on the one hand, and from the current state of research on Personal Learning Environments (PLEs) on the other, this paper shows how existing tools and services can be brought together and adapted to the requirements of modern, computer-supported classroom teaching. For this interdisciplinary development process, neither classical software engineering methods nor existing PLE models offer much guidance. The paper describes the approaches pursued in a campus-wide project at the University of Potsdam and the results achieved with them. To this end, typical teaching/learning and communication scenarios are first identified, from which requirements for a supporting platform are derived. This leads to a comprehensive collection of services and functions to be taken into account, which are to be integrated into an overall system according to the specifics of their use. On this basis, fundamental integration approaches and technical details of this mash-up are considered in an overall view of all relevant services and transferred into an integrating system architecture. Its concrete realization using the portal technology Liferay is presented, picking up the scenarios defined at the outset and presenting them as examples. Complementary adaptations towards a personalizable or adaptive learning (and working) environment are also supported and briefly outlined.

Students of computer science enter university education with very different competencies, experiences, and knowledge. A set of 145 records on freshman computer science students, collected by learning management systems and combined with exam outcomes and learning dispositions data (e.g., student dispositions, previous experiences, and attitudes measured through self-reported surveys), was exploited to identify indicators that predict academic success and hence to enable effective interventions for an extremely heterogeneous group of students.
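
Studies like this rank candidate indicators by their association with exam success. As a minimal, purely illustrative sketch (the toy data and variable names are invented, not taken from the study), a single disposition indicator can be scored against a binary pass/fail outcome via the point-biserial correlation:

```python
# Hypothetical sketch: scoring a learning-disposition indicator by its
# point-biserial correlation with exam success (pass = 1, fail = 0).
from math import sqrt

def point_biserial(scores, outcomes):
    """Correlation between a continuous indicator and a binary outcome."""
    n = len(scores)
    mean = sum(scores) / n
    sd = sqrt(sum((s - mean) ** 2 for s in scores) / n)
    group1 = [s for s, o in zip(scores, outcomes) if o == 1]
    group0 = [s for s, o in zip(scores, outcomes) if o == 0]
    p = len(group1) / n                  # fraction of passing students
    m1 = sum(group1) / len(group1)       # mean score among passers
    m0 = sum(group0) / len(group0)       # mean score among failers
    return (m1 - m0) / sd * sqrt(p * (1 - p))

# toy data: prior-programming-experience score vs. exam outcome
experience = [1, 2, 2, 3, 4, 4, 5, 5]
passed =     [0, 0, 1, 0, 1, 1, 1, 1]
r = point_biserial(experience, passed)
```

Indicators with larger |r| on held-out data would then be candidates for triggering interventions.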

This article studies the dynamics of the strong solution of an SDE driven by a discontinuous Levy process taking values in a smooth foliated manifold with compact leaves. The SDE is assumed to be foliated in the sense that its trajectories stay on the leaf of their initial value for all times almost surely. Under a generic ergodicity assumption for each leaf, we determine the effective behaviour of the system subject to a small smooth perturbation of order epsilon > 0, which acts transversally to the leaves. The main result states that, on average, the transversal component of the perturbed SDE converges uniformly to the solution of a deterministic ODE as epsilon tends to zero. This transversal ODE is generated by the average of the perturbing vector field with respect to the invariant measures of the unperturbed system and varies with the transversal height of the leaves. We give upper bounds for the rates of convergence and illustrate these results for random rotations on the circle. This article complements the results of Gonzales and Ruffino for SDEs of Stratonovich type by covering general Levy-driven SDEs of Marcus type.

Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by generalized hybrid Monte Carlo improve the stability of MTS and allow for larger step sizes in the simulation of complex systems.
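
The generic hybrid Monte Carlo building block underlying all of these methods — a leapfrog trajectory followed by a Metropolis test on the energy error — can be sketched in one dimension. This is only an illustration on a harmonic potential, not the MTS or shadow-Hamiltonian machinery of the paper:

```python
# Minimal hybrid Monte Carlo on U(q) = q^2 / 2 (target: standard normal).
import math, random

def leapfrog(q, p, dt, steps, grad_U):
    """Symplectic leapfrog integration of Hamilton's equations."""
    for _ in range(steps):
        p -= 0.5 * dt * grad_U(q)
        q += dt * p
        p -= 0.5 * dt * grad_U(q)
    return q, p

def hmc_step(q, dt=0.1, steps=10):
    U = lambda x: 0.5 * x * x
    grad_U = lambda x: x
    p = random.gauss(0.0, 1.0)                  # fresh momentum draw
    H_old = U(q) + 0.5 * p * p
    q_new, p_new = leapfrog(q, p, dt, steps, grad_U)
    H_new = U(q_new) + 0.5 * p_new * p_new
    # Metropolis test on the energy error of the trajectory
    if random.random() < math.exp(min(0.0, H_old - H_new)):
        return q_new
    return q

random.seed(0)
q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
var = sum(s * s for s in samples) / len(samples)   # should be near 1
```

GSHMC-type methods replace the true Hamiltonian in the accept/reject step by a shadow Hamiltonian that the integrator conserves more accurately, which is what raises the acceptance rate at large step sizes.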

Certain curvature conditions for the stability of Einstein manifolds with respect to the Einstein-Hilbert action are given, expressed in terms of quantities involving the Weyl tensor and the Bochner tensor. In dimension six, a stability criterion involving the Euler characteristic is established.

We study the possibility of obtaining a computational turbulence model by means of non-dissipative regularisation of the compressible atmospheric equations for climate-type applications. We use an alpha-regularisation (Lagrangian averaging) of the atmospheric equations. For the hydrostatic and compressible atmospheric equations discretised using a finite volume method on unstructured grids, deterministic and non-deterministic numerical experiments are conducted to compare the individual solutions and the statistics of the regularised equations to those of the original model. The impact of the regularisation parameter alpha is investigated. Our results confirm the compatibility in principle of alpha-regularisation with atmospheric dynamics and encourage further investigations within atmospheric models including complex physical parametrisations.

We find necessary conditions for a second order ordinary differential equation to be equivalent to the Painleve III equation under a general point transformation. Their sufficiency is established by reduction to known results for equations of the form y'' = f(x, y). We consider separately the generic case and the case of reducibility to an autonomous equation. The results are illustrated by the primary resonance equation.

We study the global theory of linear wave equations for sections of vector bundles over globally hyperbolic Lorentz manifolds. We introduce spaces of finite energy sections and show well-posedness of the Cauchy problem in those spaces. These spaces depend in general on the choice of a time function but it turns out that certain spaces of finite energy solutions are independent of this choice and hence invariantly defined. We also show existence and uniqueness of solutions for the Goursat problem where one prescribes initial data on a characteristic partial Cauchy hypersurface. This extends classical results due to Hormander.

We compute explicitly, and without any extra regularity assumptions, the large time limit of the fibrewise heat operator for Bismut-Lott type superconnections in the L-2-setting. This is motivated by index theory on certain non-compact spaces (families of manifolds with cocompact group action) where the convergence of the heat operator at large time implies refined L-2-index formulas.
As applications, we prove a local L-2-index theorem for families of signature operators and an L-2-Bismut-Lott theorem, expressing the Becker-Gottlieb transfer of flat bundles in terms of Kamber-Tondeur classes. With slightly stronger regularity we obtain the respective refined versions: we construct L-2-eta forms and L-2-torsion forms as transgression forms.

The primary motivation for systematic bases in first principles electronic structure simulations is to derive physical and chemical properties of molecules and solids with predetermined accuracy. This requires a detailed understanding of the asymptotic behaviour of many-particle Coulomb systems near coalescence points of particles. Singular analysis provides a convenient framework to study the asymptotic behaviour of wavefunctions near these singularities. In the present work, we want to introduce the mathematical framework of singular analysis and discuss a novel asymptotic parametrix construction for Hamiltonians of many-particle Coulomb systems. This corresponds to the construction of an approximate inverse of a Hamiltonian operator with remainder given by a so-called Green operator. The Green operator encodes essential asymptotic information and we present as our main result an explicit asymptotic formula for this operator. First applications to many-particle models in quantum chemistry are presented in order to demonstrate the feasibility of our approach. The focus is on the asymptotic behaviour of ladder diagrams, which provide the dominant contribution to short-range correlation in coupled cluster theory. Furthermore, we discuss possible consequences of our asymptotic analysis with respect to adaptive wavelet approximation.

We consider the signal detection problem in the Gaussian design trace regression model with low rank alternative hypotheses. We derive the precise (Ingster-type) detection boundary for the Frobenius and the nuclear norm. We then apply these results to show that honest confidence sets for the unknown matrix parameter that adapt to all low rank sub-models in nuclear norm do not exist. This shows that recently obtained positive results in [5] for confidence sets in low rank recovery problems are essentially optimal.

For an irreducible continuous time Markov chain, we derive the distribution of the first passage time from a given state i to another given state j and the reversed passage time from j to i, each under the condition of no return to the starting point. When these two distributions are identical, we say that i and j are in time duality. We introduce a new condition called permuted balance that generalizes the concept of reversibility and provides sufficient criteria, based on the structure of the transition graph of the Markov chain. Illustrative examples are provided.
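
The conditioned first passage time can be estimated by straightforward Monte Carlo: simulate the chain from state i, discard trajectories that return to i before hitting j, and record the elapsed time of the survivors. A sketch for an illustrative three-state birth-death chain (the generator below is made up; it is reversible, so time duality is expected to hold):

```python
# Monte Carlo estimate of the first passage time of a CTMC from i to j,
# conditioned on no return to i.  Q[s] maps state s to its jump rates.
import random

Q = {0: {1: 1.0},
     1: {0: 2.0, 2: 1.0},
     2: {1: 2.0}}

def passage_time_no_return(i, j, rng):
    """One conditioned sample, or None if the path returns to i first."""
    state, t = i, 0.0
    while state != j:
        rates = Q[state]
        total = sum(rates.values())
        t += rng.expovariate(total)        # exponential holding time
        u, acc = rng.random() * total, 0.0
        for nxt, r in rates.items():       # choose next state
            acc += r
            if u <= acc:
                state = nxt
                break
        if state == i:
            return None                    # returned to the start: discard
    return t

rng = random.Random(1)
samples = [s for s in (passage_time_no_return(0, 2, rng)
                       for _ in range(20000)) if s is not None]
mean_02 = sum(samples) / len(samples)
```

Running the same estimator for the reversed passage (from 2 to 0) and comparing the two empirical distributions gives a quick numerical check of the duality.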

Stress drop is a key factor in earthquake mechanics and engineering seismology. However, stress drop calculations based on fault slip can be significantly biased, particularly due to subjectively determined smoothing conditions in the traditional least-squares slip inversion. In this study, we introduce a mechanically constrained Bayesian approach to simultaneously invert for fault slip and stress drop based on geodetic measurements. A Gaussian distribution for stress drop is implemented in the inversion as a prior. We have carried out several synthetic tests to evaluate the stability and reliability of the inversion approach, considering different fault discretizations, fault geometries, utilized datasets, and variability of the slip direction, respectively. We finally apply the approach to the 2010 M8.8 Maule earthquake and invert for the coseismic slip and stress drop simultaneously. Two fault geometries from the literature are tested. Our results indicate that the derived slip models based on both fault geometries are similar, showing major slip north of the hypocenter and relatively weak slip in the south, as indicated in the slip models of other studies. The derived mean stress drop is 5-6 MPa, which is close to the stress drop of ~7 MPa that was independently determined according to force balance in this region by Luttrell et al. (J Geophys Res, 2011). These findings indicate that stress drop values can be consistently extracted from geodetic data.
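
The effect of a Gaussian prior on a derived quantity can be illustrated on a toy linear-Gaussian problem: the MAP estimate minimizes the data misfit plus a quadratic penalty on a scalar that stands in for the stress drop. All numbers below are invented and the setup is far simpler than the paper's inversion:

```python
# Toy MAP inversion: minimize ||G m - d||^2 / sd_d^2 + (a.m - mu)^2 / sd_p^2,
# where a.m is a (hypothetical) scalar derived from the model vector m.
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

G = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy design matrix
d = [1.0, 2.0, 3.2]                        # toy observations
a = [0.5, 0.5]                             # maps m to the penalized scalar
mu, sd_d, sd_p = 1.4, 1.0, 1.0             # prior mean, data/prior std devs

# normal equations of the penalized least-squares problem
A = [[sum(G[k][i] * G[k][j] for k in range(3)) / sd_d**2
      + a[i] * a[j] / sd_p**2 for j in range(2)] for i in range(2)]
b = [sum(G[k][i] * d[k] for k in range(3)) / sd_d**2
     + a[i] * mu / sd_p**2 for i in range(2)]
m = solve2(A, b)
```

Shrinking sd_p pulls the derived scalar a.m towards the prior mean mu, which is the mechanism by which the stress-drop prior regularizes the slip model in place of ad hoc smoothing.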

Green-hyperbolic operators are linear differential operators acting on sections of a vector bundle over a Lorentzian manifold which possess advanced and retarded Green's operators. The most prominent examples are wave operators and Dirac-type operators. This paper is devoted to a systematic study of this class of differential operators. For instance, we show that this class is closed under taking restrictions to suitable subregions of the manifold, under composition, under taking "square roots", and under the direct sum construction. Symmetric hyperbolic systems are studied in detail.

We study infinitesimal Einstein deformations on compact flat manifolds and on product manifolds. Moreover, we prove refinements of results by Koiso and Bourguignon which yield obstructions to the existence of infinitesimal Einstein deformations under certain curvature conditions.

Transport molecules play a crucial role for cell viability. Amongst others, linear motors transport cargos along rope-like structures from one location of the cell to another in a stochastic fashion. Each step of the motor, either forwards or backwards, bridges a fixed distance and requires several biochemical transformations, which are modeled as internal states of the motor. While moving along the rope, the motor can also detach, and the walk is interrupted. We give here a mathematical formalization of such dynamics as a random process extending random walks by an absorbing state, which models the detachment of the motor from the rope. We derive particular properties of such processes that have not been available before. Our results include a description of the maximal distance reached from the starting point and of the position from which detachment takes place. Finally, we apply our theoretical results to a concrete established model of the transport molecule Kinesin V.
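
A minimal simulation of such a walk with detachment (internal states omitted; the step and detachment probabilities below are invented for illustration) records exactly the two quantities the abstract mentions — the detachment position and the maximal distance reached:

```python
# Biased random walk with an absorbing "detached" state: at each step the
# motor detaches with probability r, otherwise it steps +1 with
# probability p and -1 with probability 1 - p.
import random

def run_until_detach(p=0.6, r=0.05, rng=random):
    pos, max_pos = 0, 0
    while rng.random() >= r:               # motor survives this step
        pos += 1 if rng.random() < p else -1
        max_pos = max(max_pos, pos)
    return pos, max_pos                    # detachment point, max distance

rng = random.Random(42)
runs = [run_until_detach(rng=rng) for _ in range(10000)]
mean_max = sum(m for _, m in runs) / len(runs)
```

The empirical distributions of the two returned quantities are what the paper characterizes analytically for the full model with internal states.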

The regularity of solutions to elliptic equations on a manifold with singularities, say, an edge, can be formulated in terms of asymptotics in the distance variable r > 0 to the singularity. In their simplest form, such asymptotics turn into meromorphic behaviour under the Mellin transform on the half-axis. Poles, multiplicities, and Laurent coefficients form a system of asymptotic data which depend on the specific operator. Moreover, these data may depend on the variable y along the edge. We then have y-dependent families of meromorphic functions with variable poles, jumping multiplicities, and a discontinuous dependence of the Laurent coefficients on y. We study here basic phenomena connected with such variable branching asymptotics, formulated in terms of variable continuous asymptotics with a y-wise discrete behaviour.

Asymptotic Solutions of the Dirichlet Problem for the Heat Equation at a Characteristic Point
(2015)

The Dirichlet problem for the heat equation in a bounded domain in R^(n+1) is characteristic because there are boundary points at which the boundary touches a characteristic hyperplane t = c, where c is a constant. Necessary and sufficient conditions on the boundary guaranteeing that the solution is continuous up to the characteristic point were first established by Petrovskii (1934) under the assumption that the Dirichlet data are continuous. The appearance of Petrovskii's paper was stimulated by the existing interest in the investigation of general boundary-value problems for parabolic equations in bounded domains. We contribute to the study of this problem by finding a formal solution of the Dirichlet problem for the heat equation in a neighborhood of a cuspidal characteristic boundary point and analyzing its asymptotic behavior.

The Net Reclassification Improvement (NRI) has become a popular metric for evaluating improvement in disease prediction models in recent years. The concept is relatively straightforward, but usage and interpretation have differed across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI should be avoided. We propose using confidence ellipses around the estimated values of event and non-event NRIs, which might provide the best measure of variability around the point estimates. Our developments are illustrated using practical examples from the EPIC-Potsdam study.
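
The event and non-event NRI components themselves are simple to compute from paired risk predictions of an old and a new model. The toy data below is invented, and the confidence-ellipse construction of the paper is not reproduced here:

```python
# Category-free (continuous) NRI components: fraction of events moved up
# minus moved down, and fraction of non-events moved down minus moved up.
def nri_components(old_risk, new_risk, outcome):
    up_e = down_e = up_n = down_n = n_e = n_n = 0
    for o, nw, y in zip(old_risk, new_risk, outcome):
        if y == 1:                      # event
            n_e += 1
            up_e += nw > o
            down_e += nw < o
        else:                           # non-event
            n_n += 1
            up_n += nw > o
            down_n += nw < o
    nri_event = (up_e - down_e) / n_e
    nri_nonevent = (down_n - up_n) / n_n
    return nri_event, nri_nonevent

old = [0.1, 0.2, 0.3, 0.6, 0.7, 0.4]
new = [0.2, 0.1, 0.5, 0.5, 0.6, 0.3]
y   = [1,   0,   1,   1,   0,   0]
e, ne = nri_components(old, new, y)
```

Plotting bootstrap replicates of the pair (e, ne) and drawing a confidence ellipse around them is the kind of joint variability assessment the abstract proposes, in contrast to testing the single summed NRI.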

We consider the volume-normalized Ricci flow close to compact shrinking Ricci solitons. We show that if a compact Ricci soliton (M, g) is a local maximum of Perelman's shrinker entropy, any normalized Ricci flow starting close to it exists for all time and converges towards a Ricci soliton. If g is not a local maximum of the shrinker entropy, we show that there exists a nontrivial normalized Ricci flow emerging from it. These theorems are analogues of results in the Ricci-flat and in the Einstein case (Haslhofer and Muller, arXiv:1301.3219, 2013; Kroncke, arXiv:1312.2224, 2013).

Context. The theoretically studied impact of rapid rotation on stellar evolution needs to be compared with the results of high-resolution spectroscopy-velocimetry observations. Early-type stars present a perfect laboratory for these studies. The prototype A0 star Vega has been extensively monitored in spectropolarimetry in recent years. A weak surface magnetic field was detected, implying that there might be a (still undetected) structured surface. First indications of small-amplitude stellar radial velocity variations have been reported recently, but confirmation and an in-depth study with the highly stabilized spectrograph SOPHIE/OHP were required.
Aims. The goal of this article is to present a thorough analysis of the line profile variations and associated estimators in the early-type standard star Vega (A0) in order to reveal potential activity tracers, exoplanet companions, and stellar oscillations.
Methods. Vega was monitored in quasi-continuous high-resolution echelle spectroscopy with the highly stabilized velocimeter SOPHIE/OHP. A total of 2588 high signal-to-noise spectra were obtained during 34.7 h on five nights (2 to 6 August 2012) in high-resolution mode at R = 75 000, covering the visible domain from 3895 to 6270 angstrom. For each reduced spectrum, least-squares deconvolved equivalent photospheric profiles were calculated with a T-eff = 9500 and log g = 4.0 spectral line mask. Several methods were applied to study the dynamic behaviour of the profile variations (evolution of radial velocity, bisectors, vspan, 2D profiles, amongst others).
Results. We present the discovery of a spotted stellar surface on an A-type standard star (Vega) with very faint spot amplitudes Delta F/F_c ~ 5 x 10^(-4). A rotational modulation of spectral lines with a rotation period P = 0.68 d has clearly been exhibited, unambiguously confirming the results of previous spectropolarimetric studies. Most of these brightness inhomogeneities seem to be located at lower equatorial latitudes. Either a very thin convective layer is responsible for magnetic field generation at small amplitudes, or a new mechanism has to be invoked to explain the existence of activity-tracing starspots. At this stage it is difficult to disentangle a rotational from a stellar pulsational origin for the existing higher-frequency periodic variations.
Conclusions. This first strong evidence that standard A-type stars can show surface structures opens a new field of research and raises the question of a potential link with the weak magnetic fields recently discovered in this category of stars.

We introduce the notion of coupling distances on the space of Levy measures in order to quantify rates of convergence towards a limiting Levy jump diffusion in terms of its characteristic triplet, in particular in terms of the tail of the Levy measure. The main result yields an estimate of the Wasserstein-Kantorovich-Rubinstein distance on path space between two Levy diffusions in terms of the coupling distances. We want to apply this to obtain precise rates of convergence for Markov chain approximations and a statistical goodness-of-fit test for low-dimensional conceptual climate models with paleoclimatic data.

Boundary value problems on a manifold with smooth boundary are closely related to the edge calculus where the boundary plays the role of an edge. The problem of expressing parametrices of Shapiro-Lopatinskij elliptic boundary value problems for differential operators gives rise to pseudo-differential operators with the transmission property at the boundary. However, there are interesting pseudo-differential operators without the transmission property, for instance, the Dirichlet-to-Neumann operator. In this case the symbols become edge-degenerate under a suitable quantisation, cf. Chang et al. (J Pseudo-Differ Oper Appl 5:69-155, 2014). If the boundary itself has singularities, e.g., conical points or edges, then the symbols are corner-degenerate. In the present paper we study elements of the corresponding corner pseudo-differential calculus.

In this study, we analyze acoustic emission (AE) data recorded at the Morsleben salt mine, Germany, to assess the catalog completeness, which plays an important role in any seismicity analysis. We introduce the new concept of a magnitude completeness interval, consisting of a maximum magnitude of completeness (M-c(max)) in addition to the well-known minimum magnitude of completeness. This is required to describe the completeness of the catalog both for the smallest events (for which the detection performance may be low) and for the largest ones (which may be missed because of sensor saturation). We suggest a method to compute the maximum magnitude of completeness and calculate it on a spatial grid based on (1) the prior estimation of the saturation magnitude at each sensor, (2) the correction of the detection probability function at each sensor, including a drop in the detection performance when it saturates, and (3) the combination of the detection probabilities of all sensors to obtain the network detection performance. The method is tested using about 130,000 AE events recorded over a period of five weeks, with sources confined within a small depth interval, and an example of the spatial distribution of M-c(max) is derived. The comparison between the spatial distribution of M-c(max) and of the maximum possible magnitude (M-max), which is here derived using a recently introduced Bayesian approach, indicates that M-max exceeds M-c(max) in some parts of the mine. This suggests that some large and important events may be missed in the catalog, which could lead to a bias in the hazard evaluation.
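
Step (3), combining per-sensor detection probabilities into a network detection probability, can be sketched as follows. The per-sensor logistic detection curves and their parameters are hypothetical, and for simplicity the network is taken to detect an event if at least one sensor triggers:

```python
# Combining independent per-sensor detection probabilities into a
# network detection probability: P(detect) = 1 - prod_i (1 - P_i).
import math

def sensor_detection_prob(magnitude, m50, sigma=0.2):
    """Toy logistic detection curve: 50% detection at magnitude m50."""
    return 1.0 / (1.0 + math.exp(-(magnitude - m50) / sigma))

def network_detection_prob(magnitude, m50s):
    """P(at least one sensor detects), assuming independent sensors."""
    miss = 1.0
    for m50 in m50s:
        miss *= 1.0 - sensor_detection_prob(magnitude, m50)
    return 1.0 - miss

p = network_detection_prob(-3.0, [-3.5, -3.0, -2.5])
```

In the paper's setting each per-sensor curve would additionally be corrected for the drop in detection performance above the sensor's saturation magnitude, which is what produces a finite M-c(max) at the upper end.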

A Riemannian manifold is called geometrically formal if the wedge product of any two harmonic forms is again harmonic. We classify geometrically formal compact 4-manifolds with nonnegative sectional curvature. If the sectional curvature is strictly positive, the manifold must be homeomorphic to S-4 or diffeomorphic to CP2.
This conclusion still holds true if the sectional curvature is strictly positive and we relax the condition of geometric formality to the requirement that the length of harmonic 2-forms is not too nonconstant. In particular, the Hopf conjecture on S-2 x S-2 holds in this class of manifolds.

In this paper a technique to obtain a first approximation for singular inverse Sturm-Liouville problems with a symmetrical potential is introduced. The singularity, arising from the unbounded domain (-infinity, infinity), is treated by numerically considering the asymptotic limit of the associated problem on a finite interval (-L, L). In spite of this treatment, the problem still has an ill-conditioned structure, unlike the classical regular ones, and needs regularization techniques. Direct computation of eigenvalues in the iterative solution procedure is carried out by means of pseudospectral methods. A fairly detailed description of the numerical algorithm and its applications to specific examples are presented to illustrate the accuracy and convergence behaviour of the proposed approach.
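
The idea of approximating the singular problem by its restriction to a finite interval (-L, L) can be illustrated with a simple shooting-and-bisection scheme (rather than the pseudospectral method of the paper) on the harmonic oscillator -y'' + x^2 y = lambda y, whose lowest eigenvalue is exactly 1:

```python
# Shooting method for -y'' + x^2 y = lam * y on (-L, L) with Dirichlet
# boundary conditions: integrate from -L and bisect on y(L) = 0.
def shoot(lam, L=5.0, n=2000):
    """Integrate from x = -L with y(-L) = 0, small slope; return y(L)."""
    h = 2.0 * L / n
    x = -L
    y0, y1 = 0.0, h                          # y at the first two grid points
    for _ in range(n - 1):
        x += h                               # x is the position of y1
        # three-point discretization of y'' = (x^2 - lam) y
        y0, y1 = y1, 2.0 * y1 - y0 + h * h * (x * x - lam) * y1
    return y1

# bracket and bisect the lowest eigenvalue (exact value: 1)
lo, hi = 0.5, 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
lam0 = 0.5 * (lo + hi)
```

For the direct problem, increasing L drives the finite-interval eigenvalues exponentially fast towards the singular ones; the inverse problem inherits the ill-conditioning the abstract mentions.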

Metastability in a class of hyperbolic dynamical systems perturbed by heavy-tailed Levy type noise
(2015)

We consider a finite dimensional deterministic dynamical system with finitely many local attractors K-iota, each of which supports a unique ergodic probability measure P-iota, perturbed by a multiplicative non-Gaussian heavy-tailed Levy noise of small intensity epsilon > 0. We show that the random system exhibits a metastable behavior: there exists a unique epsilon-dependent time scale on which the system resembles a continuous time Markov chain on the set of the invariant measures P-iota. In particular our approach covers the case of dynamical systems of Morse-Smale type, whose attractors consist of points and limit cycles, perturbed by multiplicative alpha-stable Levy noise in the Ito, Stratonovich and Marcus sense. As examples we consider alpha-stable Levy perturbations of the Duffing equation and Pareto perturbations of a biochemical birhythmic system with two nested limit cycles.

Boundary value problems on a smooth manifold X with boundary have the structure of edge problems. Operators A are described in terms of a principal symbolic hierarchy, namely, according to the stratification of X into the interior and the boundary. We focus here on operators with and without the transmission property and establish a new relationship between boundary symbols and operators in the cone calculus transversal to the boundary.

By edge algebra we understand a pseudo-differential calculus on a manifold with edge. The operators have a two-component principal symbolic hierarchy which determines operators up to lower order terms. Those belong to a filtration of the corresponding operator spaces. We give a new characterisation of this structure, based on an alternative representation of edge amplitude functions only containing holomorphic edge-degenerate Mellin symbols.

We consider the semiclassical asymptotic expansion of the heat kernel coming from Witten's perturbation of the de Rham complex by a given function. For the index, one obtains a time-dependent integral formula which is evaluated by the method of stationary phase to derive the Poincare-Hopf theorem. We show how this method is related to approaches using the Thom form of Mathai and Quillen. Afterwards, we use a more general version of the stationary phase approximation in the case that the perturbing function has critical submanifolds to derive a degenerate version of the Poincare-Hopf theorem.

We study systematically the estimation of Earth's core angular momentum (CAM) variation between 1962.0 and 2008.0 by using core surface flow models derived from the recent geomagnetic field model C(3)FM2. Various flow models are derived by changing four parameters that control the least squares flow inversion. The parameters include the spherical harmonic (SH) truncation degree of the flow models and two Lagrange multipliers that control the weights of two additional constraints. The first constraint forces the energy spectrum of the flow solution to follow a power law l^(-p), where l is the SH degree and p is the fourth parameter. The second allows the solution to be modulated continuously between the dynamical states of tangential geostrophy (TG) and tangential magnetostrophy (TM). The calculated CAM variations are examined in reference to two features of the observed length-of-day (LOD) variation, namely, its secular trend and 6-year oscillation. We find flow models in either TG or TM state for which the estimated CAM trends agree with the LOD trend. It is necessary for TM models to have their flows dominate at planetary scales, whereas TG models should not be of this scale; otherwise, their CAM trends are too steep. These two distinct types of flow model appear to correspond to the separate regimes of previous numerical dynamos that are thought to be applicable to the Earth's core. The phase of the subdecadal CAM variation is coherently determined from flow models obtained with extensively varying inversion settings. Multiple sources of model ambiguity need to be allowed for in discussing whether these phase estimates properly represent that of Earth's CAM as an origin of the observed 6-year LOD oscillation.

In this work we extract the microphysical properties of aerosols for a collection of measurement cases with low volume depolarization ratio originating from fire sources captured by the Raman lidar located at the National Institute of Optoelectronics (INOE) in Bucharest. Our algorithm was tested not only for pure smoke but also for mixed smoke and urban aerosols of variable age and growth. Applying a sensitivity analysis to the initial parameter settings of our retrieval code proved vital for producing semi-automated retrievals with a hybrid regularization method developed at the Institute of Mathematics of Potsdam University. A direct quantitative comparison of the retrieved microphysical properties with measurements from a Compact Time of Flight Aerosol Mass Spectrometer (CToF-AMS) is used to validate our algorithm. Microphysical retrievals performed with sun photometer data are also used to explore our results. Focusing on the fine mode, we observed remarkable similarities between the retrieved size distribution and the one measured by the AMS. More complicated atmospheric structures and the factor of absorption appear to depend more on particle radius, which is subject to variation. A good correlation was found between the aerosol effective radius and particle age, using the ratio of lidar ratios (LR: aerosol extinction to backscatter ratios) as an indicator for the latter. Finally, the dependence on relative humidity of aerosol effective radii measured on the ground and within the layers aloft shows similar patterns.

We present simulations of binary black-hole mergers in which, after the common outer horizon has formed, the marginally outer trapped surfaces (MOTSs) corresponding to the individual black holes continue to approach and eventually penetrate each other. This has very interesting consequences according to recent results in the theory of MOTSs. Uniqueness and stability theorems imply that two MOTSs which touch with a common outer normal must be identical. This suggests a possible dramatic consequence of the collision between a small and large black hole. If the penetration were to continue to completion, then the two MOTSs would have to coalesce, by some combination of the small one growing and the big one shrinking. Here we explore the relationship between theory and numerical simulations, in which a small black hole has halfway penetrated a large one.

We investigate nonlinear problems which appear as Euler-Lagrange equations for a variational problem. They include in particular variational boundary value problems for nonlinear elliptic equations studied by F. Browder in the 1960s. We establish a solvability criterion of such problems and elaborate an efficient orthogonal projection method for constructing approximate solutions.

In quantum mechanics the temporal decay of certain resonance states is associated with an effective time evolution e^(-ith(kappa)), where h(.) is an analytic family of non-self-adjoint matrices. In general the corresponding resonance states do not decay exponentially in time. Using analytic perturbation theory, we derive asymptotic expansions for e^(-ith(kappa)), simultaneously in the limits kappa -> 0 and t -> infinity, where the corrections with respect to pure exponential decay have uniform bounds in the complex variable kappa^2 t.
In the Appendix we briefly review analytic perturbation theory, replacing the classical reference to the 1920 book of Knopp [Funktionentheorie II, Anwendungen und Weiterfuhrung der allgemeinen Theorie, Sammlung Goschen, Vereinigung wissenschaftlicher Verleger Walter de Gruyter, 1920] and its terminology by standard modern references. This might be of independent interest.

We consider infinite-dimensional diffusions where the interaction between the coordinates has a finite extent both in space and time. In particular, it is not supposed to be smooth or Markov. The initial state of the system is Gibbs, given by a strongly summable interaction. If the strength of this initial interaction is below a suitable level, and if the dynamical interaction is bounded from above in a suitable way, we prove that the law of the diffusion at any time t is a Gibbs measure with absolutely summable interaction. The main tools are a cluster expansion, in space uniformly in time, of the Girsanov factor coming from the dynamics, and the exponential ergodicity of the free dynamics towards an equilibrium product measure.

We study two notions of relative differential cohomology, using the model of differential characters. The two notions arise from the two options to construct relative homology, either by cycles of a quotient complex or of a mapping cone complex. We discuss the relation of the two notions of relative differential cohomology to each other. We discuss long exact sequences for both notions, thereby clarifying their relation to absolute differential cohomology. We construct the external and internal product of relative and absolute characters and show that relative differential cohomology is a right module over the absolute differential cohomology ring. Finally we construct fiber integration and transgression for relative differential characters.

We study Cheeger-Simons differential characters and provide geometric descriptions of the ring structure and of the fiber integration map. The uniqueness of differential cohomology (up to unique natural transformation) is proved by deriving an explicit formula for any natural transformation between a differential cohomology theory and the model given by differential characters. Fiber integration for fibers with boundary is treated in the context of relative differential characters. As applications we treat higher-dimensional holonomy, parallel transport, and transgression.

We consider a finite-dimensional deterministic dynamical system with a global attractor A which supports a unique ergodic probability measure P. The measure P can be considered as the uniform long-term mean of the trajectories staying in a bounded domain D containing A. We perturb the dynamical system by a multiplicative heavy-tailed Lévy noise of small intensity ε > 0 and solve the asymptotic first exit time and location problem from D in the limit ε → 0. In contrast to the case of Gaussian perturbations, the exit time has an algebraic exit rate as a function of ε, just as in the case when A is a stable fixed point, studied earlier in [9, 14, 19, 26]. As an example, we study the first exit problem from a neighborhood of the stable limit cycle of the Van der Pol oscillator perturbed by multiplicative α-stable Lévy noise.

The Runge-Kutta type regularization method was recently proposed as a potent tool for the iterative solution of nonlinear ill-posed problems. In this paper we analyze the applicability of this regularization method for solving inverse problems arising in atmospheric remote sensing, particularly for the retrieval of spheroidal particle distribution. Our numerical simulations reveal that the Runge-Kutta type regularization method is able to retrieve two-dimensional particle distributions using optical backscatter and extinction coefficient profiles, as well as depolarization information.

In this paper a linear-time algorithm for the minimization of acyclic deterministic finite-state automata is presented. The algorithm runs significantly faster than previous algorithms for the same task, as shown by a comparison of running times. Additionally, a variation of the new algorithm is presented which accepts cyclic automata as input: it minimizes acyclic automata as before and, in the case of cyclic input, minimizes all acyclic suffixes of the input automaton.
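As background to the abstract above, minimization of an acyclic DFA can be sketched with a generic register-based, bottom-up (Revuz-style) procedure; this is not the paper's linear-time algorithm, and all names in the sketch are our own.

```python
def minimize_acyclic_dfa(transitions, finals, start):
    """Merge equivalent states of an acyclic DFA bottom-up.

    transitions: dict state -> dict(symbol -> state); finals: set of states.
    States are processed in order of increasing height (longest path to a
    leaf), so the targets of every transition already have canonical
    representatives when a state's signature is computed.
    """
    height = {}

    def h(q):
        if q not in height:
            outs = transitions.get(q, {})
            height[q] = 1 + max((h(t) for t in outs.values()), default=-1)
        return height[q]

    all_states = set(transitions) | finals | {start}
    for outs in transitions.values():
        all_states |= set(outs.values())
    states = sorted(all_states, key=lambda q: (h(q), q))

    register = {}   # signature -> representative state
    rep = {}        # state -> representative
    for q in states:
        outs = tuple(sorted((a, rep[t]) for a, t in transitions.get(q, {}).items()))
        sig = (q in finals, outs)
        rep[q] = register.setdefault(sig, q)

    new_trans = {q: {a: rep[t] for a, t in outs.items()}
                 for q, outs in transitions.items() if rep[q] == q}
    new_finals = {rep[q] for q in finals}
    return new_trans, new_finals, rep[start]
```

For the trie of the words "ab" and "cb", the two final states merge first, after which the two intermediate states acquire identical signatures and merge as well, yielding a three-state minimal automaton.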

We consider statistical hypothesis testing simultaneously over a fairly general, possibly uncountably infinite, set of null hypotheses, under the assumption that a suitable single test (and corresponding p-value) is known for each individual hypothesis. We extend to this setting the notion of false discovery rate (FDR) as a measure of type I error. Our main result studies specific procedures based on the observation of the p-value process. Control of the FDR at a nominal level is ensured either under arbitrary dependence of p-values, or under the assumption that the finite dimensional distributions of the p-value process have positive correlations of a specific type (weak PRDS). Both cases generalize existing results established in the finite setting. The interest of this approach is demonstrated in several non-parametric examples: testing the mean/signal in a Gaussian white noise model, testing the intensity of a Poisson process, and testing the c.d.f. of i.i.d. random variables.
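For orientation, the finite-m baseline that the abstract generalizes is the Benjamini-Hochberg step-up procedure, which controls the FDR at level alpha under independence or PRDS; a minimal sketch, with names of our own choosing:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up BH procedure: reject the k smallest p-values, where k is the
    largest i with p_(i) <= alpha * i / m (and k = 0 if no such i exists)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True   # reject the k smallest p-values
    return rejected
```

Note the step-up character: a p-value above its own threshold can still be rejected if some larger p-value falls below its threshold.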

We introduce the concept of a conical zeta value as a geometric generalization of a multiple zeta value in the context of convex cones. The quasi-shuffle and shuffle relations of multiple zeta values are generalized to open cone subdivision and closed cone subdivision relations respectively for conical zeta values. In order to achieve the closed cone subdivision relation, we also interpret linear relations among fractions as subdivisions of decorated closed cones. As a generalization of the double shuffle relation of multiple zeta values, we give the double subdivision relation of conical zeta values and formulate the extended double subdivision relation conjecture for conical zeta values.

Let M be a closed connected spin manifold of dimension 2 or 3 with a fixed orientation and a fixed spin structure. We prove that for a generic Riemannian metric on M the non-harmonic eigenspinors of the Dirac operator are nowhere zero. The proof is based on a transversality theorem and the unique continuation property of the Dirac operator.

We study mixed boundary value problems, here mainly of Zaremba type for the Laplacian within an edge algebra of boundary value problems. The edge here is the interface of the jump from the Dirichlet to the Neumann condition. In contrast to earlier descriptions of mixed problems within such an edge calculus, cf. (Harutjunjan and Schulze, Elliptic mixed, transmission and singular crack problems, 2008), we focus on new Mellin edge quantisations of the Dirichlet-to-Neumann operator on the Neumann side of the boundary and employ a pseudo-differential calculus of corresponding boundary value problems without the transmission property at the interface. This allows us to construct parametrices for the original mixed problem in a new and transparent way.

We establish a quantisation of corner-degenerate symbols, here called Mellin-edge quantisation, on a manifold with second order singularities. The typical ingredients come from the "most singular" stratum, a second order edge where the infinite transversal cone has a base that is itself a manifold with smooth edge. The resulting operator-valued amplitude functions on the second order edge are formulated purely in terms of Mellin symbols taking values in the edge algebra over that base. In this respect our result is formally analogous to a quantisation rule of (Osaka J. Math. 37:221-260, 2000) for the simpler case of edge-degenerate symbols that corresponds to singularity order 1. However, from singularity order 2 on, substantial new difficulties appear for the first time, partly caused by the edge singularities of the transversal cone that tend to infinity.

In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences with trace class curvature was introduced. We show that this is a well-defined, canonical extension of the classical Lefschetz number, and we establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.

Two recent works have adapted the Kalman-Bucy filter into an ensemble setting. In the first formulation, the ensemble of perturbations is updated by the solution of an ordinary differential equation (ODE) in pseudo-time, while the mean is updated as in the standard Kalman filter. In the second formulation, the full ensemble is updated in the analysis step as the solution of a single set of ODEs in pseudo-time. Neither formulation requires matrix inversions, except for that of the observation error covariance, which is frequently diagonal.
We analyse the behaviour of the ODEs involved in these formulations. We demonstrate that they stiffen for large magnitudes of the ratio of background-error to observational-error variance, and that the integration scheme proposed in both formulations can then fail. A numerical integration scheme that is both stable and computationally inexpensive is proposed. We develop transform-based alternatives for these Bucy-type approaches so that the integrations are computed in ensemble space, where the variables are weights (of dimension equal to the ensemble size) rather than model variables.
Finally, the performance of our ensemble transform Kalman-Bucy implementations is evaluated using three models: the 3-variable Lorenz 1963 model, the 40-variable Lorenz 1996 model, and a medium complexity atmospheric general circulation model known as SPEEDY. The results from all three models are encouraging and warrant further exploration of these assimilation techniques.
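A minimal sketch of an ensemble Kalman-Bucy analysis step of the kind discussed above, using one common continuous formulation (a pseudo-time ODE for the full ensemble) with plain forward Euler; the function name and the test problem are our own, and a stiffness-aware integrator would replace Euler in practice:

```python
import numpy as np

def enkbf_analysis(X, H, R, y, nsteps=200):
    """Ensemble Kalman-Bucy analysis step: integrate, over pseudo-time
    s in [0, 1], the ODE
        dx_i/ds = -1/2 * P(s) H^T R^{-1} (H x_i + H xbar - 2 y),
    where P(s) is the instantaneous ensemble covariance.
    X: (n, N) ensemble, H: (p, n), R: (p, p) obs covariance, y: (p,) obs."""
    X = X.astype(float).copy()
    Rinv = np.linalg.inv(R)
    ds = 1.0 / nsteps
    N = X.shape[1]
    for _ in range(nsteps):
        xbar = X.mean(axis=1, keepdims=True)
        A = X - xbar
        P = A @ A.T / (N - 1)                       # ensemble covariance
        innov = H @ X + H @ xbar - 2.0 * y[:, None]
        X += -0.5 * ds * (P @ H.T @ Rinv @ innov)   # forward Euler step
    return X
```

For a scalar problem with prior variance 1, observation variance 1, and observation y = 2, the exact Kalman update gives posterior mean 1 and variance 0.5; the integrated ensemble reproduces this up to the Euler discretization error.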

The inverse problem of determining the flow at the Earth's core-mantle boundary from a model of the outer core magnetic field and its secular variation has been investigated through a Bayesian formalism. To circumvent the issue arising from the truncated nature of the available fields, we combined two modeling methods. In the first step, we applied a filter to the magnetic field to isolate its large scales by reducing the energy contained in its small scales; we then derived the dynamical equation, referred to as the filtered frozen flux equation, describing the spatiotemporal evolution of the filtered part of the field. In the second step, we proposed a statistical parametrization of the filtered magnetic field in order to account for both its remaining unresolved scales and its large-scale uncertainties. These two modeling techniques were then included in the Bayesian formulation of the inverse problem. To explore the complex posterior distribution of the velocity field resulting from this development, we numerically implemented an algorithm based on Markov chain Monte Carlo methods. After evaluating our approach on synthetic data and comparing it to previously introduced methods, we applied it to a magnetic field model derived from satellite data for the single epoch 2005.0. We could confirm the existence of specific features already observed in previous studies. In particular, we retrieved the planetary-scale eccentric gyre characteristic of flows evaluated under the compressible quasi-geostrophy assumption, although this hypothesis was not considered in our study. In addition, through the sampling of the posterior distribution of the velocity field, we could evaluate the reliability, at any spatial location and at any scale, of the flow we calculated. The flow uncertainties we determined are nevertheless conditioned by the choice of the prior constraints we applied to the velocity field.

We discuss the solution theory of operators of the form ∇_X + A, acting on smooth sections of a vector bundle with connection ∇ over a manifold M, where X is a vector field having a critical point with positive linearization at some point p ∈ M. As an operator on a suitable space of smooth sections Γ^∞(U, ν), it fulfills a Fredholm alternative, and the same is true for the adjoint operator. Furthermore, we show that the solutions depend smoothly on the data ∇, X and A.

We describe an iterative method to combine seismicity forecasts. With this method, we produce the next generation of a starting forecast by incorporating predictive skill from one or more input forecasts. For a single iteration, we use the differential probability gain of an input forecast relative to the starting forecast. At each point in space and time, the rate in the next-generation forecast is the product of the starting rate and the local differential probability gain. The main advantage of this method is that it can produce high forecast rates using all types of numerical forecast models, even those that are not rate-based. Naturally, a limitation of this method is that the input forecast must have some information not already contained in the starting forecast. We illustrate this method using the Every Earthquake a Precursor According to Scale (EEPAS) and Early Aftershocks Statistics (EAST) models, which are currently being evaluated at the US testing center of the Collaboratory for the Study of Earthquake Predictability. During a testing period from July 2009 to December 2011 (with 19 target earthquakes), the combined model we produce has better predictive performance - in terms of Molchan diagrams and likelihood - than the starting model (EEPAS) and the input model (EAST). Many of the target earthquakes occur in regions where the combined model has high forecast rates. Most importantly, the rates in these regions are substantially higher than if we had simply averaged the models.
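The multiplicative combination described above can be illustrated with a toy implementation; the binning of the input forecast and the estimation of a per-bin gain from past event counts are our own simplifications of the differential-probability-gain idea, not the paper's exact procedure:

```python
import numpy as np

def combine_forecasts(start_rate, input_score, past_events, nbins=4):
    """Combine a rate-based starting forecast with an arbitrary input score.

    A differential probability gain is estimated per input-score bin as
    (observed past events) / (events expected by the starting model), then
    applied multiplicatively to the starting rates, so that even a
    non-rate-based input forecast can sharpen a rate-based one.
    start_rate, input_score, past_events: 1-D arrays over space-time cells."""
    edges = np.quantile(input_score, np.linspace(0, 1, nbins + 1)[1:-1])
    idx = np.digitize(input_score, edges)        # bin index per cell
    gain = np.ones(nbins)
    for b in range(nbins):
        mask = idx == b
        expected = start_rate[mask].sum()
        if expected > 0:
            gain[b] = max(past_events[mask].sum() / expected, 1e-3)
    return start_rate * gain[idx]
```

Cells where the input score historically coincided with more events than the starting model expected receive proportionally higher combined rates, rather than a simple average of the two models.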

The subject of this paper is solutions of an autoresonance equation. We look for a connection between the parameters of the solution bounded as t → −∞ and the parameters of two two-parameter families of solutions as t → ∞. One family consists of the solutions which are not captured into resonance, the other of those increasing solutions which are captured into resonance. In this way we describe the transition through the separatrix for equations with slowly varying parameters and obtain an estimate, before the resonance, for the parameters of those solutions which may be captured into autoresonance.

We investigate spatio-temporal properties of earthquake patterns in the San Jacinto fault zone (SJFZ), California, between Cajon Pass and the Superstition Hill Fault, using a long record of simulated seismicity constrained by available seismological and geological data. The model provides an effective realization of a large segmented strike-slip fault zone in a 3D elastic half-space, with heterogeneous distribution of static friction chosen to represent several clear step-overs at the surface. The simulated synthetic catalog reproduces well the basic statistical features of the instrumental seismicity recorded at the SJFZ area since 1981. The model also produces events larger than those included in the short instrumental record, consistent with paleo-earthquakes documented at sites along the SJFZ for the last 1,400 years. The general agreement between the synthetic and observed data allows us to address with the long-simulated seismicity questions related to large earthquakes and expected seismic hazard. The interaction between m ≥ 7 events on different sections of the SJFZ is found to be close to random. The hazard associated with m ≥ 7 events on the SJFZ increases significantly if the long record of simulated seismicity is taken into account. The model simulations indicate that the recent increased number of observed intermediate SJFZ earthquakes is a robust statistical feature heralding the occurrence of m ≥ 7 earthquakes. The hypocenters of the m ≥ 5 events in the simulation results move progressively towards the hypocenter of the upcoming m ≥ 7 earthquake.

In the limit ε → 0 we analyse the generators H_ε of families of reversible jump processes in ℝ^d associated with a class of symmetric non-local Dirichlet forms and show exponential decay of the eigenfunctions. The exponential rate function is a Finsler distance, given as the solution of a certain eikonal equation. Fine results are sensitive to whether the rate function is C² or just Lipschitz. Our estimates are analogous to the semiclassical Agmon estimates for differential operators of second order. They generalize and strengthen previous results on the lattice ℤ^d. Although our final interest is in the (sub)stochastic jump process, technically this is a pure analysis paper, inspired by PDE techniques.

The injection of fluids is a well-known cause of triggered earthquake sequences. The growing number of projects related to enhanced geothermal systems, fracking, and others has raised the question of which maximum earthquake magnitude can be expected as a consequence of fluid injection. This question is addressed from the perspective of statistical analysis. Using basic empirical laws of earthquake statistics, we estimate the magnitude M_T of the maximum expected earthquake in a predefined future time window T_f. A case study of the fluid injection site at Paradox Valley, Colorado, demonstrates that the magnitude m = 4.3 of the largest observed earthquake on 27 May 2000 lies very well within the expectation from past seismicity without adjusting any parameters. Vice versa, for a given maximum tolerable earthquake at an injection site, we can constrain the corresponding amount of injected fluids that must not be exceeded within predefined confidence bounds.
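The kind of estimate described above can be sketched under a standard untruncated Gutenberg-Richter law with Poissonian event occurrence; the function and its parameters are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def max_expected_magnitude(n_events, t_obs, t_future, b=1.0, m_c=2.0,
                           quantile=0.5):
    """Quantile of the largest magnitude expected in a future window t_future.

    Assumes an untruncated Gutenberg-Richter law with b-value b above the
    completeness magnitude m_c, and Poissonian occurrence with rate
    lam = n_events / t_obs.  Then
        P(M_max <= m) = exp(-lam * t_future * 10**(-b * (m - m_c))),
    which is inverted for the requested quantile."""
    lam = n_events / t_obs
    dm = np.log10(lam * t_future / (-np.log(quantile))) / b
    return m_c + dm
```

For example, 1000 events above m_c = 2 in 10 years (rate 100/yr, b = 1) give a median largest magnitude of about 4.16 for the next year, and higher quantiles give correspondingly larger values.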

We characterize maximal subsemigroups of the monoid T(X) of all transformations on the set X = ℕ of natural numbers containing a given subsemigroup W of T(X) such that T(X) is finitely generated over W. This paper contributes to the characterization of maximal subsemigroups of the monoid of all transformations on an infinite set.

We analyze a general class of difference operators H_ε = T_ε + V_ε on ℓ²((εℤ)^d), where V_ε is a one-well potential and ε is a small parameter. We construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of H_ε. These are obtained from eigenfunctions or quasimodes for the operator H_ε, acting on L²(ℝ^d), via restriction to the lattice (εℤ)^d.

Borehole logs provide in situ information about the fluctuations of petrophysical properties with depth and thus allow the characterization of crustal heterogeneities. A detailed investigation of these measurements may allow the extraction of features of the geological media. In this study, we suggest a regularity analysis based on the continuous wavelet transform to examine sonic log data. The local behavior of the logs at each depth is described by the local Hurst exponent, estimated by two approaches: the local wavelet approach and the average-local wavelet approach. Firstly, a synthetic log, generated using the random midpoint displacement algorithm, is processed by the regularity analysis. The obtained Hurst curves make it possible to distinguish the different layers composing the simulated geological model. Next, this analysis is extended to real sonic log data recorded at the Kontinentales Tiefbohrprogramm (KTB) pilot borehole (Continental Deep Drilling Program, Germany). The results show a significant correlation between the estimated Hurst exponents and the lithological discontinuities crossed by the well. Hence, the Hurst exponent can be used as a tool to characterize underground heterogeneities.
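The two ingredients mentioned above, a synthetic log from random midpoint displacement and a Hurst-exponent estimate, can be sketched as follows; the global increment-scaling estimator used here is a simplification of the paper's local wavelet approaches:

```python
import numpy as np

def midpoint_fbm(n_levels, hurst, rng):
    """Approximate fractional Brownian motion on 2**n_levels + 1 points via
    the random midpoint displacement algorithm: at each refinement level a
    midpoint is set to the average of its neighbours plus Gaussian noise
    whose standard deviation shrinks by 2**(-hurst) per level."""
    n = 2 ** n_levels
    x = np.zeros(n + 1)
    x[n] = rng.standard_normal()
    scale, step = 1.0, n
    for _ in range(n_levels):
        half = step // 2
        scale *= 0.5 ** hurst
        mids = 0.5 * (x[0:n - step + 1:step] + x[step::step]) \
               + scale * rng.standard_normal(n // step)
        x[half::step] = mids
        step = half
    return x

def hurst_from_increments(x):
    """Global Hurst estimate from the scaling of increment standard
    deviations: std(x[t+s] - x[t]) ~ s**H, via a log-log fit."""
    scales = 2 ** np.arange(1, 7)
    sds = [np.std(x[s:] - x[:-s]) for s in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(sds), 1)
    return slope
```

Midpoint displacement only approximates fBm (its increments are not exactly stationary), so the recovered exponent is expected to be near, but not exactly equal to, the target value.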

We consider the problem of discrete time filtering (intermittent data assimilation) for differential equation models and discuss methods for its numerical approximation. The focus is on methods based on ensemble/particle techniques and on the ensemble Kalman filter technique in particular. We summarize as well as extend recent work on continuous ensemble Kalman filter formulations, which provide a concise dynamical systems formulation of the combined dynamics-assimilation problem. Possible extensions to fully nonlinear ensemble/particle based filters are also outlined using the framework of optimal transportation theory.

During preclinical development of a gestagenic drug, a significant increase of the total plasma concentration was observed after multiple dosing in pregnant rabbits, but not in (non-pregnant) rats or monkeys. We used a PBPK modeling approach in combination with in vitro and in vivo data to address the question of to what extent the pharmacologically active free drug concentration is affected by pregnancy-induced processes. In humans, a significant increase in sex hormone binding globulin (SHBG) and an induction of hepatic CYP3A4 as well as of plasma esterases are observed during pregnancy. We find that the observed increase in total plasma trough levels in rabbits can be explained as a combined result of (i) drug accumulation due to multiple dosing, (ii) increase of the binding protein SHBG, and (iii) clearance induction. For humans, we predict that free drug concentrations in plasma would not increase during pregnancy above the steady-state trough level for non-pregnant women.

The human immunodeficiency virus (HIV) can be suppressed by highly active anti-retroviral therapy (HAART) in the majority of infected patients. Nevertheless, treatment interruptions inevitably result in viral rebounds from persistent, latently infected cells, necessitating lifelong treatment. Virological failure due to resistance development is a frequent event and the major threat to treatment success. Currently, it is recommended to change treatment after the confirmation of virological failure. However, at the moment virological failure is detected, drug-resistant mutants already replicate in great numbers. They infect numerous cells, many of which will turn into latently infected cells. This pool of cells represents an archive of resistance, which has the potential to limit future treatment options. The objective of this study was to design a treatment strategy for treatment-naive patients that decreases the likelihood of early treatment failure and preserves future treatment options. We propose to apply a single, pro-active treatment switch, following a period of treatment with an induction regimen. The main goal of the induction regimen is to decrease the abundance of randomly generated mutants that confer resistance to the maintenance regimen, thereby increasing subsequent treatment success. Treatment is switched before the overgrowth and archiving of mutant strains that carry resistance against the induction regimen and would limit its future re-use. In silico modelling shows that an optimal trade-off is achieved by switching treatment at ≈ 80 days after the initiation of antiviral therapy. Evaluation of the proposed treatment strategy demonstrated significant improvements in terms of resistance archiving and virological response, as compared to conventional HAART.
While continuous pro-active treatment alternation improved the clinical outcome in a randomized trial, our results indicate that a similar improvement might also be reached after a single pro-active treatment switch. The clinical validity of this finding, however, remains to be shown by a corresponding trial.

Asymptotic first exit times of the Chafee-Infante equation with small heavy-tailed Lévy noise
(2011)

This article studies the behavior of stochastic reaction-diffusion equations driven by additive regularly varying pure jump Lévy noise in the limit of small noise intensity. It is shown that the law of the suitably renormalized first exit times from the domain of attraction of a stable state converges to an exponential law of parameter 1 in a strong sense of Laplace transforms, including exponential moments. As a consequence, the expected exit times increase polynomially in the inverse intensity, in contrast to Gaussian perturbations, where this growth is known to be of exponential rate.

The goal of this paper is to establish the existence of a foliation of the asymptotic region of an asymptotically flat manifold with positive mass by surfaces which are critical points of the Willmore functional subject to an area constraint. Equivalently these surfaces are critical points of the Geroch-Hawking mass. Thus our result has applications in the theory of general relativity.

In this article, the fractional variational iteration method is employed for computing approximate analytical solutions of degenerate parabolic equations with fractional time derivative. The time-fractional derivatives are described using a new approach, the so-called Jumarie modified Riemann-Liouville derivative, instead of in the sense of Caputo. The approximate solutions of our model problem are calculated in the form of convergent series with easily computable components. Moreover, the numerical solution is compared with the exact solution and a quantitative estimate of accuracy is obtained. The results of the study reveal that the proposed method with modified Riemann-Liouville fractional derivatives is efficient, accurate, and convenient for solving fractional partial differential equations in multi-dimensional spaces without using any linearization, perturbation or restrictive assumptions.

We consider a resonantly perturbed system of coupled nonlinear oscillators with small dissipation and an outer periodic perturbation. We show that for large times t ∼ s^{-2}, one component of the system is described for the most part by the inhomogeneous Mathieu equation, while the other component represents a pulsation of large amplitude. A Hamiltonian system is obtained which describes for the most part the behavior of the envelope in a special case. The analytic results agree with numerical simulations.

The thermospheric crosswind velocities at an altitude of 400 km measured by an accelerometer on board the CHAMP satellite are compared with the results of model calculations performed using the Upper Atmosphere Model (UAM). Measurements averaged over the year 2003 reveal a two-vortex structure of high-latitude winds corresponding to magnetospheric-ionospheric convection of ions in the F2 ionosphere region. A similar picture with similar speed values was obtained in the model calculations. A comparison of the crosswind speeds obtained in individual measurements on October 28, 2003 with the corresponding model values revealed close agreement in some flights and differences in others. Taking into account the dependence of the convection electric field on the B_y component of the interplanetary magnetic field sometimes improved the agreement between modeled thermospheric crosswind speeds and those measured by the satellite.

The morphological features in the deviations of the total electron content (TEC) of the ionosphere from the background undisturbed state are analyzed as possible precursors of the earthquake of January 12, 2010 (21:53 UT (16:53 LT), 18.46° N, 72.5° W, M 7.0) in Haiti. To identify these features, global and regional differential TEC maps were plotted, based on the global 2-h TEC maps provided by NASA in the IONEX format. For the considered earthquake, long-lived disturbances, presumably of seismic origin, were localized in the near-epicenter area and were accompanied by similar effects in the magnetoconjugate region. Both decreases and increases in the local TEC were observed over the period from 22 UT of January 10 to 08 UT of January 12, 2010. The horizontal dimensions of the anomalies were ~40° in longitude and ~20° in latitude, with the magnitude of the TEC disturbances reaching ~40% relative to the background near the epicenter and more than 50% in the magnetoconjugate area. No significant geomagnetic disturbances were observed within January 1-12, 2010, i.e., the detected TEC anomalies were manifestations of interplay between processes in the lithosphere-atmosphere-ionosphere system.

We study a new approach to determine the asymptotic behaviour of quantum many-particle systems near coalescence points of particles which interact via singular Coulomb potentials. This problem is of fundamental interest in electronic structure theory in order to establish accurate and efficient models for numerical simulations. Within our approach, coalescence points of particles are treated as embedded geometric singularities in the configuration space of electrons. Based on a general singular pseudo-differential calculus, we provide a recursive scheme for the calculation of the parametrix and corresponding Green operator of a nonrelativistic Hamiltonian. In our singular calculus, the Green operator encodes all the asymptotic information of the eigenfunctions. Explicit calculations and an asymptotic representation for the Green operator of the hydrogen atom and isoelectronic ions are presented.

We propose a novel strategy for global sensitivity analysis of ordinary differential equations. It is based on an error-controlled solution of the partial differential equation (PDE) that describes the evolution of the probability density function associated with the input uncertainty/variability. The density yields a more accurate estimate of the output uncertainty/variability, where not only some observables (such as mean and variance) but also structural properties (e.g., skewness, heavy tails, bi-modality) can be resolved up to a selected accuracy. For the adaptive solution of the PDE Cauchy problem we use the Rothe method with multiplicative error correction, which was originally developed for the solution of parabolic PDEs. We show that, unlike in parabolic problems, conservation properties necessitate a coupling of temporal and spatial accuracy to avoid accumulation of spatial approximation errors over time. We provide convergence conditions for the numerical scheme and suggest an implementation using approximate approximations for spatial discretization to efficiently resolve the coupling of temporal and spatial accuracy. The performance of the method is studied by means of low-dimensional case studies. The favorable properties of the spatial discretization technique suggest that this may be the starting point for an error-controlled sensitivity analysis in higher dimensions.

We discuss to what extent a given earthquake catalog and the assumption of a doubly truncated Gutenberg-Richter distribution for the earthquake magnitudes allow for the calculation of confidence intervals for the maximum possible magnitude M. We show that, without further assumptions such as the existence of an upper bound of M, only very limited information may be obtained. In a frequentist formulation, for each confidence level α the confidence interval diverges with finite probability. In a Bayesian formulation, the posterior distribution of the upper magnitude is not normalizable. We conclude that the common approach to derive confidence intervals from the variance of a point estimator fails. Technically, this problem can be overcome by introducing an upper bound M̃ for the maximum magnitude. Then the Bayesian posterior distribution can be normalized, and its variance decreases with the number of observed events. However, because the posterior depends significantly on the choice of the unknown value of M̃, the resulting confidence intervals are essentially meaningless. The use of an informative prior distribution accounting for pre-knowledge of M is also of little use, because the prior is only modified in the case of the occurrence of an extreme event. Our results suggest that the maximum possible magnitude M should be better replaced by M_T, the maximum expected magnitude in a given time interval T, for which the calculation of exact confidence intervals becomes straightforward. From a physical point of view, numerical models of the earthquake process adjusted to specific fault regions may be a powerful alternative to overcome the shortcomings of purely statistical inference.

The problem of an ensemble Kalman filter when only partial observations are available is considered. In particular, the situation is investigated where the observational space consists of variables that are directly observable with known observational error, and of variables of which only their climatic variance and mean are given. To limit the variance of the latter poorly resolved variables a variance-limiting Kalman filter (VLKF) is derived in a variational setting. The VLKF for a simple linear toy model is analyzed and its range of optimal performance is determined. The VLKF is explored in an ensemble transform setting for the Lorenz-96 system, and it is shown that incorporating the information of the variance of some unobservable variables can improve the skill and also increase the stability of the data assimilation procedure.
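For context, the standard stochastic ensemble Kalman analysis step that the VLKF modifies can be sketched as follows. This is our own illustrative baseline, not the paper's variance-limiting filter, and all dimensions and values are arbitrary.

```python
import numpy as np

# One stochastic EnKF analysis step: ensemble X (state_dim x N members),
# observation y with linear operator H and error covariance R.
def enkf_analysis(X, y, H, R, rng):
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    P = A @ A.T / (N - 1)                        # ensemble covariance
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    # perturbed observations keep the analysis spread consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(2, 500)) + np.array([[1.0], [0.0]])
H = np.array([[1.0, 0.0]])                       # observe first variable only
R = np.array([[0.1]])
Xa = enkf_analysis(X, np.array([2.0]), H, R, rng)
```

The VLKF adds to this update a constraint keeping the spread of the unobserved, climatically specified variables near their climatic variance.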

We develop a multigrid, multiple time stepping scheme to reduce the computational effort of calculating complex stress interactions in a strike-slip 2D planar fault for the simulation of seismicity. The key elements of the multilevel solver are separation of length scales, grid coarsening, and hierarchy. In this study the complex stress interactions are split into two parts: the weak interactions, which contribute little, are computed on a coarse level, while the strong interactions are computed on a fine level. This partition leads to a significant reduction in the number of computations. The reduction of complexity is enhanced further by combining the multigrid with multiple time stepping. Computational efficiency is improved by a factor of 10 while retaining reasonable accuracy, compared to the original full matrix-vector multiplication. The accuracy of the solution and the computational efficiency depend on a given cut-off radius that splits the multiplications into the two parts. The multigrid scheme is constructed in such a way that it conserves stress in the entire half-space.
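The cut-off splitting combined with multiple time stepping can be illustrated with a toy dense kernel. This is our own construction (kernel, sizes and refresh lag are arbitrary), not the paper's hierarchical solver: the slowly varying far-field part of the matrix-vector product is refreshed only every few steps.

```python
import numpy as np

# Split a dense interaction kernel K into a near-field part (|i-j| <= r_cut)
# and a far-field remainder; the far-field contribution to K @ s is
# recomputed only every `lag` steps (multiple time stepping).
n, r_cut, lag = 200, 10, 5
idx = np.arange(n)
dist = np.abs(idx[:, None] - idx[None, :])
K = 1.0 / (1.0 + dist.astype(float))**2          # decaying toy kernel
K_near = np.where(dist <= r_cut, K, 0.0)
K_far = K - K_near

rng = np.random.default_rng(2)
s = rng.normal(size=n)                           # toy "slip" state
far = K_far @ s
errs = []
for step in range(20):
    s = s + 0.01 * rng.normal(size=n)            # slow evolution of the state
    if step % lag == 0:
        far = K_far @ s                          # refresh stale far field
    approx = K_near @ s + far
    exact = K @ s
    errs.append(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
max_err = max(errs)
```

Enlarging `r_cut` or shrinking `lag` trades work for accuracy, which mirrors the cut-off-radius dependence described above.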

In this paper, we propose a derivative-free method for recovering symmetric and non-symmetric potential functions of inverse Sturm-Liouville problems from the knowledge of eigenvalues. A class of boundary value methods obtained as an extension of Numerov's method is the major tool for approximating the eigenvalues in each Broyden iteration step. Numerical examples demonstrate that the method is able to reduce the number of iteration steps, in particular for non-symmetric potentials, without accuracy loss.
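Broyden's iteration itself is easy to state. Below is a generic derivative-free sketch of it (our own illustration, unrelated to the Numerov discretization details of the paper), using a finite-difference initial Jacobian and rank-one secant updates on a toy 2x2 system.

```python
import numpy as np

# Derivative-free Broyden iteration for F(x) = 0.
def broyden(F, x0, tol=1e-12, max_iter=50, h=1e-7):
    x = np.asarray(x0, dtype=float)
    n = len(x)
    Fx = F(x)
    # finite-difference initial Jacobian (no analytic derivatives needed)
    B = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        B[:, j] = (F(x + e) - Fx) / h
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        dx = np.linalg.solve(B, -Fx)
        x = x + dx
        F_new = F(x)
        # rank-one secant update enforcing B_new @ dx = F_new - Fx
        B += np.outer(F_new - Fx - B @ dx, dx) / (dx @ dx)
        Fx = F_new
    return x

# toy system: x^2 + y^2 = 2, x - y = 0  ->  root (1, 1)
sol = broyden(lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]]),
              [2.0, 0.5])
```

In the paper's setting, evaluating F amounts to solving the direct Sturm-Liouville eigenvalue problem with the boundary value method for the current potential iterate.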

Range of lower bounds
(2011)

Each of n jobs is to be processed without interruption on a single machine. Each job becomes available for processing at time zero. The objective is to find a processing order of the jobs which minimizes the sum of weighted completion times plus the maximum weighted tardiness. In this paper we give a generalization of the theorem given in [6]. This theorem shows a relation between the number of efficient solutions, the lower bound LB, and the optimal solution. It restricts the range of the lower bound, which is the main factor in finding the optimal solution. The theorem also opens the way to algebraic operations and concepts for finding new lower bounds.
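The objective function is easy to state in code. The tiny brute-force sketch below is our own illustration with entirely made-up data: it evaluates the sum of weighted completion times plus the maximum weighted tardiness over all orders of a three-job instance.

```python
import itertools

# Each job: (processing time, completion weight, tardiness weight, due date).
jobs = [
    (3, 2, 1, 4),
    (2, 1, 3, 3),
    (4, 3, 2, 9),
]

def objective(order):
    t, total, max_wt = 0, 0, 0
    for j in order:
        p, w, v, d = jobs[j]
        t += p                                # completion time of job j
        total += w * t                        # weighted completion times
        max_wt = max(max_wt, v * max(0, t - d))  # weighted tardiness
    return total + max_wt

best = min(itertools.permutations(range(len(jobs))), key=objective)
```

For real instances the point of the lower-bound theorem is precisely to avoid this factorial enumeration.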

Atomic oscillations present in classical molecular dynamics restrict the step size that can be used. Multiple time stepping schemes offer only modest improvements, and implicit integrators are costly and inaccurate. The best approach may be to actually remove the highest frequency oscillations by constraining bond lengths and bond angles, thus permitting perhaps a 4-fold increase in the step size. However, omitting degrees of freedom produces errors in statistical averages, and rigid angles do not bend for strong excluded volume forces. These difficulties can be addressed by an enhanced treatment of holonomic constrained dynamics using ideas from papers of Fixman (1974) and Reich (1995, 1999). In particular, the 1995 paper proposes the use of "flexible" constraints, and the 1999 paper uses a modified potential energy function with rigid constraints to emulate flexible constraints. Presented here is a more direct and rigorous derivation of the latter approach, together with justification for the use of constraints in molecular modeling. With rigor comes limitations, so practical compromises are proposed: simplifications of the equations and their judicious application when assumptions are violated. Included are suggestions for new approaches.
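The constrained-dynamics machinery can be made concrete with the classic SHAKE-type iteration for a single bond length. This minimal sketch is our own; the flexible-constraint formulation discussed above goes well beyond it.

```python
import numpy as np

# SHAKE-style iteration enforcing a single bond-length constraint
# |r1 - r2| = d after an unconstrained position update.
def shake_bond(r1, r2, d, m1=1.0, m2=1.0, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        diff = r1 - r2
        c = diff @ diff - d * d               # constraint violation
        if abs(c) < tol:
            break
        # Lagrange-multiplier correction along the current bond vector
        g = c / (2.0 * (1.0 / m1 + 1.0 / m2) * (diff @ diff))
        r1 = r1 - g * diff / m1
        r2 = r2 + g * diff / m2
    return r1, r2

r1, r2 = shake_bond(np.array([0.0, 0.0, 0.0]),
                    np.array([1.3, 0.2, 0.0]), d=1.0)
```

Each correction moves the two atoms along the bond in inverse proportion to their masses, so the constraint is satisfied without injecting momentum.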

We define the Dirichlet to Neumann operator for an elliptic complex of first order differential operators on a compact Riemannian manifold with boundary. Under reasonable conditions the Betti numbers of the complex prove to be completely determined by the Dirichlet to Neumann operator on the boundary.

We prove a theorem on separation of boundary null points for generators of continuous semigroups of holomorphic self-mappings of the unit disk in the complex plane. Our construction demonstrates rather strikingly the particular role of the binary operation ⋆ given by 1/(f ⋆ g) = 1/f + 1/g on generators.

This note is a revised and enlarged version of the German article [16] in a slightly different framework. Here we correct a serious mistake in the first version and generalize the class of Polya sum processes considered there. (A corrected version of the same results can already be found in the thesis of Mathias Rafler [12].) Moreover, the class of Polya difference processes is constructed here for the first time. In analogy to classical statistical mechanics we propose a theory of interacting Bosons and Fermions. We consider Papangelou processes, i.e. point processes specified by a kernel which represents the conditional intensity of the process. The main result is a general construction of a large class of such processes which contains Cox processes and the Gibbs processes of classical statistical mechanics, but also interacting Bose and Fermi processes.

We define several notions of singular set for Type-I Ricci flows and show that they all coincide. In order to do this, we prove that blow-ups around singular points converge to nontrivial gradient shrinking solitons, thus extending work of Naber [15]. As a by-product we conclude that the volume of a finite-volume singular set vanishes at the singular time.
We also define a notion of density for Type-I Ricci flows and use it to prove a regularity theorem reminiscent of White's partial regularity result for mean curvature flow [22].

We study the averaged macroscopic strain tensor for a sand pile consisting of soft convex polygonal particles numerically, using the discrete-element method (DEM). First, we construct two types of "sand piles" by two different pouring protocols. Afterwards, we deform the sand piles, relaxing them under a 10% reduction of gravity. Four different types of methods, three best-fit strains and a derivative strain, are adopted for determining the strain distribution under a sand pile. The results of the four different versions of strain obtained from DEM simulation are compared with each other. Moreover, we compare the vertical normal strain between the two types of sand piles qualitatively and show how the construction history of the piles affects their strain distribution.
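A "best-fit" strain of the kind compared here can be sketched as a least-squares displacement gradient. The version below is our own minimal two-dimensional illustration (the study compares three best-fit variants and a derivative strain), verified on a known affine deformation.

```python
import numpy as np

# Least-squares ("best-fit") strain from particle data: given reference
# positions X and displaced positions x of a set of particles, find the
# displacement gradient G minimizing sum ||du_i - G dX_i||^2, then take
# its symmetric part as the small-strain tensor.
def best_fit_strain(X, x):
    Xc = X - X.mean(axis=0)                      # centred reference positions
    u = (x - X) - (x - X).mean(axis=0)           # centred displacements
    A, *_ = np.linalg.lstsq(Xc, u, rcond=None)   # solves Xc @ A ~ u
    G = A.T                                      # displacement gradient
    return 0.5 * (G + G.T)                       # small-strain tensor

# verify on a known affine deformation x = (I + eps) X
rng = np.random.default_rng(3)
X = rng.random((30, 2))
eps_true = np.array([[0.02, 0.005], [0.005, -0.01]])
x = X @ (np.eye(2) + eps_true).T
strain = best_fit_strain(X, x)
```

Applied locally over neighbourhoods of particles, the same fit yields the spatial strain distribution under the pile.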

We generalize the popular ensemble Kalman filter to an ensemble transform filter, in which the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian-mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions.
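The mixture analysis step can be sketched in its simplest scalar form. This is our own illustration of a Gaussian-sum Bayes update, not the EGMF's continuous ensemble-transform formulation; all numbers are made up.

```python
import numpy as np

# Bayesian update of a one-dimensional Gaussian-mixture prior with a
# Gaussian likelihood N(y; m, r): each component is updated by a Kalman
# step and the mixture weights are reweighted by the component evidence.
def gaussian_mixture_update(weights, means, variances, y, r):
    w, m, v = map(np.asarray, (weights, means, variances))
    s = v + r                                  # innovation variances
    evidence = np.exp(-0.5 * (y - m)**2 / s) / np.sqrt(2 * np.pi * s)
    k = v / s                                  # component Kalman gains
    m_post = m + k * (y - m)                   # updated component means
    v_post = (1.0 - k) * v                     # updated component variances
    w_post = w * evidence
    return w_post / w_post.sum(), m_post, v_post

w, m, v = gaussian_mixture_update([0.5, 0.5], [-2.0, 2.0], [0.5, 0.5],
                                  y=1.8, r=0.2)
```

Because components near the observation gain weight while distant ones lose it, a multimodal prior is tracked without collapsing it to a single Gaussian.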

A partial transformation α on an n-element chain X_n is called order-preserving if x <= y implies xα <= yα for all x, y in the domain of α and it is called extensive if x <= xα for all x in the domain of α. The set of all partial order-preserving extensive transformations on X_n forms a semiband POE_n. We determine the maximal subsemigroups as well as the maximal subsemibands of POE_n.
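For small n the semiband can be enumerated directly. The sketch below is our own check: it builds POE_3 as a set of partial maps and verifies closure under composition (transformations act on the right, so f is applied first).

```python
from itertools import chain, combinations, product

# Enumerate all order-preserving, extensive partial transformations of
# the chain {1, ..., n} and check closure under composition.
n = 3
chain_set = list(range(1, n + 1))

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

poe = set()
for dom in subsets(chain_set):
    for images in product(chain_set, repeat=len(dom)):
        f = dict(zip(dom, images))
        extensive = all(x <= f[x] for x in f)
        order_pres = all(f[x] <= f[y] for x in f for y in f if x <= y)
        if extensive and order_pres:
            poe.add(tuple(sorted(f.items())))

def compose(f, g):
    # apply f first, then g; domain = {x in dom f : f(x) in dom g}
    f, g = dict(f), dict(g)
    return tuple(sorted((x, g[f[x]]) for x in f if f[x] in g))

closed = all(compose(f, g) in poe for f in poe for g in poe)
```

Closure holds because extensivity (x <= xα <= xαβ) and order preservation are both inherited by composites, confirming that POE_n is a semigroup.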