Refine
Has Fulltext
- no (33)
Year of publication
- 2014 (33)
Document Type
- Article (33)
Language
- English (33)
Is part of the Bibliography
- yes (33)
Keywords
- Earthquake interaction (2)
- Statistical seismology (2)
- 35J70 (1)
- 47A52 (1)
- 47G30 (1)
- 58J40 (1)
- 65R20 (1)
- 65R32 (1)
- 78A46 (1)
- Algorithmic (1)
Institute
- Institut für Mathematik (33)
Starting from the typical IT infrastructure for e-learning at universities on the one hand, and from the current state of research on Personal Learning Environments (PLEs) on the other, this paper shows how existing tools and services can be brought together and prepared for the requirements of modern, computer-supported classroom teaching. For this interdisciplinary development process, both classical software engineering methods and existing PLE models offer little guidance. The paper describes the approaches pursued in a campus-wide project at the University of Potsdam and the results achieved with them. To this end, typical teaching/learning and communication scenarios are first identified, from which requirements for a supporting platform are derived. This leads to a comprehensive collection of services and their functions to be taken into account, which are to be integrated into an overall system according to the specifics of their use. On this basis, fundamental integration approaches and technical details of this mash-up are considered in an overall view of all relevant services and transferred into an integrating system architecture. Its concrete realization using the portal technology Liferay is presented, taking up the scenarios defined at the outset and presenting them by way of example. Complementary adaptations in the sense of a personalizable or adaptive learning (and working) environment are also supported and briefly outlined.
We investigate nonlinear problems which appear as Euler-Lagrange equations for a variational problem. They include in particular variational boundary value problems for nonlinear elliptic equations studied by F. Browder in the 1960s. We establish a solvability criterion of such problems and elaborate an efficient orthogonal projection method for constructing approximate solutions.
In quantum mechanics the temporal decay of certain resonance states is associated with an effective time evolution e^{-it h(κ)}, where h(·) is an analytic family of non-self-adjoint matrices. In general the corresponding resonance states do not decay exponentially in time. Using analytic perturbation theory, we derive asymptotic expansions for e^{-it h(κ)}, simultaneously in the limits κ → 0 and t → ∞, where the corrections with respect to pure exponential decay have uniform bounds in the single complex variable κ²t.
In the Appendix we briefly review analytic perturbation theory, replacing the classical reference to the 1920 book of Knopp [Funktionentheorie II, Anwendungen und Weiterführung der allgemeinen Theorie, Sammlung Göschen, Vereinigung wissenschaftlicher Verleger Walter de Gruyter, 1920] and its terminology by standard modern references. This might be of independent interest.
We consider infinite-dimensional diffusions where the interaction between the coordinates has a finite extent both in space and time. In particular, it is not assumed to be smooth or Markov. The initial state of the system is Gibbs, given by a strongly summable interaction. If the strength of this initial interaction is below a suitable level, and if the dynamical interaction is suitably bounded from above, we prove that the law of the diffusion at any time t is a Gibbs measure with absolutely summable interaction. The main tool is a cluster expansion in space, uniform in time, of the Girsanov factor coming from the dynamics, together with the exponential ergodicity of the free dynamics to an equilibrium product measure.
We study two notions of relative differential cohomology, using the model of differential characters. The two notions arise from the two options to construct relative homology, either by cycles of a quotient complex or of a mapping cone complex. We discuss the relation of the two notions of relative differential cohomology to each other. We discuss long exact sequences for both notions, thereby clarifying their relation to absolute differential cohomology. We construct the external and internal product of relative and absolute characters and show that relative differential cohomology is a right module over the absolute differential cohomology ring. Finally we construct fiber integration and transgression for relative differential characters.
We study Cheeger-Simons differential characters and provide geometric descriptions of the ring structure and of the fiber integration map. The uniqueness of differential cohomology (up to unique natural transformation) is proved by deriving an explicit formula for any natural transformation between a differential cohomology theory and the model given by differential characters. Fiber integration for fibers with boundary is treated in the context of relative differential characters. As applications we treat higher-dimensional holonomy, parallel transport, and transgression.
We consider a finite-dimensional deterministic dynamical system with a global attractor A which supports a unique ergodic probability measure P. The measure P can be considered as the uniform long-term mean of the trajectories staying in a bounded domain D containing A. We perturb the dynamical system by a multiplicative heavy-tailed Lévy noise of small intensity ε > 0 and solve the asymptotic first exit time and location problem from D in the limit ε → 0. In contrast to the case of Gaussian perturbations, the exit time has an algebraic exit rate as a function of ε, just as in the case when A is a stable fixed point studied earlier in [9, 14, 19, 26]. As an example, we study the first exit problem from a neighborhood of the stable limit cycle for the Van der Pol oscillator perturbed by multiplicative α-stable Lévy noise.
The Runge-Kutta type regularization method was recently proposed as a potent tool for the iterative solution of nonlinear ill-posed problems. In this paper we analyze the applicability of this regularization method for solving inverse problems arising in atmospheric remote sensing, particularly for the retrieval of spheroidal particle distribution. Our numerical simulations reveal that the Runge-Kutta type regularization method is able to retrieve two-dimensional particle distributions using optical backscatter and extinction coefficient profiles, as well as depolarization information.
In this paper a linear-time algorithm for the minimization of acyclic deterministic finite-state automata is presented. The algorithm runs significantly faster than previous algorithms for the same task. This is shown by a comparison of the running times of both algorithms. Additionally, a variation of the new algorithm is presented which handles cyclic automata as input. The new cycle-aware algorithm minimizes acyclic automata in the desired way. In case of cyclic input, the algorithm minimizes all acyclic suffixes of the input automaton.
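As an illustration of the underlying idea (not the paper's algorithm itself), acyclic DFA minimization can be sketched by merging states with equal "signatures" in reverse topological order, in the style of Revuz's classical linear-time method; the dict-based encoding and state names below are illustrative choices:

```python
# Sketch of signature-based minimization of an acyclic deterministic
# finite-state automaton: states are processed in reverse topological order,
# so every transition already points at a canonical representative, and two
# states merge exactly when finality and outgoing transitions coincide.

def minimize_acyclic(transitions, finals, start):
    """transitions: {state: {symbol: state}}; finals: set of accepting states."""
    order, seen = [], set()

    def visit(q):                        # post-order DFS == reverse topological
        if q in seen:
            return
        seen.add(q)
        for t in transitions.get(q, {}).values():
            visit(t)
        order.append(q)

    visit(start)
    rep, by_sig = {}, {}                 # state -> representative; signature -> rep
    for q in order:
        sig = (q in finals,
               tuple(sorted((a, rep[t]) for a, t in transitions.get(q, {}).items())))
        rep[q] = by_sig.setdefault(sig, q)

    merged = {rep[q]: {a: rep[t] for a, t in transitions.get(q, {}).items()}
              for q in order if rep[q] == q}
    return merged, {rep[q] for q in finals if q in rep}, rep[start]

# Trie for the two words "ab" and "cb": the shared suffix states merge.
small, small_finals, small_start = minimize_acyclic(
    {'s0': {'a': 's1', 'c': 's3'}, 's1': {'b': 's2'}, 's3': {'b': 's4'}},
    {'s2', 's4'}, 's0')
```

Processing states after all their successors is what makes a single pass suffice, which is the essence of the linear-time bound.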
We consider statistical hypothesis testing simultaneously over a fairly general, possibly uncountably infinite, set of null hypotheses, under the assumption that a suitable single test (and corresponding p-value) is known for each individual hypothesis. We extend to this setting the notion of false discovery rate (FDR) as a measure of type I error. Our main result studies specific procedures based on the observation of the p-value process. Control of the FDR at a nominal level is ensured either under arbitrary dependence of p-values, or under the assumption that the finite dimensional distributions of the p-value process have positive correlations of a specific type (weak PRDS). Both cases generalize existing results established in the finite setting. The interest of this extension is demonstrated in several non-parametric examples: testing the mean/signal in a Gaussian white noise model, testing the intensity of a Poisson process, and testing the c.d.f. of i.i.d. random variables.
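For the finite setting that this abstract generalizes, the standard step-up procedure controlling the FDR is Benjamini-Hochberg; a minimal sketch, with made-up p-values:

```python
# Benjamini-Hochberg step-up procedure on finitely many p-values, the
# finite-setting FDR-controlling procedure generalized by the abstract above.
# The p-values are invented for the example.

def benjamini_hochberg(pvalues, alpha):
    """Return the set of indices rejected at nominal FDR level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    # Find the largest rank k (1-based) with p_(k) <= alpha * k / m.
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= alpha * rank / m:
            k = rank
    return set(order[:k])                # reject the k smallest p-values

rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6], alpha=0.05)
```

Under independence (or the PRDS condition mentioned above), this procedure keeps the FDR at or below alpha.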
We introduce the concept of a conical zeta value as a geometric generalization of a multiple zeta value in the context of convex cones. The quasi-shuffle and shuffle relations of multiple zeta values are generalized to open cone subdivision and closed cone subdivision relations respectively for conical zeta values. In order to achieve the closed cone subdivision relation, we also interpret linear relations among fractions as subdivisions of decorated closed cones. As a generalization of the double shuffle relation of multiple zeta values, we give the double subdivision relation of conical zeta values and formulate the extended double subdivision relation conjecture for conical zeta values.
Let M be a closed connected spin manifold of dimension 2 or 3 with a fixed orientation and a fixed spin structure. We prove that for a generic Riemannian metric on M the non-harmonic eigenspinors of the Dirac operator are nowhere zero. The proof is based on a transversality theorem and the unique continuation property of the Dirac operator.
We study mixed boundary value problems, here mainly of Zaremba type for the Laplacian within an edge algebra of boundary value problems. The edge here is the interface of the jump from the Dirichlet to the Neumann condition. In contrast to earlier descriptions of mixed problems within such an edge calculus, cf. (Harutjunjan and Schulze, Elliptic mixed, transmission and singular crack problems, 2008), we focus on new Mellin edge quantisations of the Dirichlet-to-Neumann operator on the Neumann side of the boundary and employ a pseudo-differential calculus of corresponding boundary value problems without the transmission property at the interface. This allows us to construct parametrices for the original mixed problem in a new and transparent way.
We establish a quantisation of corner-degenerate symbols, here called Mellin-edge quantisation, on a manifold with second order singularities. The typical ingredients come from the "most singular" stratum of which is a second order edge where the infinite transversal cone has a base that is itself a manifold with smooth edge. The resulting operator-valued amplitude functions on the second order edge are formulated purely in terms of Mellin symbols taking values in the edge algebra over . In this respect our result is formally analogous to a quantisation rule of (Osaka J. Math. 37:221-260, 2000) for the simpler case of edge-degenerate symbols that corresponds to the singularity order 1. However, from the singularity order 2 on there appear new substantial difficulties for the first time, partly caused by the edge singularities of the cone over that tend to infinity.
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
Two recent works have adapted the Kalman-Bucy filter into an ensemble setting. In the first formulation, the ensemble of perturbations is updated by the solution of an ordinary differential equation (ODE) in pseudo-time, while the mean is updated as in the standard Kalman filter. In the second formulation, the full ensemble is updated in the analysis step as the solution of a single set of ODEs in pseudo-time. Neither formulation requires matrix inversion, except for the observation error covariance, which is frequently diagonal.
We analyse the behaviour of the ODEs involved in these formulations. We demonstrate that they stiffen for large magnitudes of the ratio of background-error to observation-error variance, and that the integration scheme proposed in both formulations can then fail. We propose a numerical integration scheme that is both stable and computationally inexpensive. We develop transform-based alternatives for these Bucy-type approaches so that the integrations are computed in ensemble space, where the variables are weights (of dimension equal to the ensemble size) rather than model variables.
Finally, the performance of our ensemble transform Kalman-Bucy implementations is evaluated using three models: the 3-variable Lorenz 1963 model, the 40-variable Lorenz 1996 model, and a medium complexity atmospheric general circulation model known as SPEEDY. The results from all three models are encouraging and warrant further exploration of these assimilation techniques.
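A toy sketch of the pseudo-time perturbation update described above (first formulation), assuming an ODE of the form dA/ds = -1/2 P(s) H^T R^{-1} H A for the perturbation matrix A on s in [0, 1], integrated here with plain forward Euler; all dimensions and numbers are illustrative, not from the paper:

```python
import numpy as np

# Pseudo-time ODE update for ensemble perturbations in a Kalman-Bucy-type
# analysis step. With forward Euler and many small steps this is stable for
# moderate variance ratios; the abstract's point is that the ODE stiffens
# when background variance greatly exceeds observation error variance.

def analysis_perturbations(A, H, Rinv, steps=100):
    """A: n x N matrix of zero-mean ensemble perturbations."""
    A = A.copy()
    _, N = A.shape
    ds = 1.0 / steps
    for _ in range(steps):
        P = A @ A.T / (N - 1)                      # ensemble background covariance
        A = A - 0.5 * ds * (P @ H.T @ Rinv @ (H @ A))
    return A

rng = np.random.default_rng(0)
A0 = rng.standard_normal((2, 10))
A0 -= A0.mean(axis=1, keepdims=True)               # perturbations have zero mean
H = np.eye(2)                                      # identity observation operator
A1 = analysis_perturbations(A0, H, np.eye(2) / 0.5)  # R = 0.5 I (made-up value)
```

The update only ever multiplies by the (here diagonal) inverse observation error covariance, mirroring the "no matrix inversion" property noted above.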
The inverse problem of determining the flow at the Earth's core-mantle boundary from an outer core magnetic field and secular variation model has been investigated through a Bayesian formalism. To circumvent the issue arising from the truncated nature of the available fields, we combined two modeling methods. In the first step, we applied a filter to the magnetic field to isolate its large scales by reducing the energy contained in its small scales; we then derived the dynamical equation, referred to as the filtered frozen flux equation, describing the spatiotemporal evolution of the filtered part of the field. In the second step, we proposed a statistical parametrization of the filtered magnetic field in order to account for both its remaining unresolved scales and its large-scale uncertainties. These two modeling techniques were then included in the Bayesian formulation of the inverse problem. To explore the complex posterior distribution of the velocity field resulting from this development, we numerically implemented an algorithm based on Markov chain Monte Carlo methods. After evaluating our approach on synthetic data and comparing it to previously introduced methods, we applied it to a magnetic field model derived from satellite data for the single epoch 2005.0. We could confirm the existence of specific features already observed in previous studies. In particular, we retrieved the planetary-scale eccentric gyre characteristic of flows evaluated under the compressible quasi-geostrophy assumption, although this hypothesis was not considered in our study. In addition, through the sampling of the posterior distribution of the velocity field, we could evaluate the reliability, at any spatial location and at any scale, of the flow we calculated. The flow uncertainties we determined are nevertheless conditioned by the choice of prior constraints we applied to the velocity field.
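The posterior sampling step mentioned above can be illustrated by a generic Metropolis-Hastings random walk; the 1-D Gaussian target and step size below are stand-ins, not the paper's velocity-field posterior:

```python
import math
import random

# Generic random-walk Metropolis-Hastings sampler: propose a Gaussian step,
# accept with probability min(1, posterior ratio), otherwise keep the current
# state. Target, step size and seed are illustrative choices.

def metropolis(log_post, x0, n, step=0.5, seed=1):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop                     # accept the proposal
        samples.append(x)                # rejected steps repeat the state
    return samples

# Standard normal target: log density -v^2/2 up to a constant.
samples = metropolis(lambda v: -0.5 * v * v, x0=0.0, n=5000)
```

Sampling the posterior rather than computing a single optimum is what makes the per-location, per-scale reliability assessment described above possible.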
We discuss the solution theory of operators of the form ∇_X + A, acting on smooth sections of a vector bundle with connection ∇ over a manifold M, where X is a vector field having a critical point with positive linearization at some point p ∈ M. As an operator on a suitable space of smooth sections Γ^∞(U, V), it fulfills a Fredholm alternative, and the same is true for the adjoint operator. Furthermore, we show that the solutions depend smoothly on the data ∇, X and A.
We describe an iterative method to combine seismicity forecasts. With this method, we produce the next generation of a starting forecast by incorporating predictive skill from one or more input forecasts. For a single iteration, we use the differential probability gain of an input forecast relative to the starting forecast. At each point in space and time, the rate in the next-generation forecast is the product of the starting rate and the local differential probability gain. The main advantage of this method is that it can produce high forecast rates using all types of numerical forecast models, even those that are not rate-based. Naturally, a limitation of this method is that the input forecast must have some information not already contained in the starting forecast. We illustrate this method using the Every Earthquake a Precursor According to Scale (EEPAS) and Early Aftershocks Statistics (EAST) models, which are currently being evaluated at the US testing center of the Collaboratory for the Study of Earthquake Predictability. During a testing period from July 2009 to December 2011 (with 19 target earthquakes), the combined model we produce has better predictive performance - in terms of Molchan diagrams and likelihood - than the starting model (EEPAS) and the input model (EAST). Many of the target earthquakes occur in regions where the combined model has high forecast rates. Most importantly, the rates in these regions are substantially higher than if we had simply averaged the models.
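The single-iteration rule described above (next-generation rate = starting rate × local differential probability gain) is simple enough to sketch; the rates and gain values below are invented for illustration, not EEPAS/EAST output:

```python
# One iteration of the multiplicative forecast combination: at each
# space-time cell, the next-generation rate is the product of the starting
# forecast's rate and the differential probability gain of the input
# forecast relative to the starting forecast at that cell.

def combine(starting_rates, gains):
    """Element-wise product of starting rates and local differential gains."""
    return [r * g for r, g in zip(starting_rates, gains)]

# Cells where the input forecast adds skill (gain > 1) get boosted rates;
# gain = 1 means the input forecast adds no information there.
combined = combine([0.2, 0.05, 0.1], [1.5, 1.0, 3.0])
```

Because only a gain ratio is needed, the input forecast does not have to be rate-based, which is the flexibility the abstract emphasizes; a plain average of the two models could not raise a cell's rate above both inputs the way the product does.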
The subject of this paper is solutions of an autoresonance equation. We look for a connection between the parameters of the solution bounded as t → -∞ and the parameters of two two-parameter families of solutions as t → ∞. One family consists of the solutions which are not captured into resonance, the other of the increasing solutions which are captured into resonance. In this way we describe the transition through the separatrix for equations with slowly varying parameters and obtain an estimate for the parameters, before the resonance, of those solutions which may be captured into autoresonance. (C) 2014 AIP Publishing LLC.
We investigate spatio-temporal properties of earthquake patterns in the San Jacinto fault zone (SJFZ), California, between Cajon Pass and the Superstition Hill Fault, using a long record of simulated seismicity constrained by available seismological and geological data. The model provides an effective realization of a large segmented strike-slip fault zone in a 3D elastic half-space, with heterogeneous distribution of static friction chosen to represent several clear step-overs at the surface. The simulated synthetic catalog reproduces well the basic statistical features of the instrumental seismicity recorded at the SJFZ area since 1981. The model also produces events larger than those included in the short instrumental record, consistent with paleo-earthquakes documented at sites along the SJFZ for the last 1,400 years. The general agreement between the synthetic and observed data allows us to address with the long-simulated seismicity questions related to large earthquakes and expected seismic hazard. The interaction between m ≥ 7 events on different sections of the SJFZ is found to be close to random. The hazard associated with m ≥ 7 events on the SJFZ increases significantly if the long record of simulated seismicity is taken into account. The model simulations indicate that the recent increased number of observed intermediate SJFZ earthquakes is a robust statistical feature heralding the occurrence of m ≥ 7 earthquakes. The hypocenters of the m ≥ 5 events in the simulation results move progressively towards the hypocenter of the upcoming m ≥ 7 earthquake.
In the limit ε → 0 we analyse the generators H of families of reversible jump processes in ℝ^d associated with a class of symmetric non-local Dirichlet forms and show exponential decay of the eigenfunctions. The exponential rate function is a Finsler distance, given as the solution of a certain eikonal equation. Fine results are sensitive to the rate function being C² or just Lipschitz. Our estimates are analogous to the semiclassical Agmon estimates for differential operators of second order. They generalize and strengthen previous results on the lattice ℤ^d. Although our final interest is in the (sub)stochastic jump process, technically this is a pure analysis paper, inspired by PDE techniques.
The injection of fluids is a well-known origin for the triggering of earthquake sequences. The growing number of projects related to enhanced geothermal systems, fracking, and others has led to the question of which maximum earthquake magnitude can be expected as a consequence of fluid injection. This question is addressed from the perspective of statistical analysis. Using basic empirical laws of earthquake statistics, we estimate the magnitude M_T of the maximum expected earthquake in a predefined future time window T_f. A case study of the fluid injection site at Paradox Valley, Colorado, demonstrates that the magnitude m = 4.3 of the largest observed earthquake on 27 May 2000 lies well within the expectation from past seismicity, without adjusting any parameters. Conversely, for a given maximum tolerable earthquake at an injection site, we can constrain the corresponding amount of injected fluids that must not be exceeded within predefined confidence bounds.
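A back-of-the-envelope version of the kind of estimate described above, assuming pure Gutenberg-Richter statistics: if Lam events above a reference magnitude m0 are expected in the future window, the magnitude exceeded on average once is M_T = m0 + log10(Lam) / b. The numbers below are invented, not the Paradox Valley values:

```python
import math

# Under the Gutenberg-Richter law, the expected number of events with
# magnitude >= m in the window is N(m) = Lam * 10**(-b * (m - m0)).
# Setting N(M_T) = 1 and solving gives the magnitude exceeded on average
# once in the window. All inputs here are illustrative.

def max_expected_magnitude(rate_above_m0, m0, b):
    """Magnitude with expected exceedance count of one in the window."""
    return m0 + math.log10(rate_above_m0) / b

# E.g. 1000 expected events above magnitude 1.0 with b-value 1.0:
M_T = max_expected_magnitude(rate_above_m0=1000.0, m0=1.0, b=1.0)
```

Inverting the same relation for a tolerable M_T bounds the admissible event count, and hence (via an injection-seismicity relation) the injected fluid volume, in the spirit of the abstract's last sentence.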
We characterize maximal subsemigroups of the monoid T(X) of all transformations on the set X = ℕ of natural numbers containing a given subsemigroup W of T(X) such that T(X) is finitely generated over W. This paper contributes to the characterization of maximal subsemigroups of the monoid of all transformations on an infinite set.
Creation of topographic maps
(2014)
Location analyses are among the most common tasks when working with spatial data and geographic information systems. Automating the most frequently used procedures is therefore an important aspect of improving their usability. In this context, this project aims to design and implement a workflow providing some basic tools for a location analysis. For the implementation with jABC, the workflow was applied to the problem of finding a suitable location for placing an artificial reef. For this analysis, three parameters (bathymetry, slope and grain size of the ground material) were taken into account, processed, and visualized with the Generic Mapping Tools (GMT), which were integrated into the workflow as jETI-SIBs. The implemented workflow thereby showed that the approach of combining jABC with GMT results in a user-centric yet user-friendly tool with high-quality cartographic outputs.