510 Mathematik
Document Type
- Article (249)
- Preprint (93)
- Doctoral Thesis (75)
- Postprint (29)
- Monograph/Edited Volume (10)
- Other (10)
- Master's Thesis (6)
- Part of a Book (5)
- Conference Proceeding (5)
- Review (3)
Is part of the Bibliography
- yes (489)
Keywords
- data assimilation (8)
- regularization (8)
- Bayesian inference (7)
- Dirac operator (6)
- Navier-Stokes equations (6)
- cluster expansion (6)
- discrepancy principle (6)
- index (6)
- Cauchy problem (5)
- Fredholm property (5)
Institute
- Institut für Mathematik (425)
- Institut für Physik und Astronomie (14)
- Mathematisch-Naturwissenschaftliche Fakultät (14)
- Extern (9)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (7)
- Institut für Biochemie und Biologie (6)
- Institut für Informatik und Computational Science (5)
- Department Psychologie (4)
- Department Grundschulpädagogik (3)
- Hasso-Plattner-Institut für Digital Engineering GmbH (3)
- Institut für Philosophie (3)
- Strukturbereich Kognitionswissenschaften (3)
- Historisches Institut (2)
- Institut für Geowissenschaften (2)
- Präsident | Vizepräsidenten (2)
- Fachgruppe Politik- & Verwaltungswissenschaft (1)
- Fachgruppe Volkswirtschaftslehre (1)
- Institut für Slavistik (1)
- Interdisziplinäres Zentrum für Dynamik komplexer Systeme (1)
- Juristische Fakultät (1)
- Wirtschaftswissenschaften (1)
For a given subcritical discrete Schrödinger operator H on a weighted infinite graph X, we construct a Hardy weight w which is optimal in the following sense. The operator H - λw is subcritical in X for all λ < 1, null-critical in X for λ = 1, and supercritical inside any neighborhood of infinity in X for any λ > 1. Our results rely on a criticality theory for Schrödinger operators on general weighted graphs.
For an arbitrary Euclidean field F we introduce a central extension (G(F), Φ) of SL(2, F) admitting a left-ordering, and study its algebraic properties. The elements of G(F) are order-preserving bijections of the convex hull of Q in F. If F = R, then G(F) is isomorphic to the classical universal covering group of the Lie group SL(2, R). Among other results, we show that G(F) is a perfect group which possesses a rank-1 cone of exceptional type. We also prove that its centre is an infinite cyclic group and investigate its normal subgroups.
We prove finiteness and diameter bounds for graphs having a positive Ricci curvature bound in the Bakry–Émery sense. Our first result, using only curvature and the maximal vertex degree, is sharp in the case of hypercubes. The second result depends on an additional dimension bound, but is independent of the vertex degree. In particular, the second result is the first Bonnet–Myers type theorem for unbounded graph Laplacians. Moreover, our results improve diameter bounds from Fathi and Shu (Bernoulli 24(1):672–698, 2018) and Horn et al. (J für die reine und angewandte Mathematik (Crelle's J), 2017, https://doi.org/10.1515/crelle-2017-0038) and solve a conjecture from Cushing et al. (Bakry–Émery curvature functions of graphs, 2016).
We complete the picture of how the asymptotic behavior of a dynamical system is reflected by properties of the associated Perron-Frobenius operator. Our main result states that strong convergence of the powers of the Perron-Frobenius operator is equivalent to setwise convergence of the underlying dynamics in the measure algebra. This situation is furthermore characterized by uniform mixing-like properties of the system.
ShapeRotator
(2018)
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
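The core of such an alignment step can be sketched in a few lines. The following is a minimal NumPy version of the Kabsch algorithm, a standard way to remove translation and rotation between two 3D landmark configurations; it illustrates the general idea only and is not the actual ShapeRotator implementation, and the function name `rigid_align` is invented here.

```python
import numpy as np

def rigid_align(A, B):
    """Align 3D landmark set B to A by an optimal rotation and translation
    (Kabsch algorithm), removing the effect of random translation/rotation."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)   # centroids
    A0, B0 = A - cA, B - cB                   # center both configurations
    H = B0.T @ A0                             # 3x3 cross-covariance matrix
    U, s, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                        # optimal rotation
    return B0 @ R.T + cA                      # B expressed in A's frame
```

Applied to landmark matrices of shape (n_landmarks, 3), this removes the nuisance rigid motion so that only shape variation remains for downstream morphometric analysis.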
We analyze a general class of self-adjoint difference operators H_ε = T_ε + V_ε on ℓ²((εZ)^d), where V_ε is a multi-well potential and ε is a small parameter. We give a coherent review of our results on tunneling, up to new sharp results on the level of complete asymptotic expansions (see [30-35]). Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. The eigenvalue problem for the Hamiltonian H_ε is then treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by H_ε, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows by microlocal techniques that the first n eigenvalues of H_ε converge to the first n eigenvalues of the direct sum of harmonic oscillators on R^d located at the several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB type for eigenfunctions associated with the low-lying eigenvalues of H_ε. These are obtained from eigenfunctions or quasimodes for the operator H_ε acting on L²(R^d), via restriction to the lattice (εZ)^d. Tunneling is then described by a certain interaction matrix, similar to the analysis for the Schrödinger operator (see [22]); the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ²-estimates for the difference of eigenfunctions of Dirichlet operators in neighborhoods of the different wells and the associated WKB expansions at the wells. In the last step, we derive full asymptotic expansions for the interaction between two "wells" (minima) of the potential energy, in particular for the discrete tunneling effect. Here we essentially use analysis on phase space, complexified in the momentum variable. These results are as sharp as the classical results for the Schrödinger operator in [22].
Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km(2)) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.
We establish essential steps of an iterative approach to operator algebras, ellipticity and Fredholm property on stratified spaces with singularities of second order. We cover, in particular, corner-degenerate differential operators. Our constructions are focused on the case where no additional conditions of trace and potential type are posed, but this case works well and will be considered in a forthcoming paper as a conclusion of the present calculus.
Earthquake rates are driven by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic processes. Although the origin of the first two sources is known, transient aseismic processes are more difficult to detect. Knowledge of the associated changes in earthquake activity is nevertheless of great interest, because it might help identify natural aseismic deformation patterns such as slow-slip events, as well as the occurrence of induced seismicity related to human activities. For this goal, we develop a Bayesian approach to identify change-points in seismicity data automatically. Using the Bayes factor, we select a suitable model and estimate possible change-points, and we additionally use a likelihood-ratio test to quantify the significance of the change in intensity. The approach is extended to spatiotemporal data to detect the area in which the changes occur. The method is first applied to synthetic data, demonstrating its capability to detect real change-points. Finally, we apply the approach to observational data from Oklahoma and observe statistically significant changes of seismicity in space and time.
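The Bayes-factor idea for a single change-point in binned event counts can be sketched as follows, assuming a conjugate Gamma prior on Poisson rates. Selecting the change-point by the maximum split marginal is a simplification (a full treatment would average over candidate change-points, as the paper's model selection does), and the function names are invented here.

```python
import numpy as np
from math import lgamma, log

def log_marginal(counts, a=1.0, b=1.0):
    """Log marginal likelihood of i.i.d. Poisson counts under a
    Gamma(a, b) prior on the rate (Gamma-Poisson conjugacy)."""
    n, s = len(counts), int(counts.sum())
    return (a * log(b) - lgamma(a)
            + lgamma(a + s) - (a + s) * log(b + n)
            - sum(lgamma(int(c) + 1) for c in counts))

def detect_change(counts):
    """Return (best change-point index, log Bayes factor of the best
    two-regime split against the constant-rate model)."""
    lm0 = log_marginal(counts)
    k, lm1 = max(((k, log_marginal(counts[:k]) + log_marginal(counts[k:]))
                  for k in range(1, len(counts))), key=lambda t: t[1])
    return k, lm1 - lm0
```

A positive log Bayes factor favors the two-regime model; the returned index is the bin where the intensity change is most plausibly located.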
Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a clock change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that, under certain assumptions, the aperiodicity of this distribution can be related to the Gutenberg-Richter b value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be attributed to accessible seismological quantities. This allows us to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and large dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates, in a simple way, paleoearthquake sequences and instrumental data. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
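The Brownian passage time (inverse Gaussian) density with mean mu and aperiodicity alpha, plus a simple grid-search likelihood fit of the mean recurrence interval with alpha held fixed (as when alpha is tied to the b value), might be sketched like this; the Bayesian estimation in the paper is more elaborate, and the function names are invented here.

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian passage time (inverse Gaussian) density with mean mu
    and aperiodicity (coefficient of variation) alpha."""
    return np.sqrt(mu / (2 * np.pi * alpha**2 * t**3)) * \
        np.exp(-(t - mu)**2 / (2 * mu * alpha**2 * t))

def fit_mean(intervals, alpha, grid):
    """Grid-search maximum-likelihood estimate of the mean recurrence
    interval, with alpha fixed (e.g. derived from the b value)."""
    ll = [np.sum(np.log(bpt_pdf(intervals, mu, alpha))) for mu in grid]
    return grid[int(np.argmax(ll))]
```

Fixing alpha from instrumental seismicity leaves only one free parameter, which is what shrinks the uncertainty for short paleoearthquake sequences.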
The important role that metacognition plays as a predictor of students' mathematical learning and mathematical problem-solving has been extensively documented. However, only recently has attention turned to the primary grades, and more research is needed at this level. The goals of this paper are threefold: (1) to present a metacognitive framework for mathematics problem-solving, (2) to describe the multi-method interview approach developed to study students' mathematical metacognition, and (3) to empirically evaluate the utility of the model and the adaptation of the approach in the context of grade 2 and grade 4 mathematics problem-solving. The results are discussed not only with regard to the further development of the adapted multi-method interview approach, but also with regard to their theoretical and practical implications.
This article assesses the distance between the laws of stochastic differential equations with multiplicative Lévy noise on path space in terms of their characteristics. The notion of transportation distance on the set of Lévy kernels introduced by Kosenkova and Kulik yields a natural and statistically tractable upper bound on the noise sensitivity. This extends recent results for the additive case, in terms of coupling distances, to the multiplicative case. The strength of this notion is shown in a statistical implementation for simulations and in the example of a benchmark time series in paleoclimate.
Frühe mathematische Bildung
(2018)
This contribution presents current research trends in early mathematics education in the context of recently formulated target dimensions for early mathematics education (see Benz et al., 2017). It addresses play-based intervention measures, competencies in the area of "space and shape", the influence of language-related parameters on the development of mathematical competencies, and the mathematics-related competencies of early-childhood educators. In addition, the results of a recent field study on fostering early mathematical competencies (see Dillon, Kannan, Dean, Spelke & Duflo, 2017) are presented. Finally, the development and implementation of connectable educational concepts is discussed as one of the central challenges for future research and educational efforts.
We consider the problem of low-rank matrix recovery in a stochastically noisy high-dimensional setting. We propose a new estimator for the low-rank matrix, based on the iterative hard thresholding method, which is computationally efficient and simple. We prove that our estimator is optimal in terms of the Frobenius risk and in terms of the entry-wise risk uniformly over any change of orthonormal basis, allowing us to provide the limiting distribution of the estimator. When the design is Gaussian, we prove that the entry-wise bias of the limiting distribution of the estimator is small, which is of interest for constructing tests and confidence sets for low-dimensional subsets of entries of the low-rank matrix.
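A bare-bones version of the iterative hard thresholding idea (a gradient step on the least-squares loss followed by a rank-r projection) looks roughly as follows; the step size, iteration count, and vectorized design matrix are illustrative assumptions of this sketch, not the paper's exact estimator.

```python
import numpy as np

def rank_r_project(M, r):
    """Project a matrix onto the set of rank-<= r matrices (truncated SVD)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def iht_lowrank(y, A, shape, r, step, n_iter=500):
    """Iterative hard thresholding for y ≈ A @ vec(M) with rank(M) <= r:
    alternate a gradient step on 0.5*||y - A vec(M)||^2 with a
    rank-r projection (singular value hard thresholding)."""
    M = np.zeros(shape)
    for _ in range(n_iter):
        grad = A.T @ (A @ M.ravel() - y)   # gradient of the least-squares loss
        M = rank_r_project(M - step * grad.reshape(shape), r)
    return M
```

Each iteration costs one pair of matrix-vector products plus a small SVD, which is what makes the approach computationally light compared with nuclear-norm programs.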
Bienaymé-Galton-Watson processes can be used to study particular evolving populations. These populations consist of individuals that reproduce identically, randomly, autonomously, and independently of one another, and that each live for only one generation. The n-th generation arises as a random sum over the individuals of the (n-1)-th generation. The relevance of these processes rests on their history and on their significance inside and outside mathematics. The history of the Bienaymé-Galton-Watson processes is traced through the development of the concept up to the present day, naming the scientists from various disciplines who contributed insights to the topic and applied the concept in their fields; this establishes the extra-mathematical significance. The intra-mathematical importance follows from the concept of branching processes, which goes back to the Bienaymé-Galton-Watson processes; branching processes are among the most expressive models for describing population growth. Their current relevance arises, moreover, from the applicability of branching processes and Bienaymé-Galton-Watson processes in epidemiology: the Ebola and corona pandemics are presented as fields of application, where the processes serve as decision support for policy and allow statements about the effects of countermeasures. Alongside the processes themselves, the conditional expectation for discrete random variables, the probability generating function, and the random sum are introduced; these concepts simplify the description of the processes and thus form the basis of the subsequent analysis. The required and further properties of the underlying topics and of the processes are stated and proved.
The chapter culminates in the proof of the criticality theorem, which yields a statement about the extinction of the process in the various cases and thus about the extinction probability. The cases are distinguished by the expected number of offspring of an individual: a process with an expected offspring number at most one dies out with certainty, whereas for an expected number greater than one the population need not die out. Individual examples are then discussed, such as the linear fractional case, the population of fibroblasts (connective tissue cells) of mice, and the question that originally motivated the processes. These are analyzed with the results obtained, and selected random dynamics are simulated in the following chapter. The simulations are carried out by a program written in Python and implemented using the inversion method. They illustrate, by example, the developments in the different criticality cases of the processes, and the frequencies of the individual population sizes are presented as histograms. The difference between the cases is thereby confirmed, and the applicability of Bienaymé-Galton-Watson processes to more complex problems becomes apparent. The histograms support the statement that each individual population size occurs only finitely often; this statement was raised by Galton and is used in the extinction-explosion dichotomy. The presented findings on the topic and the treatment of the concept conclude with a didactic analysis, which takes into account the fundamental ideas, the fundamental ideas of stochastics, and the guiding principle of "data and chance".
It turns out that, depending on the perspective chosen, using Bienaymé-Galton-Watson processes in school is plausible and can benefit students. As an example, the framework curriculum of Berlin and Brandenburg is analyzed and compared with the core curriculum of North Rhine-Westphalia. The design of the Berlin-Brandenburg curriculum does not support the conclusion that Bienaymé-Galton-Watson processes should be used: the underlying guiding principle turns out not to be fully compatible with some of the fundamental ideas of stochastics. A modification of the curriculum towards a stronger orientation on the fundamental ideas would therefore make the use of the processes possible. This claim is supported by considering a North Rhine-Westphalian teaching unit for stochastic processes and transferring it to the Bienaymé-Galton-Watson processes. In addition, a concept map and a networking pentagraph ("Vernetzungspentagraph") following von der Bank are designed to emphasize this aspect.
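The simulation step described above, a Bienaymé-Galton-Watson path generated with the inversion method, can be sketched in Python; this is a minimal reconstruction under the stated assumptions, not the thesis code, and the function names are invented here.

```python
import numpy as np

def sample_offspring(u, pmf):
    """Inversion method: map a uniform draw u in [0, 1) to an offspring
    count with the given probability mass function (p_0, p_1, ...)."""
    cdf = np.cumsum(pmf)
    return int(np.searchsorted(cdf, u, side="right"))

def simulate_bgw(pmf, generations, z0=1, rng=None):
    """Simulate one path (Z_0, ..., Z_n) of a Bienaymé-Galton-Watson
    process: each generation is a random sum of offspring counts."""
    rng = rng or np.random.default_rng()
    path = [z0]
    for _ in range(generations):
        z = sum(sample_offspring(u, pmf) for u in rng.uniform(size=path[-1]))
        path.append(int(z))   # Z = 0 is absorbing: the empty sum stays 0
    return path
```

With a subcritical offspring law (mean at most one) almost every simulated path hits zero, illustrating the extinction case of the criticality theorem; repeating the simulation and histogramming final sizes reproduces the figures the thesis describes.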
We consider a statistical inverse learning (also called inverse regression) problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent of n but also in the explicit dependence of the constant factor on the variance of the noise and the radius of the source condition set.
We study travelling chimera states in a ring of nonlocally coupled heterogeneous (with Lorentzian distribution of natural frequencies) phase oscillators. These states are coherence-incoherence patterns moving in the lateral direction because of the broken reflection symmetry of the coupling topology. To explain the results of direct numerical simulations we consider the continuum limit of the system. In this case travelling chimera states correspond to smooth travelling wave solutions of some integro-differential equation, called the Ott–Antonsen equation, which describes the long time coarse-grained dynamics of the oscillators. Using the Lyapunov–Schmidt reduction technique we suggest a numerical approach for the continuation of these travelling waves. Moreover, we perform their linear stability analysis and show that travelling chimera states can lose their stability via fold and Hopf bifurcations. Some of the Hopf bifurcations turn out to be supercritical resulting in the observation of modulated travelling chimera states.
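A direct numerical simulation of such a ring, with Lorentzian natural frequencies drawn by inversion, might look as follows; the exponential nonlocal kernel, the phase lag, the parameter values, and the Euler scheme are assumptions of this sketch, not the paper's setup.

```python
import numpy as np

def simulate_ring(n=256, steps=2000, dt=0.05, kappa=4.0, gamma=0.01,
                  alpha=1.5, seed=0):
    """Euler integration of a ring of nonlocally coupled phase oscillators,
    dθ_k/dt = ω_k − (1/n) Σ_j G(x_k − x_j) sin(θ_k − θ_j + α),
    with Lorentzian natural frequencies ω_k (half-width gamma) and an
    exponential nonlocal coupling kernel G (a modelling assumption here)."""
    rng = np.random.default_rng(seed)
    x = 2 * np.pi * np.arange(n) / n
    # inversion method for Cauchy/Lorentzian frequency draws
    omega = gamma * np.tan(np.pi * (rng.uniform(size=n) - 0.5))
    dist = np.abs(np.angle(np.exp(1j * (x[:, None] - x[None, :]))))  # ring distance
    G = (kappa / 2) * np.exp(-kappa * dist)
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        coupling = (G * np.sin(theta[:, None] - theta[None, :] + alpha)).mean(axis=1)
        theta = theta + dt * (omega - coupling)
    return theta % (2 * np.pi)
```

Plotting the final phases against position typically reveals a coherent arc next to an incoherent region; tracking the local order parameter over time would show whether the pattern drifts, i.e. whether the chimera travels.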