510 Mathematics
We present a project combining lidar, photometer and particle counter data with a regularization software tool for a closure study of aerosol microphysical property retrieval. In a first step, only lidar data are used to retrieve the particle size distribution (PSD). In a second step, photometer data are added, which results in good consistency of the retrieved PSDs. Finally, the retrieved PSDs may be compared with the PSD measured by a particle counter. As an example, we use data taken in Ny-Ålesund, Svalbard.
Random walks are frequently used in randomized algorithms. We study a derandomized variant of a random walk on graphs called the rotor-router model. In this model, instead of distributing tokens randomly, each vertex serves its neighbors in a fixed deterministic order. For most setups, both processes behave in a remarkably similar way: Starting with the same initial configuration, the number of tokens in the rotor-router model deviates only slightly from the expected number of tokens on the corresponding vertex in the random walk model. The maximal difference over all vertices and all times is called the single vertex discrepancy. Cooper and Spencer [Combin. Probab. Comput., 15 (2006), pp. 815-822] showed that on Z^d, the single vertex discrepancy is only a constant c_d. Other authors also determined the precise value of c_d for d = 1, 2. All of these results, however, assume that initially all tokens are placed on one partition of the bipartite graph Z^d. We show that this assumption is crucial by proving that, otherwise, the single vertex discrepancy can become arbitrarily large. For all dimensions d >= 1 and arbitrary discrepancies l >= 0, we construct configurations that reach a discrepancy of at least l.
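The close tracking between the two processes can be illustrated with a toy simulation. The sketch below (a cycle stands in for Z^1, and the token count, graph size and number of rounds are all illustrative, not taken from the paper) runs the parallel rotor-router model next to the exact expected occupation numbers of the random walk and records the single vertex discrepancy:

```python
# Toy comparison of the rotor-router model with the expected random walk
# on a cycle of n vertices (a sketch; all parameters are illustrative).

def rotor_router_step(tokens, rotors, n):
    """Each vertex sends its tokens to its two neighbours in a fixed
    alternating order; the rotor state persists across rounds."""
    new = [0] * n
    for v in range(n):
        for _ in range(tokens[v]):
            if rotors[v] == 0:
                new[(v - 1) % n] += 1
            else:
                new[(v + 1) % n] += 1
            rotors[v] ^= 1  # advance the rotor: alternate left/right
    return new

def expected_step(exp, n):
    """Expected token counts when each token moves to a uniformly
    random neighbour."""
    return [0.5 * exp[(v - 1) % n] + 0.5 * exp[(v + 1) % n] for v in range(n)]

n = 8
tokens = [0] * n
tokens[0] = 16                        # all tokens start on a single vertex
exp = [float(t) for t in tokens]
rotors = [0] * n
discrepancy = 0.0
for _ in range(20):
    tokens = rotor_router_step(tokens, rotors, n)
    exp = expected_step(exp, n)
    discrepancy = max(discrepancy,
                      max(abs(t - e) for t, e in zip(tokens, exp)))
```

Both processes conserve the total number of tokens, and `discrepancy` records the maximal single-vertex deviation over all rounds.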
Concurrent observation technologies have made high-precision real-time data available in large quantities. Data assimilation (DA) is concerned with how to combine these data with physical models to produce accurate predictions. For spatio-temporal models, the ensemble Kalman filter with proper localisation techniques is considered a state-of-the-art DA methodology. This article proposes and investigates a localised ensemble Kalman-Bucy filter for nonlinear models with short-range interactions. We derive dimension-independent and component-wise error bounds and show that the long-time path-wise error has only logarithmic dependence on the time range. The theoretical results are verified through some simple numerical tests.
We consider a distributed learning approach to supervised learning for a large class of spectral regularization methods in a reproducing kernel Hilbert space (RKHS) framework. The data set of size n is partitioned into m = O(n^alpha), alpha < 1/2, disjoint subsamples. On each subsample, some spectral regularization method (belonging to a large class including, in particular, kernel ridge regression, L2-boosting and spectral cut-off) is applied. The regression function f is then estimated via simple averaging, leading to a substantial reduction in computation time. We show that minimax-optimal rates of convergence are preserved if m grows sufficiently slowly (corresponding to an upper bound for alpha) as n -> infinity, depending on the smoothness assumptions on f and the intrinsic dimensionality. In spirit, the analysis relies on a classical bias/stochastic error analysis.
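The divide-and-conquer idea (partition, fit on each subsample, average) can be sketched for the simplest member of the family: a scalar ridge-regularised least-squares fit. The model, sample sizes and regularisation parameter below are illustrative and not those of the paper:

```python
import random

# Divide-and-conquer sketch: split n samples into m disjoint subsamples,
# fit a ridge estimate on each, then average (all constants illustrative).
random.seed(2)
beta, n, m, lam = 2.0, 1200, 4, 1e-3

data = []
for _ in range(n):
    x = random.uniform(-1.0, 1.0)
    data.append((x, beta * x + 0.1 * random.gauss(0.0, 1.0)))

def ridge_fit(sub):
    """Closed-form ridge solution for the scalar model y = beta * x + noise."""
    sxx = sum(x * x for x, _ in sub)
    sxy = sum(x * y for x, y in sub)
    return sxy / (sxx + lam)

size = n // m
estimates = [ridge_fit(data[i * size:(i + 1) * size]) for i in range(m)]
beta_bar = sum(estimates) / m       # simple averaging across subsamples
```

Each subsample is processed independently (so the m fits parallelise trivially), and the averaged estimate `beta_bar` recovers the slope at essentially the full-sample accuracy.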
In this chapter, an overview is given of the systematic eradication of basic science foci in European universities over the last two decades. This has happened under the slogan of optimising university education to the needs and demands of society. It is pointed out that reliance on “market demands” brings with it long-term deficiencies in the maintenance of basic and advanced knowledge construction in societies, which is necessary for long-term future technological advances. University policies that claim to improve higher education towards more immediate efficiency may end up having the opposite effect, degrading its quality and its expected long-term positive impact on society.
Uniformly valid confidence intervals post model selection in regression can be constructed based on Post-Selection Inference (PoSI) constants. PoSI constants are minimal for orthogonal design matrices and, for generic design matrices, can be upper-bounded as a function of the sparsity of the set of models under consideration. In order to improve on these generic sparse upper bounds, we consider design matrices satisfying a Restricted Isometry Property (RIP) condition. We provide a new upper bound on the PoSI constant in this setting. This upper bound is an explicit function of the RIP constant of the design matrix, thereby giving an interpolation between the orthogonal setting and the generic sparse setting. We show that this upper bound is asymptotically optimal in many settings by constructing a matching lower bound.
We consider composite-composite testing problems for the expectation in the Gaussian sequence model where the null hypothesis corresponds to a closed convex subset C of R^d. We adopt a minimax point of view and our primary objective is to describe the smallest Euclidean distance between the null and alternative hypotheses such that there is a test with small total error probability. In particular, we focus on the dependence of this distance on the dimension d and variance 1/n giving rise to the minimax separation rate. In this paper we discuss lower and upper bounds on this rate for different smooth and non-smooth choices for C.
We consider truncated SVD (or spectral cut-off, projection) estimators for a prototypical statistical inverse problem in dimension D. Since calculating the singular value decomposition (SVD) only for the largest singular values is much less costly than the full SVD, our aim is to select a data-driven truncation level m̂ ∈ {1, ..., D} based only on the knowledge of the first m̂ singular values and vectors. We analyse in detail whether sequential early stopping rules of this type can preserve statistical optimality. Information-constrained lower bounds and matching upper bounds for a residual-based stopping rule are provided, which give a clear picture of the situations in which optimal sequential adaptation is feasible. Finally, a hybrid two-step approach is proposed which allows for classical oracle inequalities while considerably reducing numerical complexity.
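In the diagonal sequence-model caricature of such an inverse problem, a residual-based stopping rule is easy to state: increase the truncation level until the residual sum of squares drops to the expected noise level (the discrepancy principle). A minimal sketch, with the signal, noise level and dimension all illustrative:

```python
import random

# Residual-based early stopping for spectral cut-off in a diagonal
# sequence model Y_i = theta_i + delta * eps_i (all constants illustrative).
random.seed(0)
D = 100
delta = 0.05
theta = [1.0 / (i + 1) ** 2 for i in range(D)]          # smooth signal
Y = [t + delta * random.gauss(0.0, 1.0) for t in theta]  # noisy observations

kappa = D * delta ** 2          # discrepancy principle: expected noise level
residual = sum(y * y for y in Y)
m_hat = 0
while m_hat < D and residual > kappa:
    residual -= Y[m_hat] ** 2   # including coefficient m_hat shrinks the residual
    m_hat += 1

# keep the first m_hat coefficients, set the rest to zero
estimate = Y[:m_hat] + [0.0] * (D - m_hat)
risk = sum((e - t) ** 2 for e, t in zip(estimate, theta))
```

The rule is sequential: only the first m̂ coefficients are ever touched, mirroring the idea of computing only the leading singular values.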
Teachers' professional knowledge is one of the most important levers of school education. Its core components are content knowledge and pedagogical content knowledge, which are acquired mainly during university teacher education.
The present thesis aims to contribute to the continuous improvement and quality assurance of teacher education at the University of Potsdam and asks: What content knowledge and pedagogical content knowledge do pre-service mathematics teachers have after attending the courses Arithmetik und ihre Didaktik I and II? As an example, the pre-service teachers' knowledge of the rational numbers was examined, with a focus on understanding the density of fractions. Density is one of the most difficult concepts to acquire when learning fractions, demanding a conceptual change and the reorganisation of previously acquired notions. To answer the research question, a qualitative study tested 112 pre-service teachers in writing on their knowledge of the density of the rational numbers. To understand the students' thinking processes and to identify conceptual obstacles, qualitative interviews in the form of group discussions were additionally conducted. The data were analysed by computer-assisted qualitative content analysis.
A wide range of knowledge levels emerged. The results for pedagogical content knowledge lagged behind those for content knowledge. The students found it most difficult to contrast essential properties of the rational and the natural numbers at the metacognitive level. Alongside positive results, which speak for the effectiveness of the course design, various conceptual obstacles appeared. Deficits in content knowledge, such as an insufficient understanding of equivalent fractions or errors in expanding fractions, reveal inadequately developed basic notions of the rational numbers among the students. Difficulties with the pedagogical content tasks, such as formulating a child-appropriate explanation or representing the mathematical content pictorially, can be traced back to these deficits in content knowledge. In addition, limitations in the students' motivation and attribution of relevance became apparent.
The results lead to targeted suggestions for revising the course design. It is recommended to make various learning offerings, such as homework and weekly self-tests for individual monitoring of learning goals, mandatory for all course participants, and to address motivational aspects more strongly. In addition, an expansion of concrete exercises at the enactive level is recommended, in order to foster the necessary basic notions of the rational numbers and thus to counter conceptual obstacles in a targeted way.
S-test results for the USGS and RELM forecasts. The differences between the simulated log-likelihoods and the observed log-likelihood are labelled on the horizontal axes, with scaling adjustments for the 40year.retro experiment. The horizontal lines represent the confidence intervals, at the 0.05 significance level, for each forecast and experiment. If this range contains a log-likelihood difference of zero, the forecasted log-likelihoods are consistent with the observed ones, and the forecast passes the S-test (denoted by thin lines). If this range does not contain zero, the forecast fails the S-test for that particular experiment (denoted by thick lines). Colours distinguish between experiments (see Table 2 for an explanation of experiment durations). Due to anomalously large likelihood differences, S-test results for Wiemer-Schorlemmer.ALM during the 10year.retro and 40year.retro experiments are not displayed. The range of log-likelihoods for the Holliday-et-al.PI forecast is lower than for the other forecasts due to relatively homogeneous forecasted seismicity rates and the use of a small fraction of the RELM testing region.
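The pass/fail rule described in this caption can be sketched as a small helper. The two-sided empirical-quantile convention below is an assumption made for illustration; the caption does not specify how the confidence range is constructed:

```python
def s_test_passes(sim_loglikes, obs_loglike, alpha=0.05):
    """Pass the S-test if the (1 - alpha) range of simulated-minus-observed
    log-likelihood differences contains zero (quantile convention assumed)."""
    diffs = sorted(s - obs_loglike for s in sim_loglikes)
    k = len(diffs)
    lo = diffs[int(alpha / 2 * k)]                    # lower empirical quantile
    hi = diffs[min(k - 1, int((1 - alpha / 2) * k))]  # upper empirical quantile
    return lo <= 0.0 <= hi
```

A forecast whose simulated log-likelihoods straddle the observed value passes; one whose simulations are systematically far from the observation fails.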
We prove that the Atiyah–Singer Dirac operator in L2 depends Riesz continuously on L∞ perturbations of complete metrics g on a smooth manifold. The Lipschitz bound for the map depends on bounds on Ricci curvature and its first derivatives as well as a lower bound on injectivity radius. Our proof uses harmonic analysis techniques related to Calderón’s first commutator and the Kato square root problem. We also show perturbation results for more general functions of general Dirac-type operators on vector bundles.
Understanding and reducing complex systems pharmacology models based on a novel input-response index
(2018)
A growing understanding of complex processes in biology has led to large-scale mechanistic models of pharmacologically relevant processes. These models are increasingly used to study the response of the system to a given input or stimulus, e.g., after drug administration. Understanding the input–response relationship, however, is often a challenging task due to the complexity of the interactions between its constituents as well as the size of the models. An approach that quantifies the importance of the different constituents for a given input–output relationship and allows one to reduce the dynamics to its essential features is therefore highly desirable. In this article, we present a novel state- and time-dependent quantity called the input–response index that quantifies the importance of state variables for a given input–response relationship at a particular time. It is based on the concept of time-bounded controllability and observability, and defined with respect to a reference dynamics. In application to the brown snake venom–fibrinogen (Fg) network, the input–response indices give insight into the coordinated action of specific coagulation factors and into those factors that contribute only little to the response. We demonstrate how the indices can be used to reduce large-scale models in a two-step procedure: (i) elimination of states whose dynamics have only minor impact on the input–response relationship, and (ii) proper lumping of the remaining (lower order) model. In application to the brown snake venom–fibrinogen network, this resulted in a reduction from 62 to 8 state variables in the first step, and a further reduction to 5 state variables in the second step. We further illustrate that the sequence, in which a recursive algorithm eliminates and/or lumps state variables, has an impact on the final reduced model.
The input–response indices are particularly suited to determine an informed sequence, since they are based on the dynamics of the original system. In summary, the novel measure of importance provides a powerful tool for analysing the complex dynamics of large-scale systems and a means for very efficient model order reduction of nonlinear systems.
The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.
SmB6 is predicted to be the first member of the intersection of topological insulators and Kondo insulators, strongly correlated materials in which the Fermi level lies in the gap of a many-body resonance that forms by hybridization between localized and itinerant states. While robust, surface-only conductivity at low temperature and the observation of surface states at the expected high-symmetry points appear to confirm this prediction, we find both surface states at the (100) surface to be topologically trivial. We find the Γ̄ state to appear Rashba split and explain the prominent X̄ state by a surface shift of the many-body resonance. We propose that the latter mechanism, which applies to several crystal terminations, can explain the unusual surface conductivity. While additional, as yet unobserved topological surface states cannot be excluded, our results show that a firm connection between the two material classes is still outstanding.
We give a new and very short proof of a theorem of Greiner asserting that a positive and contractive C_0-semigroup on an L^p-space is strongly convergent in case it has a strictly positive fixed point and contains an integral operator. Our proof is a streamlined version of a much more general approach to the asymptotic theory of positive semigroups developed recently by the authors. Under the assumptions of Greiner's theorem, this approach becomes particularly elegant and simple. We also give an outlook on several generalisations of this result.
This paper is concerned with the filtering problem in continuous time. Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter, which provides an exact solution for the linear Gaussian problem; (ii) the ensemble Kalman-Bucy filter (EnKBF), which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems; and (iii) the feedback particle filter (FPF), which represents an extension of the EnKBF and furthermore provides for a consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain-times-error formula used to implement the update step (to account for conditioning due to the observations) in the filter. In contrast to the commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid the resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides for error correction, potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the issue of nonuniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. Performance of this and other algorithms is illustrated for a numerical example.
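The gain-times-error structure common to these filters can be made concrete in a minimal Euler discretisation of a deterministic-update ensemble Kalman-Bucy filter for a scalar linear model. This is a generic textbook-style sketch, not the paper's algorithm, and every parameter below is illustrative:

```python
import math
import random

# Euler discretisation sketch of an ensemble Kalman-Bucy filter for a
# scalar linear signal dX = a X dt + sig dW, observed via dZ = h X dt + sqrt(r) dV.
random.seed(1)
dt, T, M = 0.01, 5.0, 50            # step size, horizon, ensemble size
a, sig, h, r = -0.5, 0.5, 1.0, 0.1  # drift, signal noise, obs. map, obs. noise

x_true = 1.0
ens = [random.gauss(0.0, 1.0) for _ in range(M)]
for _ in range(int(T / dt)):
    # propagate the true signal and generate the observation increment
    x_true += a * x_true * dt + sig * math.sqrt(dt) * random.gauss(0, 1)
    dZ = h * x_true * dt + math.sqrt(r * dt) * random.gauss(0, 1)
    # empirical mean and covariance of the ensemble
    m = sum(ens) / M
    P = sum((x - m) ** 2 for x in ens) / (M - 1)
    K = P * h / r                   # Kalman gain
    # propagate each member; the update is gain times innovation (error)
    ens = [x + a * x * dt + sig * math.sqrt(dt) * random.gauss(0, 1)
           + K * (dZ - h * (x + m) / 2.0 * dt)
           for x in ens]
err = abs(sum(ens) / M - x_true)
```

Note the update term: each particle is nudged by the gain K applied to the innovation, with the averaged prediction h(x + m)/2 in place of resampling.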
From monthly mean observatory data spanning 1957-2014, geomagnetic field secular variation values were calculated by annual differences. Estimates of the spherical harmonic Gauss coefficients of the core field secular variation were then derived by applying correlation-based modelling. Finally, a Fourier transform was applied to the time series of the Gauss coefficients. This process led to reliable temporal spectra of the Gauss coefficients up to spherical harmonic degree 5 or 6, and down to periods as short as 1 or 2 years depending on the coefficient. We observed that a k^(-2) slope, where k is the frequency, is an acceptable approximation for these spectra, with a possible exception for the dipole field. The monthly estimates of the core field secular variation at the observatory sites also show that large and rapid variations of the latter occur. This is an indication that geomagnetic jerks are frequent phenomena and that significant secular variation signals at short time scales - i.e., less than 2 years - could still be extracted from data to reveal an unexplored part of the core dynamics.
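The first processing step, forming secular-variation estimates as annual differences of monthly means, is simple to state. A sketch on a synthetic monthly series with a constant trend (the series and its units are illustrative):

```python
def annual_differences(monthly):
    """Lag-12 first differences of monthly mean field values approximate the
    secular variation dB/dt (in nT per year) while cancelling a 12-month
    periodic signal such as the annual variation."""
    return [monthly[i + 12] - monthly[i] for i in range(len(monthly) - 12)]

# synthetic monthly means with a constant trend of 0.5 nT per month
monthly = [0.5 * i for i in range(36)]
sv = annual_differences(monthly)    # constant series of 6.0 nT/yr
```

For a purely linear trend of 0.5 nT per month, every annual difference equals 6.0 nT per year, and each month of data yields one secular-variation estimate once a full year of later data is available.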
The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of hydrologic forecasting using data assimilation (DA). DA has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality depends on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state-estimation DA with no modifications to account for land cover changes. This work shows that systematic model errors resulting from changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time-varying model parameters, or revised offline/online bias estimation.