TY - GEN A1 - Zöller, Gert A1 - Holschneider, Matthias T1 - Reply to “Comment on ‘The Maximum Possible and the Maximum Expected Earthquake Magnitude for Production‐Induced Earthquakes at the Gas Field in Groningen, The Netherlands’ by Gert Zöller and Matthias Holschneider” by Mathias Raschke T2 - Bulletin of the Seismological Society of America Y1 - 2018 U6 - https://doi.org/10.1785/0120170131 SN - 0037-1106 SN - 1943-3573 VL - 108 IS - 2 SP - 1029 EP - 1030 PB - Seismological Society of America CY - Albany ER - TY - JOUR A1 - Zöller, Gert A1 - Hainzl, Sebastian A1 - Tilmann, Frederik A1 - Woith, Heiko A1 - Dahm, Torsten T1 - Comment on: Wikelski, Martin; Müller, Uschi; Scocco, Paola; Catorci, Andrea; Desinov, Lev V.; Belyaev, Mikhail Y.; Keim, Daniel A.; Pohlmeier, Winfried; Fechteler, Gerhard; Mai, Martin P.: Potential short-term earthquake forecasting by farm animal monitoring. - Ethology. - 126 (2020), 9. - S. 931 - 941. - ISSN 0179-1613. - eISSN 1439-0310. - doi 10.1111/eth.13078 JF - Ethology N2 - Based on an analysis of continuous monitoring of farm animal behavior in the region of the 2016 M6.6 Norcia earthquake in Italy, Wikelski et al. (2020; Seismol Res Lett, 89, 2020, 1238) conclude that anomalous animal activity anticipates subsequent seismic activity and that this finding might help to design a "short-term earthquake forecasting method." We show that this result is based on an incomplete analysis and misleading interpretations. Applying state-of-the-art statistical methods, we demonstrate that the proposed anticipatory patterns cannot be distinguished from random patterns; consequently, the observed anomalies in animal activity have no forecasting power. 
KW - animal behavior KW - earthquake precursor KW - error diagram KW - prediction KW - randomness KW - statistics Y1 - 2020 U6 - https://doi.org/10.1111/eth.13105 SN - 0179-1613 SN - 1439-0310 VL - 127 IS - 3 SP - 302 EP - 306 PB - Wiley CY - Hoboken ER - TY - JOUR A1 - Zöller, Gert T1 - A statistical model for earthquake recurrence based on the assimilation of paleoseismicity, historic seismicity, and instrumental seismicity JF - Journal of geophysical research : Solid earth N2 - Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones in terms of a clock change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that under certain assumptions the aperiodicity of this distribution can be related to the Gutenberg-Richter b value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be related to accessible seismological quantities. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and large dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates paleoearthquake sequences and instrumental data in a simple way. 
I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process. KW - statistical seismology KW - paleoearthquakes KW - stochastic models KW - seismic hazard Y1 - 2018 U6 - https://doi.org/10.1029/2017JB015099 SN - 2169-9313 SN - 2169-9356 VL - 123 IS - 6 SP - 4906 EP - 4921 PB - American Geophysical Union CY - Washington ER - TY - THES A1 - Ziese, Ramona T1 - Geometric electroelasticity T1 - Geometrische Elektroelastizität N2 - In this work a differential geometric formulation of the theory of electroelasticity is developed which also includes thermal and magnetic influences. We study the motion of bodies consisting of an elastic material that are deformed by the influence of mechanical forces, heat and an external electromagnetic field. To this end, physical balance laws (conservation of mass, balance of momentum, angular momentum and energy) are established. These provide an equation that describes the motion of the body during the deformation. Here the body and the surrounding space are modeled as Riemannian manifolds, and we allow the body to have a lower dimension than the surrounding space. In this way one is not (as usual) restricted to the description of the deformation of three-dimensional bodies in a three-dimensional space, but one can also describe the deformation of membranes and deformations in a curved space. Moreover, we formulate so-called constitutive relations that encode the properties of the material used. Balance of energy, as a scalar law, can easily be formulated on a Riemannian manifold. The remaining balance laws are then obtained by demanding that balance of energy is invariant under the action of arbitrary diffeomorphisms on the surrounding space. 
This generalizes a result by Marsden and Hughes that pertains to bodies that have the same dimension as the surrounding space and does not allow for the presence of electromagnetic fields. Usually, in works on electroelasticity the entropy inequality is used to decide which otherwise allowed deformations are physically admissible and which are not. It is also employed to derive restrictions on the possible forms of constitutive relations describing the material. Unfortunately, the opinions on the physically correct statement of the entropy inequality diverge when electromagnetic fields are present. Moreover, it is unclear how to formulate the entropy inequality in the case of a membrane that is subjected to an electromagnetic field. Thus, we show that the use of the entropy inequality can be replaced by the demand that, for a given process, balance of energy is invariant under the action of arbitrary diffeomorphisms on the surrounding space and under linear rescalings of the temperature. On the one hand, this demand also yields the desired restrictions on the form of the constitutive relations. On the other hand, it needs much weaker assumptions than the arguments in the physics literature that employ the entropy inequality. Again, our result generalizes a theorem of Marsden and Hughes. This time, our result is, like theirs, only valid for bodies that have the same dimension as the surrounding space. N2 - In der vorliegenden Arbeit wird eine differentialgeometrische Formulierung der Elektroelastizitätstheorie entwickelt, die auch thermische und magnetische Einflüsse berücksichtigt. Hierbei wird die Bewegung von Körpern untersucht, die aus einem elastischen Material bestehen und sich durch mechanische Kräfte, Wärmezufuhr und den Einfluss eines äußeren elektromagnetischen Feldes verformen. 
Dazu werden physikalische Bilanzgleichungen (Massenerhaltung, Impuls-, Drehimpuls- und Energiebilanz) aufgestellt, um mit deren Hilfe eine Gleichung zu formulieren, die die Bewegung des Körpers während der Deformation beschreibt. Dabei werden sowohl der Körper als auch der umgebende Raum als Riemannsche Mannigfaltigkeiten modelliert, wobei zugelassen ist, dass der Körper eine geringere Dimension hat als der ihn umgebende Raum. Auf diese Weise kann man nicht nur - wie sonst üblich - die Deformation dreidimensionaler Körper im dreidimensionalen euklidischen Raum beschreiben, sondern auch die Deformation von Membranen und die Deformation innerhalb eines gekrümmten Raums. Weiterhin werden sogenannte konstitutive Gleichungen formuliert, die die Eigenschaften des verwendeten Materials kodieren. Die Energiebilanz ist eine skalare Gleichung und kann daher leicht auf Riemannschen Mannigfaltigkeiten formuliert werden. Es wird gezeigt, dass die Forderung der Invarianz der Energiebilanz unter der Wirkung von beliebigen Diffeomorphismen auf den umgebenden Raum bereits die restlichen Bilanzgleichungen impliziert. Das verallgemeinert ein Resultat von Marsden und Hughes, das nur für Körper anwendbar ist, die dieselbe Dimension wie der umgebende Raum haben und keine elektromagnetischen Felder berücksichtigt. Üblicherweise wird in Arbeiten über Elektroelastizität die Entropieungleichung verwendet, um zu entscheiden, welche Deformationen physikalisch zulässig sind und welche nicht. Sie wird außerdem verwendet, um Einschränkungen für die möglichen Formen von konstitutiven Gleichungen, die das Material beschreiben, herzuleiten. Leider gehen die Meinungen über die physikalisch korrekte Formulierung der Entropieungleichung auseinander, sobald elektromagnetische Felder beteiligt sind. Weiterhin ist unklar, wie die Entropieungleichung für den Fall einer Membran, die einem elektromagnetischen Feld ausgesetzt ist, formuliert werden muss. 
Daher zeigen wir, dass die Benutzung der Entropieungleichung ersetzt werden kann durch die Forderung, dass für einen gegebenen Prozess die Energiebilanz invariant ist unter der Wirkung eines beliebigen Diffeomorphismus auf den umgebenden Raum und der linearen Reskalierung der Temperatur. Zum einen liefert diese Forderung die gewünschten Einschränkungen für die Form der konstitutiven Gleichungen, zum anderen benötigt sie viel schwächere Annahmen als die übliche Argumentation mit der Entropieungleichung, die man in der Physikliteratur findet. Unser Resultat ist dabei wieder eine Verallgemeinerung eines Theorems von Marsden und Hughes, wobei es, so wie deren Resultat, nur für Körper gilt, die als offene Teilmengen des dreidimensionalen euklidischen Raums modelliert werden können. KW - Elastizität KW - Elektrodynamik KW - Mannigfaltigkeit KW - konstitutive Gleichungen KW - Bewegungsgleichung KW - elasticity KW - electrodynamics KW - manifold KW - constitutive relations KW - equation of motion Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-72504 ER - TY - BOOK A1 - Zhuchok, Anatolii V. T1 - Relatively free doppelsemigroups N2 - A doppelalgebra is an algebra defined on a vector space with two binary linear associative operations. Doppelalgebras play a prominent role in algebraic K-theory. We consider doppelsemigroups, that is, sets with two binary associative operations satisfying the axioms of a doppelalgebra. Doppelsemigroups are a generalization of semigroups and they are related to such algebraic structures as interassociative semigroups, restrictive bisemigroups, dimonoids, and trioids. In these lecture notes, numerous examples of doppelsemigroups and of strong doppelsemigroups are given. The independence of the axioms of a strong doppelsemigroup is established. A free product in the variety of doppelsemigroups is presented. 
We also construct a free (strong) doppelsemigroup, a free commutative (strong) doppelsemigroup, a free n-nilpotent (strong) doppelsemigroup, a free n-dinilpotent (strong) doppelsemigroup, and a free left n-dinilpotent doppelsemigroup. Moreover, the least commutative congruence, the least n-nilpotent congruence, and the least n-dinilpotent congruence on a free (strong) doppelsemigroup, as well as the least left n-dinilpotent congruence on a free doppelsemigroup, are characterized. The book addresses graduate students, post-graduate students, researchers in algebra, and interested readers. N2 - Eine Doppelalgebra ist eine auf einem Vektorraum definierte Algebra mit zwei binären linearen assoziativen Operationen. Doppelalgebren spielen eine herausragende Rolle in der algebraischen K-Theorie. Wir betrachten Doppelhalbgruppen, d. h. Mengen mit zwei binären assoziativen Operationen, welche die Axiome einer Doppelalgebra erfüllen. Doppelhalbgruppen sind Verallgemeinerungen von Halbgruppen und sie stehen in Beziehung zu solchen algebraischen Strukturen wie interassoziativen Halbgruppen, restriktiven Bihalbgruppen, Dimonoiden und Trioiden. In diesen Lecture Notes werden eine Vielzahl von Beispielen für Doppelhalbgruppen und strong Doppelhalbgruppen gegeben. Die Unabhängigkeit der Axiome für strong Doppelhalbgruppen wird nachgewiesen. Ein freies Produkt in der Varietät der Doppelhalbgruppen wird vorgestellt. Wir konstruieren auch eine freie (strong) Doppelhalbgruppe, eine freie kommutative (strong) Doppelhalbgruppe, eine freie n-nilpotente (strong) Doppelhalbgruppe, eine freie n-dinilpotente (strong) Doppelhalbgruppe und eine freie links n-dinilpotente Doppelhalbgruppe. Darüber hinaus werden die kleinste kommutative Kongruenz, die kleinste n-nilpotente Kongruenz und die kleinste n-dinilpotente Kongruenz auf der freien (strong) Doppelhalbgruppe sowie die kleinste links n-dinilpotente Kongruenz auf einer freien Doppelhalbgruppe charakterisiert. Das Buch richtet sich an Graduierte, Doktoranden, Forscher in Algebra und interessierte Leser. 
T3 - Lectures in pure and applied mathematics - 5 KW - doppelsemigroup KW - interassociativity KW - free algebra KW - semigroup KW - congruence Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-407719 SN - 978-3-86956-427-2 SN - 2199-4951 SN - 2199-496X IS - 5 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - INPR A1 - Zehmisch, René T1 - Über Waldidentitäten der Brownschen Bewegung N2 - Aus dem Inhalt: 1 Abraham Wald (1902-1950) 2 Einführung der Grundbegriffe. Einige bekannte technische Ergebnisse 2.1 Martingal und Doob-Ungleichung 2.2 Brownsche Bewegung und spezielle Martingale 2.3 Gleichgradige Integrierbarkeit von Prozessen 2.4 Gestopptes Martingal 2.5 Optionaler Stoppsatz von Doob 2.6 Lokales Martingal 2.7 Quadratische Variation 2.8 Die Dichte der ersten einseitigen Überschreitungszeit der Brownschen Bewegung 2.9 Waldidentitäten für die Überschreitungszeiten der Brownschen Bewegung 3 Erste Waldidentität 3.1 Burkholder-, Gundy- und Davis-Ungleichungen der gestoppten Brownschen Bewegung 3.2 Erste Waldidentität für die Brownsche Bewegung 3.3 Verfeinerungen der ersten Waldidentität 3.4 Stärkere Verfeinerung der ersten Waldidentität für die Brownsche Bewegung 3.5 Verfeinerung der ersten Waldidentität für spezielle Stoppzeiten der Brownschen Bewegung 3.6 Beispiele für lokale Martingale für die Verfeinerung der ersten Waldidentität 3.7 Überschreitungszeiten der Brownschen Bewegung für nichtlineare Schranken 4 Zweite Waldidentität 4.1 Zweite Waldidentität für die Brownsche Bewegung 4.2 Anwendungen der ersten und zweiten Waldidentität für die Brownsche Bewegung 5 Dritte Waldidentität 5.1 Dritte Waldidentität für die Brownsche Bewegung 5.2 Verfeinerung der dritten Waldidentität 5.3 Eine wichtige Voraussetzung für die Verfeinerung der dritten Waldidentität 5.4 Verfeinerung der dritten Waldidentität für spezielle Stoppzeiten der Brownschen Bewegung 6 Waldidentitäten im Mehrdimensionalen 6.1 Erste Waldidentität im 
Mehrdimensionalen 6.2 Zweite Waldidentität im Mehrdimensionalen 6.3 Dritte Waldidentität im Mehrdimensionalen 7 Appendix T3 - Mathematische Statistik und Wahrscheinlichkeitstheorie : Preprint - 2008, 04 Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-49469 ER - TY - BOOK A1 - Zass, Alexander A1 - Zagrebnov, Valentin A1 - Sukiasyan, Hayk A1 - Melkonyan, Tatev A1 - Rafler, Mathias A1 - Poghosyan, Suren A1 - Zessin, Hans A1 - Piatnitski, Andrey A1 - Zhizhina, Elena A1 - Pechersky, Eugeny A1 - Pirogov, Sergei A1 - Yambartsev, Anatoly A1 - Mazzonetto, Sara A1 - Lykov, Alexander A1 - Malyshev, Vadim A1 - Khachatryan, Linda A1 - Nahapetian, Boris A1 - Jursenas, Rytis A1 - Jansen, Sabine A1 - Tsagkarogiannis, Dimitrios A1 - Kuna, Tobias A1 - Kolesnikov, Leonid A1 - Hryniv, Ostap A1 - Wallace, Clare A1 - Houdebert, Pierre A1 - Figari, Rodolfo A1 - Teta, Alessandro A1 - Boldrighini, Carlo A1 - Frigio, Sandro A1 - Maponi, Pierluigi A1 - Pellegrinotti, Alessandro A1 - Sinai, Yakov G. ED - Roelly, Sylvie ED - Rafler, Mathias ED - Poghosyan, Suren T1 - Proceedings of the XI international conference stochastic and analytic methods in mathematical physics N2 - The XI international conference Stochastic and Analytic Methods in Mathematical Physics was held in Yerevan 2 – 7 September 2019 and was dedicated to the memory of the great mathematician Robert Adol’fovich Minlos, who passed away in January 2018. The present volume collects a large majority of the contributions presented at the conference on the following domains of contemporary interest: classical and quantum statistical physics, mathematical methods in quantum mechanics, stochastic analysis, applications of point processes in statistical mechanics. The authors are specialists from Armenia, Czech Republic, Denmark, France, Germany, Italy, Japan, Lithuania, Russia, UK and Uzbekistan. 
A particular aim of this volume is to offer young scientists basic material in order to inspire their future research in the wide fields presented here. T3 - Lectures in pure and applied mathematics - 6 KW - statistical mechanics KW - random point processes KW - stochastic analysis Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-459192 SN - 978-3-86956-485-2 SN - 2199-4951 SN - 2199-496X IS - 6 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - THES A1 - Zass, Alexander T1 - A multifaceted study of marked Gibbs point processes T1 - Facetten von markierten Gibbsschen Punktprozessen N2 - This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood--Salsburg equations, the Dobrushin contraction principle and disagreement percolation. We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight. We then present infinite-dimensional Langevin diffusions, that we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. 
We also study its correlation functions via cluster expansion techniques, and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which such a problem has exactly one solution. Finally, we further explore the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation. Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry. N2 - Diese Arbeit konzentriert sich auf die Untersuchung von markierten Gibbs-Punkt-Prozessen und stellt insbesondere einige Ergebnisse zu deren Existenz und Eindeutigkeit vor. Dabei werden Ideen und Techniken aus verschiedenen Bereichen der statistischen Mechanik verwendet: die Entropie-Methode aus der Theorie der großen Abweichungen, die Cluster-Expansion und die Kirkwood-Salsburg-Gleichungen, das Dobrushin-Kontraktionsprinzip und die Disagreement-Perkolation. Wir präsentieren zunächst ein Existenzergebnis für unendlich-volumige markierte Gibbs-Punkt-Prozesse. 
Genauer gesagt verwenden wir die sogenannte Entropie-Methode (und Werkzeuge aus der Theorie der großen Abweichungen), um markierte Gibbs-Punkt-Prozesse in R^d unter möglichst allgemeinen Annahmen zu konstruieren. Insbesondere gehören die zufälligen Markierungen zu einem allgemeinen normierten Raum und sind nicht beschränkt. Außerdem lassen wir Interaktionsfunktionale zu, die unbeschränkt sein können und deren Reichweite endlich, aber zufällig ist. Die Entropie-Methode beruht darauf, zu zeigen, dass eine Familie von endlich-volumigen Gibbs-Punkt-Prozessen zu sequentiell kompakten Entropie-Niveau-Mengen gehört und daher straff ist. Wir stellen dann unendlich-dimensionale Langevin-Diffusionen vor, die wir über eine Gibbssche Beschreibung in Wechselwirkung setzen. In dieser Umgebung sind wir in der Lage, das vorangehend vorgestellte allgemeine Ergebnis anzupassen, um die Existenz des zugehörigen unendlich-volumigen Maßes zu zeigen. Wir untersuchen auch seine Korrelationsfunktionen über Cluster-Expansions-Techniken und erhalten die Eindeutigkeit des Gibbs-Prozesses für alle inversen Temperaturen β und Aktivitäten z unterhalb einer bestimmten Schwelle. Diese Methode beruht darauf, zunächst zu zeigen, dass die Korrelationsfunktionen des Prozesses eine sogenannte Ruelle-Schranke erfüllen, um diese dann zur Lösung eines Fixpunktproblems in einem geeigneten Banach-Raum zu verwenden. Der Eindeutigkeitsbereich, den wir erhalten, besteht dann aus den Modellparametern z und β, für die ein solches Problem genau eine Lösung hat. Schließlich untersuchen wir die Frage nach der Eindeutigkeit von unendlich-volumigen Gibbs-Punkt-Prozessen auf R^d im unmarkierten Fall weiter. Im Zusammenhang mit repulsiven Wechselwirkungen mit einer Hartkernkomponente stellen wir einen neuen Ansatz zur Eindeutigkeit vor, indem wir das diskrete Dobrushin-Kriterium im kontinuierlichen Rahmen anwenden. 
Wir legen zunächst einen Diskretisierungsparameter a>0 fest und untersuchen dann das Verhalten des Eindeutigkeitsbereichs, wenn a gegen 0 geht. Mit dieser Technik sind wir in der Lage, explizite Schwellenwerte für die Parameter z und β zu erhalten, die wir dann mit bestehenden Ergebnissen aus den verschiedenen Methoden der Cluster-Expansion und der Disagreement-Perkolation vergleichen. In dieser Arbeit illustrieren wir unsere theoretischen Ergebnisse mit verschiedenen Beispielen sowohl aus der klassischen statistischen Mechanik als auch aus der stochastischen Geometrie. KW - marked Gibbs point processes KW - Langevin diffusions KW - Dobrushin criterion KW - Entropy method KW - Cluster expansion KW - Kirkwood--Salsburg equations KW - DLR equations KW - Markierte Gibbs-Punkt-Prozesse KW - Entropiemethode KW - Cluster-Expansion KW - DLR-Gleichungen KW - Dobrushin-Kriterium KW - Kirkwood-Salsburg-Gleichungen KW - Langevin-Diffusionen Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-512775 ER - TY - THES A1 - Zadorozhnyi, Oleksandr T1 - Contributions to the theoretical analysis of the algorithms with adversarial and dependent data N2 - In this work I present concentration inequalities of Bernstein type for the norms of Banach-valued random sums under a general functional weak-dependency assumption (the so-called $\mathcal{C}$-mixing). The latter is then used to prove, in the asymptotic framework, excess risk upper bounds for regularised Hilbert-valued statistical learning rules under the τ-mixing assumption on the underlying training sample. These results from the batch statistical setting are then supplemented with a regret analysis, over classes of Sobolev balls, of a kernel ridge regression type algorithm in the setting of online nonparametric regression with arbitrary data sequences. Here, in particular, the question of robustness of the kernel-based forecaster is investigated. 
Afterwards, in the framework of sequential learning, the multi-armed bandit problem under a $\mathcal{C}$-mixing assumption on the arms' outputs is considered, and a complete regret analysis of a version of the Improved UCB algorithm is given. Lastly, the probabilistic inequalities of the first part are extended to deviation inequalities (both of Azuma-Hoeffding and of Burkholder type) for the partial sums of real-valued weakly dependent random fields (under a projective-type dependence condition). KW - Machine learning KW - nonparametric regression KW - kernel methods KW - regularisation KW - concentration inequalities KW - learning rates KW - sequential learning KW - multi-armed bandits KW - Sobolev spaces Y1 - 2021 ER - TY - JOUR A1 - Wormell, Caroline L. A1 - Reich, Sebastian T1 - Spectral convergence of diffusion maps BT - Improved error bounds and an alternative normalization JF - SIAM journal on numerical analysis / Society for Industrial and Applied Mathematics N2 - Diffusion maps is a manifold learning algorithm widely used for dimensionality reduction. Using a sample from a distribution, it approximates the eigenvalues and eigenfunctions of associated Laplace-Beltrami operators. Theoretical bounds on the approximation error are, however, generally much weaker than the rates that are seen in practice. This paper uses new approaches to improve the error bounds in the model case where the distribution is supported on a hypertorus. For the data sampling (variance) component of the error we make spatially localized compact embedding estimates on certain Hardy spaces; we study the deterministic (bias) component as a perturbation of the Laplace-Beltrami operator's associated PDE and apply relevant spectral stability results. Using these approaches, we match long-standing pointwise error bounds for both the spectral data and the norm convergence of the operator discretization. We also introduce an alternative normalization for diffusion maps based on Sinkhorn weights. 
This normalization approximates a Langevin diffusion on the sample and yields a symmetric operator approximation. We prove that it has better convergence than the standard normalization on flat domains, and we present a highly efficient rigorous algorithm to compute the Sinkhorn weights. KW - diffusion maps KW - graph Laplacian KW - Sinkhorn problem KW - kernel methods Y1 - 2021 U6 - https://doi.org/10.1137/20M1344093 SN - 0036-1429 SN - 1095-7170 VL - 59 IS - 3 SP - 1687 EP - 1734 PB - Society for Industrial and Applied Mathematics CY - Philadelphia ER - TY - GEN A1 - Wiljes, Jana de A1 - Tong, Xin T. T1 - Analysis of a localised nonlinear ensemble Kalman Bucy filter with complete and accurate observations T2 - Postprints der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe N2 - Concurrent observation technologies have made high-precision real-time data available in large quantities. Data assimilation (DA) is concerned with how to combine this data with physical models to produce accurate predictions. For spatial-temporal models, the ensemble Kalman filter with proper localisation techniques is considered to be a state-of-the-art DA methodology. This article proposes and investigates a localised ensemble Kalman Bucy filter for nonlinear models with short-range interactions. We derive dimension-independent and component-wise error bounds and show that the long-time path-wise error has only logarithmic dependence on the time range. The theoretical results are verified through some simple numerical tests. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 1221 KW - data assimilation KW - stability and accuracy KW - dimension independent bound KW - localisation KW - high dimensional KW - filter KW - nonlinear Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-540417 SN - 1866-8372 VL - 33 IS - 9 SP - 4752 EP - 4782 PB - IOP Publ. 
CY - Bristol ER - TY - JOUR A1 - Wiljes, Jana de A1 - Tong, Xin T. T1 - Analysis of a localised nonlinear ensemble Kalman Bucy filter with complete and accurate observations JF - Nonlinearity N2 - Concurrent observation technologies have made high-precision real-time data available in large quantities. Data assimilation (DA) is concerned with how to combine this data with physical models to produce accurate predictions. For spatial-temporal models, the ensemble Kalman filter with proper localisation techniques is considered to be a state-of-the-art DA methodology. This article proposes and investigates a localised ensemble Kalman Bucy filter for nonlinear models with short-range interactions. We derive dimension-independent and component-wise error bounds and show that the long-time path-wise error has only logarithmic dependence on the time range. The theoretical results are verified through some simple numerical tests. KW - data assimilation KW - stability and accuracy KW - dimension independent bound KW - localisation KW - high dimensional KW - filter KW - nonlinear Y1 - 2020 U6 - https://doi.org/10.1088/1361-6544/ab8d14 SN - 0951-7715 SN - 1361-6544 VL - 33 IS - 9 SP - 4752 EP - 4782 PB - IOP Publ. CY - Bristol ER - TY - THES A1 - Wichitsa-nguan, Korakot T1 - Modifications and extensions of the logistic regression and Cox model T1 - Modifikationen und Erweiterungen des logistischen Regressionsmodells und des Cox-Modells N2 - In many statistical applications, the aim is to model the relationship between covariates and some outcomes. The choice of an appropriate model depends on the outcome and the research objectives: for example, linear models for continuous outcomes, logistic models for binary outcomes and the Cox model for time-to-event data. 
In epidemiological, medical, biological, societal and economic studies, logistic regression is widely used to describe the relationship between a response variable as binary outcome and explanatory variables as a set of covariates. However, epidemiologic cohort studies are quite expensive regarding data management, since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce the cost and time of data collection. Case-cohort sampling collects a small random sample from the entire cohort, which is called the subcohort. The advantage of this design is that the covariate and follow-up data are recorded only for the subcohort and all cases (all members of the cohort who develop the event of interest during the follow-up process). In this thesis, we investigate estimation in the logistic model for the case-cohort design. First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator. Then the MLE in logistic regression with a discrete covariate under the case-cohort design is studied. Here the approach of the binary covariate model is extended. By proving the asymptotic normality of the estimators, standard errors for the estimators can be derived. A simulation study demonstrates the estimation procedure for the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. 
Moreover, a comparison between theoretical values and simulation results for the asymptotic variance of the estimator is presented. Clearly, logistic regression is suitable when the binary outcome is available for all subjects and a fixed time interval is considered. In practice, however, the observations in clinical trials are frequently collected over different time periods, and subjects may drop out or relapse from other causes during follow-up. Hence, logistic regression is not appropriate for incomplete follow-up data; for example, an individual may drop out of the study before the end of data collection, or the event of interest may not have occurred for an individual within the duration of the study. Such observations are called censored observations. Survival analysis is needed to handle these problems; moreover, it takes the time to the occurrence of the event of interest into account. The Cox model, which can effectively handle censored data, has been widely used in survival analysis. Cox (1972) proposed a model focused on the hazard function. The Cox model assumes λ(t|x) = λ0(t) exp(β^T x), where λ0(t) is an unspecified baseline hazard at time t, X is the vector of covariates, and β is a p-dimensional vector of coefficients. In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β0 in the Cox model, where β0 denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix In(β) and extend the results of Andersen and Gill (1982) for the Cox model. In this way, conditions for the estimability of β0 are formulated. Under some regularity conditions, Σ is the inverse of the asymptotic variance matrix of the MPLE of β0 in the Cox model, and some properties of this asymptotic variance matrix are highlighted. 
Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and illustrated in examples. In a sensitivity analysis, the efficiency of given covariates is calculated. For neighborhoods of the exponential models, the efficiencies are then determined. It appears that for fixed parameters β0, the efficiencies do not change very much for different baseline hazard functions. Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed. Furthermore, the extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for estimating the coefficient function β(·) is described. Based on this estimator, we formulate a new test procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that the distribution of the properly standardized quadratic form of this d-dimensional vector under the null hypothesis tends to a Chi-squared distribution. Moreover, the limit statement remains true when the unknown ϑ0 is replaced by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting Chi-squared distribution. Finally, we propose a bootstrap version of this test. The bootstrap test is only defined for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and a special alternative. It gives quite good results for the chosen underlying model. References P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100-1120, 1982. D. R. Cox. Regression models and life-tables. J. 
Roy. Statist. Soc. Ser. B, 34:187-220, 1972. R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1-11, 1986. N2 - In vielen statistischen Anwendungen besteht die Aufgabe darin, die Beziehung zwischen Einflussgrößen und einer Zielgröße zu modellieren. Die Wahl eines geeigneten Modells hängt vom Typ der Zielgröße und vom Ziel der Untersuchung ab - während lineare Modelle für die Beschreibung des Zusammenhanges stetiger Outputs und Einflussgrößen genutzt werden, dienen logistische Regressionsmodelle zur Modellierung binärer Zielgrößen und das Cox-Modell zur Modellierung von Lebensdauer-Daten. In epidemiologischen, medizinischen, biologischen, sozialen und ökonomischen Studien wird oftmals die logistische Regression angewendet, um den Zusammenhang zwischen einer binären Zielgröße und den erklärenden Variablen, den Kovariaten, zu modellieren. In epidemiologischen Studien muss häufig eine große Anzahl von Individuen für eine lange Zeit beobachtet werden. Um hierbei Kosten zu reduzieren, wird ein "Case-Cohort-Design" angewendet. Hierbei werden die Einflussgrößen nur für die Individuen erfasst, für die das interessierende Ereignis eintritt, und für eine zufällig gewählte kleine Teilmenge von Individuen, die Subkohorte. In der vorliegenden Arbeit wird das Schätzen im logistischen Regressionsmodell unter Case-Cohort-Design betrachtet. Für den Fall, dass auch die Kovariate binär ist, wurde bereits von Prentice (1986) die asymptotische Normalität des Maximum-Likelihood-Schätzers für den Logarithmus des "odds ratio", einen Parameter, der den Effekt der Kovariate charakterisiert, angegeben. In dieser Arbeit wird über einen Maximum-Likelihood-Zugang ein Schätzer für die Varianz der Grenzverteilung hergeleitet, für den durch empirische Untersuchungen gezeigt wird, dass er dem von Prentice überlegen ist. 
Ausgehend von dem binären Kovariaten-Modell werden Maximum-Likelihood-Schätzer für logistische Regressionsmodelle mit diskreten Kovariaten unter Case-Cohort-Design hergeleitet. Die asymptotische Normalität wird gezeigt; darauf aufbauend können Formeln für die Standardfehler angegeben werden. Simulationsstudien ergänzen diesen Abschnitt. Sie zeigen den Einfluss des Umfanges der Subkohorte auf die Varianz der Schätzer. Logistische Regression ist geeignet, wenn man das interessierende Ereignis für alle Individuen beobachten kann und wenn man ein festes Zeitintervall betrachtet. Will man die Zeit bis zum Eintreten eines Ereignisses bei der Untersuchung der Wirkung der Kovariate berücksichtigen, so sind Lebensdauermodelle angemessen. Hierbei können auch zensierte Daten behandelt werden. Ein sehr häufig angewendetes Regressionsmodell ist das von Cox (1972) vorgeschlagene, bei dem die Hazardrate durch λ(t|x) = λ0(t) exp(β^Tx) definiert ist. Hierbei ist λ0(t) eine unspezifizierte Baseline-Hazardrate und X ist ein Kovariat-Vektor, β ist ein p-dimensionaler Koeffizientenvektor. Nachdem ein Überblick über das Schätzen und Testen im Cox-Modell und seinen Erweiterungen gegeben wird, werden Aussagen zur Schätzbarkeit des Parameters β durch die "partial-likelihood"-Methode hergeleitet. Grundlage hierzu sind neue Darstellungen der beobachteten Fisher-Information, die die Ergebnisse von Andersen und Gill (1982) erweitern. Unter Regularitätsbedingungen ist der Schätzer asymptotisch normal; die Inverse der Grenzmatrix der Fisher-Information ist die Varianzmatrix der Grenzverteilung. Bedingungen für die Nichtsingularität dieser Grenzmatrix führen zum Begriff der asymptotischen Schätzbarkeit, der in der vorliegenden Arbeit ausführlich untersucht wird. Darüber hinaus ist diese Matrix Grundlage für die Herleitung lokal optimaler Kovariate. In einer Sensitivitätsanalyse wird die Effizienz gewählter Kovariate berechnet. 
Die Berechnungen zeigen, dass die Baseline-Verteilung nur wenig Einfluss auf die Effizienz hat. Entscheidend ist die Wahl der Kovariate. Es werden einige Vorschläge für anwendbare optimale Kovariate und Berechnungsverfahren für das Auffinden optimaler Kovariate diskutiert. Eine Erweiterung des Cox-Modells besteht darin, zeitabhängige Koeffizienten zuzulassen. Da diese Koeffizientenfunktionen nicht näher spezifiziert sind, werden sie nichtparametrisch geschätzt. Eine mögliche Methode ist die "local-linear-partial-likelihood"-Methode, deren Eigenschaften beispielsweise in der Arbeit von Cai und Sun (2003) untersucht wurden. In der vorliegenden Arbeit werden Simulationen zu dieser Methode durchgeführt. Hauptaspekt ist das Testen der Koeffizientenfunktion. Getestet wird, ob diese Funktion eine bestimmte parametrische Form besitzt. Betrachtet wird der Score-Vektor, der von der "local-constant-partial-likelihood"-Funktion abgeleitet wird. Ausgehend von der asymptotischen Normalität dieses Vektors an verschiedenen Gitterpunkten kann gezeigt werden, dass die Verteilung der geeignet standardisierten quadratischen Form unter der Nullhypothese gegen eine Chi-Quadrat-Verteilung konvergiert. Die Eigenschaften des auf dieser Grenzverteilungsaussage aufbauenden Tests hängen nicht nur vom Stichprobenumfang, sondern auch vom verwendeten Glättungsparameter ab. Deshalb ist es sinnvoll, auch einen Bootstrap-Test zu betrachten. In der vorliegenden Arbeit wird ein Bootstrap-Test zum Testen der Hypothese, dass die Koeffizienten-Funktion konstant ist, d.h. dass das klassische Cox-Modell vorliegt, vorgeschlagen. Der Algorithmus wird angegeben. Simulationen zum Verhalten dieses Tests unter der Nullhypothese und einer speziellen Alternative werden durchgeführt. Literatur P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100-1120, 1982. Z. Cai and Y. Sun. 
Local linear estimation for time-dependent coefficients in Cox's regression models. Scand. J. Statist., 30(1):93-111, 2003. D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187-220, 1972. R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1-11, 1986. KW - survival analysis KW - Cox model KW - logistic regression analysis KW - logistische Regression KW - Case-Cohort-Design KW - Cox-Modell Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-90033 ER - TY - JOUR A1 - Wicha, Sebastian G. A1 - Huisinga, Wilhelm A1 - Kloft, Charlotte T1 - Translational pharmacometric evaluation of typical antibiotic broad-spectrum combination therapies against Staphylococcus aureus exploiting in vitro information JF - CPT: pharmacometrics & systems pharmacology N2 - Broad-spectrum antibiotic combination therapy is frequently applied due to increasing resistance development of infective pathogens. The objective of the present study was to evaluate two common empiric broad-spectrum combination therapies consisting of either linezolid (LZD) or vancomycin (VAN) combined with meropenem (MER) against Staphylococcus aureus (S. aureus) as the most frequent causative pathogen of severe infections. A semimechanistic pharmacokinetic-pharmacodynamic (PK-PD) model mimicking a simplified bacterial life-cycle of S. aureus was developed upon time-kill curve data to describe the effects of LZD, VAN, and MER alone and in dual combinations. The PK-PD model was successfully (i) evaluated with external data from two clinical S. aureus isolates and further drug combinations and (ii) challenged to predict common clinical PK-PD indices and breakpoints. Finally, clinical trial simulations were performed that revealed that the combination of VAN-MER might be favorable over LZD-MER due to an unfavorable antagonistic interaction between LZD and MER. 
Y1 - 2017 U6 - https://doi.org/10.1002/psp4.12197 SN - 2163-8306 VL - 6 SP - 512 EP - 522 PB - Wiley CY - Hoboken ER - TY - GEN A1 - Weisser, Karin A1 - Stübler, Sabine A1 - Matheis, Walter A1 - Huisinga, Wilhelm T1 - Towards toxicokinetic modelling of aluminium exposure from adjuvants in medicinal products T2 - Regulatory toxicology and pharmacology : official journal of the International Society for Regulatory Toxicology and Pharmacology N2 - As a potentially toxic agent on nervous system and bone, the safety of aluminium exposure from adjuvants in vaccines and subcutaneous immune therapy (SCIT) products has to be continuously reevaluated, especially regarding concomitant administrations. For this purpose, knowledge on absorption and disposition of aluminium in plasma and tissues is essential. Pharmacokinetic data after vaccination in humans, however, are not available, and for methodological and ethical reasons difficult to obtain. To overcome these limitations, we discuss the possibility of an in vitro-in silico approach combining a toxicokinetic model for aluminium disposition with biorelevant kinetic absorption parameters from adjuvants. We critically review available kinetic aluminium-26 data for model building and, on the basis of a reparameterized toxicokinetic model (Nolte et al., 2001), we identify main modelling gaps. The potential of in vitro dissolution experiments for the prediction of intramuscular absorption kinetics of aluminium after vaccination is explored. It becomes apparent that there is need for detailed in vitro dissolution and in vivo absorption data to establish an in vitro-in vivo correlation (IVIVC) for aluminium adjuvants. We conclude that a combination of new experimental data and further refinement of the Nolte model has the potential to fill a gap in aluminium risk assessment. 
KW - Aluminium KW - Aluminium adjuvants KW - Absorption kinetics KW - Toxicokinetic modelling KW - In vitro dissolution Y1 - 2017 U6 - https://doi.org/10.1016/j.yrtph.2017.02.018 SN - 0273-2300 SN - 1096-0295 VL - 88 SP - 310 EP - 321 PB - Elsevier CY - San Diego ER - TY - GEN A1 - Wallenta, Daniel T1 - A Lefschetz fixed point formula for elliptic quasicomplexes T2 - Postprints der Universität Potsdam : Mathematisch Naturwissenschaftliche Reihe N2 - In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 885 KW - elliptic complexes KW - Fredholm complexes KW - Lefschetz number Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-435471 SN - 1866-8372 IS - 885 SP - 577 EP - 587 ER - TY - THES A1 - Wallenta, Daniel T1 - Sequences of compact curvature T1 - Sequenzen mit kompakter Krümmung N2 - By perturbing the differential of a (cochain-)complex by "small" operators, one obtains what is referred to as quasicomplexes, i.e. a sequence whose curvature is not equal to zero in general. In this situation the cohomology is no longer defined. Note that it depends on the structure of the underlying spaces whether or not an operator is "small." This leads to a magical mix of perturbation and regularisation theory. In the general setting of Hilbert spaces compact operators are "small." 
In order to develop this theory, many elements of diverse mathematical disciplines, such as functional analysis, differential geometry, partial differential equations, homological algebra and topology, have to be combined. All essential basics are summarised in the first chapter of this thesis. This contains classical elements of index theory, such as Fredholm operators, elliptic pseudodifferential operators and characteristic classes. Moreover, we study the de Rham complex and introduce Sobolev spaces of arbitrary order as well as the concept of operator ideals. In the second chapter, the abstract theory of (Fredholm) quasicomplexes of Hilbert spaces will be developed. From the very beginning we will consider quasicomplexes with curvature in an ideal class. We introduce the Euler characteristic, the cone of a quasiendomorphism and the Lefschetz number. In particular, we generalise Euler's identity, which will allow us to develop the Lefschetz theory on nonseparable Hilbert spaces. Finally, in the third chapter the abstract theory will be applied to elliptic quasicomplexes with pseudodifferential operators of arbitrary order. We will show that the Atiyah-Singer index formula holds true for those objects and, as an example, we will compute the Euler characteristic of the connection quasicomplex. In addition to this, we introduce geometric quasiendomorphisms and prove a generalisation of the Lefschetz fixed point theorem of Atiyah and Bott. N2 - Die Theorie der Sequenzen mit kompakter Krümmung, sogenannter Quasikomplexe, ist eine Verallgemeinerung der Theorie der Fredholm Komplexe. Um ein Verständnis für (Quasi-)Komplexe zu gewinnen, müssen Inhalte aus verschiedenen Teilgebieten der Mathematik kombiniert werden. Alle hierfür wesentlichen Grundlagen sind im ersten Kapitel dieser Dissertation zusammengefasst. Dies betrifft unter anderem gewisse Elemente der Funktionalanalysis und der Differentialgeometrie, sowie die Theorie der klassischen Pseudodifferentialoperatoren. 
Im zweiten Kapitel wird anschließend die abstrakte Theorie der Quasikomplexe und zugehöriger Quasimorphismen im Kontext der Funktionalanalysis entwickelt. Dabei werden verschiedene Typen von Quasikomplexen und Quasimorphismen klassifiziert, deren Eigenschaften analysiert und Beispiele betrachtet. Ein zentraler Punkt hierbei ist die Lösung des Problems, für welche dieser Objekte sich eine besondere charakteristische Zahl, die sogenannte Lefschetz-Zahl, definieren lässt. Die dargestellten Resultate zeigen, dass die in dieser Arbeit gegebene Definition eine natürliche Erweiterung der klassischen Lefschetz-Zahl darstellt. Abschließend wird die entwickelte Theorie im dritten Kapitel auf elliptische Quasikomplexe von Pseudodifferentialoperatoren angewendet. Dabei werden insbesondere Verallgemeinerungen der berühmten Atiyah-Singer-Index-Formel und des Lefschetz-Fixpunkt-Theorems von Atiyah und Bott bewiesen. KW - Index Theorie KW - Fredholm Komplexe KW - Elliptische Komplexe KW - Index theory KW - Elliptic complexes KW - Fredholm complexes Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-87489 ER - TY - INPR A1 - Wallenta, Daniel T1 - A Lefschetz fixed point formula for elliptic quasicomplexes N2 - In a recent paper with N. Tarkhanov, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes. 
T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 2(2013)12 KW - Perturbed complexes KW - curvature KW - Lefschetz number KW - fixed point formula Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-67016 ER - TY - THES A1 - Vu, Dinh Phuong T1 - Using video study to investigate eighth-grade mathematics classrooms in Vietnam T1 - Die Nutzung von Videostudien zur Untersuchung des Mathematikunterrichts in der 8. Klasse in Vietnam N2 - The International Project for the Evaluation of Educational Achievement (IEA) was formed in the 1950s (Postlethwaite, 1967). Since that time, the IEA has conducted many studies in the area of mathematics, such as the First International Mathematics Study (FIMS) in 1964, the Second International Mathematics Study (SIMS) in 1980-1982, and a series of studies beginning with the Third International Mathematics and Science Study (TIMSS) which has been conducted every 4 years since 1995. According to Stigler et al. (1999), in the FIMS and the SIMS, U.S. students achieved low scores in comparison with students in other countries (p. 1). The TIMSS 1995 “Videotape Classroom Study” was therefore a complement to the earlier studies conducted to learn “more about the instructional and cultural processes that are associated with achievement” (Stigler et al., 1999, p. 1). The TIMSS Videotape Classroom Study is known today as the TIMSS Video Study. From the findings of the TIMSS 1995 Video Study, Stigler and Hiebert (1999) likened teaching to “mountain ranges poking above the surface of the water,” whereby they implied that we might see the mountaintops, but we do not see the hidden parts underneath these mountain ranges (pp. 73-78). By watching the videotaped lessons from Germany, Japan, and the United States again and again, they discovered that “the systems of teaching within each country look similar from lesson to lesson. 
At least, there are certain recurring features [or patterns] that typify many of the lessons within a country and distinguish the lessons among countries” (pp. 77-78). They also discovered that “teaching is a cultural activity,” so the systems of teaching “must be understood in relation to the cultural beliefs and assumptions that surround them” (pp. 85, 88). From this viewpoint, one of the purposes of this dissertation was to study some cultural aspects of mathematics teaching and relate the results to mathematics teaching and learning in Vietnam. Another research purpose was to carry out a video study in Vietnam to find out the characteristics of Vietnamese mathematics teaching and compare these characteristics with those of other countries. In particular, this dissertation carried out the following research tasks: - Studying the characteristics of teaching and learning in different cultures and relating the results to mathematics teaching and learning in Vietnam - Introducing the TIMSS, the TIMSS Video Study and the advantages of using video study in investigating mathematics teaching and learning - Carrying out the video study in Vietnam to identify the image, scripts and patterns, and the lesson signature of eighth-grade mathematics teaching in Vietnam - Comparing some aspects of mathematics teaching in Vietnam and other countries and identifying the similarities and differences across countries - Studying the demands and challenges of innovating mathematics teaching methods in Vietnam – lessons from the video studies Hopefully, this dissertation will be a useful reference material for pre-service teachers at education universities to understand the nature of teaching and develop their teaching career. N2 - Das International Project for the Evaluation of Educational Achievement (IEA) wurde in den 1950er Jahren gegründet. 
Seitdem führte das IEA viele Studien im Bereich mathematischer Bildung durch, insbesondere die First International Mathematics Study (FIMS) im Jahre 1964, die Second International Mathematics Study (SIMS) in den Jahren 1980–1982 und eine Reihe von Studien, die mit der Third International Mathematics and Science Study (TIMSS) begann und seit 1995 alle vier Jahre durchgeführt wird. Nach Stigler et al. (1999) erreichten US-amerikanische Studenten bei FIMS und SIMS niedrigere Ergebnisse als Schüler anderer Länder (S. 1). Daher wurde TIMSS 1995 erweitert um eine ‘Videotape Classroom Study’ mit dem Ziel, „mehr über die unterrichtlichen und kulturellen Prozesse, die mit Leistung zusammenhängen“, zu erfahren (S. 1; Übersetzung vom engl. Original). Von den Ergebnissen der TIMSS 1995 Video Study ausgehend verglichen Stigler und Hiebert (1999) Unterricht mit „Gebirgszügen, die die Wasseroberfläche durchstoßen“, womit sie ausdrücken sollten, dass die Bergspitzen sichtbar, große Teile des Gebirges aber unter dem Wasser verborgen sind (S. 73–78; Übersetzung vom engl. Original). Durch die wiederholte Analyse videographierter Unterrichtsstunden aus Deutschland, Japan und den USA entdeckten sie, dass „die Arten des Unterrichts innerhalb jedes Landes von Stunde zu Stunde ähnlich sind. Zumindest gibt es bestimmte wiederkehrende Aspekte [oder Skripte], welche für viele Stunden eines Landes typisch sind und die Stunden gegenüber anderen Ländern abgrenzen“ (S. 77f.). Sie entdeckten außerdem, dass Unterricht eine „kulturelle Aktivität“ ist, Unterrichtsarten also „verstanden werden müssen in Relation zu den kulturellen Überzeugungen und Annahmen, die sie umgeben“ (S. 85, 88). Hierauf aufbauend war es ein Ziel der Dissertation, kulturelle Aspekte des Mathematikunterricht zu untersuchen und die Ergebnisse mit Mathematikunterricht in Vietnam zu vergleichen. 
Ein weiteres Ziel war die Erhebung der Charakteristika des vietnamesischen Mathematikunterrichts durch eine Videostudie in Vietnam und der anschließende Vergleich dieser Charakteristika mit denen anderer Länder. Im Einzelnen befasste sich diese Dissertation mit den folgenden Forschungszielen: - Untersuchung der Charakteristika von Lehren und Lernen in unterschiedlichen Kulturen und vorläufiger Vergleich der Resultate mit dem Lehren und Lernen von Mathematik in Vietnam - Einführung der TIMSS und der TIMSS Video Study und der methodologischen Vorteile von Videostudien für die Untersuchung von Mathematikunterricht in Vietnam - Durchführung der Videostudie in Vietnam, um Unterrichtsskripte des Mathematikunterrichts in 8. Klassen in Vietnam zu identifizieren - Vergleich ausgewählter Aspekte des Mathematikunterrichts in Vietnam mit denen anderer Länder auf der Grundlage der Videostudie in Vietnam und Diskussion von Ähnlichkeiten und Unterschieden zwischen Ländern - Untersuchung der Herausforderungen für eine Innovation der Unterrichtsmethoden im Mathematikunterricht Vietnams Diese Dissertation entstand in der Hoffnung, dass sie eine nützliche Referenz für Lehramtsstudenten zum Verständnis der Natur des Unterrichts und zur Entwicklung der eigenen Lehrerpersönlichkeit darstellen möge. KW - Videostudie KW - Mathematikunterricht KW - Unterrichtsmethode KW - TIMSS KW - Kulturelle Aktivität KW - video study KW - mathematics education KW - teaching methods KW - TIMSS KW - Vietnam Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-72464 ER - TY - INPR A1 - Voss, Carola Regine T1 - Harness-Prozesse N2 - Harness-Prozesse finden in der Forschung immer mehr Anwendung. Vor allem gewinnen Harness-Prozesse in stetiger Zeit an Bedeutung. Grundlegende Literatur zu diesem Thema ist allerdings wenig vorhanden. 
In der vorliegenden Arbeit wird die vorhandene Grundlagenliteratur zu Harness-Prozessen in diskreter und stetiger Zeit aufgearbeitet und Beweise ausgeführt, die bisher nur skizziert waren. Ziel dessen ist es, die Existenz einer Zerlegung von Harness-Prozessen über Z beziehungsweise R+ nachzuweisen. T3 - Mathematische Statistik und Wahrscheinlichkeitstheorie : Preprint - 2010, 13 Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-49651 ER - TY - JOUR A1 - Vidal-Garcia, Marta A1 - Bandara, Lashi A1 - Keogh, J. Scott T1 - ShapeRotator BT - an R tool for standardized rigid rotations of articulated three-dimensional structures with application for geometric morphometrics JF - Ecology and evolution N2 - The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. 
We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity. KW - articulation KW - morphology KW - motion correction KW - multi-modular morphology Y1 - 2018 U6 - https://doi.org/10.1002/ece3.4018 SN - 2045-7758 VL - 8 IS - 9 SP - 4669 EP - 4675 PB - Wiley CY - Hoboken ER - TY - INPR A1 - Vasiliev, Serguei A1 - Tarkhanov, Nikolai Nikolaevich T1 - Construction of series of perfect lattices by layer superposition N2 - We construct a new series of perfect lattices in n dimensions by the layer superposition method of Delaunay-Barnes. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 5 (2016)11 KW - lattice packing and covering KW - polyhedra and polytopes KW - regular figures KW - division of spaces Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-100591 SN - 2193-6943 VL - 5 IS - 11 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Vasiliev, Sergey B. A1 - Tarchanov, Nikolaj Nikolaevič T1 - Construction of series of perfect lattices by layer superposition JF - Journal of Siberian Federal University : Mathematics & physics JF - Žurnal Sibirskogo Federalʹnogo Universiteta = Journal of Siberian Federal University : Serija Matematika i fizika = Mathematics & physics N2 - We construct a new series of perfect lattices in n dimensions by the layer superposition method of Delaunay-Barnes. 
KW - lattice packing and covering KW - polyhedra and polytopes KW - regular figures KW - division of spaces Y1 - 2017 U6 - https://doi.org/10.17516/1997-1397-2017-10-3-353-361 SN - 1997-1397 SN - 2313-6022 VL - 10 IS - 3 SP - 353 EP - 361 PB - Sibirskij Federalʹnyj Universitet CY - Krasnojarsk ER - TY - BOOK A1 - Van Leeuwen, Peter Jan A1 - Cheng, Yuan A1 - Reich, Sebastian T1 - Nonlinear data assimilation T3 - Frontiers in applied dynamical systems: reviews and tutorials ; 2 N2 - This book contains two review articles on nonlinear data assimilation that deal with closely related topics but were written and can be read independently. Both contributions focus on so-called particle filters. The first contribution by Jan van Leeuwen focuses on the potential of proposal densities. It discusses the issues with present-day particle filters and explores new ideas for proposal densities to solve them, converging to particle filters that work well in systems of any dimension, closing the contribution with a high-dimensional example. The second contribution by Cheng and Reich discusses a unified framework for ensemble-transform particle filters. This allows one to bridge successful ensemble Kalman filters with fully nonlinear particle filters, and allows a proper introduction of localization in particle filters, which has been lacking up to now. 
Y1 - 2015 SN - 978-3-319-18346-6 SN - 978-3-319-18347-3 U6 - https://doi.org/10.1007/978-3-319-18347-3 PB - Springer CY - Cham ER - TY - CHAP A1 - Valleriani, Angelo A1 - Roelly, Sylvie A1 - Kulik, Alexei Michajlovič ED - Roelly, Sylvie ED - Högele, Michael ED - Rafler, Mathias T1 - Stochastic processes with applications in the natural sciences BT - international workshop at Universidad de los Andes, Bogotá, Colombia T2 - Lectures in pure and applied mathematics N2 - The interdisciplinary workshop STOCHASTIC PROCESSES WITH APPLICATIONS IN THE NATURAL SCIENCES was held in Bogotá, at Universidad de los Andes, from December 5 to December 9, 2016. It brought together researchers from Colombia, Germany, France, Italy, and Ukraine, who communicated recent progress in the mathematical research related to stochastic processes with applications in biophysics. The present volume collects three of the four courses held at this meeting by Angelo Valleriani, Sylvie Rœlly and Alexei Kulik. A particular aim of this collection is to inspire young scientists in setting up research goals within the wide scope of fields represented in this volume. Angelo Valleriani, PhD in high energy physics, is group leader of the team "Stochastic processes in complex and biological systems" at the Max-Planck-Institute of Colloids and Interfaces, Potsdam. Sylvie Rœlly, Docteur en Mathématiques, is the head of the chair of Probability at the University of Potsdam. Alexei Kulik, Doctor of Sciences, is a leading researcher at the Institute of Mathematics of the Ukrainian National Academy of Sciences. 
T3 - Lectures in pure and applied mathematics - 4 KW - macromolecular decay KW - Markov processes KW - branching processes KW - long-time behaviour KW - makromolekularer Zerfall KW - Markovprozesse KW - Verzweigungsprozesse KW - Langzeitverhalten Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-401802 SN - 978-3-86956-414-2 SN - 2199-4951 SN - 2199-496X IS - 4 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - THES A1 - Trump, Stephanie Sonja T1 - Mathematik in der Physik der Sekundarstufe II!? BT - Eine Benennung notwendiger mathematischer Fertigkeiten für einen flexiblen Umgang mit Mathematik beim Lösen physikalisch-mathematischer Probleme im Rahmen der Schul- und Hochschulbildung sowie eine systematische Analyse zur notwendigen Mathematik in der Physik der Sekundarstufe II Y1 - 2015 ER - TY - THES A1 - Trappmann, Henryk T1 - Arborescent numbers : higher arithmetic operations and division trees T1 - Baumartige Zahlen : höhere arithmetische Operationen und Divisionsbäume N2 - The overall program "arborescent numbers" is to similarly perform the constructions from the natural numbers (N) to the positive fractional numbers (Q+) to positive real numbers (R+) beginning with (specific) binary trees instead of natural numbers. N can be regarded as the associative binary trees. The binary trees B and the left-commutative binary trees P allow the hassle-free definition of arbitrary high arithmetic operations (hyper ... hyperpowers). To construct the division trees the algebraic structure "coppice" is introduced which is a group with an addition over which the multiplication is right-distributive. Q+ is the initial associative coppice. The present work accomplishes one step in the program "arborescent numbers". That is the construction of the arborescent equivalent(s) of the positive fractional numbers. These equivalents are the "division binary trees" and the "fractional trees". 
A representation with decidable word problem for each of them is given. The set of functions f:R1->R1 generated from identity by taking powers is isomorphic to P and can be embedded into a coppice by taking inverses. N2 - Baumartige Zahlen und höhere arithmetische Operationen Von Schülern und Laienmathematikern wird oft die Frage gestellt, warum nach den Operationen Addition (1. Stufe), Multiplikation (2. Stufe), Potenzieren (3. Stufe) keine Operationen der 4. oder höheren Stufen betrachtet werden. Jede Operation der nächsthöheren Stufe ist die Wiederholung der vorhergehenden Operation, z.B. n * x = x + x + ... + x und x^n = x * x * ... * x. Das offensichtliche Problem mit der Wiederholung des Potenzierens besteht darin, dass das Potenzieren nicht assoziativ ist und es somit mehrere Klammerungsmöglichkeiten für die Wiederholung dieser Operation gibt. Wählt man eine spezifische Klammerungsmöglichkeit aus, z.B. x^^n = (x^(x^(x^(......)))), gibt es jedoch wieder verschiedene Möglichkeiten, diese Operation auf rationale oder reelle n fortzusetzen. In der Tat kann man im Internet verschiedene solcher Fortsetzungen beschrieben finden und keine scheint besonders ausgezeichnet zu sein. Das ganze Dilemma der verschiedenen Klammerungen kann man jedoch überwinden, indem man den Zahlenbereich abstrakter macht, sodass statt nur der Anzahl auch eine Klammerungsstruktur in einer Zahl kodiert wird. Die ganz natürliche Verallgemeinerung der natürlichen Zahlen in dieser Hinsicht sind die Binärbäume. Und in der Tat lassen sich die 4. und höhere Operationen in einer eindeutigen Weise auf den Binärbäumen erklären. Vielmehr stellt sich sogar heraus, dass die Binärbäume zu viel Information mit sich tragen, wenn es nur darum geht, die höheren Operationen zu definieren. 
Es gibt eine Spezialisierung der Binärbäume, die aber allgemeiner als die natürlichen Zahlen (die die assoziative Spezialisierung der Binärbäume sind) ist, und die die passende Informationsmenge zur Definition der höheren Operationen kodiert. Dies sind die so genannten linkskommutativen Binärbäume. Es stellt sich heraus, dass die (linkskommutativen) Binärbäume viele Eigenschaften der natürlichen Zahlen teilen, so z.B. die Assoziativität der Multiplikation (die Operation der 2. Stufe) und eine eindeutige Primzahlzerlegung. Dies motiviert die Frage, ob man die Erweiterungskonstruktionen der Zahlen: „natürliche Zahlen zu gebrochenen Zahlen“ (macht die Multiplikation umkehrbar) „gebrochene Zahlen zu positiven reellen Zahlen“ (macht das Potenzieren umkehrbar und erlaubt Grenzwertbildung) auch ausgehend von (linkskommutativen) Binärbäumen vornehmen kann. In der vorliegenden Arbeit wird (neben unzähligen anderen Resultaten) gezeigt, dass die Zahlenbereichserweiterung „natürliche Zahlen zu gebrochenen Zahlen“ auch analog für (linkskommutative) Binärbäume möglich ist. Das Ergebnis dieser Konstruktion sind die Divisionsbinärbäume (bzw. die gebrochenen Bäume). Letztere lassen sich unerwartet in der Form von Brüchen darstellen, sind jedoch als Verallgemeinerung der gebrochenen Zahlen sehr viel komplexer als diese. (Das kann man live nachprüfen mit dem dafür erstellten Online-Rechner für gebrochene Bäume (auf englisch): http://math.eretrandre.org/cgi-bin/ftc/ftc.pl ) Damit wird ein Programm „baumartige Zahlen“ gestartet, in dem es darum geht, auch die Erweiterung „gebrochene Zahlen zu positiven reellen Zahlen“ für die Divisionsbinärbäume (bzw. die gebrochenen Bäume) durchzuführen, wobei die höheren Operationen auf dieser Erweiterung definiert werden könnten und umkehrbar sein müssten. 
Ob dies wirklich möglich ist, ist derzeit unklar (neben diversen anderen direkt aus der Dissertation sich ergebenden Fragen) und eröffnet damit ein enorm umfangreiches Feld für weitere Forschungen. KW - Tetration KW - höhere Operationen KW - strukturierte Zahlen KW - Divisionsbäume KW - tetration KW - higher operations KW - structured numbers KW - division trees Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-15247 ER - TY - JOUR A1 - Tomovski, Živorad A1 - Metzler, Ralf A1 - Gerhold, Stefan T1 - Fractional characteristic functions, and a fractional calculus approach for moments of random variables JF - Fractional calculus and applied analysis : an international journal for theory and applications N2 - In this paper we introduce a fractional variant of the characteristic function of a random variable. It exists on the whole real line, and is uniformly continuous. We show that fractional moments can be expressed in terms of Riemann-Liouville integrals and derivatives of the fractional characteristic function. The fractional moments are of interest in particular for distributions whose integer moments do not exist. Some illustrative examples for particular distributions are also presented. KW - Fractional calculus (primary) KW - Characteristic function KW - Mittag-Leffler function KW - Fractional moments KW - Mellin transform Y1 - 2022 U6 - https://doi.org/10.1007/s13540-022-00047-x SN - 1314-2224 VL - 25 IS - 4 SP - 1307 EP - 1323 PB - De Gruyter CY - Berlin ; Boston ER - TY - THES A1 - Tinpun, Kittisak T1 - Relative rank of infinite full transformation semigroups with restricted range Y1 - 2019 ER - TY - JOUR A1 - Thapa, Samudrajit A1 - Park, Seongyu A1 - Kim, Yeongjin A1 - Jeon, Jae-Hyung A1 - Metzler, Ralf A1 - Lomholt, Michael A. 
T1 - Bayesian inference of scaled versus fractional Brownian motion JF - Journal of physics : A, mathematical and theoretical N2 - We present a Bayesian inference scheme for scaled Brownian motion, and investigate its performance on synthetic data for parameter estimation and model selection in a combined inference with fractional Brownian motion. We include the possibility of measurement noise in both models. We find that for trajectories of a few hundred time points the procedure is able to resolve the true model and parameters well. Using the prior of the synthetic data generation process also for the inference, the approach is optimal based on decision theory. We include a comparison with inference using a prior different from the data-generating one. KW - Bayesian inference KW - scaled Brownian motion KW - single particle tracking Y1 - 2022 U6 - https://doi.org/10.1088/1751-8121/ac60e7 SN - 1751-8113 SN - 1751-8121 VL - 55 IS - 19 PB - IOP Publ. Ltd. CY - Bristol ER - TY - INPR A1 - Tarkhanov, Nikolai Nikolaevich A1 - Wallenta, Daniel T1 - The Lefschetz number of sequences of trace class curvature N2 - For a sequence of Hilbert spaces and continuous linear operators the curvature is defined to be the composition of any two consecutive operators. This is modeled on the de Rham resolution of a connection on a module over an algebra. Of particular interest are those sequences for which the curvature is "small" at each step, e.g., belongs to a fixed operator ideal. In this context we elaborate the theory of Fredholm sequences and show how to introduce the Lefschetz number. 
T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 1 (2012) 3 KW - Perturbed complexes KW - curvature KW - Lefschetz number Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-56969 ER - TY - INPR A1 - Tarkhanov, Nikolai Nikolaevich T1 - A spectral theorem for deformation quantisation N2 - We present a construction of the eigenstate at a noncritical level of the Hamiltonian function. Moreover, we evaluate the contributions of Morse critical points to the spectral decomposition. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 4 (2015) 4 KW - star product KW - WKB method KW - spectral theorem Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-72425 SN - 2193-6943 VL - 4 IS - 4 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - INPR A1 - Tarkhanov, Nikolai Nikolaevich T1 - A simple numerical approach to the Riemann hypothesis N2 - The Riemann hypothesis is equivalent to the fact that the reciprocal function 1/zeta(s) extends from the interval (1/2,1) to an analytic function in the quarter-strip 1/2 < Re s < 1 and Im s > 0. Function theory allows one to rewrite the condition of analytic continuability in an elegant form amenable to numerical experiments. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 1 (2012) 9 Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-57645 SN - 2193-6943 ER - TY - JOUR A1 - Taghvaei, Amirhossein A1 - de Wiljes, Jana A1 - Mehta, Prashant G. A1 - Reich, Sebastian T1 - Kalman filter and its modern extensions for the continuous-time nonlinear filtering problem JF - Journal of dynamic systems measurement and control N2 - This paper is concerned with the filtering problem in continuous time. 
Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter, which provides an exact solution for the linear Gaussian problem; (ii) the ensemble Kalman-Bucy filter (EnKBF), which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems; and (iii) the feedback particle filter (FPF), which represents an extension of the EnKBF and furthermore provides for a consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain times error formula to implement the update step (to account for conditioning due to the observations) in the filter. In contrast to the commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid the resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides for error correction potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the issue of nonuniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. Performance of this and other algorithms is illustrated for a numerical example. Y1 - 2017 U6 - https://doi.org/10.1115/1.4037780 SN - 0022-0434 SN - 1528-9028 VL - 140 IS - 3 PB - ASME CY - New York ER - TY - THES A1 - Supaporn, Worakrit T1 - Categorical equivalence of clones Y1 - 2014 ER - TY - RPRT A1 - Sultanow, Eldar A1 - Volkov, Denis A1 - Cox, Sean T1 - Introducing a Finite State Machine for processing Collatz Sequences N2 - The present work will introduce a Finite State Machine (FSM) that processes any Collatz Sequence; further, we will endeavor to investigate its behavior in relationship to transformations of a special infinite input. 
Moreover, we will prove that the machine's word transformation is equivalent to the standard Collatz number transformation and subsequently discuss the possibilities for using this approach to solve similar problems. The benefit of this approach is that investigating the word transformation performed by the Finite State Machine is less complicated than investigating the traditional number-theoretical transformation. KW - Collatz Conjecture KW - State Machine KW - Graph KW - Double Colored Edges Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-399223 ET - 1st version ER - TY - INPR A1 - Sultanov, Oskar A1 - Kalyakin, Leonid A1 - Tarkhanov, Nikolai Nikolaevich T1 - Elliptic perturbations of dynamical systems with a proper node N2 - The paper is devoted to asymptotic analysis of the Dirichlet problem for a second order partial differential equation containing a small parameter multiplying the highest order derivatives. It corresponds to a small perturbation of a dynamical system having a stationary solution in the domain. We focus on the case where the trajectories of the system go into the domain and the stationary solution is a proper node. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 3 (2014) 4 KW - dynamical system KW - singular perturbation KW - asymptotic methods Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-70460 SN - 2193-6943 VL - 3 IS - 4 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - GEN A1 - Straube, Arthur V. A1 - Pikovskij, Arkadij T1 - Pattern formation induced by time-dependent advection T2 - Postprints der Universität Potsdam : Mathematisch Naturwissenschaftliche Reihe N2 - We study pattern-forming instabilities in reaction-advection-diffusion systems. We develop an approach based on Lyapunov-Bloch exponents to figure out the impact of a spatially periodic mixing flow on the stability of a spatially homogeneous state. 
We deal with flows that are periodic in space and may have arbitrary time dependence. We propose a discrete-in-time model, where reaction, advection, and diffusion act as successive operators, and show that a mixing advection can lead to a pattern-forming instability in a two-component system where only one of the species is advected. Physically, this can be explained as crossing a threshold of Turing instability due to an effective increase of one of the diffusion constants. T3 - Zweitveröffentlichungen der Universität Potsdam : Mathematisch-Naturwissenschaftliche Reihe - 575 KW - pattern formation KW - reaction-advection-diffusion equation Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-413140 SN - 1866-8372 IS - 575 SP - 138-147 ER - TY - GEN A1 - Strader, Anne A1 - Schneider, Max A1 - Schorlemmer, Danijel T1 - Erratum zu: Strader, Anne; Schneider, Max; Schorlemmer, Danijel: Prospective and retrospective evaluation of five-year earthquake forecast models for California (Geophysical Journal International, 211 (2017) 1, S. 239 – 251, https://doi.org/10.1093/gji/ggx268) T2 - Geophysical journal international N2 - S-test results for the USGS and RELM forecasts. The differences between the simulated log-likelihoods and the observed log-likelihood are labelled on the horizontal axes, with scaling adjustments for the 40year.retro experiment. The horizontal lines represent the confidence intervals, within the 0.05 significance level, for each forecast and experiment. If this range contains a log-likelihood difference of zero, the forecasted log-likelihoods are consistent with the observed, and the forecast passes the S-test (denoted by thin lines). If this range does not contain zero, the forecast fails the S-test for that particular experiment (denoted by thick lines). Colours distinguish between experiments (see Table 2 for explanation of experiment durations). 
Due to anomalously large likelihood differences, S-test results for Wiemer-Schorlemmer.ALM during the 10year.retro and 40year.retro experiments are not displayed. The range of log-likelihoods for the Holliday-et-al.PI forecast is lower than for the other forecasts due to relatively homogeneous forecasted seismicity rates and use of a small fraction of the RELM testing region. Y1 - 2017 U6 - https://doi.org/10.1093/gji/ggx496 SN - 0956-540X SN - 1365-246X VL - 212 IS - 2 SP - 1314 EP - 1314 PB - Oxford Univ. Press CY - Oxford ER - TY - JOUR A1 - Steup, Martin T1 - Raum und Zahl in der Pflanzenphysiologie JF - Raum und Zahl Y1 - 2015 SN - 978-3-86464-082-7 SP - 77 EP - 109 PB - Trafo CY - Berlin ER - TY - JOUR A1 - Stauffer, Maxime A1 - Mengesha, Isaak A1 - Seifert, Konrad A1 - Krawczuk, Igor A1 - Fischer, Jens A1 - Serugendo, Giovanna Di Marzo T1 - A computational turn in policy process studies BT - coevolving network dynamics of policy change JF - Complexity N2 - The past three decades of policy process studies have seen the emergence of a clear intellectual lineage with regard to complexity. Implicitly or explicitly, scholars have employed complexity theory to examine the intricate dynamics of collective action in political contexts. However, the methodological counterparts to complexity theory, such as computational methods, are rarely used and, even if they are, they are often detached from established policy process theory. Building on a critical review of the application of complexity theory to policy process studies, we present and implement a baseline model of policy processes using the logic of coevolving networks. Our model suggests that an actor's influence depends on their environment and on exogenous events facilitating dialogue and consensus-building. Our results validate previous opinion dynamics models and generate novel patterns. Our discussion provides ground for further research and outlines the path for the field to achieve a computational turn. 
Y1 - 2022 U6 - https://doi.org/10.1155/2022/8210732 SN - 1076-2787 SN - 1099-0526 VL - 2022 PB - Wiley-Hindawi CY - London ER - TY - JOUR A1 - Staniforth, Andrew A1 - Wood, Nigel A1 - Reich, Sebastian T1 - A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations JF - Quarterly journal of the Royal Meteorological Society N2 - A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations is proposed and analysed. Application of regularization to the geopotential field used in the momentum equations leads to an unconditionally stable scheme. The analysis, together with a fully nonlinear example application, suggests that this approach is a promising, efficient, and accurate alternative to traditional schemes. KW - regularization KW - temporal discretization Y1 - 2006 U6 - https://doi.org/10.1256/qj.06.30 SN - 0035-9009 VL - 132 IS - 621C SP - 3107 EP - 3116 PB - Wiley CY - Weinheim ER - TY - JOUR A1 - Stachanow, Viktoria A1 - Neumann, Uta A1 - Blankenstein, Oliver A1 - Bindellini, Davide A1 - Melin, Johanna A1 - Ross, Richard A1 - Whitaker, Martin J. J. A1 - Huisinga, Wilhelm A1 - Michelet, Robin A1 - Kloft, Charlotte T1 - Exploring dried blood spot cortisol concentrations as an alternative for monitoring pediatric adrenal insufficiency patients BT - a model-based analysis JF - Frontiers in pharmacology N2 - Congenital adrenal hyperplasia (CAH) is the most common form of adrenal insufficiency in childhood; it requires cortisol replacement therapy with hydrocortisone (HC, synthetic cortisol) from birth and therapy monitoring for successful treatment. In children, the less invasive dried blood spot (DBS) sampling with whole blood including red blood cells (RBCs) provides an advantageous alternative to plasma sampling. Potential differences in binding/association processes between plasma and DBS however need to be considered to correctly interpret DBS measurements for therapy monitoring. 
While capillary DBS samples would be used in clinical practice, venous cortisol DBS samples from children with adrenal insufficiency were analyzed due to data availability and to directly compare and thus understand potential differences between venous DBS and plasma. A previously published HC plasma pharmacokinetic (PK) model was extended by leveraging these DBS concentrations. In addition to previously characterized binding of cortisol to albumin (linear process) and corticosteroid-binding globulin (CBG; saturable process), DBS data enabled the characterization of a linear cortisol association with RBCs, thereby providing a quantitative link between DBS and plasma cortisol concentrations. The ratio between the observed cortisol plasma and DBS concentrations varies widely, from 2 to 8. Deterministic simulations of the different cortisol binding/association fractions demonstrated that with higher blood cortisol concentrations, saturation of cortisol binding to CBG was observed, leading to an increase in all other cortisol binding fractions. In conclusion, a mathematical PK model was developed which links DBS measurements to plasma exposure and thus allows for quantitative interpretation of measurements of DBS samples. KW - adrenal insufficiency KW - cortisol KW - dried blood spots KW - pediatrics KW - pharmacokinetics KW - binding KW - association KW - red blood cells Y1 - 2022 U6 - https://doi.org/10.3389/fphar.2022.819590 SN - 1663-9812 VL - 13 PB - Frontiers Media CY - Lausanne ER - TY - JOUR A1 - Somogyvári, Márk A1 - Reich, Sebastian T1 - Convergence tests for transdimensional Markov chains in geoscience imaging JF - Mathematical geosciences : the official journal of the International Association for Mathematical Geosciences N2 - Classic inversion methods adjust a model with a predefined number of parameters to the observed data. 
With transdimensional inversion algorithms such as the reversible-jump Markov chain Monte Carlo (rjMCMC), it is possible to vary this number during the inversion and to interpret the observations in a more flexible way. Geoscience imaging applications use this behaviour to automatically adjust model resolution to the inhomogeneities of the investigated system, while keeping the model parameters on an optimal level. The rjMCMC algorithm produces an ensemble as a result: a set of model realizations, which together represent the posterior probability distribution of the investigated problem. The realizations are evolved via sequential updates from a randomly chosen initial solution and converge toward the target posterior distribution of the inverse problem. Up to a point in the chain, the realizations may be strongly biased by the initial model, and must be discarded from the final ensemble. With convergence assessment techniques, this point in the chain can be identified. Transdimensional MCMC methods produce ensembles that are not suitable for classic convergence assessment techniques because of the changes in parameter numbers. To overcome this hurdle, three solutions are introduced to convert model realizations to a common dimensionality while maintaining the statistical characteristics of the ensemble. A scalar, a vector and a matrix representation for models inferred from tomographic subsurface investigations are presented, and three classic convergence assessment techniques are applied to them. It is shown that appropriately chosen scalar conversions of the models could retain statistical ensemble properties similar to those of geologic projections created by rasterization. 
KW - transdimensional inversion KW - MCMC modelling KW - convergence assessment Y1 - 2019 U6 - https://doi.org/10.1007/s11004-019-09811-x SN - 1874-8961 SN - 1874-8953 VL - 52 IS - 5 SP - 651 EP - 668 PB - Springer CY - Heidelberg ER - TY - THES A1 - Solms, Alexander Maximilian T1 - Integrating nonlinear mixed effects and physiologically–based modeling approaches for the analysis of repeated measurement studies T1 - Integration nicht-linearer gemischter Modelle und physiologie-basierte Modellierung Ansätze in die Auswertung longitudinaler Studien BT - with applications in quantitative pharmacology and quantitative psycholinguistics N2 - During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time-course and the associated variability between subjects. Furthermore, underlying sources significantly contributing to this variability, e.g. the use of comedication, should be identified. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and has been accumulating during the discovery & development process: Before any drug is tested in humans, detailed knowledge about the PK in different animal species has to be collected. This drug-specific knowledge and general knowledge about the species' physiology is exploited in mechanistic physiologically based PK (PBPK) modeling approaches; it is, however, ignored in the classical NLME modeling approach. 
Mechanistic physiologically based models aim to incorporate relevant and known physiological processes that contribute to the overall process of interest. In comparison to data-driven models they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters exceeds the number of measurements, and reliable parameter estimation thus becomes more difficult or even impossible. As a consequence, the integration of powerful mathematical estimation approaches like the NLME modeling approach, which is widely used in data-driven modeling, with the mechanistic modeling approach is not well established; the observed data are rather used to confirm a model than to inform and build it. A further obstacle to an integrated approach is that the details of the NLME methodology are not readily accessible, so that these approaches cannot easily be adapted to the specifics and needs of mechanistic modeling. Although the NLME modeling approach has existed for several decades, the details of its mathematical methodology are scattered across a wide range of literature, and a comprehensive, rigorous derivation is lacking. The available literature usually covers only selected parts of the mathematical methodology. Sometimes important steps are not described or are only heuristically motivated, e.g. the iterative algorithm that finally determines the parameter estimates. In the present thesis, the mathematical methodology of NLME modeling is therefore systematically described and complemented into a comprehensive account, comprising the common theme from ideas and motivation to the final parameter estimation. Therein, new insights into the interpretation of the different approximation methods used in the context of the NLME modeling approach are given and illustrated; furthermore, similarities and differences between them are outlined. 
Based on these findings, an expectation-maximization (EM) algorithm to determine estimates of an NLME model is described. Using the EM algorithm and the lumping methodology of Pilari and Huisinga (2010), a new approach to combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. Therein, the lumping identifies which processes are informed by the available data, and the respective model reduction improves the robustness of parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability, as well as a priori known unexplained variability, are incorporated to further mechanistically drive the model development. As a consequence, correlations between parameters and between covariates are automatically accounted for due to the mechanistic derivation of the lumping and the covariate relationships. A useful feature of PBPK models compared to classical data-driven PK models is the possibility of predicting drug concentrations within all organs and tissues of the body. Thus, the resulting PBPK model for levofloxacin is used to predict drug concentrations and their variability within soft tissues, which are the site of action of levofloxacin. These predictions are compared with data from muscle and adipose tissue obtained by microdialysis, an invasive technique that measures a proportion of the drug in the tissue and thereby allows the concentrations in the interstitial fluid of tissues to be approximated. Because comparisons of human in vivo tissue PK with PBPK predictions have not been established so far, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows adequate agreement and reveals further strengths of the presented new approach. We demonstrate how mechanistic PBPK models, which are usually developed in the early stages of drug development, can be used as a basis for model building in the analysis of later stages, i.e. in clinical studies. 
As a consequence, the extensively collected and accumulated knowledge about species and drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn & confirm paradigm across different stages of drug development. Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of Quantitative Psycholinguistics to analyse repeated eye movement data. Our approach gives new insight into the interpretation of these experiments and the processes behind them. N2 - Für die Erforschung und Entwicklung eines neuen Arzneistoffes wird die sichere und wirksame Anwendung in präklinischen und klinischen Studien systematisch untersucht. Ein wichtiger Bestandteil dieser Studien ist die Bestimmung der Pharmakokinetik (PK), da über diese das Wirkungs- und Nebenwirkungsprofil maßgeblich mitbestimmt wird. Um die PK zu bestimmen, wird in der Studienpopulation die Wirkstoffkonzentration im Blut wiederholt über die Zeit gemessen. Damit kann sowohl der Konzentrations-Zeit-Verlauf als auch die dazugehörige Variabilität in der Studienpopulation bestimmt werden. Darüber hinaus ist ein weiteres Ziel, die Ursachen dieser Variabilität zu identifizieren. Für die Auswertung der Daten werden nichtlineare, gemischte Effektmodelle (NLME) eingesetzt. Im Vorfeld der klinischen Studien sind bereits viele Eigenschaften des Wirkstoffes bekannt, da der Wirkstoff-Testung am Menschen die Bestimmung der PK an verschiedenen Tierspezies vorausgeht. Auf Basis dieser wirkstoffspezifischen Daten und des Wissens um die spezifische humane Physiologie können mittels mechanistisch physiologiebasierter Modelle Vorhersagen für die humane PK getroffen werden. 
Bei der Analyse von PK Daten mittels NLME Modellen wird dieses vorhandene Wissen jedoch nicht verwertet. In physiologiebasierten Modellen werden physiologische Prozesse, die die PK bestimmen und beeinflussen können, berücksichtigt. Aus mathematischer Sicht sind solche mechanistischen Modelle im Allgemeinen deutlich komplexer als empirisch motivierte Modelle. In der Anwendung kommt es deswegen häufig zu Situationen, in denen die Anzahl der Modellparameter die Anzahl der zugrunde liegenden Beobachtungen übertrifft. Daraus folgt unter anderem, dass die Parameterschätzung, wie sie in empirisch motivierten Modellen genutzt wird, in der Regel unzuverlässig bzw. nicht möglich ist. Infolgedessen werden klinische Daten in der mechanistischen Modellierung meist nur zur Modellqualifizierung genutzt und nicht in die Modell(weiter)entwicklung integriert. Ein weiterer erschwerender Umstand, NLME und PBPK Modelle in der Anwendung zu kombinieren, beruht auch auf der Komplexität des NLME Ansatzes. Obwohl diese Methode seit Jahrzehnten existiert, sind in der Literatur nur ausgewählte Teilstücke der zugrunde liegenden Mathematik beschrieben und hergeleitet; eine lückenlose Beschreibung fehlt. Aus diesem Grund werden in der vorliegenden Arbeit systematisch die Methodik und mathematischen Zusammenhänge des NLME Ansatzes, von der ursprünglichen Idee und Motivation bis zur Parameterschätzung, beschrieben. In diesem Kontext werden neue Interpretationen der unterschiedlichen Methoden, die im Rahmen der NLME Modellierung verwendet werden, vorgestellt; zudem werden Gemeinsamkeiten und Unterschiede zwischen diesen herausgearbeitet. Mittels dieser Erkenntnisse wird ein Expectation-Maximization (EM) Algorithmus zur Parameterschätzung in einer NLME Analyse beschrieben. Mittels des neuen EM Algorithmus, kombiniert mit dem Lumping-Ansatz von Pilari und Huisinga (S. Pilari, W. Huisinga, JPKPD Vol. 37(4), 2010.) 
wird anhand des Antibiotikums Levofloxacin ein neuer konzeptioneller Ansatz entwickelt, der PBPK- und NLME-Modellierung zur Datenanalyse integriert. Die Lumping-Methode definiert hierbei, welche Prozesse von den verfügbaren Daten informiert werden, sie verbessert somit die Robustheit der Parameterschätzung. Weiterhin wird gezeigt, wie a-priori Wissen über Variabilität und Faktoren, die diese beeinflussen, sowie unerklärte Variabilität in das Modell integriert werden können. Ein elementarer Vorteil von PBPK Modellen gegenüber empirisch motivieren PK Modellen besteht in der Möglichkeit, Wirkstoffkonzentrationen innerhalb von Organen und Gewebe im Körper vorherzusagen. So kann das PBPK-Modell für Levofloxacin genutzt werden, um Wirkstoffkonzentrationen innerhalb der Gewebe vorherzusagen, in denen typischerweise Infektionen auftreten. Für Muskel- und Fettgewebe werden die PBPK-Vorhersagen mit Mikrodialyse Gewebemessungen verglichen. Die gute übereinstimmung von PBPK-Modell und Mikrodialyse stellt eine noch nicht vorhanden Validierung des PBPK-Gewebemodells im Menschen dar. In dieser Dissertation wird gezeigt, wie mechanistische PBPK Modelle, die in der Regel in der frühen Phase der Arzneimittelentwicklung entwickelt werden, erfolgreich zur Analyse von klinischen Studien eingesetzt werden können. Das bestehende Wissen über den neuen Wirkstoff wird somit gezielt genutzt und mit klinischen Daten von Probanden oder Patienten aktualisiert. Im Fall von Levofloxacin konnte Variabilität in mechanistischen Prozessen identifiziert und quantifiziert werden. Dieses Vorgehen liefert einen weiteren Beitrag zum learn & confirm Paradigma im Forschungs- und Entwicklungsprozess eines neuen Wirkstoffes. Abschließend wird anhand eines weiteren real world-Beispieles aus dem Bereich der quantitativen Psycholinguistik die Anwendbarkeit und der Nutzen des vorgestellten integrierten Ansatz aus mechanistischer und NLME Modellierung in der Analyse von Blickbewegungsdaten gezeigt. 
A mechanistically motivated model captures the complexity of the experiment and the data, opening up new possibilities for interpretation. KW - NLME KW - PBPK KW - EM KW - lumping KW - popPBPK KW - mechanistic modeling KW - population analysis KW - popPK KW - microdialysis Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-397070 ER - TY - JOUR A1 - Sixtus, Elena A1 - Fischer, Martin H. T1 - Eine kognitionswissenschaftliche Betrachtung der Konzepte "Raum" und "Zahl" JF - Raum und Zahl im Fokus der Wissenschaften : eine multidisziplinäre Vorlesungsreihe Y1 - 2015 SN - 978-3-86464-082-7 SP - 35 EP - 62 PB - Trafo CY - Berlin ER - TY - INPR A1 - Shlapunov, Alexander A1 - Tarkhanov, Nikolai Nikolaevich T1 - An open mapping theorem for the Navier-Stokes equations N2 - We consider the Navier-Stokes equations in the layer R^n x [0,T] over R^n with finite T > 0. Using the standard fundamental solutions of the Laplace operator and the heat operator, we reduce the Navier-Stokes equations to a nonlinear Fredholm equation of the form (I+K) u = f, where K is a compact continuous operator in anisotropic normed Hölder spaces weighted at the point at infinity with respect to the space variables. Actually, the weight function is included to provide a finite energy estimate for solutions to the Navier-Stokes equations for all t in [0,T]. On using the particular properties of the de Rham complex we conclude that the Fréchet derivative (I+K)' is continuously invertible at each point of the Banach space under consideration and the map I+K is open and injective in the space. In this way the Navier-Stokes equations prove to induce an open one-to-one mapping in the scale of Hölder spaces.
T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 5 (2016) 10 KW - Navier-Stokes equations KW - weighted Hölder spaces KW - integral representation method Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-98687 SN - 2193-6943 VL - 5 IS - 10 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - INPR A1 - Shlapunov, Alexander A1 - Tarkhanov, Nikolai Nikolaevich T1 - Golusin-Krylov Formulas in Complex Analysis T2 - Preprints des Instituts für Mathematik der Universität Potsdam N2 - This is a brief survey of a constructive technique of analytic continuation related to an explicit integral formula of Golusin and Krylov (1933). It goes far beyond complex analysis and applies to the Cauchy problem for elliptic partial differential equations as well. As started in the classical papers, the technique is elaborated in generalised Hardy spaces also called Hardy-Smirnov spaces. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 6 (2017) 2 KW - analytic continuation KW - integral formulas KW - Cauchy problem Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-102774 VL - 6 IS - 2 PB - Universitätsverlag Potsdam CY - Potsdam ER - TY - JOUR A1 - Shlapunov, Alexander A1 - Tarkhanov, Nikolai Nikolaevich T1 - Golusin-Krylov formulas in complex analysis JF - Complex variables and elliptic equations N2 - This is a brief survey of a constructive technique of analytic continuation related to an explicit integral formula of Golusin and Krylov (1933). It goes far beyond complex analysis and applies to the Cauchy problem for elliptic partial differential equations as well. As started in the classical papers, the technique is elaborated in generalised Hardy spaces also called Hardy-Smirnov spaces.
KW - analytic continuation KW - integral formulas KW - Cauchy problem Y1 - 2017 U6 - https://doi.org/10.1080/17476933.2017.1395872 SN - 1747-6933 SN - 1747-6941 VL - 63 IS - 7-8 SP - 1142 EP - 1167 PB - Routledge CY - Abingdon ER - TY - INPR A1 - Shlapunov, Alexander A1 - Tarkhanov, Nikolai Nikolaevich T1 - On completeness of root functions of Sturm-Liouville problems with discontinuous boundary operators N2 - We consider a Sturm-Liouville boundary value problem in a bounded domain D of R^n. By this is meant that the differential equation is given by a second order elliptic operator of divergence form in D and the boundary conditions are of Robin type on bD. The first order term of the boundary operator is the oblique derivative whose coefficients bear discontinuities of the first kind. Applying the method of weak perturbation of compact self-adjoint operators and the method of rays of minimal growth, we prove the completeness of root functions related to the boundary value problem in Lebesgue and Sobolev spaces of various types. T3 - Preprints des Instituts für Mathematik der Universität Potsdam - 1 (2012) 11 KW - Sturm-Liouville problems KW - discontinuous Robin condition KW - root functions KW - Lipschitz domains Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-57759 SN - 2193-6943 ER -