This study identified key somatic and demographic characteristics that benefit all swimmers and, at the same time, further characteristics that benefit only specific swimming strokes. Three hundred sixty-three competitive-level swimmers (male [n = 202]; female [n = 161]) participated in the study. We adopted a multiplicative, allometric regression model to identify the key characteristics associated with 100 m swimming speeds (controlling for age). The model was refined using backward elimination. Characteristics that benefited some but not all strokes were identified by introducing stroke-by-predictor variable interactions. The regression analysis revealed 7 "common" characteristics, suggesting that all swimmers benefit from having less body fat, broad shoulders and hips, a greater arm span (but shorter lower arms), and greater forearm girths with smaller relaxed arm girths. The 4 stroke-specific characteristics reveal that backstroke swimmers benefit from longer backs, a finding that can be likened to boats with longer hulls travelling faster through the water. Other stroke-by-predictor variable interactions (taken together) identified that butterfly swimmers are characterized by greater muscularity in the lower legs. These results highlight the importance of considering the somatic and demographic characteristics of young swimmers for talent identification purposes (i.e., to ensure that swimmers realize their most appropriate strokes).
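The multiplicative allometric model becomes an ordinary linear regression after log-transformation, which is how the exponents can be estimated. A minimal single-predictor sketch on synthetic data (the study itself used many predictors, age control, and backward elimination; all values below are hypothetical):

```python
import math
import random

random.seed(42)

# Hypothetical data (not the study's): swimming speed modeled as a
# multiplicative (allometric) function of one predictor, arm span,
# with multiplicative log-normal noise.
a_true, b_true = 0.05, 0.8
arm_span = [random.uniform(150.0, 200.0) for _ in range(300)]
speed = [a_true * x ** b_true * math.exp(random.gauss(0.0, 0.02)) for x in arm_span]

# Taking logs linearises the model: log(speed) = log(a) + b * log(arm_span),
# so the allometric exponent b is the slope of an ordinary least-squares fit.
lx = [math.log(x) for x in arm_span]
ly = [math.log(y) for y in speed]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
b_hat = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)

print(b_hat)  # close to the true exponent 0.8
```

In the multi-predictor case, each somatic variable contributes one such exponent, and backward elimination drops the predictors whose exponents are not significantly different from zero.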
Proposing relevant perturbations to biological signaling networks is central to many problems in biology and medicine because it allows for enabling or disabling certain biological outcomes. In contrast to quantitative methods that permit fine-grained (kinetic) analysis, qualitative approaches allow for addressing large-scale networks. This is accomplished by more abstract representations such as logical networks. We elaborate upon such a qualitative approach, aiming at the computation of minimal interventions in logical signaling networks, relying on Kleene's three-valued logic and fixpoint semantics. We address this problem within answer set programming and show that our approach greatly outperforms previous work using dedicated algorithms.
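To make the setting concrete, here is a minimal sketch of Kleene three-valued fixpoint evaluation on a toy logical network, with an intervention modeled as clamping one node. The network, node names, and rules are invented for illustration; the paper's ASP encoding is not reproduced here:

```python
# Kleene three-valued logic: values are True, False, or None (unknown).
def k_and(a, b):
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return None

def k_or(a, b):
    if a is True or b is True:
        return True
    if a is False and b is False:
        return False
    return None

def k_not(a):
    return None if a is None else (not a)

# Toy signaling network (hypothetical, not from the paper):
#   growth <- signal AND NOT inhibitor
#   output <- growth OR bypass
def fixpoint(inputs):
    state = {"growth": None, "output": None, **inputs}
    while True:
        new = dict(state)
        new["growth"] = k_and(state["signal"], k_not(state["inhibitor"]))
        new["output"] = k_or(new["growth"], state["bypass"])
        if new == state:  # iteration reached a fixpoint
            return state
        state = new

# Without intervention, a present signal switches the output on:
print(fixpoint({"signal": True, "inhibitor": False, "bypass": False})["output"])  # True
# Intervention: clamping 'inhibitor' to True disables the outcome even
# when the input signal is unknown:
print(fixpoint({"signal": None, "inhibitor": True, "bypass": False})["output"])  # False
```

A minimal intervention is then the smallest set of such clampings that forces the desired outcome; the paper encodes this search declaratively in answer set programming rather than enumerating candidates as above.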
The temporal dynamics of climate processes are spread across different timescales and, as such, the study of these processes at only one selected timescale might not reveal the complete mechanisms and interactions within and between the (sub-)processes. To capture the non-linear interactions between climatic events, the method of event synchronization has recently attracted increasing attention. The main drawback of the present estimation of event synchronization is its restriction to analysing the time series at one reference timescale only. The study of event synchronization at multiple scales would be of great interest for comprehending the dynamics of the investigated climate processes. In this paper, the wavelet-based multi-scale event synchronization (MSES) method is proposed by combining the wavelet transform and event synchronization. Wavelets are used extensively to comprehend multi-scale processes and the dynamics of processes across various timescales. The proposed method allows the study of spatio-temporal patterns across different timescales. The method is tested on synthetic and real-world time series in order to check its replicability and applicability. The results indicate that MSES is able to capture relationships that exist between processes at different timescales.
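The event synchronization measure that MSES builds on counts quasi-simultaneous events in two event series. A simplified sketch with a fixed delay threshold tau (the full method after Quiroga et al. derives a local tau from neighbouring inter-event intervals, and MSES applies the measure to event series extracted at each wavelet scale):

```python
import math

# Simplified event synchronization: count event pairs in two series that
# occur within a delay threshold tau. Fixed-tau variant for illustration only.
def event_sync(events_x, events_y, tau):
    def count(a, b):
        c = 0.0
        for ti in a:
            for tj in b:
                if 0 < ti - tj <= tau:
                    c += 1.0   # event in b precedes event in a within tau
                elif ti == tj:
                    c += 0.5   # simultaneous events count half to each side
        return c
    c_xy = count(events_x, events_y)
    c_yx = count(events_y, events_x)
    # Normalised strength: 1 for fully synchronized series, 0 for unrelated ones
    return (c_xy + c_yx) / math.sqrt(len(events_x) * len(events_y))

print(event_sync([3, 10, 25, 40], [3, 10, 25, 40], tau=2))       # -> 1.0
print(event_sync([3, 10, 25, 40], [100, 200, 300, 400], tau=2))  # -> 0.0
```

Computing this quantity per wavelet scale, rather than once on the raw series, is what lets MSES resolve at which timescales two processes are coupled.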
New porous materials based on covalently connected monomers are presented. The key step of the synthesis is an acetalisation reaction. In previous years we used acetalisation reactions extensively to build up various molecular rods. Building on this approach, we investigated porous polymeric materials. Here we present the results of these studies on the synthesis of 1D polyacetals and porous 3D polyacetals. Scrambling experiments with 1D acetals proved that exchange reactions occur between different building blocks (evidenced by MALDI-TOF mass spectrometry). Based on these results we synthesized porous 3D polyacetals under the same mild conditions.
We study the thermal Markovian diffusion of tracer particles in a 2D medium with spatially varying diffusivity D(r), mimicking recently measured, heterogeneous maps of the apparent diffusion coefficient in biological cells. For this heterogeneous diffusion process (HDP) we analyse the mean squared displacement (MSD) of the tracer particles, the time averaged MSD, the spatial probability density function, and the first passage time dynamics from the cell boundary to the nucleus. Moreover we examine the non-ergodic properties of this process which are important for the correct physical interpretation of time averages of observables obtained from single particle tracking experiments. From extensive computer simulations of the 2D stochastic Langevin equation we present an in-depth study of this HDP. In particular, we find that the MSDs along the radial and azimuthal directions in a circular domain obey anomalous and Brownian scaling, respectively. We demonstrate that the time averaged MSD stays linear as a function of the lag time and the system thus reveals a weak ergodicity breaking. Our results will enable one to rationalise the diffusive motion of larger tracer particles such as viruses or submicron beads in biological cells.
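The core of such a study is an Euler-type integration of the overdamped Langevin equation with position-dependent diffusivity. A minimal sketch with a hypothetical power-law D(r) (the paper's measured diffusivity maps and its choice of stochastic interpretation are not reproduced; the Itô discretisation below is one of several possibilities, and for HDPs the interpretation matters):

```python
import math
import random

random.seed(7)

def D(x, y, d0=0.1, offset=0.01):
    # Hypothetical diffusivity growing with distance from the origin
    return d0 * (math.hypot(x, y) + offset)

def simulate(steps, dt=0.01):
    # Ito-Euler scheme for dx = sqrt(2 D(r)) dW in two dimensions
    x = y = 1.0
    for _ in range(steps):
        s = math.sqrt(2.0 * D(x, y) * dt)
        x += s * random.gauss(0.0, 1.0)
        y += s * random.gauss(0.0, 1.0)
    return x, y

# Ensemble-averaged mean squared displacement over independent trajectories:
n = 500
msd = sum((x - 1.0) ** 2 + (y - 1.0) ** 2
          for x, y in (simulate(200) for _ in range(n))) / n
print(msd)
```

Averaging the squared displacement over a sliding time window within a single long trajectory, instead of over the ensemble as above, gives the time-averaged MSD whose linear lag-time dependence signals the weak ergodicity breaking discussed in the abstract.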
Flood loss modeling is a central component of flood risk analysis. Conventionally, this involves univariable and deterministic stage-damage functions. Recent advancements in the field promote the use of multivariable and probabilistic loss models, which consider variables beyond inundation depth and account for prediction uncertainty. Although companies contribute significantly to total loss figures, novel modeling approaches for companies are lacking. Scarce data and the heterogeneity among companies impede the development of company flood loss models. We present three multivariable flood loss models for companies from the manufacturing, commercial, financial, and service sector that intrinsically quantify prediction uncertainty. Based on object-level loss data (n = 1,306), we comparatively evaluate the predictive capacity of Bayesian networks, Bayesian regression, and random forest in relation to deterministic and probabilistic stage-damage functions, serving as benchmarks. The company loss data stem from four postevent surveys in Germany between 2002 and 2013 and include information on flood intensity, company characteristics, emergency response, private precaution, and resulting loss to building, equipment, and goods and stock. We find that the multivariable probabilistic models successfully identify and reproduce essential relationships of flood damage processes in the data. The assessment of model skill focuses on the precision of the probabilistic predictions and reveals that the candidate models outperform the stage-damage functions, while differences among the proposed models are negligible. Although the combination of multivariable and probabilistic loss estimation improves predictive accuracy over the entire data set, wide predictive distributions stress the necessity for the quantification of uncertainty.
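The contrast between a deterministic stage-damage function and a probabilistic loss model can be sketched in a few lines: the former returns one point estimate per water depth, the latter a predictive distribution. The data, function, and quantile scheme below are invented for illustration and stand in for the paper's Bayesian and random-forest candidates:

```python
import random
import statistics

random.seed(3)

# Hypothetical loss data: relative loss grows with water depth (m) but with
# large scatter, as is typical for heterogeneous company loss data.
depths = [random.uniform(0.0, 3.0) for _ in range(1000)]
losses = [min(1.0, 0.2 * d + random.uniform(0.0, 0.4)) for d in depths]

def stage_damage(d):
    # Deterministic stage-damage function: a single point estimate per depth
    return 0.2 * d + 0.2

def probabilistic(d, width=0.25):
    # Probabilistic alternative: empirical predictive quantiles computed
    # from observations at similar depths
    sample = [l for x, l in zip(depths, losses) if abs(x - d) < width]
    qs = statistics.quantiles(sample, n=10)
    return qs[0], statistics.median(sample), qs[-1]  # 10%, 50%, 90%

lo, med, hi = probabilistic(1.5)
print(stage_damage(1.5), (lo, med, hi))
```

The width of the predictive band is exactly the quantified uncertainty the abstract refers to: a sharp band indicates a precise model, while a wide band warns that a point estimate alone would be misleading.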
RainNet v1.0
(2020)
In this study, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. Its design was inspired by the U-Net and SegNet families of deep learning models, which were originally designed for binary segmentation tasks. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km × 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In order to achieve a lead time of 1 h, a recursive approach was implemented by using RainNet predictions at 5 min lead times as model inputs for longer lead times. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the rainymotion library and had previously been shown to outperform DWD's operational nowcasting model for the same set of verification events.
RainNet significantly outperforms the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and the critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm h⁻¹. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm h⁻¹). The limited ability of RainNet to predict heavy rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below. Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at the 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance in terms of a binary segmentation task. Furthermore, we suggest additional input data that could help to better identify situations with imminent precipitation dynamics. The model code, pretrained weights, and training data are provided in open repositories as an input for such future studies.
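The recursive scheme and its smoothing artifact can be illustrated with a toy one-dimensional example, where a 3-point moving average stands in for one 5 min prediction step (it is not RainNet, merely a smoothing operator with the same recursive structure):

```python
# Toy stand-in for one 5 min nowcasting step: a circular 3-point moving
# average. Feeding its own output back in reaches longer lead times
# (12 steps -> 60 min), and each pass adds smoothing, analogous to the
# numerical-diffusion-like artifact of recursive application.
def toy_model(field):
    n = len(field)
    return [(field[(i - 1) % n] + field[i] + field[(i + 1) % n]) / 3.0
            for i in range(n)]

def recursive_nowcast(field, steps):
    for _ in range(steps):
        field = toy_model(field)
    return field

rain = [0.0] * 20
rain[10] = 15.0  # one isolated heavy-rain cell (mm/h)

one_step = recursive_nowcast(rain, 1)       # 5 min lead time
twelve_steps = recursive_nowcast(rain, 12)  # 60 min lead time

# Total rain is conserved, but the peak intensity decays with every step:
print(max(one_step), max(twelve_steps))
```

This is the mechanism behind the loss of pronounced intense-precipitation features at longer lead times: the recursion preserves the bulk field while progressively flattening exactly the extremes that matter for early warning.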