Accurate weather observations are the keystone of many quantitative applications, such as precipitation monitoring and nowcasting, hydrological modelling and forecasting, climate studies, as well as the understanding of precipitation-driven natural hazards (e.g. floods, landslides, debris flows). Weather radars have been an increasingly popular tool since the 1940s for providing precipitation data at high spatial and temporal resolution at the mesoscale, bridging the gap between synoptic and point-scale observations. Yet, many institutions still struggle to tap the potential of their large archives of reflectivity data, as there is still much to understand about the factors that contribute to measurement errors, one of which is calibration. Calibration represents a substantial source of uncertainty in quantitative precipitation estimation (QPE). A miscalibration of a few dBZ can easily deteriorate the accuracy of precipitation estimates by an order of magnitude. Instances where rain cells carrying torrential rain are misidentified by the radar as moderate rain could mean the difference between a timely warning and a devastating flood.
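To illustrate the order of magnitude involved, the following sketch (not taken from the thesis) propagates a reflectivity bias of a few dB through a standard Z–R relation; the Marshall–Palmer coefficients (a = 200, b = 1.6) are assumed purely for illustration.

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Convert reflectivity (dBZ) to rain rate (mm/h) via R = (Z/a)**(1/b)."""
    z_linear = 10.0 ** (dbz / 10.0)          # dBZ -> linear Z (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)

true_dbz = 45.0                               # heavy rain
for bias in (0.0, -3.0, -6.0):                # a few dB of miscalibration
    r = rain_rate(true_dbz + bias)
    print(f"bias {bias:+.0f} dB -> {r:5.1f} mm/h "
          f"({100 * r / rain_rate(true_dbz):.0f}% of the true rate)")
```

With these assumed coefficients, a bias of -6 dB roughly halves the estimated rain rate, which is consistent with the sensitivity described above.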
Since 2012, the Philippine Atmospheric, Geophysical, and Astronomical Services Administration (PAGASA) has been expanding the country's ground radar network. We had a first look into the dataset from one of the longest-running radars (the Subic radar) after devastating week-long torrential rains and thunderstorms in August 2012, caused by the annual southwest monsoon and enhanced by the north-passing Typhoon Haikui. The analysis of the rainfall spatial distribution revealed the added value of radar-based QPE in comparison to interpolated rain gauge observations. However, when compared with local gauge measurements, severe miscalibration of the Subic radar was found. As a consequence, the radar-based QPE would have underestimated the rainfall amount by up to 60% had it not been adjusted by rain gauge observations. Gauge adjustment, however, is not only affected by other uncertainties, but is also not feasible in regions of the country with very sparse rain gauge coverage.
Relative calibration techniques, i.e. the assessment of bias from the reflectivities of two radars, have been steadily gaining popularity. Previous studies have demonstrated that reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and its successor, the Global Precipitation Measurement (GPM) mission, are accurate enough to serve as a calibration reference for ground radars at low to mid latitudes (±35° for TRMM; ±65° for GPM). Comparing spaceborne radars (SR) and ground radars (GR) requires careful consideration of differences in measurement geometry and instrument specifications, as well as of temporal coincidence. For this purpose, we apply a 3-D volume matching method, developed by Schwaller and Morris (2011) and extended by Warren et al. (2018), to five years' worth of observations from the Subic radar. In this method, only the volumetric intersections of the SR and GR beams are considered.
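As a rough illustration of the matching idea, the sketch below averages the GR bins that intersect a single SR footprint in linear reflectivity units before taking the SR–GR difference; the geometry is drastically simplified and does not reproduce the full Schwaller and Morris (2011) procedure.

```python
import numpy as np

def to_linear(dbz):
    return 10.0 ** (np.asarray(dbz) / 10.0)

def to_dbz(z):
    return 10.0 * np.log10(z)

def matched_gr_reflectivity(gr_dbz_bins):
    """Average the GR bins intersecting one SR footprint in linear units."""
    return to_dbz(np.mean(to_linear(gr_dbz_bins)))

# Hypothetical matched volume: one SR bin vs. the GR bins inside its footprint
sr_dbz = 33.0
gr_bins = [30.5, 31.2, 32.8, 29.9]
diff = sr_dbz - matched_gr_reflectivity(gr_bins)   # positive -> GR reads low
print(f"SR-GR difference for this volume: {diff:.2f} dB")
```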
Calibration bias affects reflectivity observations homogeneously across the entire radar domain. Other sources of systematic measurement error, however, are highly heterogeneous in space and can either enhance or offset the bias introduced by miscalibration. In order to account for such heterogeneous errors, and thus isolate the calibration bias, we assign a quality index to each matching SR–GR volume and compute the GR calibration bias as a quality-weighted average of the reflectivity differences in any sample of matching SR–GR volumes. We exemplify the idea of quality-weighted averaging by using the beam blockage fraction (BBF) as a quality variable. Quality-weighted averaging increases the consistency of SR and GR observations by decreasing the standard deviation of the SR–GR differences, and thus increases the precision of the bias estimates.
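The following minimal sketch illustrates the quality-weighted averaging, assuming the quality of each matched volume is expressed as q = 1 − BBF (a simplification; the thesis may use a different mapping) and using invented sample values:

```python
import numpy as np

# Hypothetical matched volumes: SR-GR reflectivity differences (dB) and the
# beam blockage fraction (BBF) of the corresponding GR bins.
diff_db = np.array([1.8, 2.3, 6.5, 2.0, 7.1])
bbf     = np.array([0.0, 0.1, 0.8, 0.0, 0.9])   # blocked bins inflate the difference

quality = 1.0 - bbf                              # assumed quality mapping
plain_bias    = diff_db.mean()
weighted_bias = np.average(diff_db, weights=quality)

print(f"unweighted bias estimate:  {plain_bias:.2f} dB")
print(f"quality-weighted estimate: {weighted_bias:.2f} dB")
```

Down-weighting heavily blocked volumes pulls the estimate toward the differences observed in unblocked volumes, which is the intended isolation of the calibration offset.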
To extend this framework further, the SR–GR quality-weighted bias estimation is applied to the neighboring Tagaytay radar, this time focusing on path-integrated attenuation (PIA) as the source of uncertainty. Tagaytay is a C-band radar operating at a shorter wavelength and is therefore more affected by attenuation. Applying the same method used for the Subic radar, a time series of calibration bias is also established for the Tagaytay radar.
The Tagaytay radar sits at a higher altitude than the Subic radar and is surrounded by gentler terrain, so beam blockage is negligible, especially in the overlapping region. Conversely, the Subic radar is strongly affected by beam blockage in the overlapping region, but being an S-band radar, attenuation is considered negligible. These coincidentally independent uncertainty contributions of the two radars in the region of overlap provide an ideal environment to experiment with different scenarios of quality filtering when comparing reflectivities from the two ground radars. The standard deviation of the GR–GR differences already decreases if we consider either BBF or PIA to compute the quality index and thus the weights. Combining them multiplicatively, however, results in the largest decrease in standard deviation, suggesting that taking both factors into account increases the consistency between the matched samples.
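A possible multiplicative combination of the two quality factors could look as follows; the mapping from PIA to a quality value is an assumption made here for illustration only:

```python
import numpy as np

def quality_from_bbf(bbf):
    # Quality decreases linearly with the blocked fraction of the beam
    return 1.0 - bbf

def quality_from_pia(pia_db, scale=10.0):
    # Assumed mapping: quality decays with path-integrated attenuation (dB)
    return np.clip(1.0 - pia_db / scale, 0.0, 1.0)

bbf    = np.array([0.0, 0.2, 0.7])
pia_db = np.array([0.5, 3.0, 6.0])
q_combined = quality_from_bbf(bbf) * quality_from_pia(pia_db)
print(q_combined)   # volumes affected by either factor receive a low weight
```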
The overlap between the two radars and the instances of the SR passing over both radars at the same time allow for a verification of the SR–GR quality-weighted bias estimation method. In this regard, the consistency between the two ground radars is analyzed before and after the bias correction is applied. For cases in which all three radars coincide during a significant rainfall event, correcting the GR reflectivities with calibration bias estimates from SR overpasses dramatically improves the consistency between the two ground radars, which had shown incoherent observations before correction. We also show that for cases where adequate SR coverage is unavailable, the calibration biases can be interpolated in time using a moving average and used, to some extent, to correct the GR observations at any point in time. By using the interpolated biases to correct GR observations, we demonstrate that the bias correction reduces the absolute value of the mean difference in most cases and therefore improves the consistency between the two ground radars.
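A minimal sketch of interpolating sparse, overpass-based bias estimates to arbitrary points in time with a moving average; the dates, values, and window length are hypothetical:

```python
import pandas as pd

# Hypothetical calibration bias estimates, one per SR overpass
bias = pd.Series(
    [-3.2, -2.8, -3.5, -1.9, -2.2],
    index=pd.to_datetime(
        ["2014-01-05", "2014-02-11", "2014-03-02", "2014-04-20", "2014-05-15"]
    ),
)

# Resample to a daily series, fill the gaps, then smooth with a centred moving average
daily = bias.resample("D").mean()
smoothed = daily.interpolate("time").rolling(window=30, center=True, min_periods=1).mean()

print(smoothed.loc["2014-03-15"])   # bias estimate for a day without an overpass
```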
This thesis demonstrates that, in general, taking into account systematic sources of uncertainty that are heterogeneous in space (e.g. BBF) and time (e.g. PIA) allows for a more consistent estimation of the calibration bias, a homogeneous quantity. The bias still exhibits an unexpected variability in time, which hints at other sources of error that remain unexplored. Nevertheless, the increase in consistency between SR and GR, as well as between the two ground radars, suggests that considering BBF and PIA in a weighted-averaging approach is a step in the right direction.
Despite the ample room for improvement, the approach that combines volume matching between radars (either SR–GR or GR–GR) with quality-weighted comparison is readily available for application or further scrutiny. As a step towards reproducibility and transparency in atmospheric science, the 3-D matching procedure and the analysis workflows, as well as sample data, are made available in public repositories. Open-source software, namely Python and wradlib, is used for all radar data processing in this thesis. This open-science approach provides both research institutions and weather services with a valuable tool that can be applied to radar calibration, from monitoring to a posteriori correction of archived data.
The Rangsdorfer See (A = 2.44 km², z(max) = 6 m, z(mean) = 1.93 m) in the Teltow-Fläming district is one of many water bodies in Brandenburg that currently do not achieve the good status required by the EU Water Framework Directive. Phosphorus is well known to be the most important production-limiting nutrient in many water bodies and is therefore the most promising control variable for a successful lake restoration.
The aim of this work was to assess the water quality of the Rangsdorfer See with regard to its trophic state, to identify the phosphorus input pathways that cause the highest loads, and to find restoration measures that enable a long-term improvement of the lake's condition. In a scenario analysis, the modified one-box model was applied to estimate the effectiveness of external and internal restoration measures. From the study, the following conclusions can be drawn:
Owing to its morphometry, the Rangsdorfer See is a naturally nutrient-rich water body and was so even before anthropogenic influences acted on it. However, long-term nutrient inputs of various origins (wastewater discharges, intensive fish farming, sewage irrigation fields) led to excessive productivity. Many pollution sources have been eliminated, yet a relevant nutrient export from the catchment still takes place. Using phosphorus balance models and lake-type-specific critical in-lake phosphorus concentrations, it becomes apparent that the current external phosphorus load exceeds the critical phosphorus input presumed necessary to achieve good ecological status. The largest share of the load is transported into the Rangsdorfer See via the natural main tributary, so remediation measures in its catchment represent an effective means. A technical solution for nutrient reduction in the tributary (an elimination plant) can be used in support, but would then have to be operated permanently as long as the phosphorus concentration in the tributary remains high. The one-box model proved to be a helpful instrument for the preselection of suitable restoration measures.
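The abstract does not specify the "modified one-box model"; the sketch below assumes a classical Vollenweider/OECD-type steady-state phosphorus balance simply to illustrate how external load scenarios can be screened (the residence time and the load values are placeholders, not data from the study):

```python
# Steady-state in-lake total phosphorus after a Vollenweider/OECD-type one-box model:
#   TP_lake = L / (q_s * (1 + sqrt(tau)))
# L   : areal P load (mg P m^-2 a^-1)
# q_s : hydraulic areal load z_mean / tau (m a^-1)
# tau : water residence time (a)
# TP  : mg m^-3, i.e. µg/L
from math import sqrt

def tp_lake(load_mg_m2_a, z_mean_m, tau_a):
    q_s = z_mean_m / tau_a
    return load_mg_m2_a / (q_s * (1.0 + sqrt(tau_a)))

z_mean, tau = 1.93, 0.5            # mean depth from the abstract; residence time is a placeholder
for load in (300.0, 600.0, 1200.0):  # hypothetical external load scenarios
    print(f"L = {load:6.0f} mg m^-2 a^-1  ->  TP ≈ {tp_lake(load, z_mean, tau):4.0f} µg/L")
```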
The quantification of plant biomass with efficient measurement methods is of great importance to various fields of science. The present work aims to infer, from single-tree estimates of the above-ground biomass of an apple and a cherry orchard at the Marquardt research site (Potsdam), the amount of hydrogen contained in that biomass. To this end, the volume of 13 cherry and 11 apple trees was determined by dividing each tree into segments, measuring the segments individually and assigning them to diameter classes. In addition, the density of the twigs and the mean foliage mass were determined. To calculate the biomass, a literature value of the wood density of the respective tree species was also used. The distribution of the woody biomass across the individual diameter classes was examined, and easily measurable tree parameters as well as data from a terrestrial laser scanner were used as predictor variables in a regression analysis. The experimentally determined density values increased with increasing twig diameter; they deviated slightly from the literature value for cherry wood and more strongly for apple wood. The foliage mass surveys were carried out independently of the measured trees and the results showed a large variance, so that no relationship between woody and foliage biomass could be established and only average values could be derived. The share of the individual diameter classes in the total mass proved to be highly variable, so that estimating the biomass from the weight of a few thick tree segments is not suitable. A reliable and efficient estimation of the above-ground woody biomass can, however, be achieved by applying the models developed here. For the present population of trees of the same age and similar size, a linear regression yielded the best results. While the variables based on laser data were barely correlated with the woody biomass, linear models with the stem diameter d or d² as predictor showed high significance (p-value < 0.001) and a very good fit (R² > 0.8) for both tree species.
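The following minimal sketch illustrates the kind of linear model described above, regressing woody biomass on the squared stem diameter d²; the numbers are invented placeholders, not measurements from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical single-tree data: stem diameter (cm) and woody dry biomass (kg)
d       = np.array([4.1, 5.0, 5.8, 6.4, 7.2, 8.0, 8.9, 9.5])
biomass = np.array([1.9, 2.8, 3.9, 4.8, 6.1, 7.6, 9.4, 10.8])

res = stats.linregress(d**2, biomass)          # biomass ~ a + b * d^2
print(f"slope={res.slope:.3f} kg/cm^2, intercept={res.intercept:.3f} kg, "
      f"R^2={res.rvalue**2:.3f}, p={res.pvalue:.1e}")
```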
Casualties and damage from urban pluvial flooding are increasing. Triggered by short, localized, and intense rainfall events, urban pluvial floods can occur anywhere, even in areas without a history of flooding, and they act on relatively small temporal and spatial scales. Although their cumulative losses are comparable to those of fluvial and coastal floods, most flood risk management and mitigation strategies focus on fluvial and coastal flooding. Physically based numerical hydrodynamic models are considered the best tool to represent the complex nature of urban pluvial floods; however, they are computationally expensive and time-consuming, which makes large-scale analysis and operational forecasting prohibitive. It is therefore crucial to evaluate and benchmark the performance of alternative methods.
The findings of this cumulative thesis are presented in three research articles. The first study evaluates two topography-based methods for mapping urban pluvial flooding, fill–spill–merge (FSM) and the topographic wetness index (TWI), by comparing them against a sophisticated hydrodynamic model. The FSM method identifies flood-prone areas within topographic depressions, while the TWI method employs maximum likelihood estimation to calibrate a TWI threshold (τ) based on inundation maps from the 2D hydrodynamic model. The results indicate that the FSM method outperforms the TWI method. The study then highlights the advantages and limitations of both methods.
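One plausible reading of the TWI calibration step, sketched below, selects the threshold τ that maximises a Bernoulli log-likelihood of the hydrodynamic inundation mask under a two-group model (cells above vs. below the threshold); this is an illustrative interpretation, not necessarily the exact procedure used in the study:

```python
import numpy as np

def twi_threshold_mle(twi, flooded):
    """Pick the TWI threshold that maximises the Bernoulli log-likelihood of a
    two-group model: cells with TWI >= tau share one flood probability,
    cells below share another."""
    twi, flooded = np.ravel(twi), np.ravel(flooded).astype(float)
    best_tau, best_ll = None, -np.inf
    for tau in np.unique(twi):
        wet = twi >= tau
        ll = 0.0
        for group in (wet, ~wet):
            if group.sum() == 0:
                continue
            p = np.clip(flooded[group].mean(), 1e-6, 1 - 1e-6)
            ll += np.sum(flooded[group] * np.log(p) + (1 - flooded[group]) * np.log(1 - p))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

# Hypothetical rasters: TWI values and the hydrodynamic model's inundation mask
rng = np.random.default_rng(0)
twi = rng.normal(8, 2, size=1000)
flooded = (twi + rng.normal(0, 1, size=1000)) > 10      # synthetic "truth"
print("calibrated threshold:", round(float(twi_threshold_mle(twi, flooded)), 2))
```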
Data-driven models provide a promising alternative to computationally expensive hydrodynamic models. However, the literature lacks benchmarking studies that evaluate the different models' performance, advantages, and limitations. Model transferability in space is a crucial problem. Most studies focus on river flooding, likely due to the relative availability of flow and rain gauge records for training and validation. Furthermore, they often treat these models as black boxes. The second study uses a flood inventory for the city of Berlin and 11 predictive features that potentially indicate an increased pluvial flood hazard to map urban pluvial flood susceptibility using a convolutional neural network (CNN), an artificial neural network (ANN), and the benchmark machine learning models random forest (RF) and support vector machine (SVM). I investigate the influence of spatial resolution on the implemented models, the models' transferability in space, and the importance of the predictive features. The results show that all models perform well and that the RF models are superior to the other models both within and outside the training domain. The models developed at fine spatial resolution (2 and 5 m) better identify flood-prone areas. Finally, the results indicate that aspect is the most important predictive feature for the CNN models, while altitude is the most important for the other models.
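A minimal sketch of a random-forest susceptibility model with feature importances is shown below; the feature names are placeholders echoing typical predictors rather than the eleven features actually used, and the labels are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
features = ["altitude", "aspect", "slope", "curvature", "dist_to_drainage"]
X = rng.normal(size=(500, len(features)))            # placeholder predictors per cell
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500)) < 0   # synthetic labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
susceptibility = rf.predict_proba(X)[:, 1]            # flood susceptibility score per cell
for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>16s}: {imp:.2f}")
```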
While flood susceptibility maps identify flood-prone areas, they do not provide flood variables such as velocity and depth, which are necessary for effective flood risk management. To address this, the third study investigates the transferability of data-driven models for predicting urban pluvial floodwater depth and the models' ability to improve their predictions using transfer learning techniques. It compares the performance of RF (the best-performing model in the previous study) and CNN models using 12 predictive features and output from a hydrodynamic model. The findings of the third study suggest that while CNN models tend to generalise and smooth the target function over the training dataset, RF models suffer from overfitting. Hence, RF models are superior for predictions inside the training domains but fail outside them, while CNN models limit the relative loss in performance outside the training domains. Finally, the CNN models benefit more from transfer learning techniques than the RF models, boosting their performance outside the training domains.
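The sketch below shows one common transfer learning pattern in PyTorch (freezing a trained convolutional feature extractor and fine-tuning the regression head on target-domain samples); the architecture and data are placeholders rather than the CNN used in the thesis, and only the number of input channels echoes the 12 predictive features mentioned above:

```python
import torch
import torch.nn as nn

# Hypothetical CNN that maps patches of predictive features to water depth
model = nn.Sequential(
    nn.Conv2d(12, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                     # regression head: water depth
)

# Transfer learning: keep the feature extractor (trained on the source domain)
# frozen and fine-tune only the final layer on target-domain samples.
for param in model[:6].parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(model[6].parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x_target = torch.randn(8, 12, 16, 16)     # placeholder target-domain patches
y_target = torch.rand(8, 1)               # placeholder water depths (m)
for _ in range(10):                        # short fine-tuning loop
    optimizer.zero_grad()
    loss = loss_fn(model(x_target), y_target)
    loss.backward()
    optimizer.step()
```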
In conclusion, this thesis has evaluated both topography-based methods and data-driven models for mapping urban pluvial flooding. Further studies remain crucial, however, to develop methods that fully overcome the limitations of 2D hydrodynamic models.