Flood generation in mountainous headwater catchments is governed by rainfall intensities, by the spatial distribution of rainfall and by the state of the catchment prior to the rainfall, e.g. by the spatial pattern of soil moisture, groundwater conditions and possibly snow. The work presented here explores the limits and potentials of measuring soil moisture with different methods and at different scales, and their potential use for flood simulation. These measurements were obtained in 2007 and 2008 within a comprehensive multi-scale experiment in the Weisseritz headwater catchment in the Ore Mountains, Germany. The following technologies were applied jointly: the thermogravimetric method, frequency domain reflectometry (FDR) sensors, a spatial time domain reflectometry (STDR) cluster, ground-penetrating radar (GPR), airborne polarimetric synthetic aperture radar (polarimetric SAR) and advanced synthetic aperture radar (ASAR) aboard the Envisat satellite. We present exemplary soil moisture measurement results, with spatial scales ranging from the point scale, via the hillslope and field scale, to the catchment scale. Only the spatial TDR cluster was able to record continuous data; the other methods are limited to the dates of overflights (airplane and satellite) or of measurement campaigns on the ground. For possible use in flood simulation, the observation of soil moisture at multiple scales has to be combined with suitable hydrological modelling, here with the hydrological model WaSiM-ETH. Several simulation experiments were therefore conducted in order to test both the usability of the recorded soil moisture data and the suitability of a distributed hydrological model to make use of this information. The measurement results show that airborne- and satellite-based systems in particular provide information on the near-surface spatial distribution of soil moisture.
However, there are still a variety of limitations, such as the need for parallel ground measurements (Envisat ASAR), uncertainties in polarimetric decomposition techniques (polarimetric SAR), the very limited information that remote sensing methods provide about vegetated surfaces, and the lack of continuous measurements. The model experiments showed the importance of soil moisture as an initial condition for physically based flood modelling. However, the observed moisture data reflect the surface or near-surface soil moisture only. Hence, only saturated overland flow might be related to these data. Other flood generation processes influenced by catchment wetness in the subsurface, such as subsurface storm flow or quick groundwater drainage, cannot be assessed with these data. One has to acknowledge that, in spite of innovative measuring techniques at all spatial scales, soil moisture data for entire vegetated catchments are still not operationally available today. Therefore, observations of soil moisture should primarily be used to improve the quality of continuous, distributed hydrological catchment models that simulate the spatial distribution of moisture internally. Thus, when and where soil moisture data are available, they should be compared with their simulated equivalents in order to improve the parameter estimates and possibly the structure of the hydrological model.
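The suggested comparison of observed soil moisture fields with their simulated equivalents can be sketched in a few lines of Python. This is a minimal illustration, not code from the study; the function name and the choice of skill scores (bias, RMSE and a rank correlation of the spatial patterns) are our assumptions:

```python
import numpy as np

def pattern_skill(observed, simulated):
    """Compare an observed surface soil moisture field (e.g. from SAR)
    with the simulated field of a distributed model on the same grid.
    Returns bias, RMSE and the Spearman rank correlation of the spatial
    patterns; rank correlation is robust to the fact that remote
    sensing senses only the top few centimetres of the soil."""
    obs = np.asarray(observed, dtype=float).ravel()
    sim = np.asarray(simulated, dtype=float).ravel()
    valid = ~(np.isnan(obs) | np.isnan(sim))
    obs, sim = obs[valid], sim[valid]
    bias = float(np.mean(sim - obs))
    rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))
    # Spearman correlation = Pearson correlation of the ranks
    r_obs = np.argsort(np.argsort(obs))
    r_sim = np.argsort(np.argsort(sim))
    rho = float(np.corrcoef(r_obs, r_sim)[0, 1])
    return bias, rmse, rho
```

Such scores, computed on overflight dates only, would be the natural interface between the episodic remote sensing observations and a continuously running model.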
In a study from 2008, Larivière and colleagues showed, for the natural sciences and engineering, that the median age of cited references is increasing over time. This result was considered counterintuitive: with the advent of electronic search engines, online journal issues and open-access publications, one could have expected cited literature to become younger. That study motivated us to take a closer look at the changes in the age distribution of references cited in water resources journals since 1965. Not only could we confirm the findings of Larivière and colleagues, but we were also able to show that the aging is mainly happening in the oldest 10-25% of an average reference list. This is consistent with our analysis of top-cited papers in the field of water resources. Rankings based on total citations since 1965 consistently show the dominance of old literature, comprising textbooks and research papers in equal shares. For most top-cited old-timers, citations are still growing exponentially. There is strong evidence that most citations are attracted by publications that introduced methods which have since become part of the standard toolset of researchers and practitioners in the field of water resources. Although we think that this trend should not be overinterpreted as a sign of stagnancy, there might be cause for concern regarding how authors select their references. We question the increasing citation of textbook knowledge, as it holds the risk that reference lists become overcrowded and that the readability of papers deteriorates.
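The central quantities of such an analysis, the median reference age and the age of the oldest part of a reference list, are straightforward to compute. The sketch below is purely illustrative (the function names are ours, not the study's):

```python
import statistics

def median_reference_age(citing_year, reference_years):
    """Median age (in years) of the references cited by a paper
    published in `citing_year`; negative ages (in-press items) are
    clipped to zero."""
    ages = [max(0, citing_year - y) for y in reference_years]
    return statistics.median(ages)

def oldest_quartile_age(citing_year, reference_years):
    """Mean age of the oldest 25% of a reference list, the part of the
    age distribution where the abstract locates the aging trend."""
    ages = sorted(citing_year - y for y in reference_years)
    k = max(1, len(ages) // 4)          # at least one reference
    return sum(ages[-k:]) / k
```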
In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students reveal benefits of immersive 3D geovisualization, such as better orientation in the study area, higher interactivity with the data, improved discourse among students and enhanced motivation. This suggests that immersive 3D visualization can effectively be used in higher education and that 3D CAVE settings enhance interactive learning among students.
This paper investigates the transferability of calibrated HBV model parameters under stable and contrasting conditions in terms of flood seasonality and flood generating processes (FGPs) in five Norwegian catchments with mixed snowmelt/rainfall regimes. We apply a series of generalized (differential) split-sample tests using a 6-year moving window over (i) the entire runoff observation periods and (ii) two subsets of runoff observations distinguished by the seasonal occurrence of annual maximum floods during either spring or autumn. The results indicate a general loss in model performance of 5 to 17% on average when calibrated parameters are transferred to independent validation periods. However, there is no indication that contrasting flood seasonality exacerbates performance losses, which contradicts the assumption that parameter sets optimized for snowmelt-dominated floods (during spring) perform particularly poorly on validation periods with rainfall-dominated floods (during autumn), and vice versa.
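The reported performance loss can be made concrete with a short sketch. Assuming the usual Nash-Sutcliffe efficiency (NSE) as the goodness-of-fit measure (the abstract does not name the criterion), the relative loss when transferring a calibrated parameter set to a validation period is:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs. observed runoff:
    1 for a perfect fit, 0 for a simulation no better than the mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def performance_loss(nse_calibration, nse_validation):
    """Relative performance change (in %) when a calibrated parameter
    set is transferred to an independent validation period; negative
    values indicate a loss, as in the split-sample tests above."""
    return 100.0 * (nse_validation - nse_calibration) / abs(nse_calibration)
```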
The event runoff coefficient phi_ev is calculated from high-resolution discharge and precipitation data for several rain events with a cumulative precipitation P_cum ranging from less than 5 mm to more than 80 mm. Because of the high uncertainty of phi_ev associated with the hydrograph separation method, phi_ev is calculated with several methods, including graphical methods, digital filters and a tracer-based method. The results indicate that the hydrological response depends on the mean initial soil moisture theta_ini: during dry conditions phi_ev is consistently below 0.1, even for events with high and intense precipitation. Above a threshold of theta_ini = 34 vol.%, phi_ev can reach values up to 0.99, but there is high scatter. Some of the variability can be explained by a weak correlation of phi_ev with P_cum and rain intensity, but a considerable part remains unexplained. It is concluded that threshold-based methods can help prevent overestimation of the hydrological response during dry catchment conditions. The impact of soil moisture on the hydrological response during wet catchment conditions, however, is still insufficiently understood and cannot be generalized based on the present results.
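Of the hydrograph separation methods mentioned, the digital filter is the easiest to illustrate. The sketch below uses a single forward pass of the Lyne-Hollick filter (one common digital filter; the abstract does not specify which one was used) to separate quickflow and derive the event runoff coefficient phi_ev. All parameter values are illustrative:

```python
import numpy as np

def lyne_hollick_quickflow(q, alpha=0.925):
    """One forward pass of the Lyne-Hollick digital baseflow filter:
    separates the quickflow (event) component from a discharge series.
    alpha is the usual filter parameter (typically 0.9-0.95)."""
    q = np.asarray(q, dtype=float)
    qf = np.zeros_like(q)
    for i in range(1, len(q)):
        qf[i] = alpha * qf[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
        qf[i] = min(max(qf[i], 0.0), q[i])   # keep 0 <= quickflow <= Q
    return qf

def event_runoff_coefficient(q, p_cum_mm, dt_s, area_km2, alpha=0.925):
    """phi_ev = event runoff depth / cumulative event precipitation.
    q in m^3/s, dt_s is the time step in seconds, area in km^2."""
    qf = lyne_hollick_quickflow(q, alpha)
    runoff_mm = qf.sum() * dt_s / (area_km2 * 1e6) * 1000.0
    return runoff_mm / p_cum_mm
```

A tracer-based separation would replace the filter step but leave the phi_ev computation unchanged.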
In 2009, a group of prominent Earth scientists introduced the "planetary boundaries" (PB) framework: they suggested nine global control variables, and defined corresponding "thresholds which, if crossed, could generate unacceptable environmental change". The concept builds on systems theory, and views Earth as a complex adaptive system in which anthropogenic disturbances may trigger non-linear, abrupt, and irreversible changes at the global scale, and "push the Earth system outside the stable environmental state of the Holocene". While the idea has been remarkably successful in both science and policy circles, it has also raised fundamental concerns, as the majority of suggested processes and their corresponding planetary boundaries do not operate at the global scale, and thus apparently lack the potential to trigger abrupt planetary changes.
This paper picks up the debate with specific regard to the planetary boundary on "global freshwater use". While the bio-physical impacts of excessive water consumption are typically confined to the river basin scale, the PB proponents argue that water-induced environmental disasters could build up to planetary-scale feedbacks and system failures. So far, however, no evidence has been presented to corroborate that hypothesis. Furthermore, no coherent approach has been presented to what extent a planetary threshold value could reflect the risk of regional environmental disaster. To be sure, the PB framework was revised in 2015, extending the planetary freshwater boundary with a set of basin-level boundaries inferred from environmental water flow assumptions. Yet, no new evidence was presented, either with respect to the ability of those basin-level boundaries to reflect the risk of regional regime shifts or with respect to a potential mechanism linking river basins to the planetary scale.
So while the idea of a planetary boundary on freshwater use appears intriguing, the line of arguments presented so far remains speculative and implicatory. As long as Earth system science does not present compelling evidence, the exercise of assigning actual numbers to such a boundary is arbitrary, premature, and misleading. Taken as a basis for water-related policy and management decisions, though, the idea transforms from misleading to dangerous, as it implies that we can globally offset water-related environmental impacts. A planetary boundary on freshwater use should thus be disapproved and actively refuted by the hydrological and water resources community.
The flash flood in Braunsbach in the north-eastern part of Baden-Württemberg, Germany, was a particularly severe and localized event during the floods in southern Germany in late May and early June 2016. This article presents a detailed analysis of the hydro-meteorological forcing and the hydrological consequences of this event. A specific approach, "forensic hydrological analysis", was followed in order to retrospectively include and combine a variety of data from different disciplines. Such an approach investigates the origins, mechanisms and course of such natural events, if possible in "near real time", in order to follow the most recent traces of the event. The results show that it was a very rare rainfall event with extreme intensities which, in combination with catchment properties, led to extreme runoff and severe geomorphological hazards, i.e. large debris flows, which together resulted in immense damage in the small rural town of Braunsbach. It was a record-breaking event that exceeded the existing design guidelines for extreme flood discharge in this region by a factor of about 10. For such a rare, perhaps unique, event, it is not feasible to assign a precise probability; however, a return period clearly above 100 years can be assigned to all event components: rainfall, peak discharge and sediment transport. Due to the complex and interacting processes, no single cause of the flood or of the very high damage can be identified; only the interplay and cascading character of these processes led to such an event. The roles of different human activities in the origin and/or intensification of such an extreme event are finally discussed.
This case study evaluates the suitability of radar-based quantitative precipitation estimates (QPEs) for the simulation of streamflow in the Marikina River Basin (MRB), the Philippines. Hourly radar-based QPEs were produced from reflectivity observed by an S-band radar located about 90 km from the MRB. Radar data processing and precipitation estimation were carried out using the open-source library wradlib. To assess the added value of the radar-based QPE, we used spatially interpolated rain gauge observations (a gauge-only (GO) product) as a benchmark. Rain gauge observations were also used to quantify rainfall estimation errors at the point scale. At the point scale, the radar-based QPE outperformed the GO product in 2012, while for 2013 the performance was similar. For both periods, estimation errors increased substantially from daily to hourly accumulation intervals. Despite this, both rainfall estimation methods allowed for a good representation of observed streamflow when used to force a hydrological simulation model of the MRB. Furthermore, the results of the hydrological simulation were consistent with the rainfall verification at the point scale: the radar-based QPE performed better than the GO product in 2012, and equivalently in 2013. Altogether, we could demonstrate that, in terms of streamflow simulation, the radar-based QPE can perform as well as, or even better than, the GO product, even for a basin such as the MRB, which has a comparatively dense rain gauge network. This suggests good prospects for using radar-based QPEs to simulate and forecast streamflow in other parts of the Philippines where rain gauge networks are not as dense.
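The core of a radar-based QPE workflow, converting reflectivity to rain rates and accumulating them in time, can be sketched in plain NumPy. The study used wradlib for this; the snippet below only illustrates the standard Z-R power-law step with textbook Marshall-Palmer coefficients, which are not necessarily those used in the study:

```python
import numpy as np

def dbz_to_rainrate(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h) via the
    power law Z = a * R**b; a=200, b=1.6 are the classic
    Marshall-Palmer coefficients (illustrative defaults only)."""
    z = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)   # dBZ -> linear Z
    return (z / a) ** (1.0 / b)

def accumulate(rainrates_mm_h, dt_min):
    """Accumulate instantaneous rain rates (mm/h) from scans spaced
    dt_min minutes apart into a rainfall depth (mm)."""
    return float(np.sum(rainrates_mm_h) * dt_min / 60.0)
```

In a full workflow, clutter removal, attenuation handling and gauge adjustment would precede and follow these two steps.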
We explore the potential of spaceborne radar (SR) observations from the Ku-band precipitation radars onboard the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) satellites as a reference to quantify the ground radar (GR) reflectivity bias. To this end, the 3-D volume-matching algorithm proposed by Schwaller and Morris (2011) is implemented and applied to 5 years (2012–2016) of observations. We further extend the procedure by a framework to take into account the data quality of each ground radar bin. Through these methods, we are able to assign a quality index to each matching SR–GR volume, and thus compute the GR calibration bias as a quality-weighted average of reflectivity differences in any sample of matching GR–SR volumes. We exemplify the idea of quality-weighted averaging by using the beam blockage fraction as the basis of a quality index. As a result, we can increase the consistency of SR and GR observations, and thus the precision of calibration bias estimates. The remaining scatter between GR and SR reflectivity as well as the variability of bias estimates between overpass events indicate, however, that other error sources are not yet fully addressed. Still, our study provides a framework to introduce any other quality variables that are considered relevant in a specific context. The code that implements our analysis is based on the wradlib open-source software library, and is, together with the data, publicly available to monitor radar calibration or to scrutinize long series of archived radar data back to December 1997, when TRMM became operational.
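The quality-weighted bias estimate described above reduces to a weighted mean of reflectivity differences over matching SR-GR volumes. A minimal sketch follows (the function name and inputs are illustrative; the actual implementation is part of the wradlib-based code published with the paper):

```python
import numpy as np

def gr_calibration_bias(z_gr, z_sr, quality):
    """Ground radar calibration bias as the quality-weighted average of
    reflectivity differences (dB) over matching SR-GR volumes.
    `quality` lies in [0, 1], e.g. 1 - beam blockage fraction, so that
    heavily blocked bins contribute little to the bias estimate."""
    z_gr = np.asarray(z_gr, dtype=float)
    z_sr = np.asarray(z_sr, dtype=float)
    w = np.asarray(quality, dtype=float)
    return float(np.sum(w * (z_gr - z_sr)) / np.sum(w))
```

Any other quality variable (e.g. distance-dependent beam broadening) could be folded into `quality` by multiplying the individual indices.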
High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
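The undersampling argument can be reproduced with a small Monte Carlo experiment: draw many small samples, estimate a high quantile with a plotting-position formula, and compare against the true quantile. The setup below (Weibull plotting positions, an exponential parent distribution) is our illustrative choice, not the study's:

```python
import numpy as np

def weibull_quantile(sample, p):
    """Empirical quantile via Weibull plotting positions p_i = i/(n+1).
    Probabilities beyond n/(n+1) are not representable, so np.interp
    simply returns the largest observation there, which is the source
    of the small-sample underestimation."""
    x = np.sort(np.asarray(sample, dtype=float))
    pp = np.arange(1, len(x) + 1) / (len(x) + 1)
    return float(np.interp(p, pp, x))

rng = np.random.default_rng(0)
n, p = 50, 0.999                  # sample size far below the 1/(1-p) needed
true_q = -np.log(1.0 - p)         # true 99.9% quantile of Exp(1)
estimates = [weibull_quantile(rng.exponential(size=n), p)
             for _ in range(1000)]
undershoot = float(np.mean(estimates))   # systematically below true_q
```

A parametric estimator (e.g. a GPD fitted with L-moments, as in the study) can extrapolate beyond the largest observation and therefore largely avoids this effect.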
Climate change is likely to impact the seasonality and generation processes of floods in the Nordic countries, which has direct implications for flood risk assessment, design flood estimation, and hydropower production management. Using a multi-model/multi-parameter approach to simulate daily discharge for a reference (1961–1990) and a future (2071–2099) period, we analysed the projected changes in flood seasonality and generation processes in six catchments with mixed snowmelt/rainfall regimes under the current climate in Norway. The multi-model/multi-parameter ensemble consists of (i) eight combinations of global and regional climate models, (ii) two methods for adjusting the climate model output to the catchment scale, and (iii) one conceptual hydrological model with 25 calibrated parameter sets. Results indicate that autumn/winter events become more frequent in all catchments considered, which leads to an intensification of the current autumn/winter flood regime for the coastal catchments, a reduction of the dominance of spring/summer flood regimes in a high-mountain catchment, and a possible systematic shift in the current flood regimes from spring/summer to autumn/winter in the two catchments located in northern and south-eastern Norway. The changes in flood regimes result from increasing event magnitudes or frequencies, or a combination of both, during autumn and winter. Changes towards more dominant autumn/winter events correspond to an increasing relevance of rainfall as a flood generating process (FGP), which is most pronounced in those catchments with the largest shifts in flood seasonality. Here, rainfall replaces snowmelt as the dominant FGP, primarily due to increasing temperature. We further analysed the contributions of the ensemble components to the overall uncertainty in the projected changes and found that the climate projections and the methods for downscaling or bias correction tend to be the largest contributors. The relative role of hydrological parameter uncertainty, however, is highest for those catchments showing the largest changes in flood seasonality, which confirms the lack of robustness in hydrological model parameterization for simulations under transient hydrometeorological conditions.
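Attributing ensemble spread to individual components can be sketched with a crude ANOVA-style variance decomposition. The abstract does not state which attribution method was used, so the following is only one plausible illustration:

```python
import numpy as np

def component_variance_share(changes, labels):
    """Share of the total variance in projected changes explained by
    one ensemble dimension (e.g. climate model vs. hydrological
    parameter set), computed as the between-group sum of squares of
    that dimension's group means divided by the total sum of squares."""
    changes = np.asarray(changes, dtype=float)
    labels = np.asarray(labels)
    grand = changes.mean()
    between = 0.0
    for g in np.unique(labels):
        sel = changes[labels == g]
        between += len(sel) * (sel.mean() - grand) ** 2
    total = np.sum((changes - grand) ** 2)
    return float(between / total)
```

Applying this once per ensemble dimension (climate model, adjustment method, parameter set) would rank the components by their contribution to the overall uncertainty.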
Hydrologic modelers often need to know which method of quantitative precipitation estimation (QPE) is best suited for a particular catchment. Traditionally, QPE methods are verified and benchmarked against independent rain gauge observations. However, the lack of spatial representativeness limits the value of such a procedure. Alternatively, one could drive a hydrological model with different QPE products and choose the one which best reproduces observed runoff. Unfortunately, the calibration of conceptual model parameters might conceal actual differences between the QPEs. To avoid such effects, we abandoned the idea of determining optimum parameter sets for each of the QPEs being compared. Instead, we carried out a large number of runoff simulations, confronting each QPE with a common set of random parameters. By evaluating the goodness-of-fit of all simulations, we obtain information on whether the quality of competing QPE methods differs significantly. This knowledge is inferred exactly at the scale of interest: the catchment scale. We use synthetic data to investigate the ability of this procedure to distinguish a truly superior QPE from an inferior one. We find that the procedure is prone to failure in the case of linear systems. However, we show evidence that in realistic (nonlinear) settings, the method can provide useful results even in the presence of moderate errors in model structure and streamflow observations. In a real-world case study of a small mountainous catchment, we demonstrate the ability of the verification procedure to reveal additional insights as compared to a conventional cross-validation approach.
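The verification idea, confronting each QPE with a common set of random parameter sets and comparing the resulting goodness-of-fit distributions, can be sketched as follows. The model interface, the uniform parameter prior and the NSE criterion are stand-ins chosen for illustration:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency as the goodness-of-fit measure."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def benchmark_qpes(qpe_inputs, model, obs, n_params=500, seed=0):
    """Confront each QPE product with the SAME set of random parameter
    sets and return the distribution of goodness-of-fit values per
    product. `model(params, forcing)` is a user-supplied rainfall-runoff
    model; the one-dimensional uniform prior below is a toy stand-in
    for the model's actual parameter priors. Comparing the resulting
    NSE distributions, rather than a single calibrated optimum, avoids
    calibration concealing real differences between the QPEs."""
    rng = np.random.default_rng(seed)
    params = rng.uniform(0.1, 0.9, size=n_params)   # toy prior
    return {name: np.array([nse(obs, model(p, forcing)) for p in params])
            for name, forcing in qpe_inputs.items()}
```

With a real model, one would then test whether the two NSE distributions differ significantly, e.g. with a rank-based test on the paired scores.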
Rainfall-induced attenuation is a major source of underestimation in radar-based precipitation estimation at C-band. Unconstrained gate-by-gate correction procedures are known to be inherently unstable and are thus not suited for unsupervised attenuation correction. In this study, we evaluate three different procedures to constrain gate-by-gate attenuation correction using reflectivity as the only input. These procedures are benchmarked against rainfall estimates from uncorrected radar data, using six years of radar observations from the single-polarized C-band radar in south-west Germany. The precipitation estimation error is obtained by comparing the radar-based estimates to rain gauge observations. All attenuation correction procedures benchmarked in this study lead to an effective improvement of precipitation estimation. The first method caps the correction if the implied increase in rain intensity exceeds a factor of two. The second method iteratively decreases the parameters of the attenuation correction for every radar beam until a stability criterion is met; it outperforms the first method and leads to a consistent distribution of path-integrated attenuation along the radar beam. As a third method, we propose a slight modification of Kraemer's approach which allows users to exert better control over the attenuation correction by introducing an additional constraint that prevents implausible corrections in the case of dramatic signal loss.
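The first, capped variant of gate-by-gate correction can be sketched as follows. This is a generic Hitschfeld-Bordan-type scheme with illustrative C-band coefficients, not the exact implementation evaluated in the paper:

```python
import numpy as np

def gate_by_gate_correction(dbz, dr_km, a=1.67e-4, b=0.7, max_pia_db=4.8):
    """Constrained gate-by-gate attenuation correction for one radar
    beam. Specific attenuation is estimated as k = a * Z**b (dB/km,
    Z in linear units); the two-way path-integrated attenuation (PIA)
    accumulates along the beam and is added to the measured
    reflectivity. The correction is capped at `max_pia_db` (roughly a
    factor of two in rain intensity for typical Z-R exponents) to keep
    the inherently unstable scheme from diverging. The coefficients
    a and b are illustrative defaults only."""
    dbz = np.asarray(dbz, dtype=float)
    corrected = np.empty_like(dbz)
    pia = 0.0
    for i, z_db in enumerate(dbz):
        corrected[i] = z_db + pia                 # apply PIA so far
        z_lin = 10.0 ** (corrected[i] / 10.0)     # dBZ -> linear Z
        pia = min(pia + 2.0 * a * z_lin ** b * dr_km, max_pia_db)
    return corrected
```

The iterative variant would instead rerun this loop with reduced a (and/or b) until the total PIA satisfies a stability criterion.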