After a century of semi-restricted floodplain development, Southern Alberta, Canada, was struck by the devastating 2013 Flood. Aging infrastructure and limited property-level floodproofing likely contributed to the $4-6 billion (CAD) in losses. Following this catastrophe, Alberta has seen a revival in flood management, largely focused on structural protections. However, concurrent with this structural work, Calgary's population grew by more than 100,000 in the 5 years following the flood, further densifying high-hazard areas. This study implements the novel Stochastic Object-based Flood damage Dynamic Assessment (SOFDA) model framework to quantify the progression of direct-damage flood risk in a mature urban neighborhood after the 2013 Flood. Five years of post-flood remote-sensing data, property assessment records, and inundation simulations are used to construct the model. Results show that over these 5 years, vulnerability trends (such as densification) have increased flood risk by 4%; however, recent structural mitigation projects have reduced overall flood risk by 47% for this case study. These results demonstrate that the flood management revival in Southern Alberta has largely succeeded in reducing flood risk; however, the gains are under threat from continued development and densification absent additional floodproofing regulations.
While previous research underscores the role of leaders in stimulating employee voice behaviour, comparatively little is known about what affects leaders' support for such constructive but potentially threatening employee behaviours. We introduce leader-member exchange (LMX) quality as a central predictor of leaders' support for employees' ideas for constructive change. Apart from a general benefit of high LMX for leaders' idea support, we propose that high LMX is particularly critical to leaders' idea support if the idea voiced by an employee constitutes a power threat to the leader. We investigate leaders' attribution of prosocial and egoistic employee intentions as mediators of these effects. Hypotheses were tested in a quasi-experimental vignette study (N = 160), in which leaders evaluated a simulated employee idea, and a field study (N = 133), in which leaders evaluated an idea that had been voiced to them at work. Results show an indirect effect of LMX on leaders' idea support via attributed prosocial intentions but not via attributed egoistic intentions, and a buffering effect of high LMX on the negative effect of power threat on leaders' idea support. Results differed across studies with regard to the main effect of LMX on idea support.
Strong hydroclimatic controls on vulnerability to subsurface nitrate contamination across Europe
(2020)
Subsurface contamination due to excessive nutrient surpluses is a persistent and widespread problem in agricultural areas across Europe. The vulnerability of a particular location to pollution from reactive solutes, such as nitrate, is determined by the interplay between hydrologic transport and biogeochemical transformations. Current studies on the controls of subsurface vulnerability do not consider the transient behaviour of transport dynamics in the root zone. Here, using state-of-the-art hydrologic simulations driven by observed hydroclimatic forcing, we demonstrate the strong spatiotemporal heterogeneity of hydrologic transport dynamics and reveal that these dynamics are primarily controlled by the hydroclimatic gradient of the aridity index across Europe. Contrasting the space-time dynamics of transport times with reactive timescales of denitrification in soil indicates that approximately 75% of the cultivated areas across Europe are potentially vulnerable to nitrate leaching for at least one-third of the year. We find that neglecting the transient nature of transport and reaction timescales results in a substantial underestimation of the extent of vulnerable regions, by almost 50%. Therefore, future vulnerability and risk assessment studies must account for the transient behaviour of transport and biogeochemical transformation processes.
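The vulnerability criterion described above, contrasting transport times with reaction timescales, can be illustrated with a minimal sketch. All numbers below are hypothetical, not the study's data; the idea is simply that a location is flagged vulnerable whenever root-zone transit is faster than denitrification.

```python
# Illustrative sketch (hypothetical numbers): a location is vulnerable to
# nitrate leaching whenever its hydrologic transit time through the root
# zone is shorter than the denitrification reaction timescale.

def vulnerable_fraction(transit_times_days, reaction_time_days):
    """Fraction of the year a location is vulnerable, given daily
    transit-time estimates and a single reaction timescale."""
    flags = [t < reaction_time_days for t in transit_times_days]
    return sum(flags) / len(flags)

# Toy year: fast transport in a wet half-year, slow transport otherwise.
transit = [20.0] * 180 + [200.0] * 185   # days, hypothetical
t_react = 60.0                           # days, hypothetical

frac = vulnerable_fraction(transit, t_react)
print(f"vulnerable for {frac:.0%} of the year")
```

In this toy case the location is vulnerable for roughly half the year, which is the kind of transient, seasonal signal a static (time-averaged) analysis would miss.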
The pace-of-life syndrome (POLS) hypothesis posits that suites of traits are correlated along a slow-fast continuum owing to life history trade-offs. Despite widespread adoption, environmental conditions driving the emergence of POLS remain unclear. A recently proposed conceptual framework of POLS suggests that a slow-fast continuum should align to fluctuations in density-dependent selection. We tested three key predictions made by this framework with an ecoevolutionary agent-based population model. Selection acted on responsiveness (behavioral trait) to interpatch resource differences and the reproductive investment threshold (life history trait). Across environments with density fluctuations of different magnitudes, we observed the emergence of a common axis of trait covariation between and within populations (i.e., the evolution of a POLS). Slow-type (fast-type) populations with high (low) responsiveness and low (high) reproductive investment threshold were selected at high (low) population densities and less (more) intense and frequent density fluctuations. In support of the predictions, fast-type populations contained a higher degree of variation in traits and were associated with a higher intrinsic reproductive rate (r0) and higher sensitivity to intraspecific competition (γ), pointing to a universal trade-off. While our findings support that POLS aligns with density-dependent selection, we discuss possible mechanisms that may lead to alternative evolutionary pathways.
Xanthomonas phaseoli pv. manihotis (Xpm) is the causal agent of cassava bacterial blight, the most important bacterial disease in this crop. There is a paucity of knowledge about the metabolism of Xanthomonas and its relevance in the pathogenic process, with the exception of the elucidation of the xanthan biosynthesis route. Here we report the reconstruction of the genome-scale model of Xpm metabolism and the insights it provides into plant-pathogen interactions. The model, iXpm1556, comprises 1,556 reactions, 1,527 compounds, and 890 genes. Metabolic maps of central amino acid and carbohydrate metabolism, as well as xanthan biosynthesis of Xpm, were reconstructed using Escher (https://escher.github.io/) to guide the curation process and for further analyses. The model was constrained using the RNA-seq data of a mutant of Xpm for quorum sensing (QS), and these data were used to construct context-specific models (CSMs) of the metabolism of the two strains (wild type and QS mutant). The CSMs and flux balance analysis were used to gain insights into pathogenicity, xanthan biosynthesis, and QS mechanisms. Between the CSMs, 653 reactions were shared; unique reactions belong to purine, pyrimidine, and amino acid metabolism. Alternative objective functions were used to demonstrate a trade-off between xanthan biosynthesis and growth and the re-allocation of resources in the process of biosynthesis. Important features altered by QS included carbohydrate metabolism, NAD(P)+ balance, and fatty acid elongation. In this work, we modeled the xanthan biosynthesis and the QS process and their impact on the metabolism of the bacterium. This model will be useful for researchers studying host-pathogen interactions and will provide insights into the mechanisms of infection used by this and other Xanthomonas species.
The current awareness of the high importance of urban green leads to a stronger need for tools to comprehensively represent urban green and its benefits. A common scientific approach is the development of urban ecosystem services (UES) based on remote sensing methods at the city or district level. Urban planning, however, requires fine-grained data that match local management practices. Hence, this study linked local biotope and tree mapping methods to the concept of ecosystem services. The methodology was tested in an inner-city district in SW Germany, comparing publicly accessible areas and non-accessible courtyards. The results provide area-specific [m²] information on the green inventory at the microscale, whereas derived stock and UES indicators form the basis for comparative analyses regarding climate adaptation and biodiversity. In the case study, there are ten times more micro-scale green spaces in private courtyards than in the public space, as well as twice as many trees. The approach transfers a scientific concept into municipal planning practice, enables the quantitative assessment of urban green at the microscale and illustrates the importance of green stock data in private areas to enhance decision support in urban development. Different aspects concerning data collection and data availability are critically discussed.
Experiments in research on memory, language, and in other areas of cognitive science are increasingly being analyzed using Bayesian methods. This has been facilitated by the development of probabilistic programming languages such as Stan, and easily accessible front-end packages such as brms. The utility of Bayesian methods, however, ultimately depends on the relevance of the Bayesian model, in particular whether or not it accurately captures the structure of the data and the data analyst's domain expertise. Even with powerful software, the analyst is responsible for verifying the utility of their model. To demonstrate this point, we introduce a principled Bayesian workflow (Betancourt, 2018) to cognitive science. Using a concrete working example, we describe basic questions one should ask about the model: prior predictive checks, computational faithfulness, model sensitivity, and posterior predictive checks. The running example for demonstrating the workflow is data on reading times with a linguistic manipulation of object versus subject relative clause sentences. This principled Bayesian workflow also demonstrates how to use domain knowledge to inform prior distributions. It provides guidelines and checks for valid data analysis, avoiding overfitting complex models to noise, and capturing relevant data structure in a probabilistic model. Given the increasing use of Bayesian methods, we aim to discuss how these methods can be properly employed to obtain robust answers to scientific questions.
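The prior predictive check mentioned in this workflow can be sketched in a few lines: simulate data from the priors alone and ask whether the simulated reading times are humanly plausible before ever touching the real data. The priors and the lognormal likelihood below are illustrative assumptions, not the study's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior predictive check (sketch, hypothetical priors): draw parameters
# from the priors, simulate reading times, and check that they fall in a
# plausible range (roughly 50 ms to a few seconds per word).
n_sims, n_obs = 1000, 100
alpha = rng.normal(6.0, 0.5, n_sims)             # prior: mean log reading time
beta = rng.normal(0.0, 0.1, n_sims)              # prior: relative-clause effect
sigma = np.abs(rng.normal(0.0, 0.5, n_sims))     # prior: residual sd (log scale)
x = rng.choice([-0.5, 0.5], size=(n_sims, n_obs))  # sum-coded condition

# Lognormal reading times implied by the priors.
rt = np.exp(alpha[:, None] + beta[:, None] * x
            + sigma[:, None] * rng.standard_normal((n_sims, n_obs)))

median_rt = np.median(rt)
print(f"median simulated reading time: {median_rt:.0f} ms")
```

If the bulk of the simulated reading times were, say, microseconds or hours, the priors would be revised before fitting, which is exactly the kind of check the workflow formalizes.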
In this study, we present an empirical model of the equatorial electron pitch angle distributions (PADs) in the outer radiation belt based on the full data set collected by the Magnetic Electron Ion Spectrometer (MagEIS) instrument onboard the Van Allen Probes in 2012-2019. The PADs are fitted with a combination of the first, third and fifth sine harmonics. The resulting equation resolves all PAD types found in the outer radiation belt (pancake, flat-top, butterfly and cap PADs) and can be analytically integrated to derive omnidirectional flux. We introduce a two-step modeling procedure that for the first time ensures a continuous dependence on L, magnetic local time and activity, parametrized by the solar wind dynamic pressure. We propose two methods to reconstruct equatorial electron flux using the model. The first approach requires two uni-directional flux observations and is applicable to low-PA data. The second method can be used to reconstruct the full equatorial PADs from a single uni- or omnidirectional measurement at off-equatorial latitudes. The model can be used for converting the long-term data sets of electron fluxes to phase space density in terms of adiabatic invariants, for physics-based modeling in the form of boundary conditions, and for data assimilation purposes.
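The harmonic decomposition described above can be sketched as an ordinary least-squares fit in the basis of the first, third and fifth sine harmonics; the synthetic coefficients below are illustrative, not values from the model. Because sin 3α and sin 5α are orthogonal to sin α on [0, π], the omnidirectional flux 2π ∫ J(α) sin α dα integrates analytically to π² A1.

```python
import numpy as np

# Sketch: fit a pitch angle distribution J(a) = A1 sin a + A3 sin 3a + A5 sin 5a.
# The "true" coefficients and noise level are illustrative only.
rng = np.random.default_rng(0)
alpha = np.linspace(0.05, np.pi - 0.05, 60)        # equatorial pitch angle [rad]
true_A1, true_A3, true_A5 = 1.0, -0.3, 0.05
flux = (true_A1 * np.sin(alpha) + true_A3 * np.sin(3 * alpha)
        + true_A5 * np.sin(5 * alpha)) + 0.01 * rng.standard_normal(alpha.size)

# Linear least squares in the harmonic basis.
G = np.column_stack([np.sin(alpha), np.sin(3 * alpha), np.sin(5 * alpha)])
A1, A3, A5 = np.linalg.lstsq(G, flux, rcond=None)[0]

# Orthogonality on [0, pi] kills the cross terms, so the omnidirectional
# flux 2*pi * integral(J(a) sin a da) reduces to pi^2 * A1.
omni = np.pi**2 * A1
print(A1, A3, A5, omni)
```

The sign pattern of the recovered coefficients is what distinguishes the PAD types named in the abstract (e.g., a strongly positive A1 for pancake distributions, a negative contribution near 90° for butterfly distributions).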
Neodymium isotopic composition (epsilon Nd) has enjoyed widespread use as a palaeotracer, principally because it behaves quasi-conservatively in the modern ocean. However, recent bottom water epsilon Nd reconstructions from the eastern North Atlantic are difficult to interpret under assumptions of conservative behaviour. The observation that this apparent departure from conservative behaviour increases with enhanced ice-rafted debris (IRD) fluxes has resulted in the suggestion that IRD leads to the overprinting of bottom water epsilon Nd through reversible scavenging. In this study, a simple water column model successfully reproduces epsilon Nd reconstructions from the eastern North Atlantic at the Last Glacial Maximum and Heinrich Stadial 1, and demonstrates that the changes in scavenging intensity required for a good model-data fit are in good agreement with changes in the observed IRD flux. Although uncertainties in model parameters preclude a more definitive conclusion, the results indicate that the suggestion of IRD as a source of non-conservative behaviour in the epsilon Nd tracer is reasonable and that further research into the fundamental chemistry underlying the marine neodymium cycle is necessary to increase confidence in assumptions of conservative epsilon Nd behaviour in the past.
We study populations of globally coupled noisy rotators (oscillators with inertia) allowing a nonequilibrium transition from a desynchronized state to a synchronous one (with a nonvanishing order parameter). The newly developed analytical approaches yield solutions describing the synchronous state with constant order parameter for weakly inertial rotators, including the case of zero inertia, when the model reduces to the Kuramoto model of coupled noisy oscillators. These approaches also provide analytical criteria distinguishing supercritical and subcritical transitions to the desynchronized state and indicate the universality of such transitions in rotator ensembles. All the obtained analytical results are confirmed numerically, both by direct simulations of large ensembles and by solution of the associated Fokker-Planck equation. We also propose generalizations of the developed approaches for setups where different rotator parameters (natural frequencies, masses, noise intensities, strengths and phase shifts in coupling) are dispersed.
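The zero-inertia limit mentioned above can be sketched with a direct Euler-Maruyama simulation of N identical noisy Kuramoto oscillators; for identical oscillators with noise intensity D the synchronization threshold is K_c = 2D. All parameter values below are illustrative, not those of the study.

```python
import numpy as np

# Minimal sketch: noisy Kuramoto model (zero-inertia limit), integrated
# with Euler-Maruyama; r = |<exp(i*theta)>| is the order parameter.
def order_parameter(K, D=0.2, N=400, dt=0.05, steps=3000, seed=2):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()                     # mean field
        drift = K * np.abs(z) * np.sin(np.angle(z) - theta)
        theta += drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal(N)
    return abs(np.exp(1j * theta).mean())

r_sync = order_parameter(K=4.0)   # well above K_c = 2D = 0.4 -> synchronized
r_inc = order_parameter(K=0.1)    # below K_c -> incoherent, r ~ N**-0.5
print(r_sync, r_inc)
```

Running the two cases shows a large order parameter above threshold and a near-zero one below it, the desynchronization transition the analytical criteria in the abstract characterize.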
Paradoxical leadership behaviour (PLB) represents an emerging leadership construct that can help leaders deal with conflicting demands. In this paper, we report three studies that add to this nascent literature theoretically, methodologically, and empirically. In Study 1, we validate an effective short-form measure of global PLB using three different samples. In Studies 2 and 3, we draw on the job demands-resources model to propose that paradoxical leaders promote followers' work engagement by simultaneously fostering follower goal clarity and work autonomy. The results of survey data from Studies 2 and 3 largely confirm our model. Specifically, our findings show that PLB is positively associated with follower goal clarity and work autonomy, and that PLB exerts an indirect effect on work engagement via these variables. Moreover, our results support a hypothesized interaction effect of goal clarity and work autonomy to predict followers' work engagement, as well as a conditional indirect effect of PLB on work engagement via the interactive effect. We discuss the practical implications for leaders and organizations.
Practitioner points
To effectively engage followers in their work, leaders should create work environments in which followers know exactly what to do (i.e., have high goal clarity), but at the same time can determine on their own how to do their work (i.e., have high work autonomy)
To foster both goal clarity and work autonomy, leaders should combine communal (e.g., other-centred, flexibility-providing) and agentic aspects of leadership (e.g., maintaining decision control and enforcing performance standards).
HR departments should design leadership trainings that help leaders to combine seemingly opposing, yet ultimately synergistic behaviours.
There is no consensus on which statistical model estimates school value-added (VA) most accurately. To date, the two most common statistical models used for the calculation of VA scores are two classical methods: linear regression and multilevel models. These models have the advantage of being relatively transparent and thus understandable for most researchers and practitioners. However, these statistical models are bound to certain assumptions (e.g., linearity) that might limit their prediction accuracy. Machine learning methods, which have yielded spectacular results in numerous fields, may be a valuable alternative to these classical models. Although big data is not new in general, it is relatively new in the realm of social sciences and education. New types of data require new data analytical approaches. Such techniques have already evolved in fields with a long tradition in crunching big data (e.g., gene technology). The objective of the present paper is to competently apply these "imported" techniques to education data, more precisely VA scores, and assess when and how they can extend or replace the classical psychometrics toolbox. The different models include linear and non-linear methods and extend classical models with the most commonly used machine learning methods (i.e., random forest, neural networks, support vector machines, and boosting). We used representative data of 3,026 students in 153 schools who took part in the standardized achievement tests of the Luxembourg School Monitoring Program in grades 1 and 3. Multilevel models outperformed classical linear and polynomial regressions, as well as different machine learning models. However, it could be observed that across all schools, school VA scores from different model types correlated highly. Yet, the percentage of disagreements as compared to multilevel models was not trivial and real-life implications for individual schools may still be dramatic depending on the model type used. 
Implications of these results and possible ethical concerns regarding the use of machine learning methods for decision-making in education are discussed.
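The classical regression route to VA scores compared above can be sketched on synthetic data: regress grade-3 achievement on grade-1 achievement, then average the residuals within each school. The data-generating numbers are hypothetical and this is the plain linear-regression variant, not the multilevel model the study found best.

```python
import numpy as np

# Sketch of regression-based school value-added (VA) scores on synthetic
# data (hypothetical effect sizes and noise levels).
rng = np.random.default_rng(3)
n_schools, n_per = 20, 50
school = np.repeat(np.arange(n_schools), n_per)
true_va = rng.normal(0, 5, n_schools)             # latent school effects
g1 = rng.normal(500, 100, n_schools * n_per)      # grade-1 scores
g3 = 50 + 0.9 * g1 + true_va[school] + rng.normal(0, 20, g1.size)

# OLS fit of g3 ~ g1; a school's VA score is its mean residual.
X = np.column_stack([np.ones_like(g1), g1])
coef, *_ = np.linalg.lstsq(X, g3, rcond=None)
resid = g3 - X @ coef
va = np.array([resid[school == s].mean() for s in range(n_schools)])

print(np.corrcoef(va, true_va)[0, 1])
```

Even in this idealized setting the estimated and true school effects correlate highly but not perfectly, which mirrors the paper's point that different model types agree strongly overall while still disagreeing for individual schools.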
Nonstationary coherence-incoherence patterns in nonlocally coupled heterogeneous phase oscillators
(2020)
We consider a large ring of nonlocally coupled phase oscillators and show that apart from stationary chimera states, this system also supports nonstationary coherence-incoherence patterns (CIPs). For identical oscillators, these CIPs behave as breathing chimera states and are found in a relatively small parameter region only. It turns out that the stability region of these states enlarges dramatically if a certain amount of spatially uniform heterogeneity (e.g., Lorentzian distribution of natural frequencies) is introduced in the system. In this case, nonstationary CIPs can be studied as stable quasiperiodic solutions of a corresponding mean-field equation, formally describing the infinite system limit. Carrying out direct numerical simulations of the mean-field equation, we find different types of nonstationary CIPs with pulsing and/or alternating chimera-like behavior. Moreover, we reveal a complex bifurcation scenario underlying the transformation of these CIPs into each other. These theoretical predictions are confirmed by numerical simulations of the original coupled oscillator system.
Global heat adaptation among urban populations and its evolution under different climate futures
(2022)
Heat and increasing ambient temperatures under climate change represent a serious threat to human health in cities. Heat exposure has been studied extensively at a global scale. Studies comparing a defined temperature threshold with future daytime temperatures over a certain period have concluded that the threat to human health will increase. Such findings, however, do not explicitly account for possible changes in future human heat adaptation and might even overestimate heat exposure. Thus, heat adaptation and its development are still unclear. Human heat adaptation refers to the local temperature to which populations are adjusted. It can be inferred from the lowest point of the U- or V-shaped heat-mortality relationship (HMR), the Minimum Mortality Temperature (MMT). While epidemiological case studies inform on the MMT at the city scale, a general model applicable at the global scale for inferring temporal change in MMTs has not yet been realised. The conventional approach depends on data availability, their robustness, and on access to daily mortality records at the city scale. Thorough analysis, however, must account for future changes in the MMT, as heat adaptation happens partially passively. Human heat adaptation consists of two aspects: (1) the intensity of the heat hazard that is still tolerated by human populations, meaning the heat burden they can bear, and (2) the wealth-induced technological, social and behavioural measures that can be employed to avoid heat exposure. The objective of this thesis is to investigate and quantify human heat adaptation among urban populations at a global scale under the current climate and to project future adaptation under climate change until the end of the century. To date, this has not yet been accomplished. The evaluation of global heat adaptation among urban populations and its evolution under climate change comprises three levels of analysis.
First, using the example of Germany, the MMT is calculated at the city level by applying the conventional method. Second, this thesis compiles a data pool of 400 urban MMTs to develop and train a new model capable of estimating MMTs on the basis of physical and socio-economic city characteristics using non-linear multivariate regression. The MMT is successfully described as a function of the current climate, the topography and the socio-economic standard, independently of daily mortality data, for cities around the world. The city-specific MMT estimates represent a measure of human heat adaptation among the urban population. In a third and final analysis, the model to derive human heat adaptation was adjusted to be driven by projected climate and socio-economic variables for the future. This allowed for estimation of the MMT and its change for 3,820 cities worldwide for different combinations of climate trajectories and socio-economic pathways until 2100. Knowledge of the future evolution of heat adaptation is novel, as research to date has focused mostly on heat exposure and its future development. In this work, changes in heat adaptation and exposure were analysed jointly. The result was a wide range of possible health-related outcomes up to 2100, of which two scenarios with the highest socio-economic development but opposing warming levels were highlighted for comparison. Strong economic growth based upon fossil fuel exploitation is associated with a high gain in heat adaptation, but may not be able to compensate for the associated negative health effects due to increased heat exposure in 30% to 40% of the cities investigated, caused by severe climate change.
A slightly less strong but sustainable growth brings moderate gains in heat adaptation but a lower heat exposure, with exposure reductions in 80% to 84% of the cities in terms of frequency (number of days exceeding the MMT) and intensity (magnitude of the MMT exceedance) due to milder global warming. Choosing a 2 °C-compatible development by 2100 would therefore lower the risk of heat-related mortality at the end of the century. In summary, this thesis makes diverse and multidisciplinary contributions to a deeper understanding of human adaptation to heat under the current and the future climate. It is one of the first studies to carry out a systematic and statistical analysis of urban characteristics that are useful as MMT drivers to establish a generalised model of human heat adaptation, applicable at the global level. A broad range of possible heat-related health outcomes for various future scenarios was shown for the first time. This work is of relevance for the assessment of heat-health impacts in regions where mortality data are inaccessible or missing. The results are useful for health care planning at the meso- and macro-level and for urban and climate change adaptation planning. Lastly, beyond having met the posed objective, this thesis advances research towards a global future impact assessment of heat on human health by providing an alternative method of MMT estimation that is spatially and temporally flexible in its application.
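The conventional MMT estimate described in the thesis, reading the minimum off a U-shaped heat-mortality relationship, can be sketched on synthetic data by fitting a quadratic in daily mean temperature and taking its vertex. The curvature, noise level and "true" MMT below are hypothetical.

```python
import numpy as np

# Sketch of the conventional MMT estimate (synthetic data): fit a U-shaped
# heat-mortality relationship with a quadratic in daily mean temperature;
# the MMT is the vertex -b / (2a). All numbers are illustrative.
rng = np.random.default_rng(4)
temp = rng.uniform(-5, 35, 2000)                  # daily mean temperature [degC]
true_mmt = 21.0
log_rr = 0.002 * (temp - true_mmt) ** 2 + 0.02 * rng.standard_normal(temp.size)

a, b, c = np.polyfit(temp, log_rr, 2)             # highest degree first
mmt = -b / (2 * a)
print(f"estimated MMT: {mmt:.1f} degC")
```

The dependence of this approach on long daily mortality series is exactly what motivates the thesis's regression model, which estimates the MMT from city characteristics instead.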
Now that the United Kingdom has left the European Union, it remains unclear whether the two parties can successfully negotiate and sign a trade agreement within the transition period. Ongoing negotiations, practical obstacles and resulting uncertainties make it highly unlikely that economic actors would be fully prepared for a “no-trade-deal” situation. Here we provide an economic shock simulation of the immediate aftermath of such a post-Brexit no-trade-deal scenario by computing the time evolution of more than 1.8 million interactions between more than 6,600 economic actors in the global trade network. We find an abrupt decline in the number of goods produced in the UK and the EU. This sudden output reduction is caused by drops in demand as customers on the respective other side of the Channel incorporate the new trade restriction into their decision-making. As a response, producers reduce prices in order to stimulate demand elsewhere. In the short term consumers benefit from lower prices, but production value decreases, with potentially severe socio-economic consequences in the longer term.
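The price-response mechanism in this abstract (demand drops, producers cut prices, production value still falls) can be illustrated with a deliberately tiny toy, not the study's agent-based model: a single producer facing linear demand that loses one of its two markets and reprices for the residual demand.

```python
# Toy illustration (hypothetical numbers, not the study's model): a producer
# faces linear demand q = a - b * p. After a shock removes half of demand,
# it lowers its price to stimulate the remaining market; output and the
# production value p * q still fall.
a, b = 100.0, 2.0
p0 = 20.0
q0 = a - b * p0                 # pre-shock quantity sold
value0 = p0 * q0                # pre-shock production value

a_shock = a / 2                 # shock: half of demand disappears
p1 = a_shock / (2 * b)          # revenue-maximizing price for residual demand
q1 = a_shock - b * p1
value1 = p1 * q1

print(p1, q1, value1, value0)
```

Even with the optimal price cut, the producer's value falls sharply, which is the short-term "consumers gain, production value falls" pattern the abstract describes.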
The Cluster mission has produced a large data set of electron flux measurements in the Earth's magnetosphere since its launch in late 2000. Electron fluxes are measured using the Research with Adaptive Particle Imaging Detector (RAPID)/Imaging Electron Spectrometer (IES) as a function of energy, pitch angle, spacecraft position, and time. However, no adiabatic invariants have been calculated for Cluster so far. In this paper we present a step-by-step guide to the calculation of adiabatic invariants and the conversion of the electron flux to phase space density (PSD) in these coordinates. The electron flux is measured in two RAPID/IES energy channels providing pitch angle distributions at energies of 39.2-50.5 and 68.1-94.5 keV in nominal mode since 2004. A fitting method allows the conversion of the differential fluxes to be extended to the range from 40 to 150 keV. The best data coverage for phase space density in adiabatic invariant coordinates is obtained for values of the second adiabatic invariant K near 10², and values of the first adiabatic invariant μ in the range of approximately 5-20 MeV/G. Furthermore, we describe the production of a new data product, "LSTAR," equivalent to the third adiabatic invariant, available through the Cluster Science Archive for the years 2001-2018 with 1-min resolution. The produced data set adds to the availability of observations in Earth's radiation belt region and can be used for long-term statistical purposes.
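The flux-to-PSD conversion described here rests on the standard relativistic relation f ∝ j / p², with (pc)² = E (E + 2E₀) and E₀ = 511 keV the electron rest energy. The sketch below shows that relation only; unit conversion factors and the sample flux value are illustrative, not the paper's pipeline.

```python
# Sketch of the relativistic flux-to-PSD relation f = j / p^2, with
# (pc)^2 = E * (E + 2 * E0). Constant unit-conversion factors are omitted,
# so the result is proportional to, not equal to, PSD in standard units.
E0 = 511.0                                   # electron rest energy [keV]

def flux_to_psd(j, energy_kev):
    """PSD proportional to differential flux j divided by p^2."""
    pc_sq = energy_kev * (energy_kev + 2 * E0)   # (pc)^2 in keV^2
    return j / pc_sq

# Same hypothetical flux at two energies inside the extended 40-150 keV range:
psd_40 = flux_to_psd(1.0e4, 40.0)
psd_150 = flux_to_psd(1.0e4, 150.0)
print(psd_40, psd_150)
```

At equal differential flux, the higher-energy electrons carry more momentum and therefore map to a lower phase space density, which is why PSD rather than flux is the natural variable for the adiabatic-invariant coordinates discussed above.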