Institut für Geographie und Geoökologie
Separation of coarse organic particles from bulk surface soil samples by electrostatic attraction
(2009)
Different separation procedures are suggested for studying the stability and functionality of soil organic matter (OM). Density fractionation procedures using high-molarity, water-based salt solutions to separate organic particles may cause losses or transfers of C between particulate and soluble OM fractions during separation as a result of solution processes. The objective of this study was to separate coarse organic particles (>0.315 mm) from air-dried surface soil samples while avoiding such solution processes as far as possible. Air-dried surface soil samples (<2 mm) from nine adjacent arable and forest sites were sieved into five soil particle size fractions (2-1.25, 1.25-0.8, 0.8-0.5, 0.5-0.4, and 0.4-0.315 mm). Coarse organic particles were separated from each of these fractions using electrostatic attraction by a charged glass surface. The sum of the total dry matter content of the electrostatically separated coarse organic particles ranged from 0.05 to 140 g kg(-1). Scanning electron microscopy images and organic C (OC) analyses indicated, however, that the coarse organic particle fractions were also composed of 20 to 76% mineral particles (i.e., 200-760 g mineral kg(-1) fraction). The repeatability of the electrostatic attraction procedure falls within a range similar to that of accepted density fractionation methods using high-molarity salt solutions. Based on this similarity in repeatability, we suggest that the electrostatic attraction procedure can successfully remove coarse organic particles (>0.315 mm) from air-dried surface soil samples. Because aqueous solutions are not used, the electrostatic attraction procedure avoids the C losses and transfers associated with solution-dependent techniques. Therefore, this method can be used as a pretreatment for subsequent density- or solubility-based soil OM fractionation procedures.
Content and binding forms of heavy metals, aluminium and phosphorus in bog iron ores from Poland
(2009)
Bog iron ores are widespread in Polish wetland soils used as meadows or pastures. They are suspected to contain high concentrations of heavy metals, which are precipitated together with Fe along a redox gradient. Therefore, soils with bog iron ore might be important sources of heavy metal transfer from meadow plants into the food chain. However, this transfer depends on the different binding forms of the heavy metals. The binding forms were quantified by sequential extraction analysis of heavy metals (Fe, Mn, Cr, Co, Ni, Cd, Pb) as well as Al and P on 13 representative samples of bog iron ores from central and southwestern Poland. Our results showed that the total contents of Cr, Co, Ni, Zn, Cd, and Pb did not exceed the natural values for sandy soils from Poland; only the total Mn was slightly higher. The highest contents of all heavy metals were obtained in iron oxide fractions V (occluded in noncrystalline and poorly crystalline Fe oxides) and VI (occluded in crystalline Fe oxides). The results show a distinct relationship between the content of Fe and the quantities of Zn and Pb. Water-soluble as well as plant-available fractions were below the detection limit in most cases. From this we concluded that bog iron ores are not currently an important source of heavy metals in the food chain. However, a remobilization of heavy metals might occur due to a reduction of iron oxides in bog iron ores, for example, by rising groundwater levels.
An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and the 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was evident. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method, based on a trimmed mean, resulted in a single, somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble greatly increased our confidence in the reliability of the model predictions.
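The deterministic combination step described above (a trimmed mean across ensemble members) can be sketched in a few lines. The discharge-change values and the trim fraction below are illustrative assumptions, not results from the study.

```python
# Sketch of a deterministic ensemble combination via a trimmed mean,
# assuming hypothetical discharge-change predictions (in %) from 10 models.

def trimmed_mean(values, trim_fraction=0.2):
    """Drop the lowest and highest trim_fraction of members, average the rest."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)   # members cut from each tail
    kept = ordered[k:len(ordered) - k]
    return sum(kept) / len(kept)

# Hypothetical predicted changes in mean annual discharge for one scenario;
# the trimmed mean discards the two extreme members on each side.
ensemble = [4.1, 5.0, 5.2, 5.5, 5.8, 6.0, 6.3, 6.9, 7.4, 12.0]
combined = trimmed_mean(ensemble)  # average of the middle six members
```

The trimmed mean damps the influence of outlying ensemble members (here the 12.0% prediction) without discarding the information in the remaining runs.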
Fragmentation, deterioration, and loss of habitat patches threaten the survival of many insect species. Depending on their trophic level, species may be differently affected by these factors. However, studies investigating more than one trophic level on a landscape scale are still rare. In the present study we analyzed the effects of habitat size, isolation, and quality on the occurrence and population density of the endangered leaf beetle Cassida canaliculata Laich. (Coleoptera: Chrysomelidae) and its egg parasitoid, the hymenopteran wasp Foersterella reptans Nees (Hymenoptera: Tetracampidae). C. canaliculata is strictly monophagous on meadow sage (Salvia pratensis), while F. reptans can also parasitize other hosts. Both size and isolation of habitat patches strongly determined the occurrence of the beetle. However, population density increased to a much greater extent with increasing host plant density (= habitat quality) than with habitat size. The occurrence probability of the egg parasitoid increased with increasing population density of C. canaliculata. In conclusion, although maintaining large, well-connected patches with high host plant density is surely the major conservation goal for the specialized herbivore C. canaliculata, small patches with high host plant densities can also support viable populations and should thus be conserved. The less specialized parasitoid F. reptans is more likely to be found on patches with high beetle density, while patch size and isolation seem to be less important.
The intention of the presented study is to gain a better understanding of the mechanisms that caused the bimodal rainfall-runoff responses which occurred regularly up to the mid-1970s in the Schafertal catchment and vanished after the onset of mining activities. Understanding this process is a first step towards understanding the ongoing hydrological change in this area. It is hypothesized that either subsurface stormflow or fast displacement of groundwater could cause the second, delayed peak. A top-down analysis of rainfall-runoff data, field observations, and process modelling are combined within a rejectionist framework. A statistical analysis is used to test whether different predictors, which characterize the forcing, the near-surface water content, and the deeper subsurface store, allow the prediction of the type of rainfall-runoff response. Regression analysis is used with generalized linear models, as they can deal with non-Gaussian error distributions as well as a non-stationary variance. The analysis reveals that the dominant predictors are the pre-event discharge (a proxy for the state of the groundwater store) and the precipitation amount. In the field campaign, the subsurface at a representative hillslope was investigated by means of electrical resistivity tomography in order to identify possible strata acting as flow paths for subsurface stormflow. A low resistivity at approximately 4 m depth, either due to a less permeable layer or to the groundwater surface, was detected. The former could serve as a flow path for subsurface stormflow. Finally, the physically based hydrological model CATFLOW and the groundwater model FEFLOW are compared with respect to their ability to reproduce the bimodal runoff responses. The groundwater model is able to reproduce the observations, although it uses only an abstract representation of the hillslopes. Process model analysis as well as statistical analysis strongly suggest that fast displacement of groundwater is the dominant process underlying the bimodal runoff reactions.
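The regression approach used in the study, a generalized linear model for a binary response, can be illustrated with a minimal Newton-Raphson (IRLS) logistic fit. The event values, units, and labels below are invented for illustration and do not represent the Schafertal observations.

```python
# Minimal sketch of a logistic GLM predicting whether a rainfall event
# produces a bimodal runoff response from two predictors: pre-event
# discharge and precipitation amount. All data here are synthetic.
import numpy as np

def fit_logistic(X, y, iters=25):
    """Fit a logistic GLM (binomial family, logit link) via Newton-Raphson."""
    Xb = np.column_stack([np.ones(len(X)), X])       # add intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))         # fitted probabilities
        W = p * (1.0 - p)                            # IRLS weights
        # Newton step: beta += (X'WX)^-1 X'(y - p)
        beta += np.linalg.solve(Xb.T @ (W[:, None] * Xb), Xb.T @ (y - p))
    return beta

def predict_prob(beta, x):
    """Probability of a bimodal response for one event [discharge, precip]."""
    return 1.0 / (1.0 + np.exp(-(beta[0] + beta[1:] @ np.asarray(x, float))))

# Synthetic events: [pre-event discharge (L s-1), precipitation (mm)];
# label 1 = bimodal response observed. Deliberately not perfectly separable.
X = np.array([[2, 5], [3, 8], [4, 6], [3, 22], [11, 7], [5, 15],
              [10, 20], [12, 25], [15, 18], [14, 30], [12, 20], [6, 18]], float)
y = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1], float)

beta = fit_logistic(X, y)
```

An event with both high pre-event discharge and high precipitation then receives a high predicted probability of a bimodal response, mirroring the dominant predictors identified in the analysis.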
Changes in soil hydraulic properties following ecosystem disturbances can become relevant for regional water cycles depending on the prevailing rainfall regime. In a tropical montane rainforest ecosystem in southern Ecuador, plot-scale investigations revealed that man-made disturbances were accompanied by a decrease in mean saturated hydraulic conductivity (Ks), whereas the mean Ks of two landslides of different age was indistinguishable from that of the reference forest. The spatial structure of Ks in the topsoil weakened after disturbances. We used this spatio-temporal information combined with local rain intensities to assess the probability of impermeable soil layers under undisturbed, disturbed, and regenerating land-cover types. We furthermore compared the Ecuadorian man-made disturbance cycle with a similar land-use sequence in a tropical lowland rainforest region in Brazil. The studied montane rainforest is characterized by prevailing vertical flowpaths in the topsoil, whereas larger rainstorms in the study area potentially result in impermeable layers below 20 cm depth. In spite of the low frequency of such higher-intensity events, they convey a large portion of the annual runoff and may therefore be significant for the regional water cycle. Hydrological flowpaths under the two studied landslides are similar to those of the natural forest, except for a somewhat higher probability of impermeable layer formation in the topsoil of the 2-year-old landslide. In contrast, human disturbances likely affect near-surface hydrology. Under a pasture and a young fallow, impermeable layers potentially develop in the topsoil already for larger rain events. A 10-year-old fallow indicates regeneration towards the original vertical flowpaths, though the land-use signal was still detectable. The consequences of land-cover change for near-surface hydrological behaviour are of similar magnitude in the tropical montane and the lowland rainforest region. This similarity can be explained by a more pronounced drop in soil permeability after pasture establishment in the montane rainforest region, in spite of the much lower prevailing rain intensities there.
The investigation of throughfall patterns has received considerable interest over the last decades. And yet, the geographical bias of pertinent previous studies and their methodologies and approaches to data analysis cast doubt on the general validity of claims regarding spatial and temporal patterns of throughfall. We employed 220 collectors in a 1-ha plot of semideciduous tropical rain forest in Panama and sampled throughfall during a period of 14 months. Our analysis of spatial patterns is based on 60 data sets, whereas the temporal analysis comprises 91 events. Both data sets show skewed frequency distributions. When skewness arises from large outliers, the classical, nonrobust variogram estimator overestimates the sill variance and, in some cases, even induces spurious autocorrelation structures. In these situations, robust variogram estimation techniques offer a solution. Throughfall in our plot typically displayed no or only weak spatial autocorrelation. In contrast, temporal correlations were strong, that is, wet and dry locations persisted over consecutive wet seasons. Interestingly, seasonality and hence deciduousness had no influence on spatial and temporal patterns. We argue that if throughfall patterns are to have any explanatory power with respect to patterns of near-surface processes, data analytical artifacts must be ruled out lest spurious correlation be confounded with causality; furthermore, temporal stability over the domain of interest is essential.
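The effect of outliers on the classical variogram estimator can be demonstrated at a single lag. The sketch below compares Matheron's classical estimator with the robust Cressie-Hawkins estimator on an invented 1-D transect containing one large value; the data are illustrative, not the Panama measurements.

```python
# Classical vs. robust variogram estimation at one lag on a synthetic
# transect of throughfall values (mm) with a single large outlier (55.0).
import numpy as np

def matheron(z, lag=1):
    """Classical estimator: half the mean squared increment."""
    d = z[lag:] - z[:-lag]
    return 0.5 * np.mean(d ** 2)

def cressie_hawkins(z, lag=1):
    """Robust estimator based on square-root absolute increments."""
    d = np.abs(z[lag:] - z[:-lag])
    n = len(d)
    # 2*gamma(h) = (mean |d|^(1/2))^4 / (0.457 + 0.494/n)
    return 0.5 * np.mean(np.sqrt(d)) ** 4 / (0.457 + 0.494 / n)

z = np.array([10.0, 11.2, 9.8, 10.5, 10.1, 9.9, 10.7, 55.0, 10.3, 10.6])
gamma_classical = matheron(z)        # inflated by the squared outlier increments
gamma_robust = cressie_hawkins(z)    # much less sensitive to the outlier
```

Because the classical estimator squares every increment, the single outlier dominates its value, which is the mechanism by which large outliers inflate the apparent sill variance.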
Detention areas provide a means to lower peak discharges in rivers by temporarily storing excess water. In the case of extreme flood events, the storage effect reduces the risk of dike failures or extensive inundations for downstream reaches and near the site of abstraction. Due to the large amount of organic matter contained in the river water and the inundation of terrestrial vegetation in the detention area, a deterioration of water quality may occur. In particular, decay processes can cause a severe depletion of dissolved oxygen (DO) in the temporary water body. In this paper, we studied the potential of a water quality model to simulate the DO dynamics in a large but shallow detention area to be built at the Elbe River (Germany). Our focus was on examining the impact of spatial discretization on the model's performance and usability. Therefore, we used a zero-dimensional (0D) and a two-dimensional (2D) modeling approach in parallel. The two approaches differ solely in their spatial discretization, while conversion processes, parameters, and boundary conditions were kept identical. The dynamics of DO simulated by the two models are similar in the initial flooding period but diverge when the system starts to drain. The deviation can be attributed to the different spatial discretization of the two models, leading to different estimates of flow velocities and water depths. Only the 2D model can account for the impact of spatial variability on the evolution of state variables. However, its application requires considerable effort for pre- and post-processing and significantly longer computation times. The 2D model is, therefore, not suitable for investigating various flood scenarios or for analyzing the impact of parameter uncertainty. For practical applications, we recommend first setting up a fast-running model of reduced spatial discretization, e.g. a 0D model. Using this tool, the reliability of the simulation results should be checked by analyzing the parameter uncertainty of the water quality model. A particular focus may be on those parameters that are spatially variable and, therefore, believed to be better represented in a 2D approach. The benefit of applying the more costly 2D model should be assessed based on the analyses carried out with the 0D model. A 2D model appears preferable only if the simulated detention area has a complex topography, flow velocities are highly variable in space, and the parameters of the water quality model are well known.
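A fully mixed 0D oxygen balance of the kind recommended as a first step can be sketched with two coupled first-order terms: organic matter decay consuming DO and surface reaeration replenishing it. All rate constants, the 1:1 OM-to-DO stoichiometry, and the initial values below are illustrative assumptions, not the parameters of the Elbe study.

```python
# Sketch of a 0D dissolved-oxygen balance for a fully mixed detention
# volume: first-order OM decay consumes DO, reaeration drives DO back
# toward saturation. Explicit Euler integration; all values illustrative.

def simulate_do(do0=9.0, om0=20.0, do_sat=9.0, k_decay=0.3, k_rea=0.1,
                dt=0.1, days=20.0):
    """Return the DO time series (mg L-1) over the flooding period."""
    do, om, series = do0, om0, []
    for _ in range(int(days / dt)):
        decay = k_decay * om                          # DO consumption (mg L-1 d-1)
        do += dt * (k_rea * (do_sat - do) - decay)    # reaeration minus decay
        do = max(do, 0.0)                             # DO cannot go negative
        om -= dt * k_decay * om                       # OM pool shrinks as it decays
        series.append(do)
    return series

do_series = simulate_do()
```

With these assumed rates, DO is depleted within the first days while the OM pool is large, then recovers toward saturation once decay slows: the qualitative severe-depletion behaviour the abstract describes for the initial flooding period.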