We present a summary of the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with an accuracy better than 0.005-0.1, or ±50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols.
On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used with the purpose of testing how well microphysical parameters can be derived if the real part of the complex refractive index is known to at least 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied.
The optical data used in our study cover a range of Angstrom exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g. the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar, even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test the robustness of the algorithms towards their ability to identify aerosol types that have not been measured so far, but cannot be ruled out based on our current knowledge of aerosol physics.
We computed the optical data from monomodal logarithmic particle size distributions, i.e. we explicitly excluded the more complicated case of bimodal particle size distributions, which is a topic of ongoing research work. Another constraint is that we only considered particles of spherical shape in our simulations. We considered particle radii as large as 7-10 µm in our simulations; the Potsdam algorithm is limited to the lower value. We considered optical-data errors of 15% in the simulation studies. We regard 50% uncertainty as a reasonable threshold for our data products, though we aim to obtain data products with lower uncertainty in future work.
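The Ångström exponents and extinction-to-backscatter (lidar) ratios discussed above follow directly from the measured backscatter and extinction coefficients. A minimal sketch in Python, with all input values hypothetical rather than taken from the study:

```python
import math

def angstrom_exponent(alpha_1, alpha_2, wl_1, wl_2):
    """Angstrom exponent from extinction (or backscatter) at two wavelengths (nm)."""
    return -math.log(alpha_1 / alpha_2) / math.log(wl_1 / wl_2)

def lidar_ratio(alpha, beta):
    """Extinction-to-backscatter (lidar) ratio, in sr."""
    return alpha / beta

# Hypothetical optical data: extinction in Mm^-1, backscatter in Mm^-1 sr^-1
alpha_355, alpha_532 = 150.0, 100.0
beta_355 = 3.0

ae = angstrom_exponent(alpha_355, alpha_532, 355.0, 532.0)   # ~1.0
sr_355 = lidar_ratio(alpha_355, beta_355)                    # 50 sr
```

Both toy values fall within ranges typically reported for tropospheric aerosols, i.e. inside the "commonly accepted" region the study deliberately steps outside of in some scenarios.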
The concept of similitude is commonly employed in the fields of fluid dynamics and engineering but rarely used in cryospheric research. Here we apply this method to the problem of ice flow to examine the dynamic similitude of isothermal ice sheets in the shallow-shelf approximation against the scaling of their geometry and physical parameters. Carrying out a dimensional analysis of the stress balance, we obtain dimensionless numbers that characterize the flow. Requiring that these numbers remain the same under scaling, we obtain conditions that relate the geometric scaling factors, the parameters for ice softness, surface mass balance and basal friction, as well as the ice-sheet intrinsic response time to each other. We demonstrate that these scaling laws are the same for both the (two-dimensional) flow-line case and the three-dimensional case. The theoretically predicted ice-sheet scaling behavior agrees with results from numerical simulations that we conduct in flow-line and three-dimensional conceptual setups. We further investigate analytically the implications of geometric scaling of ice sheets for their response time. With this study we provide a framework which, under several assumptions, allows for a fundamental comparison of the ice-dynamic behavior across different scales. It proves to be useful in the design of conceptual numerical model setups and could also be helpful for designing laboratory glacier experiments. The concept might also be applied to real-world systems, e.g., to examine the response times of glaciers, ice streams or ice sheets to climatic perturbations.
Magic screens
(2016)
Garcilaso de la Vega el Inca, for several centuries doubtlessly the most discussed and most eminent writer of Andean America in the 16th and 17th centuries, throughout his life set the utmost value on the fact that he descended matrilineally from Atahualpa Yupanqui and from the last Inca emperor, Huayna Capac. Thus, both in his person and in his creative work he combined different cultural worlds in a polylogical way. (1) Two painters boasted that very same Inca descent - they were the last two great masters of the Cuzco school of painting, which over several generations of artists had been an institution of excellent renown and prestige, and whose economic downfall and artistic marginalization was vividly described by the French traveller Paul Mancoy in 1837.(2) While, during the 18th century, Cuzco school paintings were still much cherished and sought after, by the beginning of the following century the elite of Lima regarded them as behind the times and provincial, committed to an 'indigenous' painting style. The artists from up-country - such was the reproach - could not keep up with the modern forms of seeing and creating, as exemplified by European paragons. Yet, just how 'provincial', truly, was this art?
In contrast to recent advances in projecting sea levels, estimates of the economic impact of sea level rise remain vague. Nonetheless, they are of great importance for policy making with regard to adaptation and greenhouse-gas mitigation. Since the damage is mainly caused by extreme events, we propose a stochastic framework to estimate the monetary losses from coastal floods in a confined region. For this purpose, we follow a peak-over-threshold approach employing a Poisson point process and the Generalised Pareto Distribution. By considering the effect of sea level rise as well as potential adaptation scenarios on the involved parameters, we are able to study the development of the annual damage. An application to the city of Copenhagen shows that a doubling of losses can be expected from a mean sea level increase of only 11 cm. In general, we find that for varying parameters the expected losses can be well approximated by one of three analytical expressions, depending on the extreme value parameters. These findings reveal the complex interplay of the involved parameters and allow conclusions of fundamental relevance. For instance, we show that the damage typically increases faster than the sea level rise itself. This in turn can be of great importance for the assessment of sea level rise impacts on the global scale. Our results are accompanied by an assessment of uncertainty, which reflects the stochastic nature of extreme events. While the absolute uncertainty about the flood damage increases with rising mean sea levels, we find that it decreases relative to the expected damage.
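The peak-over-threshold framework can be sketched in a few lines: flood events arrive as a Poisson process, exceedance heights follow a Generalised Pareto Distribution, and mean sea level rise shifts every event upward. The damage curve, parameter values and monetary scale below are illustrative assumptions, not the parameters fitted for Copenhagen:

```python
import math
import random

def sample_gpd(sigma, xi, rng):
    """Inverse-CDF draw of a Generalised Pareto exceedance above the threshold."""
    u = rng.random()
    if abs(xi) < 1e-12:
        return -sigma * math.log(1.0 - u)
    return sigma / xi * ((1.0 - u) ** (-xi) - 1.0)

def sample_poisson(lam, rng):
    """Knuth's method for a Poisson-distributed annual event count."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def expected_annual_damage(lam, sigma, xi, slr, years=20000, seed=42):
    """Monte Carlo mean annual damage for a given mean sea level rise slr (m)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(years):
        for _ in range(sample_poisson(lam, rng)):
            surge = sample_gpd(sigma, xi, rng) + slr  # sea level rise shifts every flood peak
            total += 1.0e6 * surge ** 2               # hypothetical quadratic damage curve (EUR)
    return total / years
```

Even with these toy numbers, the estimated annual damage grows markedly under a small rise in mean sea level, mirroring the faster-than-linear damage growth described above.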
Background
Vitamin-D-binding protein (VDBP) is a low-molecular-weight protein that is filtered through the glomerulus as a 25-(OH) vitamin D3/VDBP complex. In the normal kidney, VDBP is reabsorbed and catabolized by proximal tubule epithelial cells, reducing urinary excretion to trace amounts. Acute tubular injury is expected to result in urinary VDBP loss. The purpose of our study was to explore the potential role of urinary VDBP as a biomarker of acute renal damage.
Method
We included 314 patients with diabetes mellitus or mild renal impairment undergoing coronary angiography and collected blood and urine before and 24 hours after contrast media (CM) application. Patients were followed for 90 days for the composite endpoint of major adverse renal events (MARE: need for dialysis, doubling of serum creatinine after 90 days, unplanned emergency rehospitalization, or death).
Results
Increased urine VDBP concentration 24 hours after contrast media exposure was predictive of the need for dialysis (no dialysis: 113.06 ± 299.61 ng/ml, n = 303; need for dialysis: 613.07 ± 700.45 ng/ml, n = 11; mean ± SD, p < 0.001), death (no death during follow-up: 121.41 ± 324.45 ng/ml, n = 306; death during follow-up: 522.01 ± 521.86 ng/ml, n = 8; mean ± SD, p < 0.003) and MARE (no MARE: 112.08 ± 302.00 ng/ml, n = 298; MARE: 506.16 ± 624.61 ng/ml, n = 16; mean ± SD, p < 0.001) during the follow-up of 90 days after contrast media exposure. Correction of urine VDBP concentrations for creatinine excretion confirmed their predictive value and was consistent with increased levels of urinary Kidney Injury Molecule-1 (KIM-1) and baseline plasma creatinine in patients with the above-mentioned complications. The impact of urinary VDBP and KIM-1 on MARE was independent of known contrast-induced nephropathy (CIN) risk factors such as anemia, preexisting renal failure, preexisting heart failure, and diabetes.
Conclusions
Urinary VDBP is a promising novel biomarker of major contrast-induced nephropathy-associated events 90 days after contrast media exposure.
The knowledge of the contemporary in situ stress state is a key issue for safe and sustainable subsurface engineering. However, information on the orientation and magnitudes of the stress state is limited and often not available for the areas of interest. Therefore, 3-D geomechanical-numerical modelling is used to estimate the in situ stress state and the distance of faults from failure for application in subsurface engineering. The main challenge in this approach is to bridge the gap in scale between the widely scattered data used for calibration of the model and the high resolution in the target area required for the application. We present a multi-stage 3-D geomechanical-numerical approach which provides a state-of-the-art model of the stress field for a reservoir-scale area from widely scattered data records. We first use a large-scale regional model which is calibrated by available stress data and provides the full 3-D stress tensor at discrete points in the entire model volume. The modelled stress state is used subsequently for the calibration of a smaller-scale model located within the large-scale model in an area without any observed stress data records. We exemplify this approach with two stages for the area around Munich in the German Molasse Basin. As an example of application, we estimate the scalar values for slip tendency and fracture potential from the model results as measures for the criticality of fault reactivation in the reservoir-scale model. The modelling results show that variations due to uncertainties in the input data are mainly introduced by the uncertain material properties and missing SHmax magnitude estimates needed for a more reliable model calibration. This leads to the conclusion that at this stage the model's reliability depends only on the amount and quality of available stress information rather than on the modelling technique itself or on local details of the model geometry.
Any improvements in modelling and increases in model reliability can only be achieved using more high-quality data for calibration.
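Slip tendency, one of the criticality measures named above, is the ratio of resolved shear stress to normal stress on a fault plane, computed from the full 3-D stress tensor. A minimal sketch, assuming hypothetical principal stress magnitudes and fault orientation (not values from the Munich model):

```python
import math

def traction(stress, n):
    """Traction vector t = sigma . n for a 3x3 stress tensor (compression positive) and unit normal n."""
    return [sum(stress[i][j] * n[j] for j in range(3)) for i in range(3)]

def slip_tendency(stress, n):
    """Slip tendency T_s = tau / sigma_n: resolved shear over normal stress on the plane."""
    t = traction(stress, n)
    sigma_n = sum(t[i] * n[i] for i in range(3))
    tau = math.sqrt(max(sum(ti * ti for ti in t) - sigma_n ** 2, 0.0))
    return tau / sigma_n

# Hypothetical principal stresses (MPa): S1 = 60 (x), S2 = 40 (y), S3 = 25 (z)
stress = [[60.0, 0.0, 0.0], [0.0, 40.0, 0.0], [0.0, 0.0, 25.0]]
# Fault whose normal lies 60 degrees from the S1 axis in the S1-S3 plane
theta = math.radians(60.0)
normal = [math.cos(theta), 0.0, math.sin(theta)]

ts = slip_tendency(stress, normal)   # ~0.45
```

The closer T_s comes to the fault's friction coefficient (often taken near 0.6), the closer the fault is to reactivation.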
Experience has shown that river floods can significantly hamper the reliability of railway networks and cause extensive structural damage and disruption. As a result, the national railway operator in Austria had to cope with financial losses of more than EUR 100 million due to flooding in recent years. Comprehensive information on potential flood risk hot spots as well as on expected flood damage in Austria is therefore needed for strategic flood risk management. In view of this, the flood damage model RAIL (RAilway Infrastructure Loss) was applied to estimate (1) the expected structural flood damage and (2) the resulting repair costs of railway infrastructure due to a 30-, 100- and 300-year flood in the Austrian Mur River catchment. The results were then used to calculate the expected annual damage of the railway subnetwork and subsequently analysed in terms of their sensitivity to key model assumptions. Additionally, the impact of risk aversion on the estimates was investigated, and the overall results were briefly discussed against the background of climate change and possibly resulting changes in flood risk. The findings indicate that the RAIL model is capable of supporting decision-making in risk management by providing comprehensive risk information on the catchment level. It is furthermore demonstrated that an increased risk aversion of the railway operator has a marked influence on flood damage estimates for the study area and, hence, should be considered with regard to the development of risk management strategies.
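An expected annual damage (EAD) of the kind computed from the 30-, 100- and 300-year flood damages can be approximated by integrating damage over annual exceedance probability. A sketch with made-up damage figures (the RAIL estimates themselves are not reproduced here); note that contributions from events more frequent than the 30-year flood or rarer than the 300-year flood are ignored in this simple version:

```python
def expected_annual_damage(return_periods, damages):
    """Trapezoidal approximation of EAD: integral of damage over annual exceedance probability."""
    pairs = sorted(zip(return_periods, damages))          # ascending return period
    probs = [1.0 / t for t, _ in pairs]                   # descending exceedance probability
    dmg = [d for _, d in pairs]
    return sum(0.5 * (dmg[i] + dmg[i + 1]) * (probs[i] - probs[i + 1])
               for i in range(len(pairs) - 1))

# Hypothetical structural damages (million EUR) for the 30-, 100- and 300-year floods
ead = expected_annual_damage([30, 100, 300], [10.0, 40.0, 90.0])   # ~1.02 M EUR per year
```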
Linking together the processes of rapid physical erosion and the resultant chemical dissolution of rock is a crucial step in building an overall deterministic understanding of weathering in mountain belts. Landslides, which are the most volumetrically important geomorphic process at these high rates of erosion, can generate extremely high rates of very localised weathering. To elucidate how this process works we have taken advantage of uniquely intense landsliding, resulting from Typhoon Morakot, in the T'aimali River and surrounds in southern Taiwan. Combining detailed analysis of landslide seepage chemistry with estimates of catchment-by-catchment landslide volumes, we demonstrate that in this setting the primary role of landslides is to introduce fresh, highly labile mineral phases into the surface weathering environment. There, rapid weathering is driven by the oxidation of pyrite and the resultant sulfuric-acid-driven dissolution of primarily carbonate rock. The total dissolved load correlates well with dissolved sulfate - the chief product of this style of weathering - in both landslides and streams draining the area (R² = 0.841 and 0.929, respectively; p < 0.001 in both cases), with solute chemistry in seepage from landslides and catchments affected by significant landsliding governed by the same weathering reactions. The predominance of coupled carbonate-sulfuric-acid-driven weathering is the key difference between these sites and previously studied landslides in New Zealand (Emberson et al., 2016), but in both settings increasing volumes of landslides drive greater overall solute concentrations in streams.
Bedrock landslides, by excavating deep below saprolite-rock interfaces, create conditions for weathering in which all mineral phases in a lithology are initially unweathered within landslide deposits. As a result, the most labile phases dominate the weathering immediately after mobilisation and during a transient period of depletion. This mode of dissolution can strongly alter the overall output of solutes from catchments and their contribution to global chemical cycles if landslide-derived material is retained in catchments for extended periods after mass wasting.
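The reported R² values are squared Pearson correlations between dissolved sulfate and total dissolved load. A minimal sketch with invented paired samples (not the Taiwan data):

```python
def r_squared(x, y):
    """Coefficient of determination of a simple linear fit (squared Pearson correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Hypothetical paired samples: dissolved sulfate vs. total dissolved load (mmol/L)
sulfate = [1.0, 2.0, 3.0, 4.0, 5.0]
tds = [3.1, 5.9, 9.2, 12.1, 14.8]

r2 = r_squared(sulfate, tds)   # close to 1: sulfate tracks the total dissolved load
```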
The eukaryotic-specific Isd11 is a complex-orphan protein with the ability to bind the prokaryotic IscS
(2016)
The eukaryotic protein Isd11 is a chaperone that binds and stabilizes the central component of the essential metabolic pathway responsible for formation of iron-sulfur clusters in mitochondria, the desulfurase Nfs1. Little is known about the exact role of Isd11. Here, we show that human Isd11 (ISD11) is a helical protein which exists in solution as an equilibrium of monomeric, dimeric and tetrameric species in the absence of human Nfs1 (NFS1). We also show that, surprisingly, recombinant ISD11 expressed in E. coli co-purifies with IscS, the bacterial orthologue of NFS1. Binding is weak but specific, suggesting that, despite the absence of Isd11 sequences in bacteria, there is enough conservation between the two desulfurases to retain a similar mode of interaction. This knowledge may inform us about how the mode of binding of Isd11 to the desulfurase is conserved. We used evolutionary evidence to suggest Isd11 residues involved in the interaction.
Strategic sexual signals
(2016)
The color red has special meaning in mating-relevant contexts. Wearing red can enhance perceptions of women's attractiveness and desirability as a potential romantic partner. Building on recent findings, the present study examined whether women's (N = 74) choice to display the color red is influenced by the attractiveness of an expected opposite-sex interaction partner. Results indicated that female participants who expected to interact with an attractive man displayed red (on clothing, accessories, and/or makeup) more often than a baseline consisting of women in a natural environment with no induced expectation. In contrast, when women expected to interact with an unattractive man, they eschewed red, displaying it less often than in the baseline condition. Findings are discussed with respect to evolutionary and cultural perspectives on mate evaluation and selection.
The new sediment record from the deep Dead Sea basin (ICDP core 5017-1) provides a unique archive of hydroclimatic variability in the Levant. Here, we present high-resolution sediment facies analysis and elemental composition by micro-X-ray fluorescence (µXRF) scanning of core 5017-1 to trace lake levels and responses of the regional hydroclimatology during the time interval from ca. 117 to 75 ka, i.e. the transition between the last interglacial and the onset of the last glaciation. We distinguished six major micro-facies types and interpreted these and their alternations in the core in terms of relative lake level changes. The two end-member facies for the highest and lowest lake levels are (a) up to several metres thick, greenish sediments of alternating aragonite and detrital marl laminae (aad) and (b) thick halite facies, respectively. Intermediate lake levels are characterised by detrital marls with varying amounts of aragonite, gypsum or halite, reflecting lower-amplitude, shorter-term variability. Two intervals of pronounced lake level drops occurred at ~110-108 ± 5 and ~93-87 ± 7 ka. They likely coincide with stadial conditions in the central Mediterranean (Melisey I and II pollen zones in Monticchio) and low global sea levels during Marine Isotope Stage (MIS) 5d and 5b. However, our data do not support the current hypothesis of an almost complete desiccation of the Dead Sea during the earlier of these lake level low stands, which is based on a recovered gravel layer. Based on new petrographic analyses, we propose that, although it was a low stand, this well-sorted gravel layer may be a vestige of a thick turbidite that was washed out during drilling rather than an in situ beach deposit. Two intervals of higher lake stands at ~108-93 ± 6 and ~87-75 ± 7 ka correspond to interstadial conditions in the central Mediterranean, i.e. pollen zones St. Germain I and II in Monticchio, and Greenland interstadials (GI) 24+23 and 21 in Greenland, as well as to sapropels S4 and S3 in the Mediterranean Sea. These apparent correlations suggest a close link of the climate in the Levant to North Atlantic and Mediterranean climates during the build-up of Northern Hemisphere ice sheets in the early last glacial period.
Injection of fluids into deep saline aquifers causes a pore pressure increase in the storage formation, and thus displacement of resident brine. Via hydraulically conductive faults, brine may migrate upwards into shallower aquifers and lead to unwanted salinisation of potable groundwater resources. In the present study, we investigated different scenarios for a potential storage site in the Northeast German Basin using a three-dimensional (3-D) regional-scale model that includes four major fault zones. The focus was on assessing the impact of fault length and the effect of a secondary reservoir above the storage formation, as well as model boundary conditions and initial salinity distribution on the potential salinisation of shallow groundwater resources. We employed numerical simulations of brine injection as a representative fluid.
Our simulation results demonstrate that the lateral model boundary settings and the effective fault damage zone volume have the greatest influence on pressure build-up and development within the reservoir, and thus on the intensity and duration of fluid flow through the faults. Higher vertical pressure gradients for short fault segments or a small effective fault damage zone volume result in the highest salinisation potential, because a larger vertical fault height is affected by fluid displacement. Consequently, whether a salinity gradient exists or the saltwater-freshwater interface lies below the fluid displacement depth in the faults has a strong impact on the degree of shallow aquifer salinisation. A small effective fault damage zone volume or low fault permeability further extends the duration of fluid flow, which can persist for several tens to hundreds of years if the reservoir is laterally confined. Laterally open reservoir boundaries, large effective fault damage zone volumes and intermediate reservoirs significantly reduce vertical brine migration and the potential for freshwater salinisation, because the origin depth of the displaced brine is at most a few decametres below the shallow aquifer.
The present study demonstrates that the existence of hydraulically conductive faults is not necessarily an exclusion criterion for potential injection sites, because salinisation of shallower aquifers strongly depends on initial salinity distribution, location of hydraulically conductive faults and their effective damage zone volumes as well as geological boundary conditions.
Individuals within populations often differ substantially in habitat use, the ecological consequences of which can be far reaching. Stable isotope analysis provides a convenient and often cost-effective means of indirectly assessing the habitat use of individuals that can yield valuable insights into the spatiotemporal distribution of foraging specialisations within a population. Here we use the stable isotope ratios of southern sea lion (Otaria flavescens) pup vibrissae at the Falkland Islands, in the South Atlantic, as a proxy for adult female habitat use during gestation. A previous study found that adult females from one breeding colony (Big Shag Island) foraged in two discrete habitats, inshore (coastal) or offshore (outer Patagonian Shelf). However, as this species breeds at over 70 sites around the Falkland Islands, it is unclear if this pattern is representative of the Falkland Islands as a whole. In order to characterize habitat use, we therefore assayed carbon (delta C-13) and nitrogen (delta N-15) isotope ratios from 65 southern sea lion pup vibrissae, sampled across 19 breeding colonies at the Falkland Islands. Model-based clustering of pup isotope ratios identified three distinct clusters, representing adult females that foraged inshore, offshore, and a cluster best described as intermediate. A significant difference was found in the use of inshore and offshore habitats between West and East Falkland and between the two colonies with the largest sample sizes, both of which are located in East Falkland. However, habitat use was unrelated to the proximity of breeding colonies to the Patagonian Shelf, a region associated with enhanced biological productivity. Our study thus points towards other factors, such as local oceanography and its influence on resource distribution, playing a prominent role in inshore and offshore habitat use.
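Model-based clustering of isotope ratios is typically done with Gaussian mixture models fitted by expectation-maximisation. As a simplified illustration (one-dimensional data, invented delta C-13 values and starting means; the study used both delta C-13 and delta N-15), a minimal EM sketch:

```python
import math

def gmm_1d(data, means, iters=200):
    """Minimal 1-D Gaussian-mixture EM: a sketch of model-based clustering."""
    k = len(means)
    means = list(means)
    weights = [1.0 / k] * k
    variances = [1.0] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for x in data:
            dens = [w * math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)
                    for w, m, v in zip(weights, means, variances)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            variances[j] = max(sum(r[j] * (x - means[j]) ** 2
                                   for r, x in zip(resp, data)) / nj, 1e-6)
    return weights, means, variances

# Hypothetical pup delta C-13 values (per mil): offshore, intermediate and inshore groups
d13c = [-20.1, -19.8, -20.3, -17.2, -16.9, -17.1, -14.2, -13.9, -14.1]
w, m, v = gmm_1d(d13c, means=[-20.0, -17.0, -14.0])
```

With well-separated groups the fitted component means recover the three foraging clusters; in practice a model-selection criterion (e.g. BIC) would choose the number of components.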
About a quarter of anthropogenic CO2 emissions are currently taken up by the oceans, decreasing seawater pH. We performed a mesocosm experiment in the Baltic Sea in order to investigate the consequences of increasing CO2 levels on pelagic carbon fluxes. A gradient of different CO2 scenarios, ranging from ambient (~370 µatm) to high (~1200 µatm), was set up in mesocosm bags (~55 m³). We determined standing stocks and temporal changes of total particulate carbon (TPC), dissolved organic carbon (DOC), dissolved inorganic carbon (DIC), and particulate organic carbon (POC) of specific plankton groups. We also measured carbon flux via CO2 exchange with the atmosphere and sedimentation (export), and biological rate measurements of primary production, bacterial production, and total respiration. The experiment lasted for 44 days and was divided into three different phases (I: t0-t16; II: t17-t30; III: t31-t43). Pools of TPC, DOC, and DIC were approximately 420, 7200, and 25 200 mmol C m⁻² at the start of the experiment, and the initial CO2 additions increased the DIC pool by ~7% in the highest CO2 treatment. Overall, there was a decrease in TPC and an increase in DOC over the course of the experiment. The decrease in TPC was lower, and the increase in DOC higher, in treatments with added CO2. During phase I the estimated gross primary production (GPP) was ~100 mmol C m⁻² day⁻¹, of which 75-95% was respired, ~1% ended up in the TPC (including export), and 5-25% was added to the DOC pool. During phase II, the respiration loss increased to ~100% of GPP at the ambient CO2 concentration, whereas respiration was lower (85-95% of GPP) in the highest CO2 treatment. Bacterial production was, on average, ~30% lower at the highest CO2 concentration than in the controls during phases II and III.
This resulted in a higher accumulation of DOC and a smaller reduction of the TPC pool in the elevated CO2 treatments at the end of phase II, extending throughout phase III. The "extra" organic carbon at high CO2 remained fixed in an increasing biomass of small-sized plankton and in the DOC pool, and did not transfer into large, sinking aggregates. Our results revealed a clear effect of increasing CO2 on the carbon budget and mineralization, in particular under nutrient-limited conditions. Lower carbon losses (respiration and bacterial remineralization) at elevated CO2 levels resulted in higher TPC and DOC pools than at the ambient CO2 concentration. These results highlight the importance of addressing not only net changes in carbon standing stocks but also carbon fluxes and budgets to better disentangle the effects of ocean acidification.
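The phase-I carbon budget quoted above (GPP of roughly 100 mmol C per square metre per day, of which 75-95% is respired, ~1% reaches TPC and 5-25% the DOC pool) can be checked with simple partitioning arithmetic; the specific fractions below are mid-range illustrative picks from those ranges, not measured values:

```python
def partition_gpp(gpp, f_resp, f_tpc, f_doc):
    """Split gross primary production into respiration, TPC and DOC fluxes; fractions must close the budget."""
    assert abs(f_resp + f_tpc + f_doc - 1.0) < 1e-9, "carbon budget does not close"
    return {"respiration": gpp * f_resp, "TPC": gpp * f_tpc, "DOC": gpp * f_doc}

# Illustrative mid-range fractions: 84% respired, 1% to TPC (incl. export), 15% to DOC
fluxes = partition_gpp(100.0, 0.84, 0.01, 0.15)
```

Requiring the fractions to sum to one is exactly the budget-closure constraint that makes the phase-by-phase flux comparison in the text meaningful.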
Molecular paleoclimate reconstructions over the last 9 ka from a peat sequence in South China
(2016)
To achieve a better understanding of Holocene climate change in the monsoon regions of China, we investigated the molecular distributions and carbon and hydrogen isotope compositions (delta C-13 and delta D values) of long-chain n-alkanes in a peat core from the Shiwangutian (SWGT) peatland, south China, over the last 9 ka. By comparison with other climate records, we found that the delta C-13 values of the long-chain n-alkanes can serve as a proxy for humidity, while the delta D values of the long-chain n-alkanes primarily recorded the moisture source delta D signal during 9-1.8 ka BP and responded to the dry climate during 1.8-0.3 ka BP. Together with the average chain length (ACL) and carbon preference index (CPI) data, the climate evolution over the last 9 ka in the SWGT peatland can be divided into three stages. During the first stage (9-5 ka BP), the delta C-13 values were depleted and CPI and Paq values were low, while ACL values were high. These reveal a period of warm and wet climate, which is regarded as the Holocene optimum. The second stage (5-1.8 ka BP) witnessed a shift to a relatively cool and dry climate, as indicated by the more positive delta C-13 values and lower ACL values. During the third stage (1.8-0.3 ka BP), the delta C-13, delta D, CPI and Paq values showed a marked increase and ACL values varied greatly, implying an abrupt change to cold and dry conditions. This climate pattern corresponds to the broad decline in Asian monsoon intensity through the latter part of the Holocene. Our results do not support a later Holocene optimum in south China as suggested by previous studies.
Background
Dietary calcium (Ca) concentrations might affect regulatory pathways within Ca and vitamin D metabolism and consequently excretory mechanisms. Despite large variations in the Ca concentrations of feline diets, their physiological impact on Ca homeostasis has not been evaluated to date. In the present study, diets with increasing concentrations of dicalcium phosphate were offered to ten healthy adult cats (Ca/phosphorus (P): 6.23/6.02, 7.77/7.56, 15.0/12.7, 19.0/17.3, 22.2/19.9, 24.3/21.6 g/kg dry matter). Each feeding period was divided into a 10-day adaptation and an 8-day sampling period in order to collect urine and faeces. On the last day of each feeding period, blood samples were taken.
Results
Urinary Ca concentrations remained unaffected, but faecal Ca concentrations increased (P < 0.001) with increasing dietary Ca levels. No effect on whole and intact parathyroid hormone levels, fibroblast growth factor 23 or calcitriol concentrations in the blood of the cats was observed. However, the calcitriol precursors 25(OH)D-2 and 25(OH)D-3, which are considered the most useful indicators of vitamin D status, decreased with higher dietary Ca levels (P = 0.013 and P = 0.033). Increasing dietary levels of dicalcium phosphate had an acidifying effect on urinary fasting pH (6.02) and postprandial pH (6.01) (P < 0.001), possibly mediated by an increase of urinary phosphorus concentrations (P < 0.001).
Conclusions
In conclusion, calcitriol precursors were linearly affected by increasing dietary Ca concentrations. The increase in faecal Ca excretion indicates that Ca homeostasis of cats is mainly regulated in the intestine and not by the kidneys. Long-term studies should investigate the physiological relevance of the acidifying effect observed when feeding diets high in Ca and P.
Gonorrhoea, caused by Neisseria gonorrhoeae, is one of the most prevalent sexually transmitted diseases worldwide, with more than 100 million new infections per year. A lack of intense research over the last decades and increasing resistance to the recommended antibiotics call for a better understanding of gonococcal infection, fast diagnostics and therapeutic measures against N. gonorrhoeae. Therefore, the aim of this work was to identify novel immunogenic proteins as a first step to address these unresolved problems. For the identification of immunogenic proteins, pHORF oligopeptide phage display libraries of the entire N. gonorrhoeae genome were constructed. Several immunogenic oligopeptides were identified using polyclonal rabbit antibodies against N. gonorrhoeae. Corresponding full-length proteins of the identified oligopeptides were expressed and their immunogenic character was verified by ELISA. The immunogenic character of six proteins was identified for the first time. An additional 13 proteins were verified as immunogenic proteins of N. gonorrhoeae.
The aim of the present study was to test the functional relevance of the spatial concepts UP and DOWN for words that use these concepts either literally (space) or metaphorically (time, valence). Functional relevance would imply a symmetrical relationship between the spatial concepts and words related to them: processing a word activates the related spatial concept on the one hand, and activating the concept eases retrieval of a related word on the other. To test the latter, participants' body position was manipulated to be either upright or head-down tilted in order to activate the related spatial concept. Afterwards, in a within-subject design, participants produced previously memorized words from the concepts space, time and valence, paced by a metronome. All words were related either to the spatial concept UP or to DOWN. The results, including Bayesian analyses, show (1) a significant interaction between body position and words using the concepts UP and DOWN literally, (2) a marginally significant interaction between body position and temporal words and (3) no effect between body position and valence words. However, post-hoc analyses suggest no difference between experiments. The authors therefore conclude that integrating sensorimotor experiences is indeed of functional relevance for all three concepts of space, time and valence, but that the strength of this functional relevance depends on how closely words are linked to the mental concepts representing vertical space.
The extent of gene flow during the range expansion of non-native species influences the amount of genetic diversity retained in expanding populations. Here, we analyse the population genetic structure of the raccoon dog (Nyctereutes procyonoides) in north-eastern and central Europe. This invasive species is of management concern because it is highly susceptible to fox rabies and an important secondary host of the virus. We hypothesized that the large number of introduced animals and the species' dispersal capabilities led to high population connectivity and the maintenance of genetic diversity throughout the invaded range. We genotyped 332 tissue samples from seven European countries using 16 microsatellite loci. Different algorithms identified three genetic clusters corresponding to Finland, Denmark and a large 'central' population that reached from introduction areas in western Russia to northern Germany. Cluster assignments provided evidence of long-distance dispersal. The results of an Approximate Bayesian Computation analysis supported a scenario of equal effective population sizes among different pre-defined populations in the large central cluster. Our results are in line with strong gene flow and secondary admixture between neighbouring demes leading to reduced genetic structuring, probably a result of the species' fairly rapid population expansion after introduction. The results presented here are remarkable in the sense that we identified a homogeneous genetic cluster inhabiting an area stretching over more than 1500 km. They are also relevant for disease management: in the event of a significant rabies outbreak, there is a great risk of a rapid virus spread among raccoon dog populations.
Volunteered geographical information (VGI) and citizen science have become important sources of data for much scientific research. In the domain of land cover, crowdsourcing can provide data of high temporal resolution to support different analyses of landscape processes. However, scientists may have little control over what gets recorded by the crowd, providing a potential source of error and uncertainty. This study compared analyses of crowdsourced land cover data contributed by different groups, based on nationality (labelled Gondor and Non-Gondor) and on domain experience (labelled Expert and Non-Expert). The analyses used a geographically weighted model to generate maps of land cover and compared the maps generated by the different groups. The results highlight the differences between the maps and how specific land cover classes were under- and over-estimated. As crowdsourced data and citizen science are increasingly used to replace data collected under a designed experiment, this paper highlights the importance of considering between-group variations and their impacts on the results of analyses. Critically, differences in the way that landscape features are conceptualised by different groups of contributors need to be considered when using crowdsourced data in formal scientific analyses. The discussion considers the potential for variation in crowdsourced data and the relativist nature of land cover, and suggests a number of areas for future research. The key finding is that the veracity of citizen science data is not the critical issue per se. Rather, it is important to consider the impacts of differences in the semantics, affordances and functions associated with landscape features held by different groups of crowdsourced data contributors.
Exosomes are small membrane vesicles released by different cell types, including hepatocytes, that play important roles in intercellular communication. We have previously demonstrated that hepatocyte-derived exosomes contain the synthetic machinery to form sphingosine-1-phosphate (S1P) in target hepatocytes resulting in proliferation and liver regeneration after ischemia/reperfusion (I/R) injury. We also demonstrated that the chemokine receptors, CXCR1 and CXCR2, regulate liver recovery and regeneration after I/R injury. In the current study, we sought to determine if the regulatory effects of CXCR1 and CXCR2 on liver recovery and regeneration might occur via altered release of hepatocyte exosomes. We found that hepatocyte release of exosomes was dependent upon CXCR1 and CXCR2. CXCR1-deficient hepatocytes produced fewer exosomes, whereas CXCR2-deficient hepatocytes produced more exosomes compared to their wild-type controls. In CXCR2-deficient hepatocytes, there was increased activity of neutral sphingomyelinase (Nsm) and intracellular ceramide. CXCR1-deficient hepatocytes had no alterations in Nsm activity or ceramide production. Interestingly, exosomes from CXCR1-deficient hepatocytes had no effect on hepatocyte proliferation, due to a lack of neutral ceramidase and sphingosine kinase. The data demonstrate that CXCR1 and CXCR2 regulate hepatocyte exosome release. The mechanism utilized by CXCR1 remains elusive, but CXCR2 appears to modulate Nsm activity and resultant production of ceramide to control exosome release. CXCR1 is required for packaging of enzymes into exosomes that mediate their hepatocyte proliferative effect.
Processes involved in late bilinguals' production of morphologically complex words were studied using an event-related brain potentials (ERP) paradigm in which EEGs were recorded during participants' silent productions of English past- and present-tense forms. Twenty-three advanced second language speakers of English (first language [L1] German) were compared to a control group of 19 L1 English speakers from an earlier study. We found a frontocentral negativity for regular relative to irregular past-tense forms (e.g., asked vs. held) during (silent) production, and no difference for the present-tense condition (e.g., asks vs. holds), replicating the ERP effect obtained for the L1 group. This ERP effect suggests that combinatorial processing is involved in producing regular past-tense forms, in both late bilinguals and L1 speakers. We also suggest that this paradigm is a useful tool for future studies of online language production.
Antibodies against the spike proteins of influenza are used as tools for virus characterization and in therapeutic approaches. However, the development, production and quality control of antibodies are expensive and time-consuming. To circumvent these difficulties, three peptides were derived from the complementarity-determining regions of an antibody heavy chain against the influenza A spike glycoprotein. Their binding properties were studied experimentally and by molecular dynamics simulations. Two peptide candidates showed binding to influenza A/Aichi/2/68 H3N2. One of them, termed PeB, with the highest affinity, prevented binding to and infection of target cells in the micromolar range without any cytotoxic effect. PeB best matches the conserved receptor binding site of hemagglutinin. PeB also bound to other medically relevant influenza strains, such as human-pathogenic A/California/7/2009 H1N1 and avian-pathogenic A/MuteSwan/Rostock/R901/2006 H7N1. Strategies to improve the affinity and to adapt specificity are discussed and exemplified by a double amino acid substituted peptide obtained by substitutional analysis. The peptides and their derivatives hold great potential for drug development as well as biosensing.
Fluxes of organic and inorganic carbon within the Amazon basin are considerably controlled by annual flooding, which triggers the export of terrigenous organic material to the river and ultimately to the Atlantic Ocean. The amount of carbon imported to the river, and its further conversion, transport and export, depend on temperature, atmospheric CO2, terrestrial productivity and carbon storage, as well as on discharge. Both terrestrial productivity and discharge are influenced by climate and land use change. The coupled LPJmL and RivCM model system (Langerwisch et al., 2016) has been applied to assess the combined impacts of climate and land use change on Amazon riverine carbon dynamics. Vegetation dynamics (in LPJmL) as well as the export and conversion of terrigenous carbon to and within the river (RivCM) are included. The model system has been applied for the years 1901 to 2099 under two deforestation scenarios and with climate forcing from three SRES emission scenarios, each for five climate models. We find that high deforestation (business-as-usual scenario) will strongly decrease riverine particulate and dissolved organic carbon amounts (locally by up to 90%) by the end of the current century. At the same time, increased discharge leaves net carbon transport during the first decades of the century roughly unchanged, but only where a sufficient area remains forested. After 2050 the amount of transported carbon will decrease drastically. In contrast, increased temperature and atmospheric CO2 concentration determine the amount of riverine inorganic carbon stored in the Amazon basin. Higher atmospheric CO2 concentrations increase the riverine inorganic carbon amount by up to 20% (SRES A2). The changes in riverine carbon fluxes have direct effects on carbon export, either to the atmosphere via outgassing or to the Atlantic Ocean via discharge.
Outgassed carbon will increase slightly across the Amazon basin, but can be regionally reduced by up to 60% due to deforestation. The discharge of organic carbon to the ocean will be reduced by about 40% under the most severe deforestation and climate change scenario. These changes would have local and regional consequences for the carbon balance and habitat characteristics in the Amazon basin itself, as well as in the adjacent Atlantic Ocean.
The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions.
Most climate change impacts manifest in the form of natural hazards. Damage assessment typically relies on damage functions that translate the magnitude of extreme events to a quantifiable damage. In practice, the availability of damage functions is limited due to a lack of data sources and a lack of understanding of damage processes. The study of the characteristics of damage functions for different hazards could strengthen the theoretical foundation of damage functions and support their development and validation. Accordingly, we investigate analogies of damage functions for coastal flooding and for wind storms and identify a unified approach. This approach has general applicability for granular portfolios and may also be applied, for example, to heat-related mortality. Moreover, the unification enables the transfer of methodology between hazards and a consistent treatment of uncertainty. This is demonstrated by a sensitivity analysis on the basis of two simple case studies (for coastal flood and storm damage). The analysis reveals the relevance of the various uncertainty sources at varying hazard magnitude and on both the microscale and the macroscale level. Main findings are the dominance of uncertainty from the hazard magnitude and the persistent behaviour of intrinsic uncertainties on both scale levels. Our results shed light on the general role of uncertainties and provide useful insight for the application of the unified approach.
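The notion of a damage function and the propagation of its uncertainty can be made concrete with a minimal sketch (all functional forms and parameter values below are hypothetical illustrations, not the study's calibrations): a sigmoid curve maps hazard magnitude, such as flood depth or wind speed, to a damage fraction, and Monte Carlo sampling of the curve's parameters carries their uncertainty through to the damage estimate.

```python
import math
import random

def damage_fraction(magnitude, threshold, steepness):
    """Sigmoid damage function: fraction of asset value lost as a
    function of hazard magnitude (e.g. flood depth in metres)."""
    return 1.0 / (1.0 + math.exp(-steepness * (magnitude - threshold)))

def expected_damage(magnitude, asset_value, n_samples=10000, seed=0):
    """Monte Carlo propagation of (hypothetical) parameter uncertainty
    in the damage function to the expected damage."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        threshold = rng.gauss(1.0, 0.2)   # assumed: damage onset near 1 m
        steepness = rng.gauss(2.0, 0.3)   # assumed curve steepness
        total += damage_fraction(magnitude, threshold, steepness) * asset_value
    return total / n_samples
```

The same skeleton applies across hazards; only the magnitude variable and the parameter distributions change, which is what makes a unified, hazard-independent treatment of uncertainty possible.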
Even if greenhouse gas emissions were stopped today, sea level would continue to rise for centuries, with the long-term sea-level commitment of a 2 degrees C warmer world significantly exceeding 2 m. In view of the potential implications for coastal populations and ecosystems worldwide, we investigate, from an ice-dynamic perspective, the possibility of delaying sea-level rise by pumping ocean water onto the surface of the Antarctic ice sheet. We find that, due to wave propagation, ice is discharged back into the ocean much faster than would be expected from pure advection with surface velocities. The delay time depends strongly on the distance from the coastline at which the additional mass is placed and less strongly on the rate of sea-level rise that is mitigated. A millennium-scale storage of at least 80% of the additional ice requires placing it at a distance of at least 700 km from the coastline. The pumping energy required to raise the potential energy of the ocean water and thereby mitigate the currently observed 3 mm yr^-1 of sea-level rise would exceed 7% of the current global primary energy supply. At the same time, the approach offers comprehensive protection for entire coastlines, particularly including regions that cannot be protected by dikes.
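The order of magnitude of the energy estimate can be checked with back-of-envelope arithmetic (the round numbers below are generic assumptions, not the paper's exact values; the quoted 7% additionally reflects pumping losses and the actual deposition geometry):

```python
# Ideal, lossless potential-energy estimate (assumed round numbers).
OCEAN_AREA = 3.61e14      # m^2, global ocean surface area
RISE_RATE = 3.0e-3        # m/yr, currently observed sea-level rise
RHO_SEAWATER = 1025.0     # kg/m^3
G = 9.81                  # m/s^2
LIFT_HEIGHT = 3000.0      # m, assumed mean elevation of the deposition site

mass_per_year = OCEAN_AREA * RISE_RATE * RHO_SEAWATER   # kg of water per year
energy_per_year = mass_per_year * G * LIFT_HEIGHT       # J/yr, no losses

GLOBAL_PRIMARY_ENERGY = 5.7e20   # J/yr, roughly the current global supply
fraction = energy_per_year / GLOBAL_PRIMARY_ENERGY
```

With these lossless numbers the fraction already comes out at several percent of the global primary energy supply; real pumps and transport would push it above the 7% quoted in the abstract.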
Air pollution is the number one environmental cause of premature deaths in Europe. Despite extensive regulations, air pollution remains a challenge, especially in urban areas. For studying summertime air quality in the Berlin-Brandenburg region of Germany, the Weather Research and Forecasting Model with Chemistry (WRF-Chem) is set up and evaluated against meteorological and air quality observations from monitoring stations as well as from a field campaign conducted in 2014. The objective is to assess which resolution and level of detail in the input data is needed for simulating urban background air pollutant concentrations and their spatial distribution in the Berlin-Brandenburg area. The model setup includes three nested domains with horizontal resolutions of 15, 3 and 1 km and anthropogenic emissions from the TNO-MACC III inventory. We use RADM2 chemistry and the MADE/SORGAM aerosol scheme. Three sensitivity simulations are conducted: updating input parameters to the single-layer urban canopy model based on structural data for Berlin, specifying land use classes on a sub-grid scale (mosaic option) and downscaling the original emissions to a resolution of ca. 1 km x 1 km for Berlin based on proxy data including traffic density and population density. The results show that the model simulates meteorology well, though urban 2 m temperature and urban wind speeds are biased high and nighttime mixing layer height is biased low in the base run with the settings described above. We show that the simulation of urban meteorology can be improved by specifying the input parameters to the urban model, and to a lesser extent by using the mosaic option. On average, ozone is simulated reasonably well, but maximum daily 8 h mean concentrations are underestimated, which is consistent with the results from previous modelling studies using the RADM2 chemical mechanism. Particulate matter is underestimated, which is partly due to an underestimation of secondary organic aerosols.
NOx (NO + NO2) concentrations are simulated reasonably well on average, but nighttime concentrations are overestimated due to the model's underestimation of the mixing layer height, and urban daytime concentrations are underestimated. The daytime underestimation is improved when using downscaled, and thus locally higher emissions, suggesting that part of this bias is due to deficiencies in the emission input data and their resolution. The results further demonstrate that a horizontal resolution of 3 km improves the results and spatial representativeness of the model compared to a horizontal resolution of 15 km. With the input data (land use classes, emissions) at the level of detail of the base run of this study, we find that a horizontal resolution of 1 km does not improve the results compared to a resolution of 3 km. However, our results suggest that a 1 km horizontal model resolution could enable a detailed simulation of local pollution patterns in the Berlin-Brandenburg region if the urban land use classes, together with the respective input parameters to the urban canopy model, are specified with a higher level of detail and if urban emissions of higher spatial resolution are used.
In recent decades, the Greenland Ice Sheet has been losing mass and has thereby contributed to global sea-level rise. The rate of ice loss is highly relevant for coastal protection worldwide, and the loss is likely to increase under future warming. Beyond a critical temperature threshold, a meltdown of the Greenland Ice Sheet is induced by the self-reinforcing feedback between its lowering surface elevation and its increasing surface mass loss: the more ice that is lost, the lower the ice surface and the warmer the surface air temperature, which fosters further melting and ice loss. The computation of this rate so far relies on complex numerical models, which are the appropriate tools for capturing the complexity of the problem. By contrast, we aim here at a conceptual understanding by deriving a purposefully simple equation for the self-reinforcing feedback, which is then used to estimate the melt time for different levels of warming using three observable characteristics of the ice sheet itself and its surroundings. The analysis is purely conceptual in nature and omits important processes such as ice dynamics that would be needed for applications to sea-level rise on centennial timescales. However, if the volume loss is dominated by the feedback, the resulting logarithmic equation unifies existing numerical simulations and shows that the melt time depends strongly on the level of warming, with a critical slow-down near the threshold: the median time to lose 10% of the present-day ice volume varies between about 3500 years for a temperature level of 0.5 degrees C above the threshold and 500 years for 5 degrees C. Unless future observations show a significantly higher melting sensitivity than currently observed, a complete meltdown is unlikely within the next 2000 years without significant ice-dynamical contributions.
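The character of such a feedback can be illustrated with a toy version of the melt-elevation equation (all parameter values here are arbitrary illustrations, not the paper's calibrated observables): surface lowering warms the ice surface via the atmospheric lapse rate, which in turn accelerates melt, dh/dt = -beta * (dT + gamma * (h0 - h)). Integrating this gives a melt time that grows logarithmically slowly as the warming level dT approaches the threshold.

```python
import math

def melt_time_numeric(dT, h0=3000.0, beta=0.001, gamma=0.007,
                      fraction=0.10, dt=1.0):
    """Years until the ice thickness h drops by `fraction`, by explicit
    Euler stepping of dh/dt = -beta * (dT + gamma * (h0 - h)).
    dT: warming above the threshold (K); gamma: lapse rate (K/m)."""
    h, t = h0, 0.0
    target = h0 * (1.0 - fraction)
    while h > target:
        h -= beta * (dT + gamma * (h0 - h)) * dt
        t += dt
    return t

def melt_time_logarithmic(dT, h0=3000.0, beta=0.001, gamma=0.007,
                          fraction=0.10):
    """Closed-form solution of the same ODE: the melt time diverges
    logarithmically as dT -> 0, the critical slow-down near the threshold."""
    return math.log(1.0 + gamma * fraction * h0 / dT) / (beta * gamma)
```

With these illustrative parameters the time to lose 10% of the thickness is several times longer for 0.5 K above the threshold than for 5 K, mirroring the qualitative behaviour described in the abstract.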
Climate change increases riverine carbon outgassing, while export to the ocean remains uncertain
(2016)
Any regular interaction of land and river during flooding affects carbon pools within the terrestrial system, riverine carbon and carbon exported from the system. In the Amazon basin carbon fluxes are considerably influenced by annual flooding, during which terrigenous organic material is imported to the river. The Amazon basin therefore represents an excellent example of a tightly coupled terrestrial-riverine system. The processes of generation, conversion and transport of organic carbon in such a coupled terrigenous-riverine system strongly interact and are climate-sensitive, yet their functioning is rarely considered in Earth system models and their response to climate change is still largely unknown. To quantify regional and global carbon budgets and the effects of climate change on carbon pools and fluxes, it is important to account for the coupling between the land, the river, the ocean and the atmosphere. We developed the RIVerine Carbon Model (RivCM), which is directly coupled to the well-established dynamic vegetation and hydrology model LPJmL, in order to account for this large-scale coupling. We evaluate RivCM with observational data and show that some variables are reproduced quite well by the model, while others show large deviations, mainly caused by the simplifying assumptions we made. Our evaluation shows that it is possible to reproduce large-scale carbon transport across a river system, but that this involves large uncertainties. Acknowledging these uncertainties, we estimate the potential changes in riverine carbon by applying RivCM with climate forcing from five climate models and three CO2 emission scenarios (Special Report on Emissions Scenarios, SRES). We find that climate change causes a doubling of riverine organic carbon in the southern and western basin while reducing it by 20% in the eastern and northern parts.
In contrast, the amount of riverine inorganic carbon shows a 2- to 3-fold increase across the entire basin, independent of the SRES scenario. The export of carbon to the atmosphere increases as well, by about 30% on average. Changes in the future export of organic carbon to the Atlantic Ocean, however, depend on the SRES scenario and are projected to either decrease by about 8.9% (SRES A1B) or increase by about 9.1% (SRES A2). Such changes in the terrigenous-riverine system could have local and regional impacts on the carbon budget of the whole Amazon basin and parts of the Atlantic Ocean. Changes in riverine carbon could lead to a shift in riverine nutrient supply and pH, while changes in the carbon exported to the ocean would alter the supply of organic material that acts as a food source in the Atlantic. On larger scales, the increased outgassing of CO2 could turn the Amazon basin from a sink of carbon into a considerable source. Therefore, we propose that the coupling of terrestrial and riverine carbon budgets be included in subsequent analyses of the future regional carbon budget.
The present study aimed to integrate findings from technology acceptance research with research on applicant reactions to new technology for the emerging selection procedure of asynchronous video interviewing. One hundred and six volunteers experienced asynchronous video interviewing and filled out several questionnaires, including one on their personality. In line with previous technology acceptance research, the data revealed that perceived usefulness and perceived ease of use predicted attitudes toward asynchronous video interviewing. Furthermore, openness was found to moderate the relation between perceived usefulness and attitudes toward this particular selection technology. No significant effects emerged for computer self-efficacy, job interview self-efficacy, extraversion, neuroticism, or conscientiousness. Theoretical and practical implications are discussed.
In low-accumulation regions, the reliability of delta O-18-derived temperature signals from ice cores within the Holocene is unclear, primarily due to the small climate changes relative to the intrinsic noise of the isotopic signal. In order to learn about the representativeness of single ice cores and to optimise future ice-core-based climate reconstructions, we studied the stable-water isotope composition of firn at Kohnen Station, Dronning Maud Land, Antarctica. Analysing delta O-18 in two 50 m long snow trenches allowed us to create an unprecedented, two-dimensional image characterising the isotopic variations from the centimetre to the 100-metre scale. Our results show seasonal layering of the isotopic composition but also high horizontal isotopic variability caused by local stratigraphic noise. Based on the horizontal and vertical structure of the isotopic variations, we derive a statistical noise model which successfully explains the trench data. The model further allows one to determine an upper bound for the reliability of climate reconstructions conducted in our study region at seasonal to annual resolution, depending on the number and the spacing of the cores taken.
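The benefit of sampling several cores can be sketched with a toy version of such a noise model (idealized signal and noise levels chosen purely for illustration, not the study's fitted parameters): each virtual core records a common seasonal signal plus independent stratigraphic noise, and stacking N cores reduces the noise level by roughly 1/sqrt(N) when the noise is uncorrelated between cores.

```python
import math
import random

def simulate_core(signal, noise_sd, rng):
    """One virtual core: common isotopic signal plus independent
    stratigraphic noise at every depth sample."""
    return [s + rng.gauss(0.0, noise_sd) for s in signal]

def stack(cores):
    """Average several cores sample-by-sample; for uncorrelated noise
    the noise level drops as 1/sqrt(number of cores)."""
    n = len(cores)
    return [sum(vals) / n for vals in zip(*cores)]

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

rng = random.Random(42)
signal = [math.sin(2 * math.pi * i / 12.0) for i in range(240)]  # idealized seasonal cycle
cores = [simulate_core(signal, 1.0, rng) for _ in range(16)]
err_single = rmse(cores[0], signal)      # single-core error
err_stacked = rmse(stack(cores), signal) # roughly err_single / 4 for 16 cores
```

In reality the stratigraphic noise is spatially correlated over some decorrelation length, which is why the core spacing, not only the core number, enters the upper bound on reconstruction reliability.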
Isostasy is one of the oldest and most widely applied concepts in the geosciences, but the geoscientific community lacks a coherent, easy-to-use tool to simulate flexure of a realistic (i.e., laterally heterogeneous) lithosphere under an arbitrary set of surface loads. Such a model is needed for studies of mountain building, sedimentary basin formation, glaciation, sea-level change, and other tectonic, geodynamic, and surface processes. Here I present gFlex (for GNU flexure), an open-source model that can produce analytical and finite difference solutions for lithospheric flexure in one (profile) and two (map view) dimensions. To simulate the flexural isostatic response to an imposed load, it can be used by itself or within GRASS GIS for better integration with field data. gFlex is also a component within the Community Surface Dynamics Modeling System (CSDMS) and Landlab modeling frameworks for coupling with a wide range of Earth-surface-related models, and can be coupled to additional models within Python scripts. As an example of this in-script coupling, I simulate the effects of spatially variable lithospheric thickness on a modeled Iceland ice cap. Finite difference solutions in gFlex can use any of five types of boundary conditions: 0-displacement, 0-slope (i.e., clamped); 0-slope, 0-shear; 0-moment, 0-shear (i.e., broken plate); mirror symmetry; and periodic. Typical calculations with gFlex require from much less than 1 s to around 1 min on a personal laptop computer. These characteristics - multiple ways to run the model, multiple solution methods, multiple boundary conditions, and short compute time - make gFlex an effective tool for flexural isostatic modeling across the geosciences.
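The kind of problem gFlex solves can be illustrated with a minimal one-dimensional finite-difference sketch (an independent toy implementation, not gFlex's actual API; all parameter values are placeholders): the plate equation D d4w/dx4 + drho * g * w = q(x) is discretized with a five-point stencil, here with a simple 0-displacement boundary.

```python
import numpy as np

def flex1d(q, dx, Te=20e3, E=65e9, nu=0.25, drho=2400.0, g=9.81):
    """1-D flexure of a uniform elastic plate under a load q (Pa).
    Solves D * d4w/dx4 + drho * g * w = q by finite differences, with
    w = 0 imposed at the two outermost nodes on each side."""
    D = E * Te**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity
    n = len(q)
    A = np.zeros((n, n))
    c = D / dx**4
    for i in range(2, n - 2):
        A[i, i-2:i+3] = c * np.array([1.0, -4.0, 6.0, -4.0, 1.0])
        A[i, i] += drho * g                  # buoyant restoring force
    b = np.array(q, dtype=float)
    for i in (0, 1, n - 2, n - 1):           # 0-displacement boundary
        A[i, :] = 0.0
        A[i, i] = 1.0
        b[i] = 0.0
    return np.linalg.solve(A, b)
```

A quick sanity check: far from the boundaries, a uniform load deflects the plate by the Airy-isostasy value q / (drho * g), and a point load produces a symmetric flexural depression.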
Ongoing climate change is known to cause an increase in the frequency and amplitude of local temperature and precipitation extremes in many regions of the Earth. While gradual changes in the climatological conditions have already been shown to strongly influence plant flowering dates, the question arises if and how extremes specifically impact the timing of this important phenological phase. Studying this question calls for the application of statistical methods that are tailored to the specific properties of event time series. Here, we employ event coincidence analysis, a novel statistical tool that allows assessing whether or not two types of events exhibit similar sequences of occurrences in order to systematically quantify simultaneities between meteorological extremes and the timing of the flowering of four shrub species across Germany. Our study confirms previous findings of experimental studies by highlighting the impact of early spring temperatures on the flowering of the investigated plants. However, previous studies solely based on correlation analysis do not allow deriving explicit estimates of the strength of such interdependencies without further assumptions, a gap that is closed by our analysis. In addition to direct impacts of extremely warm and cold spring temperatures, our analysis reveals statistically significant indications of an influence of temperature extremes in the autumn preceding the flowering.
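The core counting step of event coincidence analysis can be sketched as follows (a minimal toy version; the full method additionally assesses significance against a null model of randomly placed events):

```python
def coincidence_rate(events_a, events_b, delta_t):
    """Precursor coincidence rate: fraction of events in `events_a`
    (e.g. flowering dates) that are preceded by at least one event in
    `events_b` (e.g. temperature extremes) within [t - delta_t, t].
    Event times are given in arbitrary consistent units (e.g. days)."""
    if not events_a:
        return 0.0
    hits = sum(
        1 for ta in events_a
        if any(ta - delta_t <= tb <= ta for tb in events_b)
    )
    return hits / len(events_a)
```

For example, with flowering events at times 10, 20 and 30, extreme events at 9 and 28, and a window of 2, two of the three flowering events are "coincident", giving a rate of 2/3. Comparing such rates against the expectation under random event placement is what turns the count into an explicit, testable estimate of interdependence strength.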
To understand past flood changes in the Rhine catchment, and in particular the role of anthropogenic climate change in extreme flows, an attribution study relying on a proper GCM (general circulation model) downscaling is needed. Downscaling based on conditioning a stochastic weather generator on weather patterns is a promising approach. This approach assumes a strong link between weather patterns and local climate, and sufficient GCM skill in reproducing weather pattern climatology. These presuppositions are evaluated here for the first time, using 111 years of daily climate data from 490 stations in the Rhine basin and comprehensively testing the number of classification parameters and GCM weather pattern characteristics. A classification based on a combination of mean sea level pressure, temperature, and humidity from the ERA20C reanalysis of atmospheric fields over central Europe with 40 weather types was found to be the most appropriate for stratifying six local climate variables. The corresponding skill is quite diverse though, ranging from good for radiation to poor for precipitation. Especially for the latter it was apparent that pressure fields alone cannot sufficiently stratify local variability. To test the skill of the latest generation of GCMs from the CMIP5 ensemble in reproducing the frequency, seasonality, and persistence of the derived weather patterns, output from 15 GCMs is evaluated. Most GCMs are able to capture these characteristics well, but some models show consistent deviations in all three evaluation criteria and should be excluded from further attribution analysis.
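The stratification idea behind such a weather-pattern classification can be sketched with a toy example (a plain k-means on flattened pressure fields with a deterministic initialization; the study's actual classification scheme and skill scores are more elaborate): days are grouped by field similarity, and the skill for a local variable is measured as the fraction of its variance explained by the weather types.

```python
import numpy as np

def classify_days(fields, k=5, iters=50):
    """Toy weather typing: k-means on flattened daily fields (e.g.
    pressure maps). Initial centers are evenly spaced days."""
    X = fields.reshape(len(fields), -1).astype(float)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def stratification_skill(labels, local_var):
    """Variance of a local variable explained by the weather types
    (law of total variance): 1 means perfect stratification."""
    local_var = np.asarray(local_var, dtype=float)
    within = sum(
        np.var(local_var[labels == j]) * np.mean(labels == j)
        for j in np.unique(labels)
    )
    return 1.0 - within / np.var(local_var)
```

A skill near 1 for radiation but near 0 for precipitation would reproduce, in miniature, the finding that pressure-based patterns alone stratify some local variables far better than others.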
Brief communication
(2016)
In March 2015, a new international blueprint for disaster risk reduction (DRR) was adopted in Sendai, Japan, at the end of the Third UN World Conference on Disaster Risk Reduction (WCDRR, 14-18 March 2015). We review and discuss the agreed commitments and targets, as well as the negotiations leading to the Sendai Framework for DRR (SF-DRR), and briefly discuss its implications for the subsequent UN-led negotiations on sustainable development goals and climate change.
It has been proposed that in online sentence comprehension the dependency between a reflexive pronoun such as himself/herself and its antecedent is resolved using exclusively syntactic constraints. Under this strictly syntactic search account, Principle A of the binding theory, which requires that the antecedent c-command the reflexive within the same clause in which the reflexive occurs, constrains the parser's search for an antecedent. The parser thus ignores candidate antecedents that might match agreement features of the reflexive (e.g., gender) but are ineligible as potential antecedents because they are in structurally illicit positions. An alternative possibility accords no special status to structural constraints: in addition to using Principle A, the parser also uses non-structural cues such as gender to access the antecedent. According to cue-based retrieval theories of memory (e.g., Lewis and Vasishth, 2005), the use of non-structural cues should result in increased retrieval times and occasional errors when candidates partially match the cues, even if the candidates are in structurally illicit positions. In this paper, we first show how the retrieval processes that underlie reflexive binding are naturally realized in the Lewis and Vasishth (2005) model. We present the predictions of the model under the assumption that both structural and non-structural cues are used during retrieval, and provide a critical analysis of previous empirical studies that failed to find evidence for the use of non-structural cues, suggesting that these failures may be Type II errors. We use this analysis and the results of further modeling to motivate a new empirical design that we use in an eye-tracking study. The results of this study confirm the key predictions of the model concerning the use of non-structural cues, and are inconsistent with the strictly syntactic search account.
These results present a challenge for theories advocating the infallibility of the human parser in the case of reflexive resolution, and provide support for the inclusion of agreement features such as gender in the set of retrieval cues.
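The partial-matching mechanism described above can be illustrated with a toy sketch, assuming a Lewis and Vasishth (2005)-style scoring of candidates by cue match and the standard ACT-R mapping from activation to latency; the candidate names, weights, and scaling constant below are hypothetical, not the authors' implementation.

```python
# Toy sketch of cue-based retrieval: each memory candidate is scored by
# how many retrieval cues it matches; a structurally illicit distractor
# that matches only the gender cue still competes for retrieval.
import math

def retrieve(candidates, cues, match_weight=1.0, F=0.2):
    """Return (winner, latency) for a set of retrieval cues.

    Activation here is just the summed cue-match score; latency follows
    the ACT-R mapping latency = F * exp(-activation)."""
    best, best_act = None, float("-inf")
    for name, features in candidates.items():
        activation = sum(match_weight for cue, value in cues.items()
                         if features.get(cue) == value)
        if activation > best_act:
            best, best_act = name, activation
    return best, F * math.exp(-best_act)

# Hypothetical memory: the target matches both cues; the distractor is
# not in a c-commanding position but matches the gender cue.
candidates = {
    "subject":    {"c_command": True,  "gender": "masc"},
    "distractor": {"c_command": False, "gender": "masc"},
}
winner, latency = retrieve(candidates, {"c_command": True, "gender": "masc"})
```

With noise added to the activations (as in the full model), the partially matching distractor would occasionally win the retrieval, producing the slowdowns and errors the account predicts.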
This paper is intended as a contribution to a book on community obligations. It focuses on third parties' rights and obligations in armed conflict.
It is often said that international law has developed from a legal order which is designed to protect sovereignty to a system which also promotes community interests. This shift is said to be reflected in structural changes of the legal system. The creation of rights and obligations for third parties is generally seen as a part of this perceived paradigmatic shift. Community interests can be furthered either by negative duties of abstention, by an entitlement for third states, or even by duties to take positive measures. Since the shift towards protecting community interests apparently requires some form of cooperation, positive rights and duties to protect and to promote appear to be indispensable. Authors relying on a community perspective often dismiss duties of abstention as an expression of indifference in the face of a violation of a fundamental norm. Solidarity seems to require that third states take a more proactive role in actively enforcing community interests.
The paper aims to test this understanding on the basis of an analysis of rights and obligations of third states in armed conflict. In order to argue that duties of abstention of third states are a central instrument for promoting community interests in relation to armed conflicts, the paper will first trace pertinent structural changes in international law. In particular, it will question the extent to which positive rights and obligations of third states have been firmly established in international law. In a second step, this contribution will evaluate the overall tendencies in the ongoing lawmaking process for promoting community interests in relation to armed conflict.
Caribbean States organised in CARICOM recently brought forward reparation claims against several European States to compensate for slavery and (native) genocides in the Caribbean, and even threatened to approach the International Court of Justice. The paper provides an analysis of the facts behind the CARICOM claim and asks whether the law of state responsibility can provide the demanded compensation. As the intertemporal principle generally prohibits the retroactive application of today's international rules, the paper argues that the entire claim must be based on the law of state responsibility in force at the time of the respective conduct. An inquiry into the history of the primary rules (prohibition of slavery and genocide) as well as the secondary rules of state responsibility reveals that both sets of rules were underdeveloped or non-existent at the times of slavery and the alleged (native) genocides. The author therefore concludes that the CARICOM claim is legally flawed but nevertheless worth the attention, as it once again exposes imperial and colonial injustices of the past and their legitimization by historical international law and international/natural lawyers.
The third working paper in the KFG Working Paper Series analyses the state and prospects of international disarmament treaties under the aegis of the United Nations. While the thirty years between the Cuban Missile Crisis and the fall of the Iron Curtain were a successful period for disarmament, no disarmament treaties other than the Arms Trade Treaty have since been concluded within the United Nations. The current mood is described as hesitant to negative, even though there is a backlog of work to adapt disarmament treaties to today's political realities and to the current state of technology. As a solution, the author proposes creating a better climate for disarmament through a policy of small steps, giving the discourse a new direction on the basis of additional protocols to existing treaties and, if necessary, by moving to other fora.
This article re-examines the relationship between Africa and the International Criminal Court (ICC). It traces the successive changes in the African attitude towards the Court, from states' euphoria, to hostility against its work, to regional counter-initiatives under the umbrella of the African Union (AU). The main argument goes beyond the idea of "the Court that Africa wants" in order to identify the concrete reasons which may have fostered, if not enticed, the majority of African states to become ICC members and to cooperate actively with the Court, while, paradoxically, some great powers have decided to stay outside its jurisdiction. It also seeks to understand, from a political and legal viewpoint, which parameters have changed since then to provoke the hostile attitude against the Court's work and the entrance of the AU into the debate through the African Common Position on the ICC. Lastly, the article explores African alternatives to the contested ICC justice system. It examines the need to reform the Rome Statute in order to give more independence, credibility and legitimacy to the ICC, as well as the Court's partial duplication by the new "Criminal Court of the African Union". Particular attention is paid to the resistance against the idea of reforming the ICC justice system.
The paper undertakes a preliminary assessment of current developments of international law for the purpose of mapping the ground for a larger research project. The research project pursues the goal of determining whether public international law, as it has developed since the end of the Cold War, is continuing its progressive move towards a more human-rights- and multi-actor-oriented order, or whether we are seeing a renewed emphasis of more classical elements of international law. In this context the term “international rule of law” is chosen to designate the more recent and “thicker” understanding of international law. The paper discusses how it can be determined whether this form of international law continues to unfold, and whether we are witnessing challenges to this order which could give rise to more fundamental reassessments.
Do properties of individual languages shape the mechanisms by which they are processed? By virtue of their non-concatenative morphological structure, the recognition of complex words in Semitic languages has been argued to rely strongly on morphological information and on decomposition into root and pattern constituents. Here, we report results from a masked priming experiment in Hebrew in which we contrasted verb forms belonging to two morphological classes, Paal and Piel, which display similar properties, but crucially differ on whether they are extended to novel verbs. Verbs from the open-class Piel elicited familiar root priming effects, but verbs from the closed-class Paal did not. Our findings indicate that, similarly to other (e.g., Indo-European) languages, down-to-the-root decomposition in Hebrew does not apply to stems of non-productive verbal classes. We conclude that the Semitic word processor is less unique than previously thought: Although it operates on morphological units that are combined in a non-linear way, it engages the same universal mechanisms of storage and computation as those seen in other languages.
Background: The goal of this study was to estimate the prevalence of and risk factors for diagnosed depression in heart failure (HF) patients in German primary care practices.
Methods: This study was a retrospective database analysis in Germany utilizing the Disease Analyzer (R) Database (IMS Health, Germany). The study population included 132,994 patients between 40 and 90 years of age from 1,072 primary care practices. The observation period was between 2004 and 2013. Follow-up lasted up to five years and ended in April 2015. A total of 66,497 HF patients were selected after applying the exclusion criteria. An equal number of controls (66,497) was then chosen and matched (1:1) to the HF patients on the basis of age, sex, health insurance, depression diagnosis in the past, and follow-up duration after the index date.
Results: HF was a strong risk factor for diagnosed depression (p < 0.0001). A total of 10.5% of HF patients and 6.3% of matched controls developed depression after one year of follow-up (p < 0.001). Depression was documented in 28.9% of the HF group and 18.2% of the control group after the five-year follow-up (p < 0.001). Cancer, dementia, osteoporosis, stroke, and osteoarthritis were associated with a higher risk of developing depression. Male gender and private health insurance were associated with a lower risk of depression.
Conclusions: The risk of diagnosed depression is significantly increased in patients with HF compared to patients without HF in primary care practices in Germany.
Changes in free symptom attributions in hypochondriasis after cognitive therapy and exposure therapy
(2016)
Background: Cognitive-behavioural therapy can change dysfunctional symptom attributions in patients with hypochondriasis. Past research has used forced-choice answer formats, such as questionnaires, to assess these misattributions; however, with this approach, idiosyncratic attributions cannot be assessed. Free associations are an important complement to existing approaches that assess symptom attributions. Aims: With this study, we contribute to the current literature by using an open-response instrument to investigate changes in freely associated attributions after exposure therapy (ET) and cognitive therapy (CT) compared with a wait list (WL). Method: The current study is a re-examination of a formerly published randomized controlled trial (Weck, Neng, Richtberg, Jakob and Stangier, 2015) that investigated the effectiveness of CT and ET. Seventy-three patients with hypochondriasis were randomly assigned to CT, ET or a WL, and completed a 12-week treatment (or waiting period). Before and after the treatment or waiting period, patients completed an Attribution task in which they had to spontaneously attribute nine common bodily sensations to possible causes in an open-response format. Results: Compared with the WL, both CT and ET reduced the frequency of somatic attributions regarding severe diseases (CT: Hedges's g = 1.12; ET: Hedges's g = 1.03) and increased the frequency of normalizing attributions (CT: Hedges's g = 1.17; ET: Hedges's g = 1.24). Only CT changed the attributions regarding moderate diseases (Hedges's g = 0.69). Changes in somatic attributions regarding mild diseases and psychological attributions were not observed. Conclusions: Both CT and ET are effective for treating freely associated misattributions in patients with hypochondriasis. This study supplements research that used a forced-choice assessment.
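The treatment effects above are reported as Hedges's g. As a reference point, a minimal sketch of the standard two-group formula with the small-sample bias correction follows; the group summaries plugged in are hypothetical, and the study's exact variant (e.g., for pre-post comparisons against the wait list) may differ.

```python
# Illustrative computation of Hedges's g, the effect-size metric
# reported in the abstract above.
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with small-sample bias correction."""
    # Pooled standard deviation (denominator of Cohen's d)
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                   / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Hedges's correction factor J ~ 1 - 3 / (4*df - 1)
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    return d * j

# Hypothetical group summaries (not data from the trial):
g = hedges_g(mean1=5.0, mean2=3.8, sd1=1.1, sd2=1.0, n1=25, n2=24)
```

Values around g = 1, like those reported for CT and ET, correspond to a standardized mean difference of roughly one pooled standard deviation.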
Much research on language control in bilinguals has relied on the interpretation of the costs of switching between two languages. Of the two types of costs that are linked to language control, switching costs are assumed to be transient in nature and modulated by trial-specific manipulations (e.g., by preparation time), while mixing costs are supposed to be more stable and less affected by trial-specific manipulations. The present study investigated the effect of preparation time on switching and mixing costs, revealing that both types of costs can be influenced by trial-specific manipulations.
Rhythm perception is assumed to be guided by a domain-general auditory principle, the Iambic/Trochaic Law, stating that sounds varying in intensity are grouped as strong-weak, and sounds varying in duration are grouped as weak-strong. Recently, Bhatara et al. (2013) showed that rhythmic grouping is influenced by native language experience, French listeners having weaker grouping preferences than German listeners. This study explores whether L2 knowledge and musical experience also affect rhythmic grouping. In a grouping task, French late learners of German listened to sequences of coarticulated syllables varying in either intensity or duration. Data on their language and musical experience were obtained by a questionnaire. Mixed-effect model comparisons showed influences of musical experience as well as L2 input quality and quantity on grouping preferences. These results imply that adult French listeners' sensitivity to rhythm can be enhanced through L2 and musical experience.
Background: Dementia is a psychiatric condition whose development is associated with numerous aspects of life. Our aim was to estimate risk factors for dementia in German primary care patients.
Methods: The case-control study included primary care patients (70-90 years) with a first diagnosis of dementia (all-cause) during the index period (01/2010-12/2014) (Disease Analyzer, Germany), and controls without dementia matched (1:1) to cases on the basis of age, sex, type of health insurance, and physician. Practice visit records were used to verify that there had been 10 years of continuous follow-up prior to the index date. Multivariate logistic regression models were fitted with dementia as the dependent variable and the potential risk factors as predictors.
Results: The mean age for the 11,956 cases and the 11,956 controls was 80.4 (SD: 5.3) years. 39.0% of them were male and 1.9% had private health insurance. In the multivariate regression model, the following variables were linked to a significant extent with an increased risk of dementia: diabetes (OR: 1.17; 95% CI: 1.10-1.24), lipid metabolism (1.07; 1.00-1.14), stroke incl. TIA (1.68; 1.57-1.80), Parkinson's disease (PD) (1.89; 1.64-2.19), intracranial injury (1.30; 1.00-1.70), coronary heart disease (1.06; 1.00-1.13), mild cognitive impairment (MCI) (2.12; 1.82-2.48), mental and behavioral disorders due to alcohol use (1.96; 1.50-2.57). The use of statins (OR: 0.94; 0.90-0.99), proton-pump inhibitors (PPI) (0.93; 0.90-0.97), and antihypertensive drugs (0.96, 0.94-0.99) were associated with a decreased risk of developing dementia.
Conclusions: Risk factors for dementia found in this study are consistent with the literature. Nevertheless, the associations between statin, PPI and antihypertensive drug use, and decreased risk of dementia need further investigations.
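Effect estimates like "OR: 1.17; 95% CI: 1.10-1.24" above come from exponentiating a fitted logistic regression coefficient and its Wald confidence bounds. A minimal sketch of that back-transformation follows; the coefficient and standard error are illustrative values, not the study's estimates.

```python
# Turn a log-odds coefficient (beta) and its standard error (se) into
# an odds ratio with a Wald 95% confidence interval.
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic regression coefficient into OR and CI."""
    return (math.exp(beta),            # point estimate
            math.exp(beta - z * se),   # lower 95% bound
            math.exp(beta + z * se))   # upper 95% bound

# Hypothetical fitted values:
or_, lo, hi = odds_ratio_ci(beta=0.157, se=0.030)
```

An OR above 1 with a lower bound above 1 (as for diabetes or stroke above) indicates a statistically significant increase in the odds of the outcome; ORs below 1 (as for statins or PPIs) indicate a decrease.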
An egalitarian approach to the fair representation of voters specifies three main institutional requirements: proportional representation, legislative majority rule and a parliamentary system of government. This approach faces two challenges: the under-determination of the resulting democratic process and the idea of a trade-off between equal voter representation and government accountability. Linking conceptual with comparative analysis, the article argues that we can distinguish three ideal-typical varieties of the egalitarian vision of democracy, based on the stages at which majorities are formed. These varieties do not put different relative normative weight on equality and accountability, but have different conceptions of both values and their reconciliation. The view that accountability is necessarily linked to 'clarity of responsibility', widespread in the comparative literature, is questioned, as is the idea of a general trade-off between representation and accountability. Depending on the vision of democracy, the two values need not be in conflict.
Background: Given the well-established association between perceived stress and quality of life (QoL) in dementia patients and their partners, our goal was to identify whether relationship quality and dyadic coping would operate as mediators between perceived stress and QoL.
Methods: 82 dyads of dementia patients and their spousal caregivers were included in a cross-sectional assessment from a prospective study. QoL was assessed with the Quality of Life in Alzheimer's Disease scale (QoL-AD) for dementia patients and the WHO Quality of Life-BREF for spousal caregivers. Perceived stress was measured with the Perceived Stress Scale (PSS-14). Both partners were assessed with the Dyadic Coping Inventory (DCI). Analyses of correlation as well as regression models including mediator analyses were performed.
Results: We found negative correlations between stress and QoL in both partners (QoL-AD: r = -0.62; p < 0.001; WHO-QOL Overall: r = -0.27; p = 0.02). Spousal caregivers had a significantly lower DCI total score than dementia patients (p < 0.001). Dyadic coping was a significant mediator of the relationship between stress and QoL in spousal caregivers (z = 0.28; p = 0.02), but not in dementia patients. Likewise, relationship quality significantly mediated the relationship between stress and QoL in caregivers only (z = -2.41; p = 0.02).
Conclusions: This study identified dyadic coping as a mediator of the relationship between stress and QoL in (caregiving) partners of dementia patients. In patients, however, we found a direct negative effect of stress on QoL. The findings suggest the importance of stress-reducing and dyadic interventions for dementia patients and their partners, respectively.
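The mediation results above are reported as z statistics. One common way such a z is obtained is the Sobel test on the indirect path a*b (a: predictor-to-mediator path, b: mediator-to-outcome path); the sketch below uses that generic formula with hypothetical path estimates, not the authors' exact analysis pipeline.

```python
# Sobel test for an indirect (mediated) effect a*b.
import math

def sobel_z(a, se_a, b, se_b):
    """z statistic for the indirect effect a*b given path SEs."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return (a * b) / se_ab

# Hypothetical path estimates (stress -> dyadic coping -> QoL):
z = sobel_z(a=0.40, se_a=0.10, b=0.35, se_b=0.12)
```

A |z| above roughly 1.96 indicates a significant indirect effect at the 5% level, which is the logic behind reading z = 0.28 (n.s. threshold aside, as reported with its own p value) and z = -2.41 as evidence for mediation in caregivers.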