In this article the author proposes a new reading for the opening words of the Bible, "In the beginning God created the heaven and the earth. Now the earth was unformed and void ... ; and the spirit of God hovered over the water" (Gen. 1:1-2). This new reading is based on the connections drawn by Otto Eissfeldt between the Ugaritic literature and the Bible. God, according to this opening picture, connects intimately, empathetically, with the existing matter (the tehom) in dialogic address. It is from this relationship, which today we call "love," that all comes to be "born" from the material "womb" of the tehom. From this "big bang," all continues to be born.
This study aimed to estimate the optimal body size, limb segment length, and girth or breadth ratios for 100-m breaststroke performance in youth swimmers. In total, 59 swimmers [male: n = 39, age = 11.5 (1.3) y; female: n = 20, age = 12.0 (1.0) y] participated in this study. To identify size/shape characteristics associated with 100-m breaststroke swimming performance, we computed a multiplicative allometric log-linear regression model, which was refined using backward elimination. Results showed that 100-m breaststroke performance had a significant negative association with fat mass and significant positive associations with the segment length ratio (arm ratio = hand length/forearm length) and limb girth ratio (girth ratio = forearm girth/wrist girth). In addition, leg length, biacromial breadth, and biiliocristal breadth showed significant positive associations with 100-m breaststroke performance. However, height and body mass did not contribute to the model, suggesting that the advantage of longer levers was limb-specific rather than a general whole-body advantage. In fact, it is only by adopting multiplicative allometric models that the aforementioned ratios could have been derived. These results highlight the importance of considering the anthropometric characteristics of youth breaststroke swimmers for talent identification and/or athlete monitoring purposes. In addition, these findings may assist in orienting swimmers to the appropriate stroke based on their anthropometric characteristics.
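As a rough illustration of the modelling approach described above (a minimal sketch with simulated data and hypothetical predictor names, not the study's code or measurements): a multiplicative model of the form time = a * x1^b1 * x2^b2 * ... becomes linear after log-transforming both sides, and backward elimination then prunes non-significant predictors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 59
# Hypothetical anthropometric predictors (strictly positive, as allometry requires)
df = pd.DataFrame({
    "fat_mass":    rng.uniform(5, 20, n),     # kg
    "arm_ratio":   rng.uniform(0.6, 0.9, n),  # hand length / forearm length
    "girth_ratio": rng.uniform(1.3, 1.8, n),  # forearm girth / wrist girth
    "leg_length":  rng.uniform(70, 95, n),    # cm
})
# Simulated log swim time: depends on fat mass and arm ratio only
y = 0.3 * np.log(df["fat_mass"]) - 0.5 * np.log(df["arm_ratio"]) + rng.normal(0, 0.05, n)
X = sm.add_constant(np.log(df))  # multiplicative model is linear in logs

def backward_eliminate(y, X, alpha=0.05):
    """Drop the least significant predictor until all p-values < alpha."""
    while True:
        fit = sm.OLS(y, X).fit()
        pvals = fit.pvalues.drop("const")
        if pvals.max() < alpha:
            return fit
        X = X.drop(columns=[pvals.idxmax()])

print(backward_eliminate(y, X).params)  # noise predictors are pruned away
```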
The scientific drilling campaign PALEOVAN was conducted in the summer of 2010 as part of the International Continental Scientific Drilling Program (ICDP). The main goal of the campaign was the recovery of a sensitive climate archive in eastern Anatolia: the lacustrine deposits beneath the floor of Lake Van. The drilled core material was recovered from two locations, the Ahlat Ridge and the Northern Basin. A composite core, constructed from the cored material of seven parallel boreholes at the Ahlat Ridge, covers an almost complete lacustrine history of Lake Van. The composite record offers sensitive climate proxies such as variations in total organic carbon, K/Ca ratios, and the relative abundance of arboreal pollen. These proxies reveal patterns similar to climate proxy variations in Greenland ice cores, which have been dated by modelling the timing of orbital forcing of climate. Volatiles from melted ice aliquots are often taken as high-resolution proxies and provide a basis for fitting the corresponding temporal models.
The ICDP PALEOVAN scientific team fitted proxy data from the lacustrine drilling record to ice core data and constructed an age model. Embedded volcaniclastic layers had to be dated radiometrically in order to provide independent age constraints for this climate-stratigraphic age model. Solving this task by applying the 40Ar/39Ar method was the main objective of this thesis. Earlier efforts to apply 40Ar/39Ar dating had resulted in inaccuracies that could not be explained satisfactorily.
The absence of K-rich feldspars in suitable tephra layers meant that feldspar crystals needed to be at least 500 μm in size for single-crystal 40Ar/39Ar dating. Some samples contained no crystals of this size, or only very few. To overcome this problem, this study applied a combined single-crystal and multi-crystal approach using different crystal fractions from the same sample. The preferred method, stepwise heating analysis of an aliquot of feldspar crystals, was applied to three samples. The Na-rich crystals and their young geological age required 20 mg of inclusion-free, non-corroded feldspars. Small sample volumes (usually 25% aliquots of 5 cm3 of sample material, a spoonful of tephra) and the widespread presence of melt inclusions led to the application of combined single- and multigrain total fusion analyses. 40Ar/39Ar analyses on single crystals have the advantage of being able to monitor the presence of excess 40Ar and of detrital or xenocrystic contamination in the samples; multigrain analyses may mask these effects. The results from the multigrain analyses are therefore discussed with respect to the findings from the respective cogenetic single-crystal ages. Some of the samples in this study were dated by 40Ar/39Ar on multigrain feldspar separates, in combination (where available) with a few single crystals. 40Ar/39Ar ages from two of the samples deviated statistically from the age model, yielding older ages than the model; all other samples gave identical ages. t-Tests compared radiometric ages with available age control points from various proxies and from the relative paleointensity of the Earth's magnetic field within a stratigraphic range of ± 10 m. Concordant age control points from different relative chronometers indicated that the deviations result from erroneous 40Ar/39Ar ages. The thesis argues for two potential causes of these ages: (1) the irregular occurrence of excess 40Ar from rare melt and fluid inclusions and (2) the contamination of the samples with older crystals due to a rapid combination of assimilation and ejection.
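The thesis compared radiometric ages with age control points via t-tests; the simplified sketch below (illustrative numbers, not thesis data) shows the core of such a comparison, testing whether two independent age estimates with 1-sigma uncertainties agree.

```python
# A minimal sketch of testing whether a 40Ar/39Ar age deviates from an
# age-model tie point, given 1-sigma errors (a simplified z-test; the thesis
# used t-tests against multiple control points within +/- 10 m stratigraphy).
import numpy as np
from scipy import stats

def age_z_test(age1, sd1, age2, sd2):
    """Two-sided test that two independent age estimates agree."""
    z = (age1 - age2) / np.hypot(sd1, sd2)
    p = 2 * stats.norm.sf(abs(z))
    return z, p

z, p = age_z_test(83.0, 2.1, 76.5, 1.8)  # hypothetical ages in ka
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 -> ages deviate significantly
```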
Another aliquot of feldspar crystals that underwent separation for 40Ar/39Ar dating was investigated for geochemical inhomogeneities. Magmatic zoning is ubiquitous in the volcaniclastic feldspar crystals. Four different types of magmatic zoning were detected: compositional zoning (C-type), pseudo-oscillatory zoning of trace element concentrations (PO-type), chaotic and patchy zoning of major and trace element concentrations (R-type), and concentric zoning of trace elements (CC-type). Samples with deviating 40Ar/39Ar ages showed C-type zoning, R-type zoning, or a mix of different zoning types (C-type and PO-type). Feldspars showing PO-type zoning typically represent the smallest grain-size fractions in the samples. The constant major element compositions of these crystals are interpreted to represent the latest stages in the compositional evolution of feldspars in a peralkaline melt. PO-type crystals contain fewer melt inclusions than other zoning types and are rarely corroded. This thesis concludes that feldspars showing PO-type zoning are the most promising chronometers for the 40Ar/39Ar method when samples provide mixed zoning types of Quaternary anorthoclase feldspars.
Five samples were dated by applying the 40Ar/39Ar method to volcanic glass. High fractions of atmospheric Ar (typically > 98%) significantly hampered the precision of the 40Ar/39Ar ages and resulted in rough age estimates that widely overlap the age model. Ar isotopes indicated that the glasses bear a chlorine-rich Ar end-member. The chlorine-derived 38Ar indicated chlorine-rich fluid inclusions or hydration of the volcanic glass shards. This strengthened the evidence that irregularly distributed melt inclusions, and thus irregularly distributed excess 40Ar, influenced the problematic feldspar 40Ar/39Ar ages. Whether a connection exists between a corrected initial 40Ar/36Ar ratio from the glasses and the 40Ar/36Ar ratios from pore waters remains unclear.
This thesis offers an alternative age model, similarly based on the interpolation of temporal tie points from geophysical and climate-stratigraphic data. The new model used a PCHIP interpolation (piecewise cubic Hermite interpolating polynomial), whereas the older age model used a spline interpolation. Samples whose feldspar 40Ar/39Ar ages matched the earlier published age model were additionally assigned ages from the PCHIP interpolation. These modelled ages allowed a recalculation of the Alder Creek sanidine mineral standard. The climate-stratigraphic calibration of a 40Ar/39Ar mineral standard proved that the age-versus-depth interpolations from the PALEOVAN drilling cores are accurate, and that the applied chronometers recorded the temporal evolution of Lake Van synchronously.
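The difference between the two interpolants is easy to demonstrate (illustrative tie points, not the PALEOVAN data): a cubic spline can overshoot between age-depth tie points and produce apparent age reversals, whereas PCHIP is shape-preserving.

```python
# A minimal sketch of why PCHIP can be preferable to a cubic spline for
# age-depth models: PCHIP cannot overshoot between tie points, so modelled
# ages stay monotonic with depth.
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

depth = np.array([0.0, 10.0, 12.0, 40.0, 45.0, 80.0])    # hypothetical tie points (m)
age   = np.array([0.0, 20.0, 21.0, 150.0, 155.0, 400.0])  # ka

d = np.linspace(0, 80, 500)
pchip  = PchipInterpolator(depth, age)(d)
spline = CubicSpline(depth, age)(d)

print("spline monotonic:", np.all(np.diff(spline) >= 0))  # may be False (age reversals)
print("pchip  monotonic:", np.all(np.diff(pchip)  >= 0))  # True for monotone tie points
```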
A petrochemical discrimination of the sampled volcaniclastic material is also presented in this thesis. 41 of 57 sampled volcaniclastic layers indicate Nemrut as their provenance. The criteria used for the provenance assignment are provided and reviewed critically. Detailed correlations of selected PALEOVAN volcaniclastics with onshore samples described in detail by earlier studies are also discussed. The sampled volcaniclastics are mostly < 40 cm thick and were ejected by small- to medium-sized eruptions. Onshore deposits from these types of eruptions are potentially eroded owing to the predominant strong winds on the Nemrut and Süphan slopes. An exact correlation with the data presented here is therefore equivocal or not possible at all.
Deviating feldspar 40Ar/39Ar ages can possibly be explained by inherited 40Ar from feldspar xenocrysts contaminating the samples. To test this hypothesis, Ba diffusion couples were investigated in compositionally zoned feldspar crystals. The diffusive behaviour of Ba in feldspar is known, and the concentration gradients allowed calculation of the duration of a crystal's magmatic development since the formation of the zoning interface. These durations were compared with degassing scenarios that model the Ar loss during assimilation and subsequent ejection of the xenocrysts. Diffusive equilibration of the contrasting Ba concentrations is assumed to yield maximum durations, as a gradient could have developed over several growth and heating stages. The modelling shows no indication of an involvement of inherited 40Ar in any of the deviating samples. However, the analytical set-up represents the lower limit of the required spatial resolution; it therefore cannot be excluded that the degassing modelling relies on a significant overestimation of the maximum duration of the magmatic history. Nevertheless, the xenocryst degassing modelling indicates that the irregular incorporation of excess 40Ar by melt and fluid inclusions is the most critical problem to overcome in dating volcaniclastic feldspars from the PALEOVAN drill cores. This thesis provides the complete background for generating and presenting 40Ar/39Ar ages that are compared with age data from a climate-stratigraphic model. Deviations are identified statistically and then discussed in order to find explanations from the age model and/or from 40Ar/39Ar geochronology. Most of the PALEOVAN stratigraphy provides several chronometers that have been proven to be synchronous. The lacustrine deposits of Lake Van represent a key archive for reconstructing climate evolution in the eastern Mediterranean and the Near East, and the PALEOVAN record offers a climate-stratigraphic age model of remarkable accuracy and resolution.
The Gongjue basin in the eastern Qiangtang terrane is located in the transition region where the regional structural lineation curves from east-west-oriented in Tibet to north-south-oriented in Yunnan. In this study, we sampled the red beds in the basin from the lower Gongjue to the upper Ranmugou formations, for the first time covering the entire stratigraphic profile. The stratigraphic ages are bracketed within 53-43 Ma by new detrital zircon U-Pb ages, constraining the maximum depositional age to 52.5 ± 1.5 Ma. Rock magnetic and petrographic studies indicate that detrital magnetite and hematite are the magnetic carriers. Positive reversal and fold tests demonstrate that the characteristic remanent magnetization has a primary origin. The Gongjue and Ranmugou formations yield mean characteristic remanent magnetization directions of Ds/Is = 31.0°/21.3° and Ds/Is = 15.9°/22.0°, respectively. The magnetic inclination of these characteristic remanent magnetizations is significantly shallowed compared with the expected inclination for the locality. However, the elongation/inclination correction method does not provide a meaningful correction, likely because of syn-depositional rotation. Rotations relative to the Eurasian apparent polar wander path occurred in three stages: Stage I, 33.3° ± 3.4° clockwise rotation during the deposition of the Gongjue and lower Ranmugou formations; Stage II, 26.9° ± 3.7° counterclockwise rotation during deposition of the lower and middle Ranmugou formations; and Stage III, 17.7° ± 3.3° clockwise rotation after 43 Ma. The complex rotation history recorded in the basin is possibly linked to sinistral shear along the Qiangtang block during the indentation of India into Asia and the early stage of the extrusion of the northwestern Indochina blocks away from eastern Tibet.
A balance to death
(2018)
Leaf senescence plays a crucial role in nutrient recovery in late-stage plant development and requires vast transcriptional reprogramming by transcription factors such as ORESARA1 (ORE1). A proteolytic mechanism is now found to control ORE1 degradation, and thus senescence, during nitrogen starvation.
Ecological communities are complex adaptive systems that exhibit remarkable feedbacks between their biomass and trait dynamics. Trait-based aggregate models cope with this complexity by focusing on the temporal development of the community’s aggregate properties such as its total biomass, mean trait and trait variance. They are based on particular assumptions about the shape of the underlying trait distribution, which is commonly assumed to be normal. However, ecologically important traits are usually restricted to a finite range, and empirical trait distributions are often skewed or multimodal. As a result, normal distribution-based aggregate models may fail to adequately represent the biomass and trait dynamics of natural communities. We resolve this mismatch by developing a new moment closure approach assuming the trait values to be beta-distributed. We show that the beta distribution captures important shape properties of both observed and simulated trait distributions, which cannot be captured by a Gaussian. We further demonstrate that a beta distribution-based moment closure can strongly enhance the reliability of trait-based aggregate models. We compare the biomass, mean trait and variance dynamics of a full trait distribution (FD) model to the ones of beta (BA) and normal (NA) distribution-based aggregate models, under different selection regimes. This way, we demonstrate under which general conditions (stabilizing, fluctuating or disruptive selection) different aggregate models are reliable tools. All three models predicted very similar biomass and trait dynamics under stabilizing selection yielding unimodal trait distributions with small standing trait variation. We also obtained an almost perfect match between the results of the FD and BA models under fluctuating selection, promoting skewed trait distributions and ongoing oscillations in the biomass and trait dynamics. In contrast, the NA model showed unrealistic trait dynamics and exhibited different alternative stable states, and thus a high sensitivity to initial conditions under fluctuating selection. Under disruptive selection, both aggregate models failed to reproduce the results of the FD model with the mean trait values remaining within their ecologically feasible ranges in the BA model but not in the NA model. Overall, a beta distribution-based moment closure strongly improved the realism of trait-based aggregate models.
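The core of a beta-distribution-based closure is the moment map between (mean, variance) and the beta parameters; below is a minimal sketch under assumed trait bounds (illustrative, not the authors' model code).

```python
# Method-of-moments mapping between a bounded trait's mean/variance and the
# parameters of a beta distribution; a sketch of the closure idea only.
import numpy as np

def beta_params(mean, var, lo=0.0, hi=1.0):
    """Beta parameters (alpha, beta) for a trait bounded on [lo, hi]."""
    m = (mean - lo) / (hi - lo)           # rescale mean to [0, 1]
    v = var / (hi - lo) ** 2              # rescale variance accordingly
    assert 0 < v < m * (1 - m), "variance infeasible for a beta distribution"
    nu = m * (1 - m) / v - 1              # concentration ("sample size") parameter
    return m * nu, (1 - m) * nu

a, b = beta_params(mean=0.3, var=0.02)
print(a, b)  # a skewed, strictly bounded trait distribution (alpha=2.85, beta=6.65)
```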
On 6 June 1982, Israel invaded Lebanon to fight the Palestine Liberation Organization (PLO). Between August 1982 and February 1984, the US, France, Britain and Italy deployed a Multinational Force (MNF) to Beirut. Its task was to act as an interposition force to bolster the government and to bring peace to the people. The mission is often forgotten or merely remembered in connection with the bombing of the US Marines' barracks. However, an analysis of the Italian contingent shows that the MNF was not doomed to fail and could accomplish its task when operational and diplomatic efforts were coordinated. The Italian commander in Beirut, General Franco Angioni, followed a successful approach that maintained neutrality, respectful behaviour and minimal force, which resulted in a qualified success of the Italian efforts.
Stochastically triggered photospheric light variations reaching ∼40 mmag peak-to-valley amplitudes have been detected in the O8 Iaf supergiant V973 Scorpii as the outcome of 2 months of high-precision time-resolved photometric observations with the BRIght Target Explorer (BRITE) nanosatellites. The amplitude spectrum of the time series photometry exhibits a pronounced broad bump in the low-frequency regime (≲ 0.9 d⁻¹) where several prominent frequencies are detected. A time-frequency analysis of the observations reveals typical mode lifetimes of the order of 5-10 d. The overall features of the observed brightness amplitude spectrum of V973 Sco match well with those extrapolated from two-dimensional hydrodynamical simulations of convectively driven internal gravity waves randomly excited from deep in the convective cores of massive stars. An alternative or additional possible source of excitation from a sub-surface convection zone needs to be explored in future theoretical investigations.
We analyze the problem of response suggestion in a closed domain along a real-world scenario of a digital library. We present a text-processing pipeline to generate question-answer pairs from chat transcripts. On this limited amount of training data, we compare retrieval-based, conditioned-generation, and dedicated representation learning approaches for response suggestion. Our results show that retrieval-based methods that strive to find similar, known contexts are preferable to parametric approaches from the conditioned-generation family when the training data is limited. We do, however, identify a specific representation learning approach that is competitive with the retrieval-based approaches despite the training data limitation.
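A minimal sketch of the retrieval-based baseline idea (the QA pairs are hypothetical and this is not the paper's pipeline): index known questions by TF-IDF and suggest the answer of the most similar one.

```python
# Retrieval-based response suggestion: nearest known question by TF-IDF
# cosine similarity. A sketch under assumptions, not the paper's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical QA pairs mined from chat transcripts
questions = ["how do i renew a book", "where can i find theses", "opening hours today"]
answers   = ["Use the renew button in your account.", "Theses are in the repository.", "We are open 9-18."]

vec = TfidfVectorizer(ngram_range=(1, 2))
Q = vec.fit_transform(questions)

def suggest(query, k=1):
    """Return the k most similar known answers with their similarity scores."""
    sims = cosine_similarity(vec.transform([query]), Q)[0]
    best = sims.argsort()[::-1][:k]
    return [(answers[i], float(sims[i])) for i in best]

print(suggest("when are you open"))
```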
A close call
(2018)
The present study investigated how lexical selection is influenced by the number of semantically related representations (semantic neighbourhood density) and their similarity (semantic distance) to the target in a speeded picture-naming task. Semantic neighbourhood density and similarity were used as continuous variables to assess lexical selection, for which both competitive and noncompetitive mechanisms have been proposed. Previous studies found mixed effects of semantic neighbourhood variables, leaving this issue unresolved. Here, we demonstrate interference of semantic neighbourhood similarity, with less accurate naming responses and a higher likelihood of producing semantic errors and omissions over accurate responses for words with semantically more similar (closer) neighbours. No main effect of semantic neighbourhood density and no interaction between semantic neighbourhood density and similarity were found. We further assessed whether semantic neighbourhood density can affect naming performance once semantic neighbours exceed a certain degree of semantic similarity. Semantic similarity between the target and each neighbour was used to split semantic neighbourhood density into two different density variables: the number of semantically close neighbours versus distant neighbours. The results showed a significant effect of close, but not of distant, semantic neighbourhood density: naming pictures of targets with more close semantic neighbours led to longer naming latencies, less accurate responses, and a higher likelihood of producing semantic errors and omissions over accurate responses. The results show that word-inherent semantic attributes, such as semantic neighbourhood similarity and the number of coactivated close semantic neighbours, modulate lexical selection, supporting theories of competitive lexical processing.
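A minimal sketch of the density split described above (the similarity values and the threshold are illustrative assumptions, not the study's materials):

```python
# Split semantic neighbourhood density into "close" and "distant" neighbour
# counts by a similarity threshold; a sketch of the idea only.
import numpy as np

# hypothetical cosine similarities between a target word and its semantic
# neighbours (all above a minimal relatedness cut-off)
sims = np.array([0.82, 0.71, 0.55, 0.48, 0.44, 0.41])

close_threshold = 0.6                                   # assumed split point
close_density   = int((sims >= close_threshold).sum())  # close neighbours: 2
distant_density = int((sims <  close_threshold).sum())  # distant neighbours: 4
print(close_density, distant_density)
```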
To explore the genetic determinants of obesity and Type 2 diabetes (T2D), the German Center for Diabetes Research (DZD) conducted crossbreedings of the obese and diabetes-prone New Zealand Obese mouse strain with four different lean strains (B6, DBA, C3H, 129P2) that vary in their susceptibility to develop T2D. Genome-wide linkage analyses localized more than 290 quantitative trait loci (QTL) for obesity, 190 QTL for diabetes-related traits and 100 QTL for plasma metabolites in the out-cross populations. A computational framework was developed that made it possible to refine critical regions and to nominate a small number of candidate genes by integrating reciprocal haplotype mapping and transcriptome data. The efficiency of the complex procedure was demonstrated for one obesity QTL. The genomic interval of 35 Mb with 502 annotated candidate genes was narrowed down to six candidates. Accordingly, congenic mice retained the obesity phenotype owing to an interval that contains three of the six candidate genes. Among these, the phospholipase PLA2G4A exhibited elevated expression in the adipose tissue of obese human subjects and is therefore considered the critical gene underlying the obesity locus. Together, our broad and complex approach demonstrates that combined- and comparative-cross analysis provides improved mapping resolution and represents a valid tool for the identification of disease genes.
The aim of this doctoral thesis was to establish a technique for the analysis of biomolecules with infrared matrix-assisted laser desorption/ionization (IR-MALDI) ion mobility (IM) spectrometry. The main components of the work were the characterization of the IR-MALDI process, the development and characterization of different ion mobility spectrometers, the use of IR-MALDI-IM spectrometry as a robust, standalone spectrometer, and the development of a collision cross-section estimation approach for peptides based on molecular dynamics and thermodynamic reweighting.
First, the IR-MALDI source was studied with atmospheric pressure ion mobility spectrometry and shadowgraphy. It consisted of a metal capillary, at the tip of which a self-renewing droplet of analyte solution was met by an IR laser beam. A relationship between peak shape, ion desolvation, diffusion, and extraction pulse delay time (pulse delay) was established. First-order desolvation kinetics were observed and related to peak broadening by diffusion, both influenced by the pulse delay. The transport mechanisms in IR-MALDI were then studied by relating different laser impact positions on the droplet surface to the corresponding ion mobility spectra. Two different transport mechanisms were identified: phase explosion due to the laser pulse and electrical transport due to delayed ion extraction. The velocity of the ions stemming from the phase explosion was then measured by ion mobility and shadowgraphy at different time scales and distances from the source capillary, showing an initially very high but rapidly decaying velocity. Finally, the anatomy of the dispersion plume was observed in detail with shadowgraphy and general conclusions about the process were drawn.
Understanding the IR-MALDI process enabled the optimization of the different IM spectrometers at atmospheric and reduced pressure (AP and RP, respectively). At reduced pressure, both an AP and an RP IR-MALDI source were used. The influence of the pulsed ion extraction parameters (pulse delay, width and amplitude) on peak shape, resolution and area was systematically studied in both AP and RP IM spectrometers and discussed in the context of the IR-MALDI process. Under RP conditions, the influence of the closing field and of the pressure was also examined for both AP and RP sources. For the AP ionization RP IM spectrometer, the influence of the inlet field (IF) in the source region was also examined. All of these studies led to the determination of the optimal analytical parameters as well as to a better understanding of the initial ion cloud anatomy.
The analytical performance of the spectrometer was then studied. Limits of detection (LOD) and linear ranges were determined under static and pulsed ion injection conditions and interpreted in the context of the IR-MALDI mechanism. Applications in the separation of simple mixtures were also illustrated, demonstrating good isomer separation capabilities and the advantages of singly charged peaks. The possibility to couple high performance liquid chromatography (HPLC) to IR-MALDI-IM spectrometry was also demonstrated. Finally, the reduced pressure spectrometer was used to study the effect of high reduced field strength on the mobility of polyatomic ions in polyatomic gases.
The last focus point was the study of peptide ions. A dataset obtained with electrospray IM spectrometry was characterized and used for the calibration of a collision cross-section (CCS) determination method based on molecular dynamics (MD) simulations at high temperature. Instead of producing candidate structures that are evaluated one by one, this semi-automated method uses the simulation as a whole to determine a single average collision cross-section value by reweighting the CCS of a few representative structures. The method was compared to the intrinsic size parameter (ISP) method and to experimental results. Additional data obtained from the MD simulations were also used to further analyze the peptides and understand the experimental results, an advantage over the ISP method. Finally, the CCS of peptide ions analyzed by IR-MALDI were also evaluated with both the ISP and MD methods and the results compared to experiment, resulting in a first validation of the MD method. Thus, this thesis brings together the soft ionization technique that is IR-MALDI, which produces mostly singly charged ions, ion mobility spectrometry, which can distinguish between isomers, and a collision cross-section determination method that also provides structural information on the analyte at hand.
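A minimal sketch of the reweighting idea (the energies and CCS values are hypothetical; this is not the thesis' calibrated workflow): average the CCS of a few representative structures with Boltzmann weights at the target temperature.

```python
# Boltzmann-weighted average CCS over representative structures from a
# high-temperature MD simulation; a sketch of the principle only.
import numpy as np

kB = 0.0019872041  # Boltzmann constant, kcal/(mol*K)

def reweighted_ccs(ccs, energies, T=300.0):
    """Single average CCS from per-structure CCS and relative energies."""
    w = np.exp(-(energies - energies.min()) / (kB * T))
    w /= w.sum()
    return float(np.sum(w * ccs))

ccs      = np.array([245.0, 252.0, 261.0])  # Angstrom^2, representative structures
energies = np.array([0.0, 0.8, 2.1])        # kcal/mol, relative energies
print(reweighted_ccs(ccs, energies))        # dominated by the low-energy structure
```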
We present new radio/millimeter measurements of the hot magnetic star HR 5907 obtained with the VLA and ALMA interferometers. We find that HR 5907 is the most radio-luminous early-type star in the cm-mm band among those presently known. Its multi-wavelength radio light curves are strongly variable with an amplitude that increases with radio frequency. The radio emission can be explained by populations of non-thermal electrons accelerated in the current sheets on the outer border of the magnetosphere of this fast-rotating magnetic star. We classify HR 5907 as another member of the growing class of strongly magnetic fast-rotating hot stars in whose magnetospheres the gyro-synchrotron emission mechanism operates efficiently. The new radio observations of HR 5907 are combined with archival X-ray data to study the physical conditions of its magnetosphere. The X-ray spectra of HR 5907 show tentative evidence for the presence of a non-thermal spectral component. We suggest that the non-thermal X-rays originate in a stellar X-ray aurora caused by streams of non-thermal electrons impacting the stellar surface. Taking advantage of the relation between the spectral index of the X-ray power-law spectrum and the non-thermal electron energy distribution, we perform 3-D modelling of the radio emission of HR 5907. The wavelength-dependent radio light curves probe magnetospheric layers at different heights above the stellar surface. A detailed comparison between simulated and observed radio light curves leads us to conclude that the stellar magnetic field of HR 5907 is likely non-dipolar, providing further indirect evidence of its complex magnetic field topology.
In this paper two groups supporting different views on the mechanism of light-induced polymer deformation argue about the respective underlying theoretical conceptions, in order to bring this interesting debate to the attention of the scientific community. The group of Prof. Nicolae Hurduc supports the model claiming that the cyclic isomerization of azobenzenes may cause an athermal transition of the glassy azobenzene-containing polymer into a fluid state, the so-called photo-fluidization concept. This concept is quite convenient for an intuitive understanding of the deformation process as an anisotropic flow of the polymer material. The group of Prof. Svetlana Santer supports the re-orientational model, in which the mass transport of polymer material during deformation is generated by the light-induced re-orientation of the azobenzene side chains and, as a consequence, of the polymer backbone, which in turn results in local mechanical stress sufficient to irreversibly deform an azobenzene-containing material even in the glassy state. For the debate we chose three polymers differing in glass transition temperature, 32 °C, 87 °C and 95 °C, representing extreme cases of flexible and rigid materials. Polymer film deformation occurring during irradiation with different interference patterns is recorded using a homemade set-up combining an optical part for the generation of interference patterns and an atomic force microscope for acquiring the kinetics of film deformation. We also demonstrate the unique ability of azobenzene-containing polymer films to switch their topography in situ and reversibly by changing the irradiation conditions. We discuss the results of reversible deformation of the three polymers induced by irradiation with intensity (IIP) and polarization (PIP) interference patterns, and with light of homogeneous intensity, in terms of the two approaches: the re-orientational and the photo-fluidization concepts. Both agree in that the formation of opto-mechanically induced stresses is a necessary prerequisite for the process of deformation. Using this argument, the deformation process can be characterized either as a flow or as mass transport.
Risk-based insurance is a commonly proposed and discussed flood-risk adaptation mechanism in policy debates across the world, such as in the United Kingdom and the United States of America. However, both risk-based premiums and growing risk pose increasing difficulties for insurance to remain affordable. An empirical concept of affordability is required, as the affordability of adaptation strategies is an important concern for policymakers, yet such a concept is not often examined. Therefore, a robust metric with a commonly acceptable affordability threshold is required. A robust metric allows a previously normative concept to be quantified in monetary terms, rendering it more suitable for integration into public policy debates. This paper investigates the degree to which risk-based flood insurance premiums are unaffordable in Europe. In addition, it compares the outcomes generated by three different definitions of unaffordability in order to identify the most robust definition; the residual-income definition was found to be the least sensitive to changes in the threshold. While this paper focuses on Europe, the selected definition can be employed elsewhere in the world and across adaptation measures in order to develop a common metric for indicating the potential unaffordability problem.
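A minimal sketch of a residual-income unaffordability test (incomes, expenses, and the threshold are illustrative, not the paper's calibrated metric): a risk-based premium is deemed unaffordable when paying it pushes a household's residual income below a poverty-line threshold.

```python
# Residual-income affordability check; a sketch of the concept only.
import numpy as np

income   = np.array([18_000, 32_000, 55_000], dtype=float)  # EUR/yr, hypothetical
expenses = np.array([14_000, 21_000, 30_000], dtype=float)  # essential spending
premium  = np.array([1_200, 900, 700], dtype=float)         # risk-based premiums

poverty_line = 3_000.0  # hypothetical residual-income threshold (EUR/yr)
unaffordable = (income - expenses - premium) < poverty_line
print(unaffordable)  # [ True False False]
```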
Home range estimation is routine practice in ecological research. While advances in animal tracking technology have increased our capacity to collect data to support home range analysis, these same advances have also resulted in increasingly autocorrelated data. Consequently, the question of which home range estimator to use on modern, highly autocorrelated tracking data remains open. This question is particularly relevant given that most estimators assume independently sampled data. Here, we provide a comprehensive evaluation of the effects of autocorrelation on home range estimation. We base our study on an extensive data set of GPS locations from 369 individuals representing 27 species distributed across five continents. We first assemble a broad array of home range estimators, including Kernel Density Estimation (KDE) with four bandwidth optimizers (Gaussian reference function, autocorrelated-Gaussian reference function [AKDE], Silverman's rule of thumb, and least squares cross-validation), Minimum Convex Polygon, and Local Convex Hull methods. Notably, all of these estimators except AKDE assume independent and identically distributed (IID) data. We then employ half-sample cross-validation to objectively quantify estimator performance, and the recently introduced effective sample size for home range area estimation (N̂_area) to quantify the information content of each data set. We found that AKDE 95% area estimates were larger than conventional IID-based estimates by a mean factor of 2. The median number of cross-validated locations included in the hold-out sets by AKDE 95% (or 50%) estimates was 95.3% (or 50.1%), confirming that the larger AKDE ranges were appropriately selective at the specified quantile. Conversely, conventional estimates exhibited negative bias that increased with decreasing N̂_area. To contextualize our empirical results, we performed a detailed simulation study to tease apart how sampling frequency, sampling duration, and the focal animal's movement conspire to affect range estimates. Paralleling our empirical results, the simulation study demonstrated that AKDE was generally more accurate than conventional methods, particularly for small N̂_area. While 72% of the 369 empirical data sets had >1,000 total observations, only 4% had an N̂_area >1,000, whereas 30% had an N̂_area <30. In this frequently encountered scenario of small N̂_area, AKDE was the only estimator capable of producing an accurate home range estimate on autocorrelated data.
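A minimal sketch of why autocorrelation shrinks information content (an assumed 1-D Ornstein-Uhlenbeck position model, not the estimators evaluated in the study): the effective sample size for range-area estimation scales with the number of independent range crossings, roughly duration/τ, rather than with the number of GPS fixes.

```python
# Simulate an autocorrelated position process and contrast the number of
# fixes with an effective sample size ~ duration / autocorrelation timescale.
import numpy as np

rng = np.random.default_rng(42)
dt, n, tau = 0.1, 50_000, 5.0                 # hours per fix, fixes, range-crossing time
phi = np.exp(-dt / tau)                        # AR(1) coefficient of the OU process
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):                          # 1-D Ornstein-Uhlenbeck position
    x[i] = phi * x[i - 1] + np.sqrt(1 - phi**2) * rng.normal()

duration = n * dt
print("GPS fixes:          ", n)               # 50,000 observations ...
print("effective N_area ~= ", round(duration / tau))  # ... but only ~1,000 crossings
```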
Narcissists are assumed to lack the motivation and ability to share and understand the mental states of others. Prior empirical research, however, has yielded inconclusive findings and has differed with respect to the specific aspects of narcissism and socioemotional cognition that have been examined. Here, we propose a differentiated facet approach that can be applied across research traditions and that distinguishes between facets of narcissism (agentic vs. antagonistic) on the one hand, and facets of socioemotional cognition ability (SECA; self-perceived vs. actual) on the other. Using five nonclinical samples in two studies (total N = 602), we investigated the effect of facets of grandiose narcissism on aspects of socioemotional cognition across measures of affective and cognitive empathy, Theory of Mind, and emotional intelligence, while also controlling for general reasoning ability. Across both studies, agentic facets of narcissism were found to be positively related to perceived SECA, whereas antagonistic facets of narcissism were found to be negatively related to perceived SECA. However, both narcissism facets were negatively related to actual SECA. Exploratory condition-based regression analyses further showed that agentic narcissists had a higher directed discrepancy between perceived and actual SECA: they self-enhanced their socioemotional capacities. Implications of these results for the multifaceted theoretical understanding of the narcissism-SECA link are discussed.
We present a computational evaluation of three hypotheses about sources of deficit in sentence comprehension in aphasia: slowed processing, intermittent deficiency, and resource reduction. The ACT-R based Lewis and Vasishth (2005) model is used to implement these three proposals. Slowed processing is implemented as slowed execution time of parse steps; intermittent deficiency as increased random noise in activation of elements in memory; and resource reduction as reduced spreading activation. As data, we considered subject vs. object relative sentences, presented in a self-paced listening modality to 56 individuals with aphasia (IWA) and 46 matched controls. The participants heard the sentences and carried out a picture verification task to decide on an interpretation of the sentence. These response accuracies are used to identify the best parameters (for each participant) that correspond to the three hypotheses mentioned above. We show that controls have more tightly clustered (less variable) parameter values than IWA; specifically, compared to controls, among IWA there are more individuals with slow parsing times, high noise, and low spreading activation. We find that (a) individual IWA show differential amounts of deficit along the three dimensions of slowed processing, intermittent deficiency, and resource reduction, (b) overall, there is evidence for all three sources of deficit playing a role, and (c) IWA have a more variable range of parameter values than controls. An important implication is that it may be meaningless to talk about sources of deficit with respect to an abstract "average" IWA; the focus should be on the individual's differential degrees of deficit along different dimensions, and on understanding the causes of variability in deficit between participants.
BACKGROUND: Work capacity demands are a concept to describe which psychological capacities are required in a job. Assessing psychological work capacity demands is of specific importance when mental health problems at work endanger work ability. Exploring psychological work capacity demands is the basis for mental hazard analysis and rehabilitative action, e.g. in terms of work adjustment. OBJECTIVE: This is the first study investigating psychological work capacity demands in rehabilitation patients with and without mental disorders. METHODS: A structured interview on psychological work capacity demands (Mini-ICF-Work; Muschalla, 2015; Linden et al., 2015) was conducted with 166 rehabilitation patients of working age. All interviews were conducted by a state-licensed, socio-medically trained psychotherapist. Inter-rater reliability was assessed by determining agreement with independent co-ratings in 65 interviews. For discriminant validity purposes, participants filled in the Short Questionnaire for Work Analysis (KFZA, Prümper et al., 1994). RESULTS: In different professional fields, different psychological work capacity demands were of importance. The Mini-ICF-Work capacity dimensions reflect different aspects than the KFZA. Patients with mental disorders were on sick leave longer and had a worse work ability prognosis than patients without mental disorders, although both groups reported similar work capacity demands. CONCLUSIONS: Psychological work demands, which are highly relevant for work ability prognosis and work adjustment processes, can be explored and differentiated in terms of psychological capacity demands.
The literature contains a sizable number of publications where weather types are used to decompose climate shifts or trends into contributions of frequency and mean of those types. They are all based on the product rule, that is, a transformation of a product of sums into a sum of products, the latter providing the decomposition. While there is nothing to argue about the transformation itself, its interpretation as a climate shift or trend decomposition is bound to fail. While the case of a climate shift may be viewed as an incomplete description of a more complex behaviour, trend decomposition indeed produces bogus trends, as demonstrated by a synthetic counterexample with well-defined trends in type frequency and mean. Consequently, decompositions based on that transformation, be it for climate shifts or trends, must not be used.
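For reference, the product-rule identity such decompositions rest on can be written as follows (a standard rendering with assumed notation, f_i being the frequency and x̄_i the within-type mean of weather type i; the notation is not quoted from the paper):

```latex
\bar{x} = \sum_i f_i\,\bar{x}_i,
\qquad
\Delta\bar{x} = \sum_i \bigl(\,\Delta f_i\,\bar{x}_i \;+\; f_i\,\Delta\bar{x}_i \;+\; \Delta f_i\,\Delta\bar{x}_i\,\bigr),
```

where the first term is commonly read as the "frequency" contribution and the second as the "within-type" contribution. The identity itself is exact; the paper's argument is that interpreting these terms as contributions to a climate shift or trend is what fails.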
Microservice Architectures (MSA) structure applications as a collection of loosely coupled services that implement business capabilities. The key advantages of MSA include inherent support for continuous deployment of large complex applications, agility, and enhanced productivity. However, studies indicate that most MSA are homogeneous and introduce shared vulnerabilities, making them susceptible to multi-step attacks and offering economies-of-scale incentives to attackers. In this paper, we address the issue of shared vulnerabilities in microservices with a novel solution based on the concept of Moving Target Defenses (MTD). Our mechanism works by performing risk analysis against microservices to detect and prioritize vulnerabilities. Thereafter, security-risk-oriented software diversification is employed, guided by a defined diversification index. The diversification is performed at runtime, leveraging both model- and template-based automatic code generation techniques to automatically transform the programming languages and container images of the microservices. Consequently, the microservices' attack surfaces are altered, introducing uncertainty for attackers while reducing the attackability of the microservices. Our experiments demonstrate the efficiency of our solution, with an average attack-surface randomization success rate of over 70%.
Faced with the increasing needs of companies, optimal dimensioning of IT hardware is becoming challenging for decision makers. In analytical infrastructures, a rapidly evolving environment causes volatile, time-dependent workloads in its components, making intelligent, flexible task distribution between local systems and cloud services attractive. With the aim of developing a flexible and efficient design for analytical infrastructures, this paper proposes a flexible architecture model that allocates tasks following a machine-specific decision heuristic. A simulation benchmarks this system against existing strategies and identifies the new decision maxim as superior in a first scenario-based simulation.
Earthquake localization is both a necessity within the field of seismology and a prerequisite for further analyses such as source studies and hazard assessment. Traditional localization methods often rely on manually picked phases. We present an alternative approach using deep learning that, once trained, can predict hypocenter locations efficiently. In seismology, neural networks have typically been trained either with single-station records or with features previously extracted from the waveforms. We use three-component full-waveform records of multiple stations directly. This means no information is lost during preprocessing, and preparation of the data does not require expert knowledge. The first convolutional layer of our deep convolutional neural network (CNN) becomes sensitive to features that characterize the waveforms it is trained on. We show that this layer can therefore additionally be used as an event detector. As a test case, we trained our CNN using more than 2000 earthquake swarm events from West Bohemia, recorded by nine local three-component stations. The CNN successfully located 908 validation events with standard deviations of 56.4 m in the east-west, 123.8 m in the north-south, and 136.3 m in the vertical direction compared to a double-difference relocated reference catalog. The detector is sensitive to events with magnitudes down to ML = -0.8, with 3.5% false positive detections.
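A minimal sketch of the network idea (architecture, shapes, and hyperparameters are assumptions, not the paper's exact model): a 1-D CNN whose input channels stack all stations and components, regressing the three hypocenter coordinates.

```python
# Multi-station full-waveform hypocenter regression; an illustrative sketch.
import torch
import torch.nn as nn

class LocatorCNN(nn.Module):
    def __init__(self, n_stations=9, n_samples=3001):
        super().__init__()
        # input: (batch, 3 * n_stations, n_samples), channels = stations x components
        self.features = nn.Sequential(
            nn.Conv1d(3 * n_stations, 32, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=15, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8, 128), nn.ReLU(),
            nn.Linear(128, 3),  # east, north, depth
        )

    def forward(self, x):
        return self.head(self.features(x))

net = LocatorCNN()
waveforms = torch.randn(4, 27, 3001)  # 4 events, 9 stations x 3 components
print(net(waveforms).shape)           # torch.Size([4, 3])
```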
A fast and sensitive method for the continuous determination of methane (CH4) and its stable carbon isotopic values (δ13C-CH4) in surface waters was developed by applying a vacuum to a gas/liquid exchange membrane and measuring the extracted gases with a portable cavity ring-down spectroscopy analyser (M-CRDS). The M-CRDS was calibrated and characterized for CH4 concentration and δ13C-CH4 with synthetic water standards. The detection limit of the M-CRDS for the simultaneous determination of CH4 and δ13C-CH4 is 3.6 nmol L-1 CH4. For single measurements and averaging times of 10 min, a measurement precision of 1.1% for CH4 concentration and 1.7‰ for δ13C-CH4 (1σ), and an accuracy of 1.3% and 0.8‰ (1σ), respectively, were achieved. The response time τ of 57 ± 5 s allows determination of δ13C-CH4 values more than twice as fast as other methods. The M-CRDS method was applied and tested for Lake Stechlin (Germany) and compared with the headspace gas chromatography and fast-membrane CH4 concentration methods. Maximum CH4 concentrations (577 nmol L-1) and the lightest δ13C-CH4 (-35.2‰) were found around the thermocline in depth-profile measurements. The M-CRDS method was in good agreement with the other methods. Temporal variations in CH4 concentration and δ13C-CH4 obtained in 24 h measurements indicate either local methane production/oxidation or physical variations in the thermocline. These results illustrate the need for fast and sensitive analyses to achieve a better understanding of the different mechanisms and pathways of CH4 formation in aquatic environments.
This work reviews the literature on an alleged global warming 'pause' in global mean surface temperature (GMST) to determine how it has been defined, what time intervals are used to characterise it, what data are used to measure it, and what methods are used to assess it. We test for 'pauses', both in the normally understood meaning of the term, i.e. no warming trend, and for a 'pause' defined as a substantially slower trend in GMST. The tests are carried out with the historical versions of GMST that existed for each pause-interval tested, and with current versions of each of the GMST datasets. The tests are conducted following the common (but questionable) practice of breaking the linear fit at the start of the trend interval ('broken' trends), and also with trends that are continuous with the data bordering the trend interval. We also compare results when appropriate allowance is made for the selection bias problem. The results show that there is little or no statistical evidence for a lack of trend or a slower trend in GMST using either the historical data or the current data. The perception that there was a 'pause' in GMST was bolstered by earlier biases in the data in combination with incomplete statistical testing.
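A minimal sketch of the two trend definitions being contrasted (synthetic data; an illustrative implementation, not the paper's code): the 'broken' trend fits the candidate pause interval alone, while the 'continuous' trend constrains the fit to join the preceding data at the break point via a hinge (change-in-slope) regression.

```python
# Contrast 'broken' and 'continuous' post-break trends on synthetic GMST data.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1970, 2015, dtype=float)
gmst = 0.017 * (t - 1970) + rng.normal(0, 0.08, t.size)  # synthetic anomalies (degC)
t0 = 1998.0                                              # candidate break year

# broken trend: ordinary least squares on the interval t >= t0 only
m = t >= t0
broken_slope = np.polyfit(t[m], gmst[m], 1)[0]

# continuous trend: y = a + b*t + c*max(t - t0, 0); post-break slope is b + c
X = np.column_stack([np.ones_like(t), t, np.maximum(t - t0, 0.0)])
a, b, c = np.linalg.lstsq(X, gmst, rcond=None)[0]

print(f"broken:     {broken_slope:.4f} degC/yr")
print(f"continuous: {b + c:.4f} degC/yr")
```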
We present a catalogue of white dwarf candidates selected from the second data release of Gaia (DR2). We used a sample of spectroscopically confirmed white dwarfs from the Sloan Digital Sky Survey (SDSS) to map the entire space spanned by these objects in the Gaia Hertzsprung–Russell diagram. We then defined a set of cuts in absolute magnitude, colour, and a number of Gaia quality flags to remove the majority of contaminating objects. Finally, we adopt a method analogous to the one presented in our earlier SDSS photometric catalogues to calculate a probability of being a white dwarf (P_WD) for all Gaia sources that passed the initial selection. The final catalogue is composed of 486,641 stars with calculated P_WD, from which it is possible to select a sample of ≃260,000 high-confidence white dwarf candidates in the magnitude range 8 < G < 21. By comparing this catalogue with a sample of SDSS white dwarf candidates, we estimate an upper limit in completeness of 85 per cent for white dwarfs with G ≤ 20 mag and Teff > 7000 K, at high Galactic latitudes (|b| > 20°). However, the completeness drops at low Galactic latitudes, and the magnitude limit of the catalogue varies significantly across the sky as a function of Gaia's scanning law. We also provide the list of objects within our sample with available SDSS spectroscopy. We use this spectroscopic sample to characterize the observed structure of the white dwarf distribution in the H–R diagram.
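A minimal sketch of the selection idea (the cut values and columns are hypothetical placeholders, not the published cuts): compute absolute magnitudes from parallaxes, then keep sources in the white dwarf region of the H-R diagram.

```python
# Illustrative H-R diagram cut for white dwarf candidates; not the paper's cuts.
import numpy as np
import pandas as pd

gaia = pd.DataFrame({                      # hypothetical Gaia DR2 columns
    "phot_g_mean_mag": [19.2, 15.4, 20.5],
    "bp_rp":           [0.1, 0.8, -0.2],
    "parallax":        [12.0, 3.0, 25.0],  # mas
})
# absolute magnitude from parallax in mas: M_G = G + 5*log10(parallax) - 10
gaia["M_G"] = gaia.phot_g_mean_mag + 5 * np.log10(gaia.parallax) - 10

# keep sources well below the main sequence (illustrative cut line)
candidates = gaia[gaia.M_G > 3.3 * gaia.bp_rp + 9.0]
print(candidates)
```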
The article examines the work of Rabbi Yitzhak Isaac Halevy, arguably the most significant Orthodox response to the Wissenschaft des Judentums school of historiography. Halevy himself exemplified the Orthodox struggle against Wissenschaft, yet his work expressed a commitment to modern historiographical discipline that suggested an internalization of some of the very same premises adopted by Wissenschaft. While criticizing the representatives of Wissenschaft, Halevy was, at the same time, fighting for the internalization of its innovative characteristics into Orthodox society. He saw himself as a leader of a movement working towards the development of Orthodox Jewish studies and his application of modern historiographic principles from an Orthodox worldview as creating critical Orthodox historiography. Halevy’s approach promotes an understanding of Orthodoxy as a complex phenomenon, of which the struggle against modern secularization is just one of many characteristics.
It is well-documented that strength training (ST) improves measures of muscle strength in young athletes. Less is known about the transfer effects of ST on proxies of muscle power and the underlying dose-response relationships. The objectives of this meta-analysis were to quantify the effects of ST on lower limb muscle power in young athletes and to provide dose-response relationships for ST modalities such as frequency, intensity, and volume. A systematic literature search of electronic databases identified 895 records. Studies were eligible for inclusion if (i) healthy trained children (girls aged 6–11 y, boys aged 6–13 y) or adolescents (girls aged 12–18 y, boys aged 14–18 y) were examined, (ii) ST was compared with an active control, and (iii) at least one proxy of muscle power [squat jump (SJ) and countermovement jump height (CMJ)] was reported. Weighted mean standardized mean differences (SMDwm) between subjects were calculated. Based on the findings from 15 statistically aggregated studies, ST produced significant but small effects on CMJ height (SMDwm = 0.65; 95% CI 0.34–0.96) and moderate effects on SJ height (SMDwm = 0.80; 95% CI 0.23–1.37). The sub-analyses revealed that the moderating variable expertise level (CMJ height: p = 0.06; SJ height: N/A) did not significantly influence ST-related effects on proxies of muscle power. "Age" and "sex" moderated ST effects on SJ (p = 0.005) and CMJ height (p = 0.03), respectively. With regard to the dose-response relationships, findings from the meta-regression showed that none of the included training modalities predicted ST effects on CMJ height. For SJ height, the meta-regression indicated that the training modality "training duration" significantly predicted the observed gains (p = 0.02), with longer training durations (>8 weeks) showing larger improvements. This meta-analysis clearly demonstrated the general effectiveness of ST for lower-limb muscle power in young athletes, irrespective of the moderating variables. Dose-response analyses revealed that longer training durations (>8 weeks) are more effective for improving SJ height. No such training modalities were found for CMJ height. Thus, there appear to be other training modalities besides the ones included in our analyses that may affect SJ and particularly CMJ height. Ratings of perceived exertion, movement velocity, or force-velocity profiles could be promising tools for monitoring lower-limb muscle power development in young athletes.
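A minimal sketch of the effect-size computation underlying such meta-analyses (the group summaries are illustrative, not the analysed studies): a bias-corrected between-subject standardized mean difference (Hedges' g) combined with inverse-variance weighting.

```python
# Hedges' g per study plus an inverse-variance weighted mean effect; a sketch.
import numpy as np

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Bias-corrected standardized mean difference and its variance."""
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction factor
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

studies = [hedges_g(32.1, 4.0, 20, 29.8, 4.2, 20),  # hypothetical CMJ data (cm)
           hedges_g(28.4, 3.5, 15, 26.9, 3.8, 16)]
g = np.array([s[0] for s in studies])
w = 1 / np.array([s[1] for s in studies])            # inverse-variance weights
print("weighted mean SMD:", float(np.sum(w * g) / np.sum(w)))
```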
Low back pain (LBP) is a leading cause of activity limitation. Objective assessment of spinal motion plays a key role in the diagnosis and treatment of LBP. We propose a method that facilitates clinical assessment of lower back motion by means of a wireless inertial sensor network. The sensor units are attached to the right and left side of the lumbar region, the pelvis, and the thighs. Since magnetometers are known to be unreliable in indoor environments, we use only 3D accelerometer and 3D gyroscope readings. Compensation of integration drift in the horizontal plane is achieved by estimating the gyroscope biases from automatically detected initial rest phases. For the estimation of sensor orientations, both a smoothing algorithm and a filtering algorithm are presented. From these orientations, we determine three-dimensional joint angles between the thighs and the pelvis and between the pelvis and the lumbar region. We compare the orientations and joint angles to measurements of an optical motion tracking system that tracks each skin-mounted sensor by means of reflective markers. Eight subjects performed a neutral initial pose, then flexion/extension, lateral flexion, and rotation of the trunk. The root mean square deviation between inertial and optical angles is about one degree for angles in the frontal and sagittal planes and about two degrees for angles in the transverse plane (both values averaged over all trials). We choose five features that characterize the initial pose and the three motions. Interindividual differences in all features are found to be clearly larger than the observed measurement deviations. These results indicate that the proposed inertial sensor-based method is a promising tool for lower back motion assessment.
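The drift compensation described above hinges on estimating gyroscope biases during rest phases. The following Python sketch (synthetic data, a single axis only; the paper's full smoothing and filtering algorithms are not reproduced here) illustrates why bias removal matters when integrating angular rates:

```python
import numpy as np

def estimate_gyro_bias(gyro, rest_samples):
    """Estimate a constant gyroscope bias as the mean reading over an
    initial rest phase (here assumed to be already detected)."""
    return gyro[:rest_samples].mean(axis=0)

def integrate_yaw(gyro_z, bias_z, dt):
    """Integrate the bias-corrected z-axis rate to a heading angle.
    Without bias removal the integrated angle drifts linearly in time."""
    return np.cumsum((gyro_z - bias_z) * dt)

# synthetic example: 2 s of rest, then a constant 10 deg/s rotation,
# corrupted by a 0.5 deg/s bias and sensor noise (all values invented)
dt, fs = 0.01, 100
rate = np.r_[np.zeros(2 * fs), np.full(3 * fs, 10.0)]
gyro_z = rate + 0.5 + np.random.default_rng(0).normal(0, 0.2, rate.size)

bias = estimate_gyro_bias(gyro_z, 2 * fs)
yaw = integrate_yaw(gyro_z, bias, dt)
print(f"estimated bias: {bias:.2f} deg/s, final yaw: {yaw[-1]:.1f} deg")
```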
In order to provide the best control of the regeneration process for each individual patient, the release of protein drugs administered during surgery may need to be adapted in a timely manner and/or delayed according to the progress of healing/regeneration. This study aims to establish a multifunctional implant system for local on-demand release that is applicable to various types of proteins. It was hypothesized that a tubular multimaterial container kit, which hosts the protein of interest as a solution or gel formulation, would enable on-demand release if equipped with the capacity for diameter reduction upon external stimulation. Using devices made from poly(epsilon-caprolactone) networks, it could be demonstrated that a shape-memory effect activated by heat or NIR light enabled on-demand tube shrinkage. The decrease in diameter of these shape-memory tubes (SMT) allowed the payload to be expelled, as demonstrated for several proteins including SDF-1 alpha, a therapeutically relevant chemotactic protein, to achieve, e.g., continuous release with a triggered add-on dosing (open tube) or an on-demand onset of bolus or sustained release (sealed tube). Considering the clinical relevance of protein factors in (stem) cell attraction to lesions and the progress in monitoring biomarkers in body fluids, such on-demand release systems may be further explored, e.g., in heart, nerve, or bone regeneration in the future.
SXP 1062 is a Be X-ray binary (BeXB) located in the Small Magellanic Cloud. It hosts a long-period X-ray pulsar and is likely associated with the supernova remnant MCSNR J0127−7332. In this work we present a multiwavelength view of SXP 1062 in different luminosity regimes. We consider monitoring campaigns in the optical (OGLE survey) and X-ray (Swift telescope) bands. During these campaigns, a tight coincidence of X-ray and optical outbursts is observed. We interpret these as typical Type I outbursts, as often detected in BeXBs at the periastron passage of the neutron star (NS). To study the different X-ray luminosity regimes in depth, we observed the source with XMM–Newton during quiescence, while Chandra observations followed an X-ray outburst. Nearly simultaneously with the Chandra X-ray observations, optical spectra of SXP 1062 were obtained with RSS/SALT. On the basis of our multiwavelength campaign we propose a simple scenario in which the disc of the Be star is observed face-on, while the orbit of the NS is inclined with respect to the disc. According to the model of quasi-spherical settling accretion, our estimate of the magnetic field of the pulsar in SXP 1062 does not require an extremely strong field at the present time.
With an increasing amount of strong-motion data, Ground Motion Prediction Equation (GMPE) developers are able to quantify empirical site amplification functions (δS2S(s)) from GMPE residuals, for use in site-specific Probabilistic Seismic Hazard Assessment. In this study, we first derive a GMPE for 5% damped pseudo-spectral acceleration (g) of active shallow crustal earthquakes in Japan with 3.4 ≤ Mw ≤ 7.3 and 0 ≤ RJB ≤ 600 km. Using a k-means clustering technique on the spectral amplification functions, we then classify our estimated δS2S(s) (T = 0.01–2 s) of 588 well-characterized sites into 8 site clusters with distinct mean site amplification functions and a within-cluster site-to-site variability ~50% smaller than the overall dataset variability (φS2S). Following an evaluation of existing schemes, we propose a revised data-driven site classification characterized by kernel density distributions of VS30, VS10, H800, and the predominant period (TG) of the site clusters.
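A minimal sketch of the clustering step, assuming scikit-learn and a synthetic stand-in for the per-site amplification functions δS2S(T) (the study's actual residual data are not reproduced here):

```python
import numpy as np
from sklearn.cluster import KMeans

# toy stand-in for per-site amplification functions deltaS2S(T):
# rows = sites, columns = residual site terms at a grid of periods T
rng = np.random.default_rng(0)
n_sites, n_periods = 100, 20
site_terms = rng.normal(0.0, 0.3, (n_sites, n_periods))

# classify sites into clusters with distinct mean amplification shapes
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(site_terms)

# mean within-cluster variability per cluster, analogous in spirit to
# comparing within-cluster spread against the overall phi_S2S
for k in range(8):
    members = site_terms[kmeans.labels_ == k]
    print(f"cluster {k}: n={members.shape[0]}, "
          f"mean within-cluster std={members.std(axis=0).mean():.3f}")
```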
We establish essential steps of an iterative approach to operator algebras, ellipticity, and the Fredholm property on stratified spaces with singularities of second order. In particular, we cover corner-degenerate differential operators. Our constructions focus on the case where no additional conditions of trace and potential type are imposed; this case works well and will be taken up in a forthcoming paper as a conclusion of the present calculus.
Quantitative estimates of sea-level rise in the Mediterranean Basin have become increasingly accurate thanks to detailed satellite monitoring. However, such measuring campaigns cover only years to decades, and longer-term sea-level records are rare for the Mediterranean. We used a data archeological approach to reanalyze monthly mean sea-level data of the Antalya-I (1935–1977) tide gauge to fill this gap. We checked the accuracy and reliability of these data before merging them with the more recent records of the Antalya-II (1985–2009) tide gauge, accounting for an eight-year hiatus. We obtain a composite time series of monthly and annual mean sea levels spanning some 75 years, providing the longest record for the eastern Mediterranean Basin and thus an essential tool for studying the region's recent sea-level trends. We estimate a relative mean sea-level rise of 2.2 ± 0.5 mm/year between 1935 and 2008, with an annual variability (expressed here as the standard deviation of the residuals, σresiduals = 41.4 mm) above that at the closest tide gauges (e.g., Thessaloniki, Greece, σresiduals = 29.0 mm). Relative sea-level rise accelerated to 6.0 ± 1.5 mm/year at Antalya-II; we attribute roughly half of this rate (~3.6 mm/year) to tectonic crustal motion and anthropogenic land subsidence. Our study highlights the value of data archeology for recovering and integrating historic tide gauge data for long-term sea-level and climate studies.
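The trend estimate above is, at its core, a least-squares fit to annual mean sea levels. A minimal Python sketch with synthetic data (the numbers mimic the reported trend and residual scatter but are not the Antalya series):

```python
import numpy as np

# hypothetical annual mean sea levels (mm, arbitrary datum)
years = np.arange(1935, 2009)
rng = np.random.default_rng(1)
sea_level = 2.2 * (years - years[0]) + rng.normal(0, 41.4, years.size)

# least-squares linear trend and its standard error
A = np.vstack([years - years.mean(), np.ones_like(years)]).T
coef, *_ = np.linalg.lstsq(A, sea_level, rcond=None)
residuals = sea_level - A @ coef
se_trend = np.sqrt(residuals.var(ddof=2)
                   / np.sum((years - years.mean()) ** 2))
print(f"trend: {coef[0]:.2f} +/- {se_trend:.2f} mm/yr, "
      f"sigma_residuals: {residuals.std(ddof=2):.1f} mm")
```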
A New Kind of Jew
(2018)
The article examines Allen Ginsberg's spiritual path and places his interest in Asian religions within larger cultural agendas and life choices. While identifying as a Jew, Ginsberg wished to transcend his parents' orbit and actively sought to create an inclusive, tolerant, and permissive society in which persons such as himself could live and create at ease. He chose elements from the Christian, Jewish, Native American, Hindu, and Buddhist traditions, weaving them together into an ever-growing cultural and spiritual quilt. The poet never underwent a conversion experience or restricted his choices and freedoms. In Ginsberg's understanding, Buddhism was a universal, non-theistic religion that meshed well with an individualist outlook and worked toward personal solace and mindfulness. He and other Jews saw no contradiction between enchantment with Buddhism and their Jewish identity.
Sulfur is an important component of volcanic gases at the Earth's surface but is also present in the deep Earth in hydrothermal or magmatic fluids. Little is known about the evolution of such fluids during their ascent through the crust. A new optical cell was developed for in situ Raman spectroscopic investigations of fluids, allowing abrupt or continuous changes of pressure up to 200 MPa at temperatures up to 750 °C. The concept is based on a flexible gold bellows, which separates the sample fluid from the pressure medium, water. To avoid reactions between aggressive fluids and the pressure cell, steel components in contact with the fluid are shielded by gold foil. The cell was tested by studying redox reactions in fluids, using aqueous ammonium sulfate solutions as a model system. During heating at a constant pressure of 130 MPa, sulfate ions transform first to HSO4− ions and then to molecular units such as H2SO4. Variation of pressure shows that the stability of sulfate species depends on fluid density, i.e., highly charged species are stable only in high-density fluids. Partial decomposition of ammonium was evident above 550 °C from the occurrence of a nitrogen peak in the Raman spectra. Reduced sulfur species were observed above 700 °C via Raman signals near 2590 cm⁻¹ assigned to HS− and H2S. No clear evidence for the formation of sulfur dioxide was found, in contrast to previous studies on aqueous H2SO4, suggesting very reducing conditions in our experiments. Fluid-mineral interaction was studied by inserting into the cell a small, semi-open capsule filled with a mixture of pyrite and pyrrhotite. Oxidation of the sample assembly was evident from the transformation of pyrite to pyrrhotite. As a consequence, sulfide species were observed in the fluid already at temperatures of ~600 °C.
A new political system model
(2018)
Semi-parliamentary government is a distinct executive-legislative system that mirrors semi-presidentialism. It exists when the legislature is divided into two equally legitimate parts, only one of which can dismiss the prime minister in a no-confidence vote. This system has distinct advantages over pure parliamentary and presidential systems: it establishes a branch-based separation of powers and can balance the ‘majoritarian’ and ‘proportional’ visions of democracy without concentrating executive power in a single individual. This article analyses bicameral versions of semi-parliamentary government in Australia and Japan, and compares empirical patterns of democracy in the Australian Commonwealth as well as New South Wales to 20 advanced parliamentary and semi-presidential systems. It discusses new semi-parliamentary designs, some of which do not require formal bicameralism, and pays special attention to semi-parliamentary options for democratising the European Union.
Catanionic vesicles spontaneously formed by mixing the anionic surfactant bis(2-ethylhexyl)sulfosuccinate sodium salt with the cationic surfactant cetyltrimethylammonium bromide were used as a reducing medium to produce gold clusters, which are embedded and well-ordered in the template phase. The gold clusters can be used as seeds in the growth process that follows by adding ascorbic acid as a mild reducing component. When the ascorbic acid was added very slowly in an ice bath, round-edged gold nanoflowers were produced. When the same experiments were performed at room temperature in the presence of Ag+ ions, sharp-edged nanoflowers could be synthesized. The mechanism of nanoparticle formation can be understood as a non-diffusion-limited Ostwald ripening process of preordered gold nanoparticles embedded in catanionic vesicle fragments. Surface-enhanced Raman scattering experiments show an excellent enhancement factor of 1.7 × 10⁵ for the nanoflowers deposited on a silicon wafer.
Nitraria is a halophytic taxon (i.e., adapted to saline environments) that belongs to the plant family Nitrariaceae and is distributed from the Mediterranean, across Asia, into the south-eastern tip of Australia. This taxon is thought to have originated in Asia during the Paleogene (66-23 Ma), alongside the proto-Paratethys epicontinental sea. The evolutionary history of Nitraria might hold important clues on the links between climatic and biotic evolution, but limited taxonomic documentation of this taxon has thus far hindered this line of research. Here we investigate whether the pollen morphology and the chemical composition of the pollen wall are informative of the evolutionary history of Nitraria and could clarify whether origination along the proto-Paratethys and dispersal to the Tibetan Plateau were simultaneous or a secondary process. To answer these questions, we applied a novel approach combining Fourier Transform Infrared spectroscopy (FTIR), to determine the chemical composition of the pollen wall, with pollen morphological analyses using Light Microscopy (LM) and Scanning Electron Microscopy (SEM). We analysed our data using ordinations (principal components analysis and non-metric multidimensional scaling) and mapped them directly onto the Nitrariaceae phylogeny to produce a phylomorphospace and a phylochemospace. Our LM, SEM, and FTIR analyses show clear morphological and chemical differences between the sister groups Peganum and Nitraria. Differences in the morphological and chemical characteristics of highland species (Nitraria schoberi, N. sphaerocarpa, N. sibirica and N. tangutorum) and lowland species (Nitraria billardierei and N. retusa) are very subtle, with phylogenetic history appearing to be a more important control on Nitraria pollen than local environmental conditions. Our approach shows a compelling consistency between the chemical and morphological characteristics of the eight studied Nitrariaceae species, and these traits are in agreement with the phylogenetic tree. Taken together, this demonstrates how novel methods for studying fossil pollen can facilitate the evolutionary investigation of living and extinct taxa, and the environments they represent.
The reaction between propargyl ethers of hydroxybenzaldehydes and the ylide ethyl (triphenylphosphoranylidene)acetate was carried out under microwave irradiation to regioselectively afford angular pyranocoumarins. The chromene and coumarin heterocyclic scaffolds were simultaneously formed in the same synthetic step without changing the reaction conditions. The natural products seselin, braylin, and dipetalolactone were among the products synthesized by this method.
This study examined a theoretical model hypothesizing that reading strategies mediate the effects of intrinsic reading motivation, reading fluency, and vocabulary knowledge on reading comprehension. Using path analytic methods, we tested the direct and indirect effects specified in the hypothesized model in a sample of 1105 fifth-graders. In addition to standardized tests and questionnaires, we administered a performance test to assess students' proficiency in the application of three reading strategies. The overall fit of the model to the data was good. Both cognitive (fluency and vocabulary) and motivational (intrinsic reading motivation) variables had an indirect effect on reading comprehension through their influence on reading strategies. Reading strategies had a unique effect on reading comprehension and partially mediated the effects that cognitive and motivational variables had on fifth-graders' reading achievements.
All outer planets in the Solar System are surrounded by ring systems. Many of these rings are dust rings, or at least contain a high proportion of dust. They are often formed by impacts of micro-meteoroids onto embedded bodies. The ejected material typically consists of micron-sized charged particles, which are susceptible to gravitational and non-gravitational forces. Generally, detailed information on the dynamics and distribution of the dust requires expensive numerical simulations of a large number of particles. Here we develop a relatively simple and fast semi-analytical model for an impact-generated planetary dust ring governed by the planet's gravity and the perturbation forces relevant for the dynamics of small charged particles. The most important parameter of the model is the dust production rate, which enters the calculation of the dust densities as a linear factor. We apply our model to dust ejected from the Galilean satellites, using production rates obtained from flybys of the dust sources. The dust densities predicted by our model are in good agreement with numerical simulations and with in situ measurements by the Galileo spacecraft. The lifetimes of large particles are about two orders of magnitude greater than those of small ones, which implies a flattening of the size distribution in circumplanetary space. Information about the distribution of circumplanetary dust is also important for the risk assessment of spacecraft orbits in the respective regions.
A prelude to total war?
(2018)
The conflict between Italy and Ethiopia in 1935–36 has been framed as a prelude to the Second World War and as a watershed towards ‘Total War’. One perspective has so far been neglected: the assessments of foreign military observers. This article examines American, British, German, and Austrian views on the operations and thereby also analyses the mindset of European officers at the time. The core argument emerging from these reports is that the war was perceived as a rather ‘normal’ colonial conflict. Neither the use of gas nor the employment of aircraft against civilians was seen as taboo or created significant outrage among the military observers. Instead, they lauded the Italians’ steady logistical efforts and their employment of artillery and airpower to overcome nature and the enemy’s resistance.
Python is at the forefront of scientific computation for seismologists and therefore should be introduced to students interested in becoming seismologists. On its own, Python is open source and well designed with extensive libraries. However, Python code can also be executed, visualized, and communicated to others with "Jupyter Notebooks". Thus, Jupyter Notebooks are ideal for teaching students Python and scientific computation. In this article, we designed an openly available Python library and collection of Jupyter Notebooks based on defined scientific computation learning goals for seismology students. The Notebooks cover topics from an introduction to Python to organizing data, earthquake catalog statistics, linear regression, and making maps. Our Python library and collection of Jupyter Notebooks are meant to be used as course materials for an upper-division data analysis course in an Earth Science Department, and the materials were tested in a Probabilistic Seismic Hazard course. However, seismologists or anyone else who is interested in Python for data analysis and map making can use these materials.
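As an example of the kind of exercise such notebooks can contain (this sketch is illustrative and not taken from the published materials), the Gutenberg-Richter b-value of an earthquake catalog can be estimated by maximum likelihood in a few lines of Python:

```python
import numpy as np

def b_value_mle(magnitudes, m_c):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965) for a
    catalog of continuous magnitudes above the completeness level m_c."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]  # keep only events above completeness
    return np.log10(np.e) / (m.mean() - m_c)

# synthetic catalog drawn from a Gutenberg-Richter law with b = 1.0:
# magnitudes above m_c are exponentially distributed with rate b*ln(10)
rng = np.random.default_rng(42)
m_c = 2.0
mags = m_c + rng.exponential(scale=1 / (1.0 * np.log(10)), size=5000)
print(f"b-value estimate: {b_value_mle(mags, m_c):.2f}")  # ~1.0
```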
The tropical peat swamp forests of South-East Asia are being rapidly converted to agricultural plantations of oil palm and Acacia, creating a significant global “hot-spot” for CO2 emissions. However, the effect of this major perturbation has yet to be quantified in terms of global warming potential (GWP) and the Earth's radiative budget. We used a GWP analysis and an impulse-response model of radiative forcing to quantify the climate forcing of this shift from a long-term carbon sink to a net source of greenhouse gases (CO2 and CH4). In the GWP analysis, five tropical peatlands were sinks in terms of their CO2-equivalent fluxes while they remained undisturbed. However, their drainage and conversion to oil palm and Acacia plantations produced a dramatic shift to very strong net CO2-equivalent sources. The induced losses of peat carbon are ~20× greater than the natural CO2 sequestration rates. In contrast, a radiative forcing model indicates that the magnitude of this shift from a net cooling to a warming effect is ultimately related to the size of an individual peatland's carbon pool. The continuous accumulation of carbon in pristine tropical peatlands produced a progressively negative radiative forcing (i.e., cooling) that ranged from −2.1 to −6.7 nW/m2 per hectare of peatland by 2010 CE, referenced to zero at the time of peat initiation. Peatland conversion to plantations leads to an immediate shift from a negative to a positive trend in radiative forcing (i.e., warming). If drainage persists, peak warming ranges from +3.3 to +8.7 nW/m2 per hectare of drained peatland. More importantly, this net warming impact on the Earth's radiation budget will persist for centuries to millennia after all the peat has been oxidized to CO2. This previously unreported and undesirable impact on the Earth's radiative balance provides a scientific rationale for conserving tropical peatlands in their pristine state.
This work presents two molecular fluorescent probes, 1 and 2, for the selective determination of physiologically relevant K+ levels in water, based on a highly K+/Na+-selective building block, the o-(2-methoxyethoxy)phenylaza-18-crown-6 lariat ether unit. Fluorescent probe 1 showed a high K+-induced fluorescence enhancement (FE) of the anthracenic emission by a factor of 7.7 and a dissociation constant (Kd) value of 38 mM in water. Further, for 2 + K+, we observed dual emission behavior at 405 and 505 nm: K+ increases the fluorescence intensity of 2 at 405 nm by a factor of approximately 4.6 and decreases the fluorescence intensity at 505 nm by a factor of about 4.8. Fluorescent probe 2 with K+ exhibited a Kd value of approximately 8 mM in Na+-free solutions, and in a combined K+/Na+ solution a similar Kd value of about 9 mM was found, reflecting the high K+/Na+ selectivity of 2 in water. Therefore, 2 is a promising fluorescent tool for the ratiometric and selective measurement of physiologically relevant K+ levels.
Findings - The results provide (longitudinal) support for the proposed evaluative approach. They reveal new evidence that building brand equity is a means to mitigate negative effects, and they indicate that negative spillover effects within a high-equity brand portfolio are unlikely. Finally, this research identifies situations in which developing a new brand might be more beneficial than leveraging an existing brand. Practical implications - This research has significant implications for firms with high-equity brands that might be affected by a scandal. The findings help managers navigate their brands through a crisis.
Pneumonia is one of the most common and potentially lethal infectious conditions worldwide. Streptococcus pneumoniae is the pathogen most frequently associated with bacterial community-acquired pneumonia, while Legionella pneumophila is the major cause of local outbreaks of legionellosis. Both pathogens can be difficult to diagnose, since signs and symptoms are nonspecific and do not differ from other causes of pneumonia. Therefore, a rapid diagnosis within a clinically relevant time is essential for a fast onset of the proper treatment. Although methods based on the polymerase chain reaction have significantly improved the identification of pathogens, they are difficult to conduct and need specialized equipment. We describe a rapid and sensitive test using isothermal recombinase polymerase amplification (RPA) and detection on a disposable test strip. This method does not require any special instrumentation and can be performed in less than 20 min. The analytical sensitivity of the multiplex assay, amplifying specific regions of S. pneumoniae and L. pneumophila simultaneously, was 10 CFUs of genomic DNA per reaction. In cross-detection studies with closely related strains and other bacterial agents, the specificity of the RPA was confirmed. The presented method is suitable for near-patient and field testing, with a rather simple routine and the possibility of read-out with the naked eye.
Resource-constrained smart micro-grid architectures describe a class of smart micro-grid architectures that handle communication operations over a lossy network and depend on a distributed collection of power generation and storage units. Disadvantaged communities with no or intermittent access to national power networks can benefit from such a micro-grid model by using low-cost communication devices to coordinate power generation, consumption, and storage. Furthermore, this solution is both cost-effective and environmentally friendly. One model for such micro-grids is for users to agree on a power sharing scheme in which individual generator owners sell excess unused power to users wanting access to power. Since the micro-grid relies on distributed renewable energy generation sources, which are variable and only partly predictable, coordinating micro-grid operations with distributed algorithms is a necessity for grid stability. Grid stability is crucial for retaining user trust in the dependability of the micro-grid, and user participation in the power sharing scheme, because user withdrawals can cause the grid to break down, which is undesirable. In this chapter, we present a distributed architecture for fair power distribution and billing on micro-grids. The architecture is designed to operate efficiently over a lossy communication network, which is an advantage for disadvantaged communities. We build on the architecture to discuss grid coordination, notably how tasks such as metering, power resource allocation, forecasting, and scheduling can be handled. All four tasks are managed by a feedback control loop that monitors the performance and behaviour of the micro-grid and, based on historical data, makes decisions to ensure the smooth operation of the grid. Finally, since lossy networks are undependable, differentiating system failures from adversarial manipulations is an important consideration for grid stability. We therefore provide a characterisation of potential adversarial models and discuss possible mitigation measures.
Like almost all fields of science, hydrology has benefited to a large extent from the tremendous improvements in scientific instruments able to collect long-term data series, and from the increase in available computational power and storage capacity over the last decades. Many model applications and statistical analyses (e.g., extreme value analysis) are based on these time series. Consequently, the quality and completeness of these time series are essential, and preprocessing raw data sets by filling data gaps is a necessary procedure. Several interpolation techniques of varying complexity are available, ranging from rather simple to extremely challenging approaches. In this paper, various imputation methods available to hydrological researchers are reviewed with regard to their suitability for filling gaps in the context of solving hydrological questions. The methodological approaches include arithmetic mean imputation, principal component analysis, regression-based methods, and multiple imputation methods. In particular, autoregressive conditional heteroscedasticity (ARCH) models, which originate from finance and econometrics, are discussed regarding their applicability to data series characterized by non-constant volatility and heteroscedasticity in hydrological contexts. The review shows that methodological advances driven by other fields of research bear relevance for a more intensive use of these methods in hydrology. Up to now, the hydrological community has paid little attention to the imputation ability of time series models in general and of ARCH models in particular.
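To make the methodological range concrete, the following Python sketch contrasts two imputation baselines from the simple end of the spectrum on a synthetic discharge series with an artificial gap (pandas-based; ARCH-type models would require a dedicated package such as arch and are not shown here):

```python
import numpy as np
import pandas as pd

# synthetic daily discharge series with an artificial 10-day gap
idx = pd.date_range("2000-01-01", periods=365, freq="D")
rng = np.random.default_rng(7)
q = pd.Series(10 + 3 * np.sin(2 * np.pi * idx.dayofyear / 365)
              + rng.normal(0, 0.5, idx.size), index=idx)
truth = q.loc["2000-06-01":"2000-06-10"].copy()
q.loc["2000-06-01":"2000-06-10"] = np.nan

# two simple baselines: constant mean fill vs. linear interpolation
mean_fill = q.fillna(q.mean())
linear_fill = q.interpolate(method="time")

for name, s in [("mean", mean_fill), ("linear", linear_fill)]:
    rmse = np.sqrt(((s[truth.index] - truth) ** 2).mean())
    print(f"{name:6s} imputation RMSE: {rmse:.2f}")
```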
3D point cloud technology facilitates the automated and highly detailed digital acquisition of real-world environments such as assets, sites, cities, and countries; the acquired 3D point clouds represent an essential category of geodata used in a variety of geoinformation applications and systems. In this paper, we present a web-based system for the interactive and collaborative exploration and inspection of arbitrarily large 3D point clouds. Our approach is based on standard WebGL on the client side and is able to render 3D point clouds with billions of points. It uses spatial data structures and level-of-detail representations to manage the 3D point cloud data and to deploy out-of-core and web-based rendering concepts. By providing functionality for both thin-client and thick-client applications, the system scales for client devices that are vastly different in computing capabilities. Different 3D point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering and highlighting, e.g., based on per-point surface categories or temporal information. A set of interaction techniques allows users to collaboratively work with the data, e.g., by measuring distances and areas, by annotating, or by selecting and extracting data subsets. Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate task-specific processing and analysis operations. We have evaluated the presented techniques and the prototype system with different data sets from aerial, mobile, and terrestrial acquisition campaigns with up to 120 billion points to show their practicality and feasibility.
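The paper's out-of-core, GPU-based renderer cannot be condensed into a few lines, but the idea of resolution-dependent thinning behind level-of-detail representations can be sketched in Python (a crude voxel-grid subsampling; real systems use hierarchical spatial data structures):

```python
import numpy as np

def voxel_lod(points, cell_size):
    """Crude level-of-detail: keep one representative point per grid
    cell; coarser cells yield fewer points for more distant views."""
    keys = np.floor(points / cell_size).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

# synthetic "scan" of one million points in a 100 m cube
rng = np.random.default_rng(3)
cloud = rng.uniform(0, 100, (1_000_000, 3))
for cell in (0.5, 2.0, 8.0):
    print(f"cell {cell:4.1f} m -> {voxel_lod(cloud, cell).shape[0]} points")
```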
A search for new supernova remnants (SNRs) has been conducted using TeV gamma-ray data from the H.E.S.S. Galactic plane survey. As an identification criterion, shell morphologies characteristic of known resolved TeV SNRs were used. Three new SNR candidates were identified in the H.E.S.S. data set with this method. Extensive multiwavelength searches for counterparts were conducted. A radio SNR candidate has been identified as a counterpart to HESS J1534-571, and the TeV source is therefore classified as an SNR. For the other two sources, HESS J1614-518 and HESS J1912+101, no identifying counterparts have been found; they thus remain SNR candidates for the time being. TeV-emitting SNRs are key objects in the context of identifying the accelerators of Galactic cosmic rays. The TeV emission of the relativistic particles in the new sources is examined in view of possible leptonic and hadronic emission scenarios, taking the current multiwavelength knowledge into account.
Context. Microquasars are potential gamma-ray emitters. Indications of transient episodes of gamma-ray emission were recently reported in at least two systems: Cyg X-1 and Cyg X-3. The identification of additional gamma-ray-emitting microquasars is required to better understand how gamma-ray emission can be produced in these systems. Aims. Theoretical models have predicted very high-energy (VHE) gamma-ray emission from microquasars during periods of transient outburst. The observations reported herein were undertaken with the objective of observing a broadband flaring event in the gamma-ray and X-ray bands. Methods. Contemporaneous observations of three microquasars, GRS 1915+105, Circinus X-1, and V4641 Sgr, were obtained using the High Energy Stereoscopic System (H.E.S.S.) telescope array and the Rossi X-ray Timing Explorer (RXTE) satellite. X-ray analyses for each microquasar were performed, and VHE gamma-ray upper limits from contemporaneous H.E.S.S. observations were derived. Results. No significant gamma-ray signal was detected in any of the three systems. The integral gamma-ray photon flux at the observational epochs is constrained to be I(>560 GeV) < 7.3 × 10⁻¹³ cm⁻² s⁻¹, I(>560 GeV) < 1.2 × 10⁻¹² cm⁻² s⁻¹, and I(>240 GeV) < 4.5 × 10⁻¹² cm⁻² s⁻¹ for GRS 1915+105, Circinus X-1, and V4641 Sgr, respectively. Conclusions. The gamma-ray upper limits obtained using H.E.S.S. are examined in the context of previous Cherenkov telescope observations of microquasars. The effect of intrinsic absorption is modelled for each target and found to have a negligible impact on the flux of escaping gamma-rays. When combined with the X-ray behaviour observed using RXTE, the derived results indicate that if detectable VHE gamma-ray emission from microquasars is commonplace, then it is likely to be highly transient.
The rapid digitalization of the Facility Management (FM) sector has increased the demand for mobile, interactive analytics approaches concerning the operational state of a building. These approaches provide the key to increasing stakeholder engagement associated with Operation and Maintenance (O&M) procedures of living and working areas, buildings, and other built environment spaces. We present a generic and fast approach to process and analyze given 3D point clouds of typical indoor office spaces to create corresponding up-to-date approximations of classified segments and object-based 3D models that can be used to analyze, record and highlight changes of spatial configurations. The approach is based on machine-learning methods used to classify the scanned 3D point cloud data using 2D images. This approach can be used to primarily track changes of objects over time for comparison, allowing for routine classification, and presentation of results used for decision making. We specifically focus on classification, segmentation, and reconstruction of multiple different object types in a 3D point-cloud scene. We present our current research and describe the implementation of these technologies as a web-based application using a services-oriented methodology.
A Shockley-Type polymer
(2018)
The charge extraction rate in solar cells made of blends of electron-donating/accepting organic semiconductors is typically low due to their low charge carrier mobility. This sets a limit on the active layer thickness and has hindered the industrialization of organic solar cells (OSCs). Herein, the charge transport and recombination properties of an efficient polymer (NT812):fullerene blend are investigated. This system delivers a power conversion efficiency of >9% even when the junction thickness is as large as 800 nm. Experimental results indicate that this material system exhibits an exceptionally low bimolecular recombination constant, 800 times smaller than the diffusion-controlled electron-hole encounter rate. Comparing theoretical results, based on a recently introduced modified Shockley model for the fill factor, with experiments clarifies that charge collection is nearly ideal in these solar cells even when the thickness is several hundred nanometers. This is the first realization of high-efficiency Shockley-type organic solar cells with junction thicknesses suitable for scaling up.
Spatio-temporal data denotes a category of data that contains spatial as well as temporal components. For example, time-series of geo-data, thematic maps that change over time, or tracking data of moving entities can be interpreted as spatio-temporal data.
In today's automated world, an increasing number of data sources exist that constantly generate spatio-temporal data. These include, for example, traffic surveillance systems, which gather data about human or vehicle movements; remote-sensing systems, which frequently scan our surroundings and produce digital representations of cities and landscapes; and sensor networks in different domains, such as logistics, animal behavior studies, or climate research.
For the analysis of spatio-temporal data, in addition to automatic statistical and data mining methods, exploratory analysis methods are employed, which are based on interactive visualization. These analysis methods let users explore a data set by interactively manipulating a visualization, thereby employing the human cognitive system and knowledge of the users to find patterns and gain insight into the data.
This thesis describes a software framework for the visualization of spatio-temporal data, which consists of GPU-based techniques to enable the interactive visualization and exploration of large spatio-temporal data sets. The developed techniques include data management, processing, and rendering, facilitating real-time processing and visualization of large geo-temporal data sets. It includes three main contributions:
- Concept and Implementation of a GPU-Based Visualization Pipeline.
The developed visualization methods are based on the concept of a GPU-based visualization pipeline, in which all steps -- processing, mapping, and rendering -- are implemented on the GPU. With this concept, spatio-temporal data is represented directly in GPU memory, using shader programs to process and filter the data, apply mappings to visual properties, and finally generate the geometric representations for a visualization during the rendering process. Data processing, filtering, and mapping are thereby executed in real-time, enabling dynamic control over the mapping and a visualization process which can be controlled interactively by a user.
- Attributed 3D Trajectory Visualization.
A visualization method has been developed for the interactive exploration of large numbers of 3D movement trajectories. The trajectories are visualized in a virtual geographic environment, supporting basic geometries such as lines, ribbons, spheres, or tubes. Interactive mapping can be applied to visualize the values of per-node or per-trajectory attributes, with shape, height, size, color, texturing, and animation supported as visual properties. Using the dynamic mapping system, several kinds of visualization methods have been implemented, such as focus+context visualization of trajectories using interactive density maps, and space-time cube visualization to focus on the temporal aspects of individual movements.
- Geographic Network Visualization.
A method for the interactive exploration of geo-referenced networks has been developed, which enables the visualization of large numbers of nodes and edges in a geographic context. Several geographic environments are supported, such as a 3D globe as well as 2D maps using different map projections, to enable the analysis of networks in different contexts and at different scales. Interactive filtering, mapping, and selection can be applied to analyze these geographic networks, and visualization methods for specific types of networks, such as coupled 3D networks or temporal networks, have been implemented.
As a demonstration of the developed visualization concepts, interactive visualization tools for two distinct use cases have been developed. The first contains the visualization of attributed 3D movement trajectories of airplanes around an airport. It allows users to explore and analyze the trajectories of approaching and departing aircraft, recorded over the period of a month. By applying the interactive visualization methods for trajectory visualization and interactive density maps, analysts can derive insight from the data, such as common flight paths, regular and irregular patterns, or uncommon incidents such as missed approaches at the airport.
The second use case involves the visualization of climate networks, which are geographic networks from the climate research domain. They represent the dynamics of the climate system using a network structure that expresses statistical interrelationships between different regions. The interactive tool allows climate analysts to explore these large networks, analyzing the network's structure and relating it to the geographic background. Interactive filtering and selection enable them to find patterns in the climate data and to identify, e.g., clusters in the networks or flow patterns.
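As a CPU-side illustration of the attribute-mapping concept used throughout the thesis (the actual implementation performs mapping and rendering in GPU shaders), the following Python sketch colors a synthetic 3D trajectory by a per-node attribute:

```python
import numpy as np
import matplotlib.pyplot as plt

# synthetic 3D trajectory (a helix) with an invented per-node attribute
t = np.linspace(0, 4 * np.pi, 500)
x, y, z = np.cos(t), np.sin(t), t / (4 * np.pi)
attribute = np.cos(3 * t)  # stand-in for, e.g., speed or climb rate

# map the attribute to color along the trajectory
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(x, y, z, c=attribute, cmap="viridis", s=4)
fig.colorbar(sc, label="mapped per-node attribute")
plt.show()
```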
We uniquely introduce convex production costs into a cartel model involving spatial price discrimination. We demonstrate that greater convexity improves cartel stability and that for sufficient convexity first best locations will be adopted. We show that allowing locations to vary over the game reduces cartel stability but that greater convexity continues to improve that stability. Moreover, when the degree of convexity does not support the first best collusive locations, other collusive locations exist that require less stability and these may either increase or decrease social welfare relative to competition. Critically, these locations that require less stability are more dispersed in sharp contrast to the known result assuming linear production costs.
In recent years, Urban Green Infrastructure (UGI) has evolved into a research focus across Europe. UGI can be understood as a multifunctional network of different urban green spaces and elements contributing to urban benefits. Urban agriculture has gained increasing research interest in this context. While a strong focus has been placed on the functions and benefits of small-scale activities, the question remains open whether these findings can be up-scaled and transferred to the farmland scale. Furthermore, the multifunctionality of urban and peri-urban agriculture is rarely considered in the landscape context. This research aims to address these gaps and asks whether agricultural landscapes – which in many European metropolitan regions provide significant spatial potential – can contribute to UGI as multifunctional green spaces. This work considers multifunctionality qualitatively based on stakeholder opinion, using a participatory research approach. This study provides new insights into peri-urban farmland potentials for UGI development, resulting in a strategy framework. Furthermore, it reflects on the role of stakeholder involvement for 'multifunctionality planning'. It suggests that such involvement helps to define meaningful bundles of intertwined functions that interact on different scales, helping to deal with the non-linearity of multiple functions and to better manage them simultaneously.
Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, such events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a clock-change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that, under certain assumptions, the aperiodicity of this distribution can be related to the Gutenberg-Richter b value, which can easily be estimated from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be attributed to accessible seismological quantities. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and high dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates, in a simple way, paleoearthquake sequences and instrumental data. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
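For reference, the Brownian passage time density used here is the inverse Gaussian distribution, parameterized by the mean recurrence interval and the aperiodicity. A minimal Python sketch with hypothetical parameter values:

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian passage time (inverse Gaussian) density with mean
    recurrence interval mu and aperiodicity (coefficient of variation)
    alpha; equivalently, inverse Gaussian shape lambda = mu / alpha**2."""
    return (np.sqrt(mu / (2 * np.pi * alpha**2 * t**3))
            * np.exp(-((t - mu) ** 2) / (2 * mu * alpha**2 * t)))

# hypothetical values: mean recurrence 250 yr, aperiodicity 0.5
t = np.linspace(1, 1500, 3000)
pdf = bpt_pdf(t, mu=250.0, alpha=0.5)
print(f"mode near {t[pdf.argmax()]:.0f} yr; "
      f"total probability ~ {(pdf.sum() * (t[1] - t[0])):.3f}")
```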
On 2015 March 23, the Very Energetic Radiation Imaging Telescope Array System (VERITAS) responded to a Swift Burst Alert Telescope (BAT) detection of a gamma-ray burst, with observations beginning 270 s after the onset of BAT emission and only 135 s after the main BAT emission peak. No statistically significant signal is detected above 140 GeV. The VERITAS upper limit on the fluence in a 40-minute integration corresponds to about 1% of the prompt fluence. Our limit is particularly significant because the very-high-energy (VHE) observation started only ~2 minutes after the prompt emission peaked, and Fermi-Large Area Telescope observations of numerous other bursts have revealed that the high-energy emission is typically delayed relative to the prompt radiation and lasts significantly longer. Also, the proximity of GRB 150323A (z = 0.593) limits the attenuation by the extragalactic background light to ~50% at 100-200 GeV. We conclude that GRB 150323A had an intrinsically very weak high-energy afterglow, or that the GeV spectrum had a turnover below ~100 GeV. If the GRB exploded into the stellar wind of a massive progenitor, the VHE non-detection constrains the wind density parameter to be A ≳ 3 × 10¹¹ g cm⁻¹, consistent with a standard Wolf-Rayet progenitor. Alternatively, the VHE emission from the blast wave would be weak in a very tenuous medium such as the interstellar medium, which therefore cannot be ruled out as the environment of GRB 150323A.
A submerged pine forest from the early Holocene in the Mecklenburg Lake District, northern Germany
(2018)
For the first time, evidence of a submerged pine forest from the early Holocene can be documented in a central European lake. Subaquatic tree stumps were discovered in Lake Giesenschlagsee at depths of between 2 and 5 m using scuba divers, side-scan sonar, and a remotely operated vehicle. Several erect stumps, anchored to the ground by roots, represent an in situ record of this former forest. Botanical determination revealed the stumps to be Scots pine (Pinus sylvestris) with an individual tree age of about 80 years. The trees could not be dated by means of dendrochronology, as they are older than the regional reference chronology for pine. Radiocarbon ages from the wood range from 10,880 ± 210 to 10,370 ± 130 cal. a BP, which is equivalent to the mid-Preboreal to early Boreal biozones. The trees are rooted in sedge peat, which can be dated to this period as well using pollen stratigraphical analysis. Tilting of the peat bed by 4 m indicates subsidence of the ground due to local dead-ice melting, causing the trees to become submerged and preserved for millennia. Together with recently detected Lateglacial in situ tree occurrences in nearby lakes, the submerged pine forest at Giesenschlagsee represents a new and highly promising type of geo-bio-archive for the wider region. Comparable in situ pine remnants occur at some terrestrial (buried setting) and marine (submerged setting) sites in northern central Europe and beyond, but they partly differ in age. In general, the in situ pine finds document shifts of the zonal boreal forest ecosystem during the late Quaternary.
An area of increasing interest among teachers and researchers is the availability of tools for the design and implementation of literacy interventions with Spanish-speaking children. The present systematic literature review contributes to this need by summarizing available findings on evidence-based literacy interventions (EBI) for children from the first to the third year of primary school. Our results are based on 20 EBI that aimed at improving at least one of the critical components mentioned by the NRP (2000): phonological awareness, phonics, fluency, vocabulary, and comprehension. As 90% of the studies were conducted with English-speaking children, we critically discuss the applicability of this evidence to the specific context of Spanish-speaking countries. Although many of the general characteristics of the EBI conducted with English-speaking children could also guide interventions in Spanish, it remains crucial to take into account structural differences between the orthographies of the two languages. Moreover, we identified transversal strategies and implementation techniques that, due to their universal character, could also be useful for early literacy interventions in Spanish.
Introduction
To date, several meta-analyses have clearly demonstrated that resistance and plyometric training are effective at improving physical fitness in children and adolescents. However, a methodological limitation of meta-analyses is that they synthesize results from different studies and hence ignore important differences across studies (i.e., mixing apples and oranges). Therefore, we aimed to examine comparative intervention studies that assessed the effects of age, sex, maturation, and resistance or plyometric training descriptors (e.g., training intensity, volume, etc.) on measures of physical fitness while holding other variables constant.
Methods
To identify relevant studies, we systematically searched multiple electronic databases (e.g., PubMed) from inception to March 2018. We included resistance and plyometric training studies in healthy young athletes and non-athletes aged 6 to 18 years that investigated the effects of moderator variables (e.g., age, maturity, sex, etc.) on components of physical fitness (i.e., muscle strength and power).
Results
Our systematic literature search revealed a total of 75 eligible resistance and plyometric training studies, including 5,138 participants. Mean duration of resistance and plyometric training programs amounted to 8.9 ± 3.6 weeks and 7.1 ± 1.4 weeks, respectively. Our findings showed that maturation affects plyometric and resistance training outcomes differently, with the former eliciting greater adaptations pre-peak height velocity (PHV) and the latter around- and post-PHV. Sex has no major impact on resistance training related outcomes (e.g., maximal strength, 10 repetition maximum). In terms of plyometric training, around-PHV boys appear to respond with larger performance improvements (e.g., jump height, jump distance) compared with girls. Different types of resistance training (e.g., body weight, free weights) are effective in improving measures of muscle strength (e.g., maximum voluntary contraction) in untrained children and adolescents. Effects of plyometric training in untrained youth primarily follow the principle of training specificity. Despite the fact that only 6 out of 75 comparative studies investigated resistance or plyometric training in trained individuals, positive effects were reported in all 6 studies (e.g., maximum strength and vertical jump height, respectively).
Conclusions
The present review article identified research gaps (e.g., training descriptors, modern alternative training modalities) that should be addressed in future comparative studies.
We present a prototype of an integrated reasoning environment for educational purposes. The presented tool is a fragment of a proof assistant and automated theorem prover. We describe the existing and planned functionality of the theorem prover and especially the functionality of the educational fragment. This currently supports working with terms of the untyped lambda calculus and addresses both undergraduate students and researchers. We show how the tool can be used to support the students' understanding of functional programming and discuss general problems related to the process of building theorem proving software that aims at supporting both research and education.
The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. The computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure. Published by AIP Publishing.
We demonstrate a tilted pulse-front transient grating (TG) technique that allows optimal use of the time resolution as well as the TG line density while probing under grazing incidence, as typically done in extreme ultraviolet (EUV) or soft X-ray (SXR) experiments. Our optical setup adapts the pulse-front tilt of the two pulses that create the TG to the grazing-incidence pulse. We demonstrate the technique using all-800 nm femtosecond laser pulses for TG generation on a vanadium dioxide film and probe that grating via diffraction of a third 800 nm pulse. The time resolution of 90 fs is an improvement by a factor of 30 compared with our previous experiments on the same system. The scheme paves the way for EUV and SXR probing of optically induced TGs on any material.
High-throughput RNA sequencing (RNAseq) produces large data sets containing the expression levels of thousands of genes. The analysis of RNAseq data leads to a better understanding of gene functions and interactions, which eventually helps to study diseases like cancer and to develop effective treatments. Large-scale RNAseq expression studies on cancer comprise samples from multiple cancer types and aim to identify their distinct molecular characteristics. Analyzing samples from different cancer types implies analyzing samples of different tissue origin. Such multi-tissue RNAseq data sets require an analysis that accounts for the inherent tissue-related bias: the identified characteristics must not originate from differences in tissue types, but from actual differences in cancer types. However, current analysis procedures do not incorporate this aspect. We therefore propose to integrate tissue-awareness into the analysis of multi-tissue RNAseq data. We introduce an extension for gene selection that provides a tissue-wise context for every gene and can be flexibly combined with any existing gene selection approach. We suggest expanding conventional evaluation by additional metrics that are sensitive to the tissue-related bias. Evaluations show that low-complexity gene selection approaches in particular profit from introducing tissue-awareness.
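One simple way to give gene selection a tissue-wise context (a sketch of the general idea, not necessarily the authors' extension) is to center expression values within each tissue group before ranking genes, so that tissue offsets no longer dominate the selection:

```python
import numpy as np
import pandas as pd

# toy expression matrix: samples x genes, with a tissue label per sample
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(0, 1, (120, 50)),
                    columns=[f"gene_{i}" for i in range(50)])
tissue = pd.Series(np.repeat(["lung", "kidney", "breast"], 40))
# inject a purely tissue-related offset into one gene
expr["gene_0"] += tissue.map({"lung": 3.0, "kidney": -3.0,
                              "breast": 0.0}).values

# naive variance-based selection: gene_0 ranks first only because of
# its tissue offsets, not because of any cancer-related signal
naive_rank = expr.var().sort_values(ascending=False)

# tissue-aware variant: center each gene within its tissue group first
centered = expr.groupby(tissue.values).transform(lambda g: g - g.mean())
aware_rank = centered.var().sort_values(ascending=False)

print("naive top gene:", naive_rank.index[0])
print("tissue-aware top gene:", aware_rank.index[0])
```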
In this study, transmission X-ray microscopy (TXM) was tested as a method to investigate the chemistry and structure of corroded silicate glasses at the nanometer scale. Three different silicate glasses were altered in static corrosion experiments for 1-336 hours at temperatures between 60 °C and 85 °C using a 25% HCl solution. Thin lamellas were cut perpendicular to the surface of the corroded glass monoliths and analysed with conventional TEM as well as with TXM. By recording optical density profiles at photon energies around the Na and O K-edges, the shape of the corrosion rim/pristine glass interfaces and the thickness of the corrosion rims were determined. Na and O near-edge X-ray absorption fine-structure (NEXAFS) spectra were obtained without inducing irradiation damage and were used to detect chemical changes in the corrosion rims. Spatially resolved NEXAFS spectra at the O K-edge provided insight into structural changes in the corrosion layer on the atomic scale. By comparison with O K-edge spectra of silicate minerals and (hydrous) albite glass, as well as with O K-edge NEXAFS of model structures simulated with ab initio calculations, evidence is provided that the changes of the fine structure at the O K-edge can be assigned to the formation of siloxane groups in the corrosion rim.
A transparent and data-driven global tectonic regionalization model for seismic hazard assessment
(2018)
A key concept common to many assumptions inherent in seismic hazard assessment is that of tectonic similarity. This recognizes that certain regions of the globe may display similar geophysical characteristics, such as in the attenuation of seismic waves, the magnitude scaling properties of seismogenic sources, or the seismic coupling of the lithosphere. Previous attempts at tectonic regionalization, particularly within a seismic hazard assessment context, have often been based on expert judgement; in most of these cases, the process for delineating tectonic regions is neither reproducible nor consistent from location to location. In this work, the regionalization process is implemented in a scheme that is reproducible, comprehensible from a geophysical rationale, and revisable when new relevant data are published. A spatial classification scheme is developed based on fuzzy logic, enabling the quantification of concepts that are approximate rather than precise. Using the proposed methodology, we obtain a transparent and data-driven global tectonic regionalization model for seismic hazard applications, as well as the subjective probabilities (e.g., degree of being active/degree of being cratonic) that indicate the degree to which a site belongs to a tectonic category.
The central European Bohemian Massif has undergone over two centuries of scientific investigation which has made it a pivotal area for the development and testing of modern geological theories. The discovery of melt inclusions in high-grade rocks, either crystallized as nanogranitoids or as glassy inclusions, prompted the re-evaluation of the area with an ‘inclusionist’ eye. Melt inclusions have been identified in a wide range of rocks, including felsic/perpotassic granulites, migmatites, eclogites and garnet clinopyroxenites, all the result of melting events albeit over a wide range of pressure/temperature conditions (800–1000°C/0.5–5 GPa). This contribution provides an overview of such inclusions and discusses the qualitative and quantitative constraints they provide for melting processes, and the nature of melts and fluids involved in these processes. In particular, data on trace-element signatures of melt inclusions trapped at mantle depths are presented and discussed. Moreover, experimental re-homogenization of nanogranitoids provided microstructural criteria allowing assessment of the conditions at which melt and host are mutually stable during melting. Overall this work aims to provide guidelines and suggestions for petrologists wishing to explore the fascinating field of melt inclusions in metamorphic terranes worldwide, based on the newest discoveries from the still-enigmatic Bohemian Massif.
The focus in this article, through a reading of the German-Australian newspaper Der Kosmopolit, is on the legacies of entangled imperial identities in the period of the nineteenth-century German Enlightenment. Attention is drawn to members of the liberal nationalist generation of 1848 who emigrated to the Australian colonies and became involved in intellectual activities there. The idea of entanglement is applied to the philosophical orientation of the German-language newspaper that this group formed, Der Kosmopolit, which was published between 1856 and 1957. Against simplistic notions that would view cosmopolitanism as the opposite of nationalism, it is argued that individuals like Gustav Droege and Carl Muecke deployed an entangled ‘cosmo-nationalism’ in ways that both advanced German nationalism and facilitated their own engagement with and investment in Australian colonial society.
We present results from deep observations toward the Cygnus region using 300 hr of very high energy (VHE) gamma-ray data taken with the VERITAS Cherenkov telescope array and over 7 yr of high-energy gamma-ray data taken with the Fermi satellite at energies above 1 GeV. As the brightest region of diffuse gamma-ray emission in the northern sky, the Cygnus region provides a promising area to probe the origins of cosmic rays. We report the identification of a potential Fermi-LAT counterpart to VER J2031+415 (TeV J2032+4130) and resolve the extended VHE source VER J2019+368 into two source candidates (VER J2018+367* and VER J2020+368*), and we characterize their energy spectra. The Fermi-LAT morphology of 3FGL J2021.0+4031e (the Gamma Cygni supernova remnant) was examined, and a region of enhanced emission coincident with VER J2019+407 was identified and jointly fit with the VERITAS data. By modeling 3FGL J2015.6+3709 as two sources, one at the pulsar wind nebula CTB 87 and one at the quasar QSO J2015+371, a continuous spectrum from 1 GeV to 10 TeV was extracted for VER J2016+371 (CTB 87). An additional 71 locations coincident with Fermi-LAT sources and other potential objects of interest were tested for VHE gamma-ray emission; no emission was detected, and upper limits on the differential flux were placed at an average of 2.3% of the Crab Nebula flux. We interpret these observations in a multiwavelength context and present the most detailed gamma-ray view of the region to date.
On 14 December 2017, the Assembly of States Parties of the Rome Statute decided to activate the International Criminal Court’s jurisdiction over the crime of aggression. In doing so, however, it seems to have rescinded the Kampala amendment adopted in 2010, and in particular the need for States Parties to eventually opt out of the Court’s aggression-related jurisdiction. This reversal, while being more in line with the Rome Statute than the Kampala amendment itself, raises new (and old) challenging legal questions, which are highlighted in this article.
A wolf in sheep’s clothing?
(2018)
Communal narcissists hold the distinctive belief that they are capable of bringing freedom to the world, and thus see themselves as “saints”. To examine whether this communal self-view extends to the more automatic component of self-evaluation, that is, a person’s implicit self-view, the present study (N = 701) tested the extent to which communal narcissism was associated with explicit communal self-ratings and with implicit associations between the self and communal attributes. The latent correlation between communal narcissism and explicit communal self-views was strongly positive, yet no such relationship emerged for implicit communal self-views. These findings support the notion that communal narcissism may represent an effort to gain favorable appraisals from others in the absence of a genuine communal self-view.
Noninvasive near-infrared (NIR) light-responsive therapy is a promising cancer treatment modality; however, some inherent drawbacks of conventional phototherapy heavily restrict its clinical application. Rather than producing heat or reactive oxygen species, as in conventional NIR treatment, here a multifunctional yolk-shell nanoplatform is proposed that generates microbubbles to destroy cancer cells upon NIR laser irradiation. Moreover, the therapeutic effect is greatly improved through the addition of small interfering RNA (siRNA), which is co-delivered by the nanoplatform. In vitro experiments demonstrate that the siRNA significantly inhibits the expression of protective proteins and reduces the tolerance of cancer cells to bubble-induced environmental damage. In this way, higher cytotoxicity is achieved with the yolk-shell nanoparticles than with the same nanoparticles lacking siRNA under NIR laser irradiation. After surface modification with polyethylene glycol and transferrin, the yolk-shell nanoparticles can target tumors selectively, as demonstrated by photoacoustic and ultrasonic imaging in vivo. The yolk-shell nanoplatform shows outstanding tumor regression with minimal side effects under NIR laser irradiation. Therefore, the multifunctional nanoparticles, which combine a bubble-induced mechanical effect with RNA interference, are expected to provide an effective NIR light-responsive oncotherapy.
Abrupt or gradual?
(2018)
We used a change point analysis on a late Pleistocene-Holocene lake-sediment record from the Chew Bahir basin in the southern Ethiopian Rift to determine the amplitude and duration of past climate transitions. The most dramatic changes occurred over 240 yr (from ~15,700 to 15,460 yr) during the onset of the African Humid Period (AHP), and over 990 yr (from ~4875 to 3885 yr) during its protracted termination. The AHP was interrupted by a distinct dry period coinciding with the high-latitude Younger Dryas stadial, which had an abrupt onset (less than ~100 yr) at ~13,260 yr and lasted until ~11,730 yr. Wet-dry-wet transitions prior to the AHP may reflect the high-latitude Dansgaard-Oeschger cycles, as indicated by cross-correlation of the potassium record with the NorthGRIP ice core record between ~45 and 20 ka. These findings may contribute to the debates regarding the amplitude, duration, and mechanisms of past climate transitions, and their possible influence on the development of early modern human cultures.
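For readers unfamiliar with the technique, a change point analysis in its simplest form finds the split that minimizes the within-segment residual variance. The sketch below (plain NumPy, one break, constant-mean segments) only illustrates this principle; the published analysis additionally estimates transition durations and handles multiple change points.

```python
import numpy as np

def single_change_point(y, min_seg=10):
    """Index k that best splits y into two constant-mean segments
    (least-squares single-break change point)."""
    best_k, best_cost = None, np.inf
    for k in range(min_seg, len(y) - min_seg):
        left, right = y[:k], y[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```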
Identifying abrupt transitions is a key question in various disciplines. Existing transition detection methods, however, do not rigorously account for time series uncertainties, often neglecting them altogether or assuming them to be independent and qualitatively similar. Here, we introduce a novel approach suited to handle uncertainties by representing the time series as a time-ordered sequence of probability density functions. We show how to detect abrupt transitions in such a sequence using the community structure of networks representing probabilities of recurrence. Using our approach, we detect transitions in global stock indices related to well-known periods of politico-economic volatility. We further uncover transitions in the El Niño-Southern Oscillation which coincide with periods of phase locking with the Pacific Decadal Oscillation. Finally, we provide for the first time an ‘uncertainty-aware’ framework which validates the hypothesis that ice-rafting events in the North Atlantic during the Holocene were synchronous with a weakened Asian summer monsoon.
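As a rough illustration of the approach described here, the sketch below builds a recurrence network from a sequence of PDFs (each represented by a sample of draws), linking time points whose distributions are close, and reads transitions off the boundaries between network communities. The choice of the 1-d Wasserstein distance, the threshold quantile, and the modularity-based community detection are assumptions for this sketch, not necessarily the published choices.

```python
import numpy as np
import networkx as nx
from scipy.stats import wasserstein_distance
from networkx.algorithms.community import greedy_modularity_communities

def recurrence_network(samples, q=0.1):
    """samples: list of 1-d arrays, one per time step, drawn from each PDF."""
    n = len(samples)
    d = np.array([[wasserstein_distance(samples[i], samples[j])
                   for j in range(n)] for i in range(n)])
    eps = np.quantile(d[np.triu_indices(n, 1)], q)  # recurrence threshold
    g = nx.from_numpy_array((d <= eps).astype(int))
    g.remove_edges_from(nx.selfloop_edges(g))
    return g

def transitions(g):
    """Boundaries between time-contiguous communities mark abrupt transitions."""
    label = {node: i for i, comm in enumerate(greedy_modularity_communities(g))
             for node in comm}
    seq = [label[t] for t in sorted(label)]
    return [t for t in range(1, len(seq)) if seq[t] != seq[t - 1]]
```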
Allometric trophic network (ATN) models offer high flexibility and scalability while minimizing the number of parameters, and they have been successfully applied to investigate complex food web dynamics and their influence on food web diversity and stability. However, the realism of ATN model energetics has never been assessed in detail, despite their critical influence on dynamic biomass and production patterns. Here, we compare the energetics of the currently established original ATN model, which considers only biomass-dependent basal respiration, to an extended ATN model version considering both basal and assimilation-dependent activity respiration. The latter is crucial in particular for unicellular and invertebrate organisms, which dominate the metabolism of pelagic and soil food webs. Based on metabolic scaling laws, we show that the extended ATN version reflects the energy transfer through a chain of four trophic levels of unicellular and invertebrate organisms more realistically than the original ATN version. Depending on the strength of top-down control, the original ATN model yields trophic transfer efficiencies of up to 71% at either the third or the fourth trophic level, which considerably exceeds any realistic values. In contrast, the extended ATN version yields realistic trophic transfer efficiencies of at most 30% at all trophic levels, in accordance with both physiological considerations and empirical evidence from pelagic systems. Our results imply that accounting for activity respiration is essential for consistently implementing the metabolic theory of ecology in ATN models and for improving their quantitative predictions, which makes them more powerful tools for investigating the dynamics of complex natural communities.
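The distinction between the two model versions can be made concrete with a simplified bioenergetic consumer equation in the spirit of ATN models; all parameter values, the type-II functional response, and the way activity respiration is tied to assimilation below are assumptions for illustration, not the calibrated models compared in the paper.

```python
def consumer_dBdt(B, R, x=0.1, y=8.0, e=0.85, f_act=0.4, extended=True):
    """Biomass rate of change of one consumer feeding on one resource.
    B, R: consumer and resource biomass
    x: mass-specific basal metabolic rate (from allometric scaling)
    y: maximum consumption rate relative to x
    e: assimilation efficiency
    f_act: activity respiration as a fraction of assimilated energy,
           included only in the extended model version"""
    F = R / (R + 0.5)                    # type-II functional response
    assimilated = e * x * y * F * B      # energy actually assimilated
    respiration = x * B                  # basal respiration (original ATN)
    if extended:
        respiration += f_act * assimilated  # assimilation-dependent activity respiration
    return assimilated - respiration
```

Because activity respiration scales with intake rather than with standing biomass, it caps the fraction of assimilated energy available for production, which is what pushes trophic transfer efficiencies down to realistic values.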
Background: Event-related potentials (ERPs) are increasingly used in cognitive science. With their high temporal resolution, they offer a unique window into cognitive processes and their time course. In this paper, we focus on ERP experiments whose designs involve selecting participants and stimuli amongst many. Recently, Westfall et al. (2017) highlighted the drastic consequences of not treating stimuli as a random variable in fMRI studies with such designs. Most ERP studies in cognitive psychology suffer from the same drawback. New method: We advocate the use of the Quasi-F statistic or mixed-effects models instead of the classical ANOVA/by-participant F1 statistic to analyze ERP datasets in which the dependent variable is reduced to one measure per trial (e.g., mean amplitude). We combine the Quasi-F statistic with cluster mass tests to analyze datasets with multiple measures per trial. Doing so allows us to treat stimulus as a random variable while correcting for multiple comparisons. Results: Simulations show that using Quasi-F statistics with cluster mass tests keeps the family-wise error rate (FWER) close to the nominal alpha level of 0.05. Comparison with existing methods: Simulations reveal that the classical ANOVA/F1 approach has an alarming FWER, demonstrating the superiority of models that treat both participant and stimulus as random variables, such as the Quasi-F approach. Conclusions: Our simulations question the validity of studies in which stimulus is not treated as a random variable. Failure to change the current standards feeds the replicability crisis.
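As an illustration of the multiple-comparisons part of the method, here is a minimal cluster mass permutation test on per-participant ERP difference waves (sign-flipping permutations, one-sample t-values, clusters of contiguous supra-threshold time points). Note that this simplified sketch treats only participants as random, whereas the paper's contribution is to combine cluster mass tests with the Quasi-F statistic so that stimuli are treated as a random variable too.

```python
import numpy as np
from scipy import stats

def clusters(tvals, thresh):
    """Yield (start, end, mass) for contiguous runs with |t| above threshold."""
    above, i = np.abs(tvals) > thresh, 0
    while i < len(above):
        if above[i]:
            j = i
            while j < len(above) and above[j]:
                j += 1
            yield i, j, np.abs(tvals[i:j]).sum()
            i = j
        else:
            i += 1

def cluster_mass_test(diff, n_perm=2000, alpha=0.05, seed=0):
    """diff: participants x timepoints matrix of condition differences."""
    rng, n = np.random.default_rng(seed), diff.shape[0]
    thresh = stats.t.ppf(1 - alpha / 2, df=n - 1)
    t_obs = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n))
    null = np.empty(n_perm)
    for p in range(n_perm):
        d = diff * rng.choice([-1.0, 1.0], size=(n, 1))  # sign-flip participants
        t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))
        null[p] = max((m for *_, m in clusters(t, thresh)), default=0.0)
    # p-value of each observed cluster = fraction of null maxima at least as large
    return [(s, e, m, (null >= m).mean()) for s, e, m in clusters(t_obs, thresh)]
```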
With the aim of improving the quality of public administration (PA) programmes in Europe, EGPA, together with the Network of Institutes and Schools of Public Administration in Central and Eastern Europe (NISPAcee), established the European Association for Public Administration Accreditation (EAPAA) in 1999. This chapter presents the development of EAPAA over the last two decades and the experience gained with voluntary accreditation of academic PA programmes in Europe. The authors illustrate the basic accreditation concept of EAPAA, its integration into the European quality assurance institutions, and the scope of its accreditation missions over time. Finally, the effects of accreditation measures in the educational field of PA are discussed.
Navigating between cultures, in addition to the developmental changes and challenges of early adolescence, can be difficult. We investigated school, family, and ethnic group as conditions for acculturation and school adjustment among early-adolescent boys and girls. Analyses were based on 860 mostly second- and third-generation immigrant students from 71 countries (50% male; mean age = 11.59 years) attending German secondary schools. Perceived support for inclusion and integration in school and family were associated with a stronger orientation toward both cultures (integration) and better adjustment (e.g., higher school marks, more well-being). Perceived cultural distance and ethnic discrimination were associated with a stronger ethnic and weaker mainstream orientation (separation), and with lower adjustment. Boys perceived contextual conditions more negatively, had a weaker mainstream orientation, and showed more behavioral problems, but they did not differ from girls in the associations between contextual conditions and acculturation and adjustment. Implications for research, policy, and practice are discussed.
Although the chronic neurotoxicity of aluminum is well documented, there are no well-established experimental protocols of Al exposure. In the current study, the toxic effects of sub-chronic Al exposure were evaluated in outbred male rats (gastrointestinal administration). Forty animals were used: 10 received an aqueous AlCl3 solution (2 mg/kg Al per day) for 1 month, 10 received the same concentration of AlCl3 for 3 months, and 20 (10 per observation period) received saline as controls. After 30 and 90 days, the animals underwent behavioral tests: open field, passive avoidance, extrapolation escape task, and grip strength. At the end of the study, the blood, liver, kidney, and brain were excised for analytical and morphological studies. The Al content was measured by inductively coupled plasma mass spectrometry. Essential trace elements (Co, Cr, Cu, Fe, Mg, Mn, Mo, Se, and Zn) were measured in whole blood samples. Although no morphological changes were observed in the brain, liver, or kidney for either exposure term, dose-dependent Al accumulation and behavioral differences (increased locomotor activity after 30 days) between treatment and control groups were observed. Moreover, after 30 days of exposure, a strong positive correlation between the Al content in the brain and in the blood of individual animals was established, which surprisingly disappeared by the third month. This may indicate an adaptation of the neural barrier to Al exposure or a saturation of Al transport into the brain. Notably, we did not observe a clear neurodegeneration process even after this rather prolonged sub-chronic Al exposure, so longer exposure periods are probably required.
Digital terrain models (DTMs) are a fundamental source of information in Earth sciences. DTM-based studies, however, can contain remarkable biases if limitations and inaccuracies in these models are disregarded. In this work, four freely available datasets, including Shuttle Radar Topography Mission C-Band Synthetic Aperture Radar (SRTM C-SAR V3 DEM), Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Map (ASTER GDEM V2), and two nationwide airborne light detection and ranging (LiDAR)-derived DTMs (at 5-m and 1-m spatial resolution, respectively), were analysed in three geomorphologically contrasting, small (3–5 km²) catchments located in Mediterranean landscapes under intensive human influence (Mallorca Island, Spain). Vertical accuracy as well as the influence of each dataset's characteristics on hydrological and geomorphological modelling applicability were assessed using ground-truth data, classic geometric and morphometric parameters, and a recently proposed index of sediment connectivity. Overall vertical accuracy, expressed as the root mean squared error (RMSE) and the normalised median absolute deviation (NMAD), revealed the highest accuracy for the 1-m (RMSE = 1.55 m; NMAD = 0.44 m) and 5-m LiDAR DTMs (RMSE = 1.73 m; NMAD = 0.84 m). Vertical accuracy of the SRTM data was lower (RMSE = 6.98 m; NMAD = 5.27 m), but considerably higher than for the ASTER data (RMSE = 16.10 m; NMAD = 11.23 m). All datasets were affected by systematic distortions. Propagation of these errors and coarse horizontal resolution had negative impacts on flow routing, stream network, and catchment delineation and, to a lesser extent, on the distribution of slope values. These limitations should be carefully considered when applying DTMs for catchment hydrogeomorphological modelling.
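The two accuracy measures used here are standard; for reference, a short sketch of how they are computed from a vector of elevation errors dh (DTM height minus ground-truth height), with NMAD as commonly defined (e.g., following Höhle & Höhle, 2009):

```python
import numpy as np

def rmse(dh):
    """Root mean squared error of the elevation differences."""
    return np.sqrt(np.mean(np.square(dh)))

def nmad(dh):
    """Normalised median absolute deviation: a robust spread estimate,
    1.4826 * median(|dh - median(dh)|), insensitive to outliers."""
    return 1.4826 * np.median(np.abs(dh - np.median(dh)))
```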
Accuracy of training recommendations based on a treadmill multistage incremental exercise test
(2018)
Competitive runners occasionally undergo exercise testing in a laboratory setting to obtain predictive and prescriptive information regarding their performance. The present research aimed to assess whether the physiological demands of lab-based treadmill running (TM) can simulate those of over-ground (OG) running using a commonly used protocol. Fifteen healthy volunteers with a weekly mileage of ≥ 20 km over the past 6 months and treadmill experience participated in this cross-sectional study. Two stepwise incremental tests until volitional exhaustion were performed in a fixed order within one week, one in an outpatient clinic research laboratory and one on an outdoor athletic track. Running velocity (IATspeed), heart rate (IATHR), and lactate concentration at the individual anaerobic threshold (IATbLa) were the primary endpoints. Additionally, distance covered (DIST), maximal heart rate (HRmax), maximal blood lactate concentration (bLamax), and rate of perceived exertion (RPE) at IATspeed were analyzed. IATspeed, DIST, and HRmax were not statistically significantly different between conditions, whereas bLamax and RPE at IATspeed differed significantly (p < 0.05). Apart from RPE at IATspeed, IATspeed, DIST, HRmax, and bLamax correlated strongly between conditions (r = 0.815–0.988). The high reliability between conditions provides strong evidence that running on a treadmill is physiologically comparable to OG running and that training recommendations can be made with assurance.
Action effects have been proposed to be important for infants’ processing of goal-directed actions. In this study, 11-month-olds showed equally fast predictive gaze shifts to a claw’s action goal when the grasping action was presented either with three agency cues (self-propelled movement, equifinality of goal achievement, and a salient action effect) or with only a salient action effect, but they showed merely tracking gaze when the claw displayed only self-propelled movement and equifinality of goal achievement. The results suggest that action effects, compared with purely kinematic cues, are especially important for infants’ online processing of goal-directed actions.
Activation of anthracene endoperoxides in Leishmania and impairment of mitochondrial functions
(2018)
Leishmaniasis is a vector-borne disease caused by protozoan parasites of the genus Leishmania. Because of developing resistance against current drugs, new antileishmanial compounds are urgently needed. Endoperoxides (EPs) are successfully used in malaria therapy, and there is experimental evidence of their potential against leishmaniasis. Anthracene endoperoxides (AcEPs) have so far only been used in technical applications and have not been explored for their leishmanicidal potential. This study verified the in vitro efficiency and mechanism of AcEPs against both Leishmania promastigotes and axenic amastigotes (L. tarentolae and L. donovani), as well as their toxicity in J774 macrophages. Additionally, the kinetics and radical products of the reaction of AcEPs with iron, the formation of radicals by AcEPs in Leishmania, and the resulting impairment of parasite mitochondrial functions were studied. Using electron paramagnetic resonance combined with spin trapping, photometry, and fluorescence-based oximetry, AcEPs were demonstrated to (i) show antileishmanial activity in vitro at IC50 values in the low micromolar range, (ii) exhibit host cell toxicity in J774 macrophages, (iii) react rapidly with iron(II), resulting in the formation of oxygen- and carbon-centered radicals, (iv) produce carbon-centered radicals which can secondarily trigger superoxide radical formation in Leishmania, and (v) impair mitochondrial functions in Leishmania during parasite killing. Overall, the data on different AcEPs demonstrate that structural features beyond the peroxo bridge strongly influence both their activity and the mechanism of their antileishmanial action.
The Maghreb region (from Tunisia to Gibraltar) is a key area in the western Mediterranean for studying the active tectonics and stress pattern across the Africa-Eurasia convergent plate boundary. In the present study, we compile a comprehensive data set of well-constrained crustal stress indicators (from single focal mechanism solutions, formal inversion of focal mechanism solutions, and young geologic fault slip data) based on our own and published data analyses. Stress inversion of focal mechanisms reveals a first-order transpression-compatible stress field and a second-order spatial variation of the tectonic regime across the Maghreb region, with a relatively stable SHmax orientation from east to west. The present-day active contraction of the western Africa-Eurasia plate boundary is therefore accommodated by (1) E-W strike-slip faulting with a reverse component along the Eastern Tell and the Saharan-Tunisian Atlas, (2) predominantly NE-trending thrust faulting with a strike-slip component in the Western Tell, and (3) a conjugate strike-slip faulting regime with a normal component in the Alboran/Rif domain. This spatial variation of the present-day stress field and faulting regime is largely in agreement with the stress information inferred from neotectonic features. Based on existing and newly proposed structural models, we highlight the role of the main geometrically complex shear zones in the present-day stress pattern of the Maghreb region. The different geometries of these major inherited strike-slip faults and their related fractures (V-shaped conjugate fractures, horsetail splay faults, and Riedel fractures) impose their imprint on the second- and third-order stress regimes. The neotectonic and smoothed present-day stress maps (mean SHmax orientation) reveal that plate boundary forces acting on the colliding Africa and Eurasia plates control the long-wavelength pattern of the stress field in the Maghreb. The current tectonic deformation and the upper crustal stress field in the study area are governed by the interplay of oblique plate convergence (Africa-Eurasia), lithosphere-mantle interaction, and preexisting tectonic weakness zones.
We performed leaching tests at elevated temperatures and pressures with an Alum black shale from Bornholm, Denmark, and a Posidonia black shale from Lower Saxony, Germany. The Alum Shale is a carbonate-free black shale with pyrite and barite, containing 74.4 µg/g U. The Posidonia Shale is a calcareous shale with pyrite but without detectable amounts of barite, containing 3.6 µg/g U. Pyrite oxidized during the tests, forming sulfuric acid which lowered the pH of the extraction fluid from the Alum Shale to values between 2 and 3, favoring the release of U from the Alum Shale into the fluid during the short-term experiments and at the beginning of the long-term experiments. The activity concentration of U-238 in the fluid is as high as 23.9 mBq/ml for those experiments. The release of U and Th into the fluid is almost independent of pressure. The amount of uranium in the European shales is similar to that of the Marcellus Shale in the United States, but the activity concentrations of Ra-226, the daughter product of U-238, in the experimentally derived leachates from the European shales are quite low compared to those found in industrially derived flowback fluids from the Marcellus Shale. This difference is mainly attributed to the absence of Cl in the reaction fluid used in our experiments, and to a lower fluid-to-solid ratio in the industrial plays than in the experiments, owing to subsequent fracking and the minute cracks from which Ra can easily be released.
Earth’s surface temperature will continue to rise for another 20 to 30 years even under the strongest carbon emission reductions currently considered. The associated changes in rainfall patterns can result in an increased flood risk worldwide. We compute the required increase in flood protection to keep high-end fluvial flood risk at present levels. The analysis is carried out worldwide for subnational administrative units. Most of the United States, Central Europe, and Northeast and West Africa, as well as large parts of India and Indonesia, require the strongest adaptation effort. More than half of the United States needs to at least double its protection within the next two decades. Thus, the need for adaptation to increased river flood risk is a global problem affecting industrialized regions as much as developing countries.
In the present paper, we study the problem of existence of honest and adaptive confidence sets for matrix completion. We consider two statistical models: the trace regression model and the Bernoulli model. In the trace regression model, we show that honest confidence sets that adapt to the unknown rank of the matrix exist even when the error variance is unknown. Contrary to this, we prove that in the Bernoulli model, honest and adaptive confidence sets exist only when the error variance is known a priori. In the course of our proofs, we obtain bounds for the minimax rates of certain composite hypothesis testing problems arising in low rank inference.
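For orientation, the two observation models can be written in their standard form from the matrix completion literature (the notation below is the conventional one and an assumption on our part, not necessarily the paper's):

```latex
% Trace regression: n noisy linear measurements of an unknown low-rank matrix A_0
Y_i = \operatorname{tr}\!\left(X_i^{\top} A_0\right) + \sigma \xi_i,
      \qquad i = 1, \dots, n,
% Bernoulli model: each entry of A_0 is observed independently with probability p
Y_{jk} = B_{jk} \left( (A_0)_{jk} + \sigma \xi_{jk} \right),
      \qquad B_{jk} \overset{\text{iid}}{\sim} \mathrm{Bernoulli}(p).
```

In this notation, the abstract's finding reads: honest confidence sets adapting to the unknown rank of A_0 exist in the trace regression model even for unknown σ, whereas in the Bernoulli model they exist only when σ is known a priori.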
In recent years, the ever-growing amount of documents on the Web as well as in closed systems for private or business contexts has led to a considerable increase in valuable textual information about topics, events, and entities. It is a truism that the majority of information (i.e., business-relevant data) is only available in unstructured textual form. The text mining research field comprises various practice areas that share the common goal of harvesting high-quality information from textual data. This information helps address users' information needs.
In this thesis, we utilize the knowledge represented in user-generated content (UGC) originating from various social media services to improve text mining results. These social media platforms provide a plethora of information with varying focuses. In many cases, an essential feature of such platforms is to share relevant content with a peer group. Thus, the data exchanged in these communities tend to be focused on the interests of the user base. The popularity of social media services is growing continuously and the inherent knowledge is available to be utilized. We show that this knowledge can be used for three different tasks.
First, we demonstrate that, when searching for persons with ambiguous names, the information from Wikipedia can be bootstrapped to group web search results according to the individuals occurring in the documents. We introduce two models and different means of handling persons missing from the UGC source, and we show that the proposed approaches outperform traditional algorithms for search result clustering. Second, we discuss how the categorization of texts according to continuously changing community-generated folksonomies helps users identify new information related to their interests. We specifically target temporal changes in the UGC and show how they influence the quality of different tag recommendation approaches. Finally, we introduce an algorithm for the entity linking problem, a necessity for harvesting entity knowledge from large text collections. The goal is to link the mentions within the documents to their real-world entities. A major focus lies on the efficient derivation of coherent links.
For each of the contributions, we provide a wide range of experiments on various text corpora as well as different sources of UGC.
The evaluation shows the added value that these sources provide and confirms the appropriateness of leveraging user-generated content to serve different information needs.
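As a flavor of the entity linking task mentioned above, the sketch below implements a bare-bones linker that generates candidates by surface-name matching and ranks them by bag-of-words cosine similarity between the mention's context and each candidate's description; the thesis's algorithm additionally optimizes coherence among the links of a document, which this sketch omits, and all names here are illustrative assumptions.

```python
import re
from collections import Counter
from math import sqrt

def bow(text):
    """Lower-cased bag-of-words representation."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def link_mention(mention, context, kb):
    """kb: dict entity_id -> (surface_name, description). Returns best id or None."""
    candidates = [(eid, desc) for eid, (name, desc) in kb.items()
                  if mention.lower() in name.lower()]
    ctx = bow(context)
    best = max(candidates, key=lambda c: cosine(ctx, bow(c[1])), default=(None, ""))
    return best[0]
```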