Hepcidin-25 was identified as the main iron regulator in the human body, acting by binding to the sole iron exporter ferroportin. Studies showed that the N-terminus of hepcidin is responsible for this interaction, the same N-terminus that encompasses a small copper(II) binding site known as the ATCUN (amino-terminal Cu(II)- and Ni(II)-binding) motif. Interestingly, this copper-binding property is largely ignored in most papers dealing with hepcidin-25. In this context, detailed investigations of the complex formed between hepcidin-25 and copper could reveal insight into its biological role. The present work focuses on metal-bound hepcidin-25, which can be considered the biologically active form. The first part is devoted to the reversed-phase chromatographic separation of copper-bound and copper-free hepcidin-25, achieved by applying basic mobile phases containing 0.1% ammonia. Further, mass spectrometry (tandem mass spectrometry (MS/MS), high-resolution mass spectrometry (HRMS)) and nuclear magnetic resonance (NMR) spectroscopy were employed to characterize the copper-peptide complex. Lastly, a three-dimensional (3D) model of hepcidin-25 with bound copper(II) is presented. The identification of metal complexes and potential isoforms and isomers, the latter of which usually remain undetected by mass spectrometry, led to the conclusion that complementary analytical methods are needed to characterize a peptide calibrant or reference material comprehensively. Quantitative nuclear magnetic resonance (qNMR), inductively coupled plasma mass spectrometry (ICP-MS), ion-mobility spectrometry (IMS) and chiral amino acid analysis (AAA) should be considered among others.
Gamma-ray bursts (GRBs) are brief flashes of gamma-rays and are considered to be the most energetic explosive phenomena in the Universe(1). The emission from GRBs comprises a short (typically tens of seconds) and bright prompt emission, followed by a much longer afterglow phase. During the afterglow phase, the shocked outflow (produced by the interaction between the ejected matter and the circumburst medium) slows down, and a gradual decrease in brightness is observed(2). GRBs typically emit most of their energy via gamma-rays with energies in the kiloelectronvolt-to-megaelectronvolt range, but a few photons with energies of tens of gigaelectronvolts have been detected by space-based instruments(3). However, the origins of such high-energy (above one gigaelectronvolt) photons and the presence of very-high-energy (more than 100 gigaelectronvolts) emission have remained elusive(4). Here we report observations of very-high-energy emission in the bright GRB 180720B deep in the GRB afterglow, ten hours after the end of the prompt emission phase, when the X-ray flux had already decayed by four orders of magnitude. Two possible explanations exist for the observed radiation: inverse Compton emission and synchrotron emission of ultrarelativistic electrons. Our observations show that the energy fluxes in the X-ray and gamma-ray range and their photon indices remain comparable to each other throughout the afterglow. This discovery places distinct constraints on the GRB environment for both emission mechanisms, with the inverse Compton explanation alleviating the particle energy requirements for the emission observed at late times. The late timing of this detection has consequences for future observations of GRBs at the highest energies.
The flat-spectrum radio quasar 3C 279 is known to exhibit pronounced variability in the high-energy (100 MeV < E < 100 GeV) gamma-ray band, which is continuously monitored with Fermi-LAT. During two periods of high activity in April 2014 and June 2015, target-of-opportunity observations were undertaken with the High Energy Stereoscopic System (H.E.S.S.) in the very-high-energy (VHE, E > 100 GeV) gamma-ray domain. While the observation in 2014 provides an upper limit, the observation in 2015 results in a signal with 8.7σ significance above an energy threshold of 66 GeV. No VHE variability was detected during the 2015 observations. The VHE photon spectrum is soft and described by a power-law index of 4.2 ± 0.3. The H.E.S.S. data along with a detailed and contemporaneous multiwavelength data set provide constraints on the physical parameters of the emission region. The minimum distance of the emission region from the central black hole was estimated using two plausible geometries of the broad-line region and three potential intrinsic spectra. The emission region is confidently placed at r ≳ 1.7 × 10^17 cm from the black hole, that is, beyond the assumed distance of the broad-line region. Time-dependent leptonic and lepto-hadronic one-zone models were used to describe the evolution of the 2015 flare. Neither model can fully reproduce the observations, despite testing various parameter sets. Furthermore, the H.E.S.S. data were used to derive constraints on Lorentz invariance violation given the large redshift of 3C 279.
Young core-collapse supernovae with dense-wind progenitors may be able to accelerate cosmic-ray hadrons beyond the knee of the cosmic-ray spectrum, and this may result in measurable gamma-ray emission. We searched for gamma-ray emission from ten supernovae observed with the High Energy Stereoscopic System (H.E.S.S.) within a year of the supernova event. Nine supernovae were observed serendipitously in the H.E.S.S. data collected between December 2003 and December 2014, with exposure times ranging from 1.4 to 53 h. In addition we observed SN 2016adj as a target of opportunity in February 2016 for 13 h. No significant gamma-ray emission has been detected for any of the objects, and upper limits on the >1 TeV gamma-ray flux of the order of ~10^-13 cm^-2 s^-1 are established, corresponding to upper limits on the luminosities in the range ~2 × 10^39 to ~1 × 10^42 erg s^-1. These values are used to place model-dependent constraints on the mass-loss rates of the progenitor stars, implying upper limits between ~2 × 10^-5 and ~2 × 10^-3 M☉ yr^-1 under reasonable assumptions on the particle acceleration parameters.
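The quoted luminosity limits follow from the measured flux limits via the inverse-square law, L = 4πd²F. A minimal sketch of that conversion (the flux value and the 10 Mpc distance below are illustrative assumptions, not values taken from the H.E.S.S. supernova sample):

```python
import math

MPC_IN_CM = 3.086e24  # centimetres per megaparsec

def luminosity(energy_flux_cgs, distance_mpc):
    """Isotropic luminosity (erg/s) from an energy flux (erg cm^-2 s^-1)
    measured at a given distance, via L = 4*pi*d^2 * F."""
    d_cm = distance_mpc * MPC_IN_CM
    return 4.0 * math.pi * d_cm ** 2 * energy_flux_cgs

# Illustrative numbers: a 1e-12 erg cm^-2 s^-1 flux limit at 10 Mpc
L = luminosity(1e-12, 10.0)  # ~1.2e40 erg/s, inside the quoted range
```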
The blazar Mrk 501 (z = 0.034) was observed at very-high-energy (VHE, E ≳ 100 GeV) gamma-ray wavelengths during a bright flare on the night of 2014 June 23-24 (MJD 56832) with the H.E.S.S. phase-II array of Cherenkov telescopes. Data taken that night by H.E.S.S. at large zenith angle reveal an exceptional number of gamma-ray photons at multi-TeV energies, with rapid flux variability and an energy coverage extending up to 20 TeV. This data set is used to constrain Lorentz invariance violation (LIV) using two independent channels: a temporal approach considers the possibility of an energy dependence in the arrival time of gamma-rays, whereas a spectral approach considers the possibility of modifications to the interaction of VHE gamma-rays with extragalactic background light (EBL) photons. The non-detection of energy-dependent time delays and the non-observation of deviations between the measured spectrum and that of a supposed power-law intrinsic spectrum with standard EBL attenuation are used independently to derive strong constraints on the energy scale of LIV (E_QG) in the subluminal scenario for linear and quadratic perturbations in the dispersion relation of photons. For the case of linear perturbations, the 95% confidence level limits obtained are E_QG,1 > 3.6 × 10^17 GeV using the temporal approach and E_QG,1 > 2.6 × 10^19 GeV using the spectral approach. For the case of quadratic perturbations, the limits obtained are E_QG,2 > 8.5 × 10^10 GeV using the temporal approach and E_QG,2 > 7.8 × 10^11 GeV using the spectral approach.
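At linear order, and neglecting cosmological corrections, the subluminal LIV scenario predicts an energy-dependent arrival delay of roughly Δt ≈ (E/E_QG)·(d/c). The sketch below plugs in illustrative numbers: the 20 TeV photon energy and the linear limit come from the abstract, while the ~150 Mpc distance to Mrk 501 is an assumed round figure, so the resulting delay is only an order-of-magnitude illustration:

```python
C_CM_S = 2.998e10      # speed of light, cm/s
MPC_IN_CM = 3.086e24   # centimetres per megaparsec

def linear_liv_delay(photon_energy_gev, e_qg_gev, distance_mpc):
    """Approximate linear-order LIV time delay in seconds:
    dt ~ (E / E_QG) * d / c, ignoring redshift-dependent terms."""
    d_cm = distance_mpc * MPC_IN_CM
    return (photon_energy_gev / e_qg_gev) * d_cm / C_CM_S

# 20 TeV photon, E_QG at the reported linear limit, assumed ~150 Mpc
dt = linear_liv_delay(2.0e4, 3.6e17, 150.0)  # a few hundred seconds
```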
PKS 1830-211 is a known macrolensed quasar located at a redshift of z = 2.5. Its high-energy gamma-ray emission has been detected with the Fermi Large Area Telescope (LAT) instrument, and evidence for lensing was obtained by several authors from its high-energy data. Observations of PKS 1830-211 were taken with the High Energy Stereoscopic System (H.E.S.S.) array of Imaging Atmospheric Cherenkov Telescopes in 2014 August, following a flare alert by the Fermi-LAT Collaboration. The H.E.S.S. observations were aimed at detecting a gamma-ray flare delayed by 20-27 d from the alert flare, as expected from observations at other wavelengths. More than 12 h of good-quality data were taken with an analysis threshold of ~67 GeV. The significance of a potential signal is computed as a function of the date, along with the average significance over the whole period. Data are compared to simultaneous observations by Fermi-LAT. No photon excess or significant signal is detected. An upper limit on the PKS 1830-211 flux above 67 GeV is computed and compared to the extrapolation of the Fermi-LAT flare spectrum.
Context. We present a detailed view of the pulsar wind nebula (PWN) HESS J1825-137. Aims. We aim to constrain the mechanisms dominating the particle transport within the nebula, accounting for its anomalously large size and spectral characteristics. Methods. The nebula was studied using a deep exposure from over 12 years of H.E.S.S. I operation, together with data from H.E.S.S. II that improve the low-energy sensitivity. Enhanced energy-dependent morphological and spatially resolved spectral analyses probe the very-high-energy (VHE, E > 0.1 TeV) gamma-ray properties of the nebula. Results. The nebula emission is revealed to extend out to 1.5 degrees from the pulsar, ~1.5 times farther than previously seen, making HESS J1825-137, with an intrinsic diameter of ~100 pc, potentially the largest gamma-ray PWN currently known. Characterising the strongly energy-dependent morphology of the nebula enables us to constrain the particle transport mechanisms. A dependence of the nebula extent on energy of R ∝ E^α with α = -0.29 ± 0.04(stat) ± 0.05(sys) disfavours a pure diffusion scenario for particle transport within the nebula. The total gamma-ray flux of the nebula above 1 TeV is found to be (1.12 ± 0.03(stat) ± 0.25(sys)) × 10^-11 cm^-2 s^-1, corresponding to ~64% of the flux of the Crab nebula. Conclusions. HESS J1825-137 is a PWN with clearly energy-dependent morphology at VHE gamma-ray energies. This source is used as a laboratory to investigate particle transport within intermediate-age PWNe. Based on deep observations of this highly spatially extended PWN, we produce a spectral map of the region that provides insights into the spectral variation within the nebula.
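An energy dependence of the form R ∝ E^α is typically estimated as a straight-line fit in log-log space. A minimal sketch on synthetic data (the data and fitting procedure here are illustrative, not the H.E.S.S. analysis; the index -0.29 is the value quoted in the abstract):

```python
import numpy as np

def fit_powerlaw_index(energies, extents):
    """Estimate alpha in R ~ E**alpha by linear regression in
    log-log space; returns (alpha, log-space intercept)."""
    alpha, intercept = np.polyfit(np.log(energies), np.log(extents), 1)
    return alpha, intercept

# Synthetic illustration: extents shrinking with energy as E**-0.29
rng = np.random.default_rng(0)
e = np.logspace(-1, 1.5, 12)                        # 0.1 to ~30 TeV
r = e ** -0.29 * np.exp(rng.normal(0, 0.02, e.size))  # 2% log-scatter
alpha, _ = fit_powerlaw_index(e, r)                 # recovers ~-0.29
```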
Context. Pulsar wind nebulae (PWNe) represent the most prominent population of Galactic very-high-energy gamma-ray sources and are thought to be an efficient source of leptonic cosmic rays. Vela X is a nearby middle-aged PWN, which shows bright X-ray and TeV gamma-ray emission towards an elongated structure called the cocoon. Aims. Since TeV emission is likely inverse-Compton emission of electrons, predominantly from interactions with the cosmic microwave background, while X-ray emission is synchrotron radiation of the same electrons, we aim to derive the properties of the relativistic particles and of magnetic fields with minimal modelling. Methods. We used data from the Suzaku XIS to derive the spectra from three compact regions in Vela X covering distances from 0.3 to 4 pc from the pulsar along the cocoon. We obtained gamma-ray spectra of the same regions from H.E.S.S. observations and fitted a radiative model to the multi-wavelength spectra. Results. The TeV electron spectra and magnetic field strengths are consistent within the uncertainties for the three regions, with energy densities of the order of 10^-12 erg cm^-3. The data indicate the presence of a cutoff in the electron spectrum at energies of ~100 TeV and a magnetic field strength of ~6 μG. Constraints on the presence of turbulent magnetic fields are weak. Conclusions. The pressure of TeV electrons and magnetic fields in the cocoon is dynamically negligible, requiring the presence of another dominant pressure component to balance the pulsar wind at the termination shock. Sub-TeV electrons cannot completely account for the missing pressure, which may be provided either by relativistic ions or from mixing of the ejecta with the pulsar wind.
The electron spectra are consistent with expectations from transport scenarios dominated either by advection via the reverse shock or by diffusion, but for the latter the role of radiative losses near the termination shock needs to be further investigated in light of the measured cutoff energies. Constraints on turbulent magnetic fields and the shape of the electron cutoff can be improved by spectral measurements in the energy range ≳10 keV.
A central insight from psychological studies on human eye movements is that eye movement patterns are highly individually characteristic. They can, therefore, be used as a biometric feature, that is, subjects can be identified based on their eye movements. This thesis introduces new machine learning methods to identify subjects based on their eye movements while viewing arbitrary content. The thesis focuses on probabilistic modeling of the problem, which has yielded the best results in the most recent literature. The thesis studies the problem in three phases by proposing a purely probabilistic, probabilistic deep learning, and probabilistic deep metric learning approach. In the first phase, the thesis studies models that rely on psychological concepts about eye movements. Recent literature illustrates that individual-specific distributions of gaze patterns can be used to accurately identify individuals. In these studies, models were based on a simple parametric family of distributions. Such simple parametric models can be robustly estimated from sparse data, but have limited flexibility to capture the differences between individuals. Therefore, this thesis proposes a semiparametric model of gaze patterns that is flexible yet robust for individual identification. These patterns can be understood as domain knowledge derived from psychological literature. Fixations and saccades are examples of simple gaze patterns. The proposed semiparametric densities are drawn under a Gaussian process prior centered at a simple parametric distribution. Thus, the model will stay close to the parametric class of densities if little data is available, but it can also deviate from this class if enough data is available, increasing the flexibility of the model. The proposed method is evaluated on a large-scale dataset, showing significant improvements over the state-of-the-art. 
Later, the thesis replaces the model based on gaze patterns derived from psychological concepts with a deep neural network that can learn more informative and complex patterns from raw eye movement data. As previous work has shown that the distribution of these patterns across a sequence is informative, a novel statistical aggregation layer called the quantile layer is introduced. It explicitly fits the distribution of deep patterns learned directly from the raw eye movement data. The proposed deep learning approach is end-to-end learnable, such that the deep model learns to extract informative, short local patterns while the quantile layer learns to approximate the distributions of these patterns. Quantile layers are a generic approach that can converge to standard pooling layers or provide a more detailed description of the features being pooled, depending on the problem. The proposed model is evaluated in a large-scale study using the eye movements of subjects viewing arbitrary visual input. The model improves upon the standard pooling layers and other statistical aggregation layers proposed in the literature. It also improves upon the state of the art in eye movement biometrics by a wide margin. Finally, for the model to identify any subject, not just the set of subjects it is trained on, a metric learning approach is developed. Metric learning learns a distance function over instances. The metric learning model maps the instances into a metric space, where sequences of the same individual are close, and sequences of different individuals are further apart. This thesis introduces a deep metric learning approach with distributional embeddings. The approach represents sequences as a set of continuous distributions in a metric space; to achieve this, a new loss function based on Wasserstein distances is introduced. The proposed method is evaluated on multiple domains besides eye movement biometrics.
This approach outperforms the state of the art in deep metric learning in several domains while also outperforming the state of the art in eye movement biometrics.
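The quantile layer described above fits the distribution of learned local patterns across a sequence. Stripped of its trainable parts, the core aggregation idea can be sketched with fixed quantile levels (a simplification of the thesis's learnable layer, using NumPy instead of a deep learning framework):

```python
import numpy as np

def quantile_layer(features, qs=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Aggregate a (timesteps, channels) sequence of local pattern
    activations into a fixed-size embedding of per-channel quantiles,
    summarising the distribution rather than just the mean or max."""
    features = np.asarray(features, dtype=float)
    # shape (len(qs), channels), flattened to one embedding vector
    return np.quantile(features, qs, axis=0).ravel()

# Sequences of different lengths map to embeddings of identical size
rng = np.random.default_rng(1)
emb_a = quantile_layer(rng.normal(size=(500, 8)))
emb_b = quantile_layer(rng.normal(size=(120, 8)))
assert emb_a.shape == emb_b.shape == (40,)
```

With qs=(0.5,) the layer reduces to median pooling, illustrating how a quantile layer can converge to a standard pooling operation.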
Background There is scant information on the breastmilk vitamin A (BMVA) concentration of lactating women in developing countries, partly due to a lack of methods applicable in the field. Objective To assess BMVA concentrations of samples collected from lactating women of children aged 6-23 months in Mecha district, Ethiopia. Subjects/methods Data on socio-demographic and anthropometric characteristics were collected from randomly selected lactating women (n = 104). Breast milk samples were collected, vitamin A concentrations were analyzed using HPLC and iCheck FLUORO, and the two measurements were compared. Results The prevalence of underweight (BMI < 18.5 kg/m^2) among the lactating women was 17%. Seventy-six percent of the BMVA values were < 1.05 μmol/l and 81% were < 8 μg/g fat. The mean BMVA concentration accounted for 41% of the estimated average value for mothers in developing countries. The BMVA values from HPLC and iCheck were correlated (r = 0.59, p < 0.001), but the correlation was not strong. Conclusions The results indicate the low vitamin A status of the lactating women and their children. They further indicate that intake assessments should not use average BMVA composition. The possibility of using iCheck for monitoring interventions designed to improve the vitamin A status of lactating women with low BMVA requires further investigation.
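The agreement between the two instruments is summarised by a Pearson correlation coefficient (r = 0.59 in the study). A minimal sketch of how such a coefficient is computed; the paired readings below are made up for illustration, not data from the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical paired BMVA readings (umol/l) from the two methods
hplc   = [0.45, 0.80, 1.10, 0.60, 0.95, 1.30, 0.50, 0.70]
icheck = [0.50, 0.70, 1.00, 0.75, 0.85, 1.20, 0.65, 0.60]
r = pearson_r(hplc, icheck)  # moderately strong positive correlation
```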
As digital media infiltrate an increasingly greater proportion of our lives, concern about the possibility of various forms of technology addiction has emerged. Researchers have developed a variety of self-reported scales for technology addiction in areas such as the Internet, smartphones, video games, social network sites (SNS) and television. However, no uniform criteria or definition exists for technology addiction, and the dimensions used to measure specific outcomes lack a conceptual standard. As a result, linkages between the dimensions of the different technology areas have not been examined in a broader way by the research community, which would be needed to develop a uniform technology addiction scale.
In this regard, firstly, a theoretical model was developed in order to extract common technology dimensions. Secondly, a systematic literature review in the areas of Internet, smartphone, video game and SNS addiction was conducted in order to extract the dimensions used. To identify relevant studies, nine databases (Google Scholar, ScienceDirect, PubMed, Emerald Insight, Wiley, SpringerLink, ACM, IEEE and JSTOR) were searched, producing 4698 results, of which 50 studies met the inclusion criteria. Thirdly, the developed theoretical model was used to determine the dimensions in each of the identified scales.
Based on an analysis of the dimensional distributions, the findings suggest that there are common dimensions across technology areas, such as "compulsive use" and "negative outcomes", but also differences in dimensions across areas, such as "social comfort" and "mood regulation", which are used more in the area of SNS. Moreover, new dimensions for technology addiction, such as "cognitive absorption" and "utility and function loss", were extracted; these should be considered, as they have not yet been researched in a broader way. In addition, no gold standard for the conceptual criteria or definition of technology addiction has been developed yet.
The extragalactic background light (EBL), a diffuse photon field in the optical and infrared range, is a record of radiative processes over the universe's history. Spectral measurements of blazars at very high energies (>100 GeV) enable the reconstruction of the spectral energy distribution (SED) of the EBL, as the blazar spectra are modified by redshift- and energy-dependent interactions of the gamma-ray photons with the EBL. The spectra of 14 VERITAS-detected blazars are included in a new measurement of the EBL SED that is independent of EBL SED models. The resulting SED covers an EBL wavelength range of 0.56 to 56 μm, and is in good agreement with lower limits obtained by assuming that the EBL is entirely due to radiation from cataloged galaxies.
Öffentliches Rechnungswesen (Public Accounting)
(2019)
A new isoflavone, 4′-prenyloxyvigvexin A (1) and a new pterocarpan, (6aR,11aR)-3,8-dimethoxybitucarpin B (2) were isolated from the leaves of Lonchocarpus bussei and the stem bark of Lonchocarpus eriocalyx, respectively. The extract of L. bussei also gave four known isoflavones, maximaisoflavone H, 7,2′-dimethoxy-3′,4′-methylenedioxyisoflavone, 6,7,3′-trimethoxy-4′,5′-methylenedioxyisoflavone, durmillone; a chalcone, 4-hydroxylonchocarpin; a geranylated phenylpropanol, colenemol; and two known pterocarpans, (6aR,11aR)-maackiain and (6aR,11aR)-edunol. (6aR,11aR)-Edunol was also isolated from the stem bark of L. eriocalyx. The structures of the isolated compounds were elucidated by spectroscopy. The cytotoxicity of the compounds was tested by resazurin assay using drug-sensitive and multidrug-resistant cancer cell lines. Significant antiproliferative effects with IC50 values below 10 μM were observed for the isoflavones 6,7,3′-trimethoxy-4′,5′-methylenedioxyisoflavone and durmillone against leukemia CCRF-CEM cells; for the chalcone, 4-hydroxylonchocarpin and durmillone against its resistant counterpart CEM/ADR5000 cells; as well as for durmillone against the resistant breast adenocarcinoma MDA-MB231/BCRP cells and resistant glioblastoma U87MG.ΔEGFR cells.
Background: While incidences of cancer are continuously increasing, drug resistance of malignant cells is observed towards almost all pharmaceuticals. Several isoflavonoids and flavonoids are known for their cytotoxicity towards various cancer cells. Methods: The cytotoxicity of the compounds was determined based on the resazurin reduction assay. Caspase activation was evaluated using the Caspase-Glo assay. Flow cytometry was used to analyze the cell cycle (propidium iodide (PI) staining), apoptosis (annexin V/PI staining), mitochondrial membrane potential (MMP) (JC-1) and reactive oxygen species (ROS) (H2DCFH-DA). CCRF-CEM leukemia cells were used as model cells for mechanistic studies. Results: Compounds 1, 2 and 4 displayed IC50 values below 20 μM towards CCRF-CEM and CEM/ADR5000 leukemia cells, and were further tested towards a panel of 7 carcinoma cell lines. The IC50 values of the compounds against carcinoma cells varied from 16.90 μM (in resistant U87MG.ΔEGFR glioblastoma cells) to 48.67 μM (against HepG2 hepatocarcinoma cells) for 1, from 7.85 μM (in U87MG.ΔEGFR cells) to 14.44 μM (in resistant MDA-MB231/BCRP breast adenocarcinoma cells) for 2, from 4.96 μM (towards U87MG.ΔEGFR cells) to 7.76 μM (against MDA-MB231/BCRP cells) for 4, and from 0.07 μM (against MDA-MB231 cells) to 2.15 μM (against HepG2 cells) for doxorubicin. Compounds 2 and 4 induced apoptosis in CCRF-CEM cells mediated by MMP alteration and increased ROS production. Conclusion: The present report indicates that isoflavones and biflavonoids from Ormocarpum kirkii are cytotoxic compounds with the potential of being exploited in cancer chemotherapy. Compounds 2 and 4 deserve further studies aimed at developing new anticancer drugs to fight sensitive and resistant cancer cell lines.
Technical report
(2019)
The design and implementation of service-oriented architectures impose a huge number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of their research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Sea surface temperature (SST) patterns can, as surface climate forcing, affect weather and climate at large distances. One example is the El Niño-Southern Oscillation (ENSO), which causes climate anomalies around the globe via teleconnections. Although several studies have identified and characterized these teleconnections, our understanding of climate processes remains incomplete, since interactions and feedbacks are typically exhibited at unique or multiple temporal and spatial scales. This study characterizes the interactions between the cells of a global SST data set at different temporal and spatial scales using climate networks. These networks are constructed using wavelet multi-scale correlation, which investigates the correlation between the SST time series at a range of scales, allowing deeper insights into the correlation patterns compared to traditional methods like empirical orthogonal functions or classical correlation analysis. This allows us to identify and visualise regions of, at a certain timescale, similarly evolving SSTs and distinguish them from those with long-range teleconnections to other ocean regions. Our findings re-confirm accepted knowledge about known highly linked SST patterns like ENSO and the Pacific Decadal Oscillation, but also suggest new insights into the characteristics and origins of long-range teleconnections like the connection between ENSO and the Indian Ocean Dipole.
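A climate network of the kind described above links grid cells whose time series co-evolve. The sketch below uses plain Pearson correlation with a fixed threshold on toy data; the study's wavelet multi-scale correlation (which yields one network per timescale) is deliberately simplified away:

```python
import numpy as np

def correlation_network(series, threshold=0.5):
    """Build an undirected climate-network adjacency matrix: grid
    cells (rows of `series`) are linked when the absolute value of
    their time-series correlation exceeds `threshold`."""
    c = np.corrcoef(series)        # (cells, cells) correlation matrix
    adj = np.abs(c) >= threshold
    np.fill_diagonal(adj, False)   # no self-links
    return adj

# Toy example: 4 cells, two pairs evolving together
t = np.linspace(0, 10, 200)
rng = np.random.default_rng(3)
series = np.vstack([
    np.sin(t),
    np.sin(t) + 0.1 * rng.normal(size=t.size),
    np.cos(3 * t),
    np.cos(3 * t),
])
adj = correlation_network(series, threshold=0.8)
```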
A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms, rather than being allowed as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, as defined by Lifschitz, Tang and Turner. We extend the concept of reduct for this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson's strong negation.
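The reduct being generalised here extends the classical Gelfond-Lifschitz construction for programs with default negation. As a point of reference, that classical reduct can be sketched as follows (the rule encoding is an ad hoc illustration, not the paper's formalism):

```python
def gl_reduct(program, interpretation):
    """Gelfond-Lifschitz reduct of a normal logic program.
    `program` is a list of rules (head, positive_body, negative_body)
    with atoms as strings. A rule whose default-negated body meets the
    interpretation is dropped; surviving rules lose their negative body."""
    reduct = []
    for head, pos, neg in program:
        if not (set(neg) & interpretation):
            reduct.append((head, pos))
    return reduct

# p :- not q.   q :- not p.
prog = [("p", [], ["q"]), ("q", [], ["p"])]
# Under {p}, the second rule is dropped and the first keeps only `p.`
assert gl_reduct(prog, {"p"}) == [("p", [])]
```

{p} is then a stable model because it is the minimal model of its own reduct; the paper's contribution is extending this reduct to nested expressions combined freely with explicit negation.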
In this work we tackle the problem of checking strong equivalence of logic programs that may contain local auxiliary atoms, which are to be removed from their stable models and forbidden in any external context. We call this property projective strong equivalence (PSE). It has recently been proved that not every logic program containing auxiliary atoms can be reformulated, under PSE, as another logic program or formula without them; this is known as strongly persistent forgetting. In this paper, we introduce a conservative extension of Equilibrium Logic and its monotonic basis, the logic of Here-and-There, in which we deal with a new connective '|' we call fork. We provide a semantic characterisation of PSE for forks and use it to show that, in this extension, it is always possible to forget auxiliary atoms under strong persistence. We further define when the obtained fork is representable as a regular formula.
The Central Asian Pamir Mountains (Pamirs) are a high-altitude region sensitive to climatic change, with only few paleoclimatic records available. To examine the glacial-interglacial hydrological changes in the region, we analyzed the geochemical parameters of a 31-kyr record from Lake Karakul and performed a set of experiments with climate models to interpret the results. δD values of terrestrial biomarkers showed insolation-driven trends reflecting major shifts of water vapor sources. For aquatic biomarkers, positive δD shifts driven by changes in precipitation seasonality were observed at ca. 31-30, 28-26, and 17-14 kyr BP. Multiproxy paleoecological data and modelling results suggest that increased water availability, induced by decreased summer evaporation, triggered higher lake levels during those episodes, possibly synchronous to northern-hemispheric rapid climate events. We conclude that seasonal changes in the precipitation-evaporation balance significantly influenced the hydrological state of a large waterbody such as Lake Karakul, while annual precipitation amount and inflows remained fairly constant.
Self-assembly and crosslinking approaches of double hydrophilic linear-brush block copolymers
(2019)
Self-assembly of block copolymers is a significant area of polymer science. The self-assembly of completely water-soluble block copolymers is of particular interest, albeit a challenging task. In the present work the self-assembly of a block copolymer with linear-brush architecture, namely poly(N-vinylpyrrolidone)-b-poly(oligoethylene glycol methacrylate) (PVP-b-POEGMA), in water is studied. Moreover, the assembled structures are crosslinked supramolecularly via α-CD host/guest complexation. The crosslinking shifts the equilibrium toward aggregate formation without switching off the dynamic equilibrium of the double hydrophilic block copolymer (DHBC). As a consequence, the self-assembly efficiency is improved without extinguishing the unique DHBC self-assembly behavior. In addition, decrosslinking could be induced without a change in concentration by adding a competing complexation agent for α-CD. The self-assembly behavior was followed by DLS measurements, while the presence of the particles could be observed via cryo-TEM before and after crosslinking.
Sinkholes and depressions are typical landforms of karst regions. They pose a considerable natural hazard to infrastructure, agriculture, economy and human life in affected areas worldwide. The physico-chemical processes of sinkhole and depression formation are manifold, ranging from dissolution and material erosion in the subsurface to mechanical subsidence/failure of the overburden. This thesis addresses the mechanisms leading to the development of sinkholes and depressions by using complementary methods: remote sensing, distinct element modelling and near-surface geophysics.
In the first part, detailed information about the (hydro)-geological background, ground structures, morphologies and spatio-temporal development of sinkholes and depressions at a very active karst area at the Dead Sea are derived from satellite image analysis, photogrammetry and geologic field surveys. There, clusters of an increasing number of sinkholes have been developing since the 1980s within large-scale depressions and are distributed over different kinds of surface materials: clayey mud, sandy-gravel alluvium and lacustrine evaporites (salt). The morphology of sinkholes differs depending on the material in which they form: Sinkholes in sandy-gravel alluvium and salt are generally deeper and narrower than sinkholes in the interbedded evaporite and mud deposits. From repeated aerial surveys, collapse precursory features like small-scale subsidence, individual holes and cracks are identified in all materials. The analysis sheds light on the ongoing hazardous subsidence process, which is driven by the base-level fall of the Dead Sea and by the dynamic formation of subsurface water channels.
In the second part of this thesis, a novel 2D distinct element geomechanical modelling approach, implemented with the software PFC2D-V5, for simulating individual and multiple cavity growth as well as sinkhole and large-scale depression development is presented. The approach involves a stepwise material removal technique in void spaces of arbitrarily shaped geometries and is benchmarked by analytical and boundary element method solutions for circular cavities. Simulated compression and tension tests are used to calibrate model parameters with bulk rock properties for the materials of the field site. The simulations show that cavity and sinkhole evolution is controlled by the material strength of both overburden and cavity host material, the depth and relative speed of the cavity growth and the developed stress pattern in the subsurface. Major findings are: (1) A progressively deepening differential subrosion with variable growth speed yields a more fragmented stress pattern with stress interaction between the cavities. It favours multiple sinkhole collapses and nesting within large-scale depressions. (2) Low-strength materials do not support large cavities in the material removal zone, and subsidence is mainly characterised by gradual sagging into the material removal zone with synclinal bending. (3) High-strength materials support large cavity formation, leading to sinkhole formation by sudden collapse of the overburden. (4) Large-scale depression formation happens either by coalescence of collapsing holes, block-wise brittle failure, or gradual sagging and lateral widening.
The distinct element based approach is compared to results from remote sensing and geophysics at the field site. The numerical simulation outcomes are generally in good agreement with derived morphometrics, documented surface and subsurface structures as well as seismic velocities. Complementary findings on the subrosion process are provided from electric and seismic measurements in the area.
Based on the novel combination of methods in this thesis, a generic model of karst landform evolution with focus on sinkhole and depression formation is developed. A deepening subrosion system related to preferential flow paths evolves and creates void spaces and subsurface conduits. This subsequently leads to hazardous subsidence, and the formation of sinkholes within large-scale depressions. Finally, a monitoring system for shallow natural hazard phenomena consisting of geodetic and geophysical observations is proposed for similarly affected areas.
The 2-D distinct element method (DEM) code (PFC2D_V5) is used here to simulate the evolution of subsidence-related karst landforms, such as single and clustered sinkholes, and associated larger-scale depressions. Subsurface material in the DEM model is removed progressively to produce an array of cavities; this simulates a network of subsurface groundwater conduits growing by chemical/mechanical erosion. The growth of the cavity array is coupled mechanically to the gravitationally loaded surroundings, such that cavities can grow also in part by material failure at their margins, which in the limit can produce individual collapse sinkholes. Two end-member growth scenarios of the cavity array and their impact on surface subsidence were examined in the models: (1) cavity growth at the same depth level and growth rate; (2) cavity growth at progressively deepening levels with varying growth rates. These growth scenarios are characterised by differing stress patterns across the cavity array and its overburden, which are in turn an important factor for the formation of sinkholes and uvala-like depressions. For growth scenario (1), a stable compression arch is established around the entire cavity array, hindering sinkhole collapse into individual cavities and favouring block-wise, relatively even subsidence across the whole cavity array. In contrast, for growth scenario (2), the stress system is more heterogeneous, such that local stress concentrations exist around individual cavities, leading to stress interactions and local wall/overburden fractures. Consequently, sinkhole collapses occur in individual cavities, which results in uneven, differential subsidence within a larger-scale depression. Depending on material properties of the cavity-hosting material and the overburden, the larger-scale depression forms either by sinkhole coalescence or by widespread subsidence linked geometrically to the entire cavity array.
The results from models with growth scenario (2) are in close agreement with surface morphological and subsurface geophysical observations from an evaporite karst area on the eastern shore of the Dead Sea.
The plasma membrane (PM) is at the interface of plant-pathogen interactions and, thus, many bacterial type-III effector (T3E) proteins target membrane-associated processes to interfere with immunity. The Pseudomonas syringae T3E HopZ1a is a host cell PM-localized effector protein that has several immunity-associated host targets but also activates effector-triggered immunity in resistant backgrounds. Although HopZ1a has been shown to interfere with early defense signaling at the PM, no dedicated PM-associated HopZ1a target protein has been identified until now. Here, we show that HopZ1a interacts with the PM-associated remorin protein NbREM4 from Nicotiana benthamiana in several independent assays. NbREM4 relocalizes to membrane nanodomains after treatment with the bacterial elicitor flg22 and transient overexpression of NbREM4 in N. benthamiana induces the expression of a subset of defense-related genes. We can further show that NbREM4 interacts with the immune-related receptor-like cytoplasmic kinase avrPphB-susceptible 1 (PBS1) and is phosphorylated by PBS1 on several residues in vitro. Thus, we conclude that NbREM4 is associated with early defense signaling at the PM. The possible relevance of the HopZ1a-NbREM4 interaction for HopZ1a virulence and avirulence functions is discussed.
This study is concerned with repair practices that a teacher and students employ to restore intersubjectivity when faced with interactional problems in a Content and Language Integrated Learning (CLIL) classroom. Adopting a conversation analytic (CA) approach, it examines the interactional treatment of students’ verbal and embodied trouble displays in a video-recorded, teacher-fronted geography lesson held in English at a German high school. At the same time, it explores to what extent the repair practices employed are fitted to this specific interactional context. The analysis shows that students’ verbal trouble displays often result in extensive repair sequences, whereas students’ embodied trouble displays are usually met with teacher self-repair in the transition space. In this way, the latter are resolved much earlier and more quickly. The study further reveals practices like reformulation and translation to be especially useful for repairing interactional problems in classrooms in which a foreign language is used as the medium of instruction. The findings may be of interest for prospective as well as practicing teachers in that they provide relevant insights into how interactional trouble can be successfully managed in (CLIL) classroom interaction.
A form-function mismatch?
(2019)
Detect me if you can
(2019)
Spam bots have become a threat to online social networks with their malicious behavior, posting misinformation messages and influencing online platforms to fulfill their motives. As spam bots have become more advanced over time, creating algorithms to identify bots remains an open challenge. Learning low-dimensional embeddings for nodes in graph-structured data has proven to be useful in various domains. In this paper, we propose a model based on graph convolutional neural networks (GCNN) for spam bot detection. Our hypothesis is that to better detect spam bots, in addition to defining a feature set, the social graph must also be taken into consideration. GCNNs are able to leverage both the features of a node and the aggregated features of a node’s neighborhood. We compare our approach with two methods that work solely on a feature set and solely on the structure of the graph, respectively. To our knowledge, this work is the first attempt to use graph convolutional neural networks in spam bot detection.
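The neighbourhood aggregation that GCNNs perform can be sketched in a few lines. The following is a minimal illustration of a single graph-convolution layer, not the paper's model; the symmetric normalization, the toy graph and the weight matrix are all illustrative assumptions:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^{-1/2} (A+I) D^{-1/2} H W).
    Each node's new embedding mixes its own features with those of its
    neighbours -- the property exploited for spam bot detection."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# Toy social graph: 3 accounts in a path, 2 features per account.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # per-account features
W = np.ones((2, 2))                                  # toy weight matrix
embeddings = gcn_layer(A, H, W)                      # shape (3, 2)
```

Stacking two such layers lets each account's embedding reflect its two-hop neighbourhood, which is what distinguishes this family of models from purely feature-based classifiers.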
Zinc is an essential trace element, making it crucial to have a reliable biomarker for evaluating an individual’s zinc status. The total serum zinc concentration, which is presently the most commonly used biomarker, is not ideal for this purpose, but a superior alternative is still missing. The free zinc concentration, which describes the fraction of zinc that is only loosely bound and easily exchangeable, has been proposed for this purpose, as it reflects the highly bioavailable part of serum zinc. This report presents a fluorescence-based method for determining the free zinc concentration in human serum samples, using the fluorescent probe Zinpyr-1. The assay has been applied to 154 commercially obtained human serum samples. Measured free zinc concentrations ranged from 0.09 to 0.42 nM with a mean of 0.22 ± 0.05 nM. It did not correlate with age or the total serum concentrations of zinc, manganese, iron or selenium. A negative correlation between the concentration of free zinc and total copper was observed for sera from females. In addition, the free zinc concentration in sera from females (0.21 ± 0.05 nM) was significantly lower than in males (0.23 ± 0.06 nM). The assay uses a sample volume of less than 10 µL, is rapid and cost-effective and allows us to address questions regarding factors influencing the free serum zinc concentration, its connection with the body’s zinc status, and its suitability as a future biomarker for an individual’s zinc status.
Ground-penetrating radar is widely used to provide highly resolved images of subsurface sedimentary structures, with implications for processes active in the vadose zone. Frequently overlooked among these structures are tunnels excavated by fossorial animals (e.g., moles). We present two repeated ground-penetrating radar surveys performed a year apart in 2016 and 2017. Careful three-dimensional data processing reveals, in each data set, a pattern of elongated structures that are interpreted as a subsurface mole tunnel network. Our data demonstrate the ability of three-dimensional ground-penetrating radar imaging to non-invasively delineate the small animal tunnels (~5 cm diameter) at a higher spatial and geolocation resolution than has previously been achieved. In turn, this makes repeated surveys and, therefore, long-term monitoring possible. Our results offer valuable insight into the understanding of the near-surface and showcase a potential new application for a geophysical method as well as a non-invasive method of ecological surveying.
The synthesis of chiral nanoporous carbons based on chiral ionic liquids (CILs) of amino acids as precursors is described. Carbonization of these unique precursors yields chiral carbonaceous materials with high surface area (≈620 m² g⁻¹). The enantioselectivities of the porous carbons are examined by advanced techniques such as selective adsorption of enantiomers using cyclic voltammetry, isothermal titration calorimetry, and mass spectrometry. These techniques demonstrate the chiral nature and high enantioselectivity of the chiral carbon materials. Overall, we believe that the novel approach presented here can contribute significantly to the development of new chiral carbon materials that will find important applications in chiral chemistry, such as in chiral catalysis and separation and in chiral sensors. From a scientific point of view, the approach and results reported here can significantly deepen our understanding of chirality at the nanoscale and of the structure and nature of chiral nanoporous materials and surfaces.
By varying reaction parameters for the syntheses of the hydrogen-bonded metal-imidazolate frameworks (HIF) HIF-1 and HIF-2 (featuring 14 Zn and 14 Co atoms, respectively) to increase their yields and crystallinity, we found that HIF-1 forms two different frameworks, named HIF-1a and HIF-1b. HIF-1b is isostructural to HIF-2. We determined the gas sorption and magnetic properties of HIF-2. In comparison to HIF-1a (Brunauer-Emmett-Teller (BET) surface area of 471 m² g⁻¹), HIF-2 possesses overall very low gas sorption uptake capacities (BET(CO₂) surface area = 85 m² g⁻¹). Variable-temperature magnetic susceptibility measurements of HIF-2 showed antiferromagnetic exchange interactions between the cobalt(II) high-spin centres at lower temperature. Theoretical analysis by density functional theory confirmed this finding. The UV/Vis reflection spectra of HIF-1 (a mixture of HIF-1a and b), HIF-2 and HIF-3 (with 14 Cd atoms) were measured and showed a characteristic absorption band centered at 340 nm, indicative of differences in the imidazolate framework.
Luxemburg oder Lenin?
(2019)
Paul Frölichs Theorie zur Vergleichbarkeit von Revolutionen-Rekonstruktion eines Modellversuchs
(2019)
Vorwort
(2019)
Widely used diagnostic tools make use of antibodies recognizing targeted molecules, but additional techniques are required in order to alleviate the disadvantages of antibodies. Herein, molecular dynamics calculations are performed for the design of high-affinity artificial protein binding surfaces for the recognition of neuron-specific enolase (NSE), a known cancer biomarker. Computational simulations are employed to identify particularly stable secondary structure elements. These epitopes are used for the subsequent molecular imprinting, where a surface imprinting approach is applied. The molecular imprints generated with the calculated epitopes of greater stability (Cys-Ep1) show better binding properties than those of lower stability (Cys-Ep5). The average binding strength of imprints created with stable epitopes is found to be around twofold and fourfold higher for the NSE-derived peptide and the NSE protein, respectively. The recognition of NSE is investigated over a wide concentration range, where high sensitivity (limit of detection (LOD) = 0.5 ng mL⁻¹) and affinity (dissociation constant K_d = 5.3 × 10⁻¹¹ M) are achieved using Cys-Ep1 imprints reflecting the stable structure of the template molecules. This integrated approach employing stability calculations for the identification of stable epitopes is expected to have a major impact on the future development of high-affinity protein-capturing binders.
Vagueness
(2019)
Though vague phenomena have been studied extensively for many decades, it is only in recent years that researchers have sought the support of quantitative data. This chapter highlights and discusses the insights that experimental methods have brought to the study of vagueness. One area of focus is ‘borderline contradictions’, that is, sentences like ‘She is neither tall nor not tall’ that are contradictory when analysed in classical logic, but are actually acceptable as descriptions of borderline cases. The flourishing of theories and experimental studies that borderline contradictions have led to is examined closely. Beyond this illustrative case, an overview is provided of recent studies that concern the classification of types of vagueness, the use of numbers, rounding, number modification, and the general pragmatic status of vagueness.
In this paper we present new data on a subject/non-subject extraction asymmetry in Igbo constituent questions. We provide evidence that the superficially morphological phenomenon reflects a deeper syntactic asymmetry: Unlike wh-non-subjects, wh-subjects cannot undergo local Ā-movement to the left periphery (SpecFoc); rather, they have to stay in their canonical position SpecT. The same constraint also leads to the that-trace effect (absence of the complementizer) in the embedded clause of long subject wh-movement. We argue that what is responsible for the special status of wh-subjects is their high structural position. We provide an optimality-theoretic analysis of the asymmetry that is based on anti-locality: Local subject Ā-movement is excluded because it is too short. Moreover, we address the nature of apparent wh-in-situ in Igbo.
In the era of social networks, the internet of things and location-based services, many online services produce huge amounts of data carrying valuable objective information, such as geographic coordinates and timestamps. These characteristics (parameters), in combination with a textual parameter, pose the challenge of discovering geospatiotemporal knowledge. This challenge requires efficient methods for clustering and pattern mining in spatial, temporal and textual spaces.
In this thesis, we address the challenge of providing methods and frameworks for geospatiotemporal data analytics. As an initial step, we address the challenges of geospatial data processing: data gathering, normalization, geolocation, and storage. That initial step forms the basis for tackling the next challenge -- geospatial clustering. The first step of this challenge is to design a method for online clustering of georeferenced data. This algorithm can be used as a server-side clustering algorithm for online maps that visualize massive georeferenced data. As the second step, we develop an extension of this method that additionally considers the temporal aspect of the data. For that, we propose a density- and intensity-based geospatiotemporal clustering algorithm with a fixed distance and time radius.
Each version of the clustering algorithm has its own use case, which we demonstrate in the thesis.
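The role of a fixed distance and time radius can be illustrated with a small sketch. This is not the thesis' density- and intensity-based algorithm, only a toy version that joins two events into one cluster when they lie within an assumed spatial radius (haversine distance) and time radius, grouped by single-linkage union-find; all names and thresholds are illustrative:

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon, t) events."""
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def st_clusters(points, eps_km, eps_sec):
    """Group events that are within eps_km in space AND eps_sec in time
    (single-linkage via union-find); returns one cluster label per event."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if (haversine_km(points[i], points[j]) <= eps_km
                    and abs(points[i][2] - points[j][2]) <= eps_sec):
                parent[find(i)] = find(j)  # union the two clusters
    return [find(i) for i in range(len(points))]

# Two nearby events in Berlin and one far away in New York:
events = [(52.52, 13.40, 0), (52.53, 13.41, 600), (40.71, -74.01, 300)]
labels = st_clusters(events, eps_km=5.0, eps_sec=3600)
```

A production variant would replace the quadratic pairwise loop with a spatial index, but the fixed-radius neighbourhood predicate stays the same.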
In the next chapter of the thesis, we look at spatiotemporal analytics from the perspective of the sequential rule mining challenge. We design and implement a framework that transforms data into textual geospatiotemporal data -- data that contain geographic coordinates, time and textual parameters. In this way, we address the challenge of applying pattern/rule mining algorithms in geospatiotemporal space. As an applicable use case study, we propose spatiotemporal crime analytics -- the discovery of spatiotemporal patterns of crime in publicly available crime data.
The second part of the thesis is dedicated to applications and use case studies. We design and implement an application that uses the proposed clustering algorithms to discover knowledge in data. Together with the application, we propose use case studies for the analysis of georeferenced data in terms of situational and public safety awareness.
This paper investigates the applicability of CMOS decoupling cells for mitigating Single Event Transient (SET) effects in standard combinational gates. The concept is based on the insertion of two decoupling cells between the gate's output and the power/ground terminals. To verify the proposed hardening approach, extensive SPICE simulations have been performed with standard combinational cells designed in IHP's 130 nm bulk CMOS technology. The obtained simulation results show that the insertion of decoupling cells increases the gate's critical charge, thus reducing the gate's soft error rate (SER). Moreover, the decoupling cells facilitate the suppression of SET pulses propagating through the gate. It has been shown that the decoupling cells may be a competitive alternative to gate upsizing and gate duplication for hardening gates with lower critical charge and multiple (3 or 4) inputs, as well as for filtering the short SET pulses induced by low-LET particles.
In a previously published article in HIN under the title of “Eduard Dorsch and his unpublished poem on the occasion of Humboldt’s 100th birthday,” I elaborated on Dorsch’s poem that was read in Detroit in front of a German-American audience on Sept. 14, 1869, a day widely celebrated in the US in honor of Humboldt. Although it was not surprising that Dorsch wrote the occasional poem in the first place given his affinities with Humboldt’s world of thought, a discovery of a second occasional poem upon further research in Dorsch’s voluminous papers was indeed unexpected, in this case read on the same date in Monroe, Michigan. Although there are a number of similarities between the Detroit and Monroe versions, there are enough differences that warrant this addendum to my original article.
The epicardium, the outer mesothelial layer enclosing the myocardium, plays key roles in heart development and regeneration. During embryogenesis, the epicardium arises from the proepicardium (PE), a cell cluster that appears in the dorsal pericardium (DP) close to the venous pole of the heart. Little is known about how the PE emerges from the pericardial mesothelium. Using a zebrafish model and a combination of genetic tools, pharmacological agents and quantitative in vivo imaging, we reveal that a coordinated collective movement of DP cells drives PE formation. We found that Bmp signaling and the actomyosin cytoskeleton promote constriction of the DP, which enables PE cells to extrude apically. We provide evidence that cell extrusion, which has been described in the elimination of unfit cells from epithelia and the emergence of hematopoietic stem cells, is also a mechanism for PE cells to exit an organized mesothelium and fulfil their developmental fate to form a new tissue layer, the epicardium.
Coherent network partitions
(2019)
Graph clustering is widely applied in the analysis of cellular networks reconstructed from large-scale data or obtained from experimental evidence. Here we introduce a new type of graph clustering based on the concept of coherent partition. A coherent partition of a graph G is a partition of the vertices of G that yields only disconnected subgraphs in the complement of G. The coherence number of G is then the size of the smallest edge cut inducing a coherent partition. A coherent partition of G is optimal if the size of the inducing edge cut is the coherence number of G. Given a graph G, we study coherent partitions and the coherence number in connection to (bi)clique partitions and the (bi)clique cover number. We show that the problem of finding the coherence number is NP-hard, but is of polynomial time complexity for trees. We also discuss the relation between coherent partitions and prominent graph clustering quality measures.
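The definition above translates directly into a check. The sketch below, with illustrative helper names, tests whether a given vertex partition is coherent by verifying that each block induces a disconnected subgraph in the complement of G (single-vertex blocks are treated as trivially coherent, an assumption about the edge case):

```python
from itertools import combinations

def induced_complement_edges(block, edges):
    """Edges of the complement of G restricted to the vertices in block."""
    eset = {frozenset(e) for e in edges}
    return [(u, v) for u, v in combinations(block, 2)
            if frozenset((u, v)) not in eset]

def is_connected(nodes, edges):
    """Plain BFS connectivity test on an undirected graph."""
    nodes = list(nodes)
    if len(nodes) <= 1:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def is_coherent_partition(edges, partition):
    """True iff every block of the partition induces a disconnected
    subgraph in the complement of G."""
    return all(
        len(block) <= 1
        or not is_connected(block, induced_complement_edges(block, edges))
        for block in partition)

# A triangle taken as one block is coherent: its complement has no edges,
# so the induced complement subgraph is three isolated (disconnected) vertices.
triangle = [(1, 2), (2, 3), (1, 3)]
```

The coherence number would then be found by minimising, over all coherent partitions, the number of edges cut between blocks; the NP-hardness result above says no polynomial algorithm for that search is expected in general.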
Thawing of subsea permafrost can impact offshore infrastructure, affect coastal erosion, and release permafrost organic matter. Thawing is usually modeled as the result of heat transfer, although salt diffusion may play an important role in marine settings. To better quantify nearshore subsea permafrost thawing, we applied the CryoGRID2 heat diffusion model and coupled it to a salt diffusion model. We simulated coastline retreat and subsea permafrost evolution as it develops through successive stages of a thawing sequence at the Bykovsky Peninsula, Siberia. Sensitivity analyses for seawater salinity were performed to compare the results for the Bykovsky Peninsula with those of typical Arctic seawater. For the Bykovsky Peninsula, the modeled ice-bearing permafrost table (IBPT) for ice-rich sand and an erosion rate of 0.25 m/year was 16.7 m below the seabed 350 m offshore. The model outputs were compared to the IBPT depth estimated from coastline retreat and electrical resistivity surveys perpendicular to and crossing the shoreline of the Bykovsky Peninsula. The interpreted geoelectric data suggest that the IBPT dipped to 15-20 m below the seabed at 350 m offshore. Both results suggest that cold saline water forms beneath grounded ice and floating sea ice in shallow water, causing cryotic benthic temperatures. The freezing point depression produced by salt diffusion can delay or prevent ice formation in the sediment and enhance the IBPT degradation rate. Therefore, salt diffusion may facilitate the release of greenhouse gasses to the atmosphere and considerably affect the design of offshore and coastal infrastructure in subsea permafrost areas.
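Both heat conduction and salt transport in the sediment are diffusion processes of the form ∂u/∂t = α ∂²u/∂x². Neither CryoGRID2 nor the coupled salt model is reproduced here; the sketch below only shows a generic explicit (FTCS) 1-D diffusion step of the kind such simulations are built on, with purely illustrative parameter values:

```python
def diffuse_1d(u, alpha, dx, dt, steps):
    """Explicit FTCS update for du/dt = alpha * d2u/dx2 with fixed
    (Dirichlet) boundary values; stable only while alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "time step too large for the explicit scheme"
    u = list(u)
    for _ in range(steps):
        u = ([u[0]]
             + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, len(u) - 1)]
             + [u[-1]])
    return u

# A temperature (or salinity) spike spreading into its neighbours:
profile = diffuse_1d([0, 0, 100, 0, 0], alpha=1.0, dx=1.0, dt=0.25, steps=1)
```

After one step the spike has spread symmetrically into the adjacent cells; a coupled model would run one such update per field (temperature, salinity) per time step and let the salinity field shift the freezing point used by the thermal scheme.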
Data assimilation has been an active area of research in recent years, owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails incorporating measurement information into the model to gain more insight into a given state governed by a noisy state space model. Most natural laws are governed by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete; hence uncertainties are approximated by means of probabilities. Time-continuous filtering therefore holds promise for wider usefulness, for it offers a means of combining noisy measurements with an imperfect model to provide more insight into a given state.
The solution to the time-continuous nonlinear Gaussian filtering problem is provided by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution. Moreover, numerical approximations based on Taylor expansions above third order are fraught with computational complications. For this reason, numerical methods based on Monte Carlo sampling have been resorted to. Chief among these are sequential Monte Carlo methods (or particle filters), for they allow for online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and the computational costs arising from resampling.
The goals of this thesis are to: i) review the derivation of the Kushner-Stratonovich equation from first principles and its extant numerical approximation methods; ii) study feedback particle filters as a way of avoiding resampling in particle filters; iii) study joint state and parameter estimation in time-continuous settings; and iv) apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between Itô integrals and stochastic partial differential equations and those of Stratonovich is introduced in anticipation of feedback particle filters. With these ideas and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on coupling of prediction and analysis measures are proposed. They register a better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters. Feedback particle filters seem to perform well in both cases. Finally, we apply joint state and parameter estimation to the advection and wave equations, whose velocity is spatially varying. Two methods are employed: Metropolis-Hastings with a filter likelihood, and a dual filter comprising a Kalman-Bucy filter and an ensemble Kalman-Bucy filter. The former performs better than the latter.
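For contrast with the feedback particle filters studied in the thesis, the resampling step they are designed to avoid can be seen in a minimal bootstrap particle filter. This is a generic textbook sketch for an assumed 1-D random-walk model with Gaussian observation noise, not the thesis' implementation; all parameter values are illustrative:

```python
import math
import random

def bootstrap_pf(ys, n=500, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for x_k = x_{k-1} + N(0, q^2),
    y_k = x_k + N(0, r^2): predict, weight by likelihood, resample.
    Resampling combats weight degeneracy at extra computational cost."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in ys:
        xs = [x + rng.gauss(0.0, q) for x in xs]                 # predict
        ws = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in xs]   # weight
        total = sum(ws)
        ws = [w / total for w in ws]
        estimates.append(sum(w * x for w, x in zip(ws, xs)))     # posterior mean
        xs = rng.choices(xs, weights=ws, k=n)                    # resample
    return estimates

# With repeated observations of the same value, the filter mean drifts
# toward the observed value.
means = bootstrap_pf([1.0] * 20)
```

A feedback particle filter replaces the weight-and-resample steps with a control term that steers each particle continuously, which is what removes the resampling cost and its degeneracy trade-off.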
In a growing number of German federal states, the practical semester, a practice-based element of study intended to foster professional and reflective competencies, is an integral part of teacher training. A central challenge here is the successful integration of university theory and school practice. Research-based learning can make an important contribution to this by subjecting challenges from teaching practice to an inquiring eye and addressing them with scientific methods. Not least, this is intended to foster the reflective competence of prospective teachers.
The "output orientation" is omnipresent in teacher education. A wide range of quantitative questionnaires exists worldwide to evaluate teachers' and students' performance. One important goal of teaching evaluation is to increase the quality of teaching and learning. The author argues that the standard evaluations typically administered at the end of the semester are problematic for two reasons. First, some of the questions are too general and do not offer concrete ideas as to what actions could be taken to improve the courses. Second, the evaluation mostly takes place when the course is already over. In response to this criticism, Apelojg developed the Felix app, which makes it possible to give feedback in real time by asking about the emotions and needs that occur in different learning situations. The idea is simple: positive emotions and satisfied needs are helpful for the learning process, whereas negative emotions and unsatisfied needs have negative effects on it. First descriptive results show that "managing emotions" during classes can have positive effects on both motivation and emotions.
As the first State Secretary of the Federal Ministry of Justice, founded in 1949, Walter Strauß was largely responsible for building up its staff. During his term of office, which did not end until 1963, Strauß served under five different ministers. He thus embodied the continuity of the ministry's work and was not by chance regarded as the true 'ruler of the Rosenburg', the ministry's seat in Bonn. Through his leadership style, which combined a demand for quality with an almost paternalistic sense of responsibility, the founding State Secretary shaped the spirit of the institution for a long time. Although he was of Jewish descent and had belonged to the circle of those racially persecuted under National Socialism, Strauß nevertheless relied to a great extent, when selecting personnel, on the collaboration of people compromised by their activities in the 'Third Reich'. The author seeks to explain why this was so, not only on the basis of the formative biographical experiences Strauß had undergone in the German Empire, the Weimar Republic, National Socialism, and the occupation period, but also through a comprehensive account of the essential features of his personnel policy: How far did his influence reach? What role did he play in selection and promotion, primarily of senior civil servants, as distinct from other actors? And to what extent were his decisions constrained by institutional conditions? 366 pp.
Back pain is a problem in adolescent athletes, affecting postural control, an important requirement for physical and daily activities under both static and dynamic conditions. The one-leg stance test and the star excursion balance test (SEBT) are effective in measuring static and dynamic postural control, respectively. These tests have been used in individuals with back pain, athletes, and non-athletes without their reliabilities first being established. In addition, there is no published literature investigating dynamic posture in adolescent athletes with back pain using the SEBT. Therefore, the aim of this thesis was to assess deficits in postural control in adolescent athletes with and without back pain using a static (one-leg stance) and a dynamic (SEBT) postural control test.
Adolescent athletes with and without back pain participated in the study. Static and dynamic postural control were tested using the one-leg stance test and the SEBT, respectively. The reproducibility of both tests was established. It was then determined whether there was an association between static and dynamic posture, using the displacement of the centre of pressure and the reach distance, respectively. Finally, it was investigated whether postural control differed between adolescent athletes with and without back pain on the one-leg stance test and the SEBT.
Fair to excellent reliabilities were recorded for the static (one-leg stance) and dynamic (star excursion balance) postural control tests in the subjects of interest. No association was found between variables of the static and dynamic tests for adolescent athletes with or without back pain. Likewise, no statistically significant difference was found between adolescent athletes with and without back pain on either the static or the dynamic postural control test.
The one-leg stance test and the SEBT can be used as measures of postural control in adolescent athletes with and without back pain. Although static and dynamic postural control might in principle be related, adolescent athletes with and without back pain might use different mechanisms to control static and dynamic posture. Postural control in adolescent athletes with back pain did not differ from that of athletes without back pain; these outcome measures might not be challenging enough to detect deficits in postural control in our group of interest.
An association between static and dynamic postural control exists in adults with back pain. We aimed to determine whether this association also exists in adolescent athletes with the same condition. In all, 128 athletes with and without back pain performed three 15-s trials of a static (one-legged stance) and a dynamic (star excursion balance test) postural control test. All subjects and a matched subgroup of athletes with and without back pain were analyzed. The smallest mediolateral and anterior-posterior center-of-pressure displacements (mm) and the normalized highest reach distance were the outcome measures. No association was found between variables of the static and dynamic tests, either for all subjects or for the matched group with and without back pain. The control of static and dynamic posture in adolescent athletes with and without back pain might not be related.
We present a model of the electrical resistivity structure of the lithosphere in the Central Andes between 20 and 24 degrees S from 3-D inversion of 56 long-period magnetotelluric sites. Our model shows a complex resistivity structure with significant variability parallel and perpendicular to the trench direction. The continental forearc is characterized mainly by high electrical resistivity (>1,000 Ωm), suggesting overall low volumes of fluids. However, low resistivity zones (LRZs, <5 Ωm) were found in the continental forearc below areas where major trench-parallel fault systems intersect NW-SE transverse faults. Forearc LRZs indicate circulation and accumulation of fluids in highly permeable fault zones. The continental crust along the arc shows three distinctive resistivity domains, which coincide with segmentation in the distribution of volcanoes. The northern domain (20-20.5 degrees S) is characterized by resistivities >1,000 Ωm and the absence of active volcanism, suggesting the presence of a low-permeability block in the continental crust. The central domain (20.5-23 degrees S) exhibits a number of LRZs at varying depths, indicating different levels of a magmatic plumbing system. The southern domain (23-24 degrees S) is characterized by resistivities >1,000 Ωm, suggesting the absence of large magma reservoirs below the volcanic chain at crustal depths. Magma reservoirs located below the base of the crust or in the backarc may feed active volcanism in the southern domain. In the subcontinental mantle, the model exhibits LRZs in the forearc mantle wedge and above clusters of intermediate-depth seismicity, likely related to fluids produced by serpentinization of the mantle and eclogitization of the slab, respectively.
A Search for Pulsed Very High-energy Gamma-Rays from 13 Young Pulsars in Archival VERITAS Data
(2019)
We conduct a search for periodic emission in the very high-energy (VHE) gamma-ray band (E > 100 GeV) from a total of 13 pulsars in an archival VERITAS data set with a total exposure of over 450 hr. The set of pulsars includes many of the brightest young gamma-ray pulsars visible in the Northern Hemisphere. The data analysis resulted in nondetections of pulsed VHE gamma-rays from each pulsar. Upper limits on a potential VHE gamma-ray flux are derived at the 95% confidence level above three energy thresholds using two methods. These are the first such searches for pulsed VHE emission from each of the pulsars, and the obtained limits constrain a possible flux component manifesting at VHEs as is seen for the Crab pulsar.
Cosmology describes the evolution of the universe as a whole. Cosmological discoveries in theory and practice have therefore decisively shaped our modern scientific worldview. Conveying a modern worldview through teaching is a frequent demand in the science education debate. Nevertheless, research and development needs remain. Cosmological topics appear frequently in the media while at the same time being far removed from everyday life, so that scientifically incorrect conceptions can develop particularly easily here and can lead to problems in the classroom.
The aim of this thesis is to contribute to this field of research by investigating the prior knowledge and preconceptions about cosmology with which students enter the classroom, and then comparing them with those of other countries. This is done by means of a qualitative content analysis of an open questionnaire. On this basis, a multiple-choice questionnaire is developed, administered, and evaluated.
The results reveal large gaps in knowledge in the field of cosmology and give first indications of differences between countries. There are also some, in part widespread, scientifically incorrect conceptions, such as the association of the Big Bang with an explosion, the Big Bang as caused by a collision of particles or larger objects, or the notion of the expansion of the universe as new discoveries and/or knowledge. Furthermore, only about one in five respondents gave the correct age of the universe or named the expansion of the universe as one of the three pieces of evidence for the Big Bang theory, while almost 40% could not name a single piece of evidence. For the closed questionnaire, good evidence for various aspects of validity was obtained, and there are first indications that the questionnaire can measure knowledge gains and can therefore probably be used to study the effectiveness of learning units. A corresponding model of how understanding of the expansion of the universe develops also proved promising.
Overall, this thesis makes a research contribution to students' prior knowledge and conceptions in cosmology and their large-scale assessment. This opens up possibilities for future research on group comparisons, in particular objective comparisons between countries, as well as studies of the effectiveness of individual learning units and comparisons of different learning units with one another.
Since 1980, Iraq has passed through various wars and conflicts: the Iraq-Iran war, Saddam Hussein's Anfal and Halabja campaigns against the Kurds and the killing campaigns against the Shia in 1986, the invasion of Kuwait in August 1990, the Gulf war of 1990, the Iraq war of 2003 and the fall of Saddam, the conflicts and chaos in the transfer of power after Saddam's death, and the war against ISIS. All these wars left severe impacts on most households in Iraq, and on women and children in particular.
The consequences of such long wars can be observed in all sectors, including the economic, social, cultural, and religious sectors. The social structure, norms, and attitudes have been intensely affected. Many women, divorced women in particular, found themselves facing difficult social and economic situations. Divorced women in Iraqi Kurdistan are therefore the focus of this research.
Given that there is very little empirical research on this topic, a constructivist grounded theory (CGT) methodology was considered appropriate for developing a comprehensive picture of the everyday life of divorced women in Iraqi Kurdistan. Data were collected in the city of Sulaimani in Iraqi Kurdistan. The work of Kathy Charmaz provided the main methodological framework, and the main data collection method was intensive individual narrative interviews with divorced women.
Women in general, and divorced women in particular, live in a patriarchal society in Iraqi Kurdistan that is passing through many changes due to the above-mentioned wars, among many other factors. This research studies the everyday life of divorced women in such circumstances and the forms of social insecurity they experience. It focuses on social institutions, from the family, a highly significant institution for women, to the governmental and non-governmental institutions that work to support women, as well as on women's coping strategies. The main argument is that the family plays an ambivalent role in divorced women's lives: on the one hand, families proved to be an essential source of security for most respondents; on the other hand, they also posed many threats and restrictions. This argument is supported by what Suad Joseph calls "the paradox of support and suppression". Another important finding is that state institutions (laws, constitutions, and the offices for combating violence against women and the family) support women to some extent and offer them protection from insecurities, but the existence of laws clearly does not stop violence against women in Iraqi Kurdistan. As Pateman explains, the law, or contract, is a sexual-social contract that upholds the sex rights of males and grants them more privileges than females. Political instability and tribal social norms also play a major role in influencing the rule of law.
It is noteworthy that the analysis of the interviews showed that, despite living with insecurity and facing difficulties, most respondents try to find coping strategies to tackle difficult situations and to deal with the violence they face; these strategies include bargaining, and sometimes compromising or resisting. Different theories are used to explain these coping strategies, such as "bargaining with patriarchy": Kandiyoti states that women living under certain constraints struggle to find ways and strategies to improve their situation. The findings also reveal that the Western liberal feminist view of agency is limited, in agreement with Saba Mahmood's account of Muslim women's agency. For the respondents, who are divorced women, agency reveals itself in different ways: in resisting, compromising with, or even obeying the power of male relatives and the normative system of society. Agency also explains the behavior of women who, in cases of violence, contact formal state institutions such as the police or the offices for combating violence against women and the family.
This paper presents a combination of R packages (user-contributed toolkits written in a common core programming language) to facilitate the humanistic investigation of digitised, text-based corpora. Our survey of text analysis packages includes those of our own creation (cleanNLP and fasttextM) as well as packages built by other research groups (stringi, readtext, hyphenatr, quanteda, and hunspell). By operating on generic object types, these packages unite research innovations in corpus linguistics, natural language processing, machine learning, statistics, and digital humanities. We begin by elaborating on the theoretical benefits of R as a glue language for bringing together several areas of expertise, and compare it to linguistic concordancers and other tool-based approaches to text analysis in the digital humanities. We then showcase the practical benefits of such an ecosystem by illustrating how R packages have been integrated into a digital humanities project. Throughout, the focus is on moving beyond the bag-of-words, lexical frequency model by incorporating linguistically driven analyses in research.
Ius emigrandi
(2019)
Models of ring current electron dynamics unavoidably contain uncertainties in boundary conditions, electric and magnetic fields, electron scattering rates, and plasmapause location. Model errors can accumulate with time and result in significant deviations of model predictions from observations. Data assimilation offers useful tools which can combine physics-based models and measurements to improve model predictions. In this study, we systematically analyze the performance of the Kalman filter applied to a log-transformed convection model of ring current electrons and Van Allen Probe data. We consider long-term dynamics of mu = 2.3 MeV/G and K = 0.3 G(1/2) R-E electrons from 1 February 2013 to 16 June 2013. By using synthetic data, we show that the Kalman filter is capable of correcting errors in model predictions associated with uncertainties in electron lifetimes, boundary conditions, and convection electric fields. We demonstrate that reanalysis retains features which cannot be fully reproduced by the convection model, such as storm-time earthward propagation of the electrons down to 2.5 R-E. The Kalman filter can adjust model predictions to satellite measurements even in regions where data are not available. We show that the Kalman filter can adjust model predictions in accordance with observations for mu = 0.1, 2.3, and 9.9 MeV/G and constant K = 0.3 G(1/2) R-E electrons. The results of this study demonstrate that data assimilation can improve the performance of ring current models, better quantify model uncertainties, and deepen our understanding of the physics of ring current particles.
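The forecast-analysis cycle underlying the approach above can be illustrated with a scalar Kalman filter on a log-transformed state. The random-walk model and the variances q and r below are illustrative stand-ins for the full convection model and satellite data, not the study's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed process and observation error variances (illustrative)
q, r = 0.05, 0.2
n = 100
log_f_true = np.cumsum(np.sqrt(q) * rng.standard_normal(n))   # latent log-flux
log_f_obs = log_f_true + np.sqrt(r) * rng.standard_normal(n)  # noisy "satellite" data

x, p = 0.0, 1.0        # state estimate and its error variance
analysis = np.zeros(n)
for k in range(n):
    p = p + q                          # forecast: variance grows by model error
    gain = p / (p + r)                 # Kalman gain weighs model vs. data uncertainty
    x = x + gain * (log_f_obs[k] - x)  # analysis: innovation-weighted update
    p = (1.0 - gain) * p
    analysis[k] = x

rmse_obs = np.sqrt(np.mean((log_f_obs - log_f_true) ** 2))
rmse_ana = np.sqrt(np.mean((analysis - log_f_true) ** 2))
```

The analysis error ends up below the raw observation error, which is the sense in which assimilation "improves model predictions"; working in log space keeps the strictly positive flux well behaved under the Gaussian error assumption.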
Ring current electrons (1-100 keV) have received significant attention in recent decades, but many questions regarding their major transport and loss mechanisms remain open. In this study, we use the four-dimensional Versatile Electron Radiation Belt code to model the enhancement of phase space density that occurred during the 17 March 2013 storm. Our model includes global convection, radial diffusion, and scattering into the Earth's atmosphere driven by whistler-mode hiss and chorus waves. We study the sensitivity of the model to the boundary conditions, global electric field, the electric field associated with subauroral polarization streams, electron loss rates, and radial diffusion coefficients. The results of the code are almost insensitive to the model parameters above 4.5 RE, which indicates that the general dynamics of the electrons between 4.5 RE and the geostationary orbit can be explained by global convection. We found that the major discrepancies between the model and data can stem from the inaccurate electric field model and uncertainties in lifetimes. We show that additional mechanisms that are responsible for radial transport are required to explain the dynamics of ≥40-keV electrons, and the inclusion of the radial diffusion rates that are typically assumed in radiation belt studies leads to a better agreement with the data. The overall effect of subauroral polarization streams on the electron phase space density profiles seems to be smaller than the uncertainties in other input parameters. This study is an initial step toward understanding the dynamics of these particles inside the geostationary orbit.
The novel space-borne Global Navigation Satellite System Reflectometry (GNSS-R) technique has recently shown promise in monitoring the ocean state and surface wind speed with high spatial coverage and unprecedented sampling rate. The L-band signals of GNSS are structurally able to provide a higher quality of observations from areas covered by dense clouds and under intense precipitation, compared to signals at higher frequencies from conventional ocean scatterometers. As a result, studying the inner core of cyclones and improving severe weather forecasting and cyclone tracking have become the main objectives of GNSS-R satellite missions such as the Cyclone Global Navigation Satellite System (CYGNSS). Nevertheless, the rain attenuation impact on GNSS-R wind speed products is not yet well documented. Evaluating the rain attenuation effects on this technique is significant since a small change in the GNSS-R observable can potentially cause a considerable bias in the resultant wind products at intense wind speeds. Based on both empirical evidence and theory, wind speed is inversely proportional to the derived bistatic radar cross section with a natural logarithmic relation, which introduces high condition numbers (similar to ill-posed conditions) in the inversion to high wind speeds. This paper presents an evaluation of the rain signal attenuation impact on the bistatic radar cross section and the derived wind speed. The study is conducted by simulating GNSS-R delay-Doppler maps at different rain rates and reflection geometries, considering that an empirical data analysis at extreme wind intensities and rain rates is impossible due to the insufficient number of observations from such severe conditions.
Finally, the study demonstrates that at a wind speed of 30 m/s and an incidence angle of 30 degrees, rain at rates of 10, 15, and 20 mm/h might cause overestimations as large as approximately 0.65 m/s (2%), 1.00 m/s (3%), and 1.3 m/s (4%), respectively, which are still smaller than the CYGNSS required uncertainty threshold. The simulations are conducted under a pessimistic condition (severe continuous rainfall below the freezing height and over the entire glistening zone), and the bias is expected to be smaller in real environments.
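The ill-conditioning of the wind inversion described above can be made concrete with a toy geophysical model function of the stated logarithmic form, v = a - b*ln(sigma0). The coefficients a and b below are hypothetical, not the CYGNSS retrieval coefficients; the point is only that the same small absolute error in sigma0 maps to a much larger wind error at high winds, where sigma0 is small:

```python
import numpy as np

# Hypothetical logarithmic GMF: v = a - b * ln(sigma0); a, b are illustrative.
a, b = 60.0, 10.0

def wind_from_sigma0(sigma0):
    return a - b * np.log(sigma0)

def sigma0_from_wind(v):
    return np.exp((a - v) / b)

# Apply the same small sigma0 perturbation at a low and a high wind speed.
dsigma = 0.5
bias_low = wind_from_sigma0(sigma0_from_wind(10.0) - dsigma) - 10.0
bias_high = wind_from_sigma0(sigma0_from_wind(30.0) - dsigma) - 30.0
# |dv/dsigma0| = b / sigma0 grows as sigma0 shrinks, so bias_high >> bias_low.
```

Under these made-up coefficients the wind bias at 30 m/s is several times the bias at 10 m/s for an identical sigma0 perturbation, mirroring the paper's motivation for studying rain attenuation at intense winds.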
Fälle zum Zivilprozessrecht
(2019)
A rich literature links knowledge inputs with innovative outputs. However, most of what is known is restricted to manufacturing. This paper analyzes whether the links among the three aspects of innovative activity (R&D, innovative output, and productivity) also hold for knowledge-intensive services. Combining the models of Crepon et al. (1998) and Ackerberg et al. (2015) allows for a causal interpretation of the relationship between innovation output and labor productivity. We find that knowledge-intensive services benefit from innovation activities in the sense that these activities causally increase their labor productivity. Moreover, the firm-size advantage found for manufacturing in previous studies nearly disappears for knowledge-intensive services.
Taking a reflective look at one's own practice is an important everyday task for teachers. Reflective competence is a crucial prerequisite for drawing meaningful conclusions for oneself and for the design of teaching-learning processes. This is one reason why, already during the practical semester, in which students gather their first extended practical experience as teachers, great emphasis is placed on fostering and developing reflective skills. In his thesis, Thomas Auge addresses the central question of the extent to which, and in what form, students already reflect on themselves and their own work during the practical semester. The data consisted of weekly written reviews that students produced online during the practical semester on a platform developed at the University of Potsdam (padup.uni-potsdam.de). The data were analyzed qualitatively using content analysis. In addition to a detailed engagement with the topic of reflective competence, the thesis offers an in-depth insight into how students look at their own teaching. The results reveal a continuing need for action with regard to fostering reflective competencies in teacher training.
Multidrug-resistant (MDR) Pseudomonas aeruginosa strains with strong biofilm potential and virulence factors are a serious threat for hospitalized patients with compromised immunity. In this study, 34 P. aeruginosa isolates of human origin (17 MDR and 17 non-MDR clinical isolates) were checked for biofilm formation potential in enriched and minimal media. Biofilms were detected using the crystal violet method and a modified software package of the automated VideoScan screening method. The cytotoxic potential of the isolates was also investigated on HepG2, LoVo, and T24 cell lines using automated VideoScan technology. Pulsed-field gel electrophoresis revealed 10 PFGE types among the MDR and 8 among the non-MDR isolates. Although all isolates showed biofilm formation potential, strong biofilm formation was found more often in enriched than in minimal media. Eight MDR isolates showed strong biofilm potential in both enriched and minimal media with both detection methods. A strong direct correlation between the crystal violet and VideoScan methods was observed in identifying strongly biofilm-forming isolates. A high cytotoxic effect was observed for 4 isolates in all cell lines used, while 6 other isolates showed a high cytotoxic effect on the T24 cell line only. A strong association between multidrug resistance and biofilm formation was found, as strong biofilms were observed significantly more often in MDR isolates (p-value < 0.05) than in non-MDR isolates. No significant association of cytotoxic potential with multidrug resistance or biofilm formation was found (p-value > 0.05). MDR isolates showing significant cytotoxic effects and strong biofilm formation pose a serious threat for hospitalized patients with weak immune systems.
General intelligence has a substantial genetic background in children, adolescents, and adults, but environmental factors also strongly correlate with cognitive performance, as evidenced by a strong (up to one SD) increase in average intelligence test results in the second half of the previous century. This change occurred in a period apparently too short to accommodate radical genetic changes. This strongly suggests that environmental factors interact with genotype, possibly by modifying epigenetic factors that regulate gene expression and thus contribute to individual malleability. Such modification may also be reflected in recent observations of an association between dopamine-dependent encoding of reward prediction errors and cognitive capacity, which was modulated by adverse life events.
Wealth and income distributions are known to feature country-specific Pareto exponents for their long power-law tails. To propose a rationale for this, we introduce an agent-based dynamic model and use Monte Carlo simulations to unveil the wealth distributions in closed and open economic systems. The standard money-exchange scenario is supplemented with position-exchange agent dynamics that vitally affects the Pareto law. Specifically, in closed systems with position-exchange dynamics the power law changes to an exponential shape, while for open systems with traps the Pareto law remains valid.
Optical flow models as an open benchmark for radar-based precipitation nowcasting (rainymotion v0.1)
(2019)
Quantitative precipitation nowcasting (QPN) has become an essential technique in various application contexts, such as early warning or urban sewage control. A common heuristic prediction approach is to track the motion of precipitation features from a sequence of weather radar images and then to displace the precipitation field to the imminent future (minutes to hours) based on that motion, assuming that the intensity of the features remains constant (“Lagrangian persistence”). In that context, “optical flow” has become one of the most popular tracking techniques. Yet the present landscape of computational QPN models still struggles with producing open software implementations. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. Our software library (“rainymotion”) for precipitation nowcasting is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion, Ayzel et al., 2019). That way, the library may serve as a tool for providing fast, free, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing – a benchmark that is far more advanced than the conventional benchmark of Eulerian persistence commonly used in QPN verification experiments.
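The Lagrangian persistence idea described above (displace the field along the motion, keep intensities constant) can be sketched in a few lines. rainymotion estimates the motion field with optical flow and uses proper image warping; the toy below assumes a constant, integer pixel shift instead:

```python
import numpy as np

def nowcast(field, motion, n_steps):
    """Extrapolate `field` by an integer `motion` = (dy, dx) pixels per step,
    keeping intensities constant (Lagrangian persistence)."""
    frames = []
    current = field
    for _ in range(n_steps):
        current = np.roll(current, shift=motion, axis=(0, 1))
        frames.append(current)
    return frames

# A single 5 mm/h precipitation cell on an otherwise dry radar grid
rain = np.zeros((50, 50))
rain[10:15, 10:15] = 5.0
forecast = nowcast(rain, motion=(1, 2), n_steps=3)
```

Each forecast frame moves the cell one step further along the motion vector while total rainfall is preserved; real optical-flow-based models replace both the constant vector (with a per-pixel motion field) and np.roll (with warping or advection schemes).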
During the last few decades, the rapid separation of the Small Aral Sea from the isolated basin has changed its hydrological and ecological conditions tremendously. In the present study, we developed and validated a hybrid model for the Syr Darya River basin based on a combination of state-of-the-art hydrological and machine learning models. The climate change impact on freshwater inflow into the Small Aral Sea for the projection period 2007–2099 has been quantified based on the developed hybrid model and bias-corrected, downscaled meteorological projections simulated by four General Circulation Models (GCMs) for each of three Representative Concentration Pathway (RCP) scenarios. The developed hybrid model reliably simulates freshwater inflow for the historical period, with a Nash–Sutcliffe efficiency of 0.72 and a Kling–Gupta efficiency of 0.77. Results of the climate change impact assessment showed that the freshwater inflow projections produced by different GCMs are contradictory for the projection period and therefore potentially misleading. However, we identified that the relative runoff changes are expected to be more pronounced under the more aggressive RCP scenarios. The simulated projections of freshwater inflow provide a basis for further assessment of climate change impacts on the hydrological and ecological conditions of the Small Aral Sea in the 21st century.
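The two skill scores quoted for the hybrid model, Nash–Sutcliffe and Kling–Gupta efficiency, have standard closed forms. A minimal sketch (the 2009 form of KGE; the function names are ours, not from the paper):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 means the simulation
    is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency: combines correlation, variability ratio,
    and bias ratio; 1 is perfect."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]     # linear correlation
    alpha = sim.std() / obs.std()       # variability ratio
    beta = sim.mean() / obs.mean()      # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# toy example: a simulation with a constant +1 bias
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
sim = [2.0, 3.0, 4.0, 5.0, 6.0]
```

For the biased toy series, NSE is 0.5 while KGE is 2/3, illustrating that the two scores penalize bias differently.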
OpenForecast
(2019)
The development and deployment of new operational runoff forecasting systems are a strong focus of the scientific community due to the crucial importance of reliable and timely runoff predictions for early warnings of floods and flash floods for local businesses and communities. OpenForecast, the first operational runoff forecasting system in Russia open for public use, is presented in this study. We developed OpenForecast based only on open-source software and data: the GR4J hydrological model, the ERA-Interim meteorological reanalysis, and ICON deterministic short-range meteorological forecasts. Daily forecasts were generated for two basins in the European part of Russia. Simulation results showed a limited efficiency in reproducing the spring flood of 2019. Although the simulations managed to capture the timing of flood peaks, they failed to estimate the flood volume. However, the subsequent implementation of a parsimonious data assimilation technique significantly alleviated the simulation errors. The revealed limitations of the proposed operational runoff forecasting system provide a foundation for outlining its further development and improvement.
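The abstract does not spell out the parsimonious data assimilation scheme. A common minimal approach in operational runoff forecasting is to propagate the last known simulation error into the forecast with a decay factor. The sketch below illustrates that generic idea only; the function name and the AR(1)-style decay parameter `rho` are assumptions, not the paper's method.

```python
def assimilate(sim, obs_last, sim_last, rho=0.9):
    """Correct a runoff forecast with the last observed model error,
    decayed geometrically with lead time.

    sim      : simulated discharges for lead times 1..n
    obs_last : last available observed discharge
    sim_last : simulated discharge at that same time step
    rho      : error persistence factor in (0, 1), a tuning choice
    """
    err = obs_last - sim_last                       # current model error
    return [q + err * rho ** (t + 1) for t, q in enumerate(sim)]

# toy example: the model underestimated the last observation by 2 m^3/s
corrected = assimilate([10.0, 10.0], obs_last=12.0, sim_last=10.0, rho=0.5)
```

The correction is largest at the first lead time and fades out, reflecting the assumption that model errors are autocorrelated but not permanent.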
We construct eta- and rho-invariants for Dirac operators, on the universal covering of a closed manifold, that are invariant under the projective action associated to a 2-cocycle of the fundamental group. We prove an Atiyah-Patodi-Singer index theorem in this setting, as well as its higher generalisation. Applications concern the classification of positive scalar curvature metrics on closed spin manifolds. We also investigate the properties of these twisted invariants for the signature operator and the relation to the higher invariants.
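For orientation, the classical (untwisted) Atiyah-Patodi-Singer index theorem, which the abstract generalises to the projective setting, reads, for a Dirac operator $D$ on a compact spin manifold $X$ with boundary:

```latex
\operatorname{ind}(D) \;=\; \int_X \hat{A}(X) \;-\; \frac{\eta(D_{\partial X}) + \dim\ker D_{\partial X}}{2},
```

where $\eta(D_{\partial X})$ is the eta-invariant of the boundary operator. In the projective setting of the abstract, these terms are replaced by their analogues twisted by the 2-cocycle of the fundamental group.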
We analyse the top tail of the wealth distribution in France, Germany, and Spain using the first and second waves of the Household Finance and Consumption Survey (HFCS). Since top wealth is likely to be under-represented in household surveys, we integrate big fortunes from rich lists, estimate a Pareto distribution, and impute the missing rich. In addition to the Forbes list, we rely on national rich lists since they represent a broader base of the big fortunes in those countries. As a result, the top 1% wealth share increases notably for the three selected countries after imputing the top wealth. We find that national rich lists can improve the estimation of the Pareto coefficient in particular when the list of national USD billionaires is short.
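Estimating a Pareto tail from survey data combined with rich lists involves several steps (threshold selection, reweighting, imputation); the tail-index estimation itself is commonly done with the Hill (maximum-likelihood) estimator. A minimal sketch of that single step, not the paper's full procedure:

```python
import math

def hill_estimator(wealth, x_min):
    """Maximum-likelihood (Hill) estimator of the Pareto tail index alpha,
    using only observations at or above the threshold x_min.

    For a Pareto tail, P(X > x) = (x / x_min) ** (-alpha)."""
    tail = [w for w in wealth if w >= x_min]
    n = len(tail)
    return n / sum(math.log(w / x_min) for w in tail)

# toy sample: two tail observations at e * x_min give alpha = 1,
# the value below the threshold is ignored
alpha = hill_estimator([math.e, math.e, 0.5], x_min=1.0)
```

In practice the estimate is sensitive to the choice of `x_min`, which is one reason the paper supplements survey data with rich-list observations at the very top.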
Double Jeopardy
(2019)
The present study investigates whether secondary traumatization (i.e., family history of Holocaust survival and secondary exposure to captivity) is implicated in subjective age. Women exposed to different levels of secondary traumatization (N = 177) were assessed. Analyses of variance (ANOVAs) revealed that a Holocaust background and husband's captivity had a marginally significant positive effect on age appearance. Women with a Holocaust background whose husbands were held captive reported older interest age, indicating double jeopardy for older subjective age when two sources of secondary traumatization are present. A similar trend existed for behavior age. Possible explanations for these complex findings of risk and resilience are discussed.
Transitions within the education system are central junctures for the generation of social inequality. While educational pathways and the role of social inequality in the school sector have been studied extensively, there is hardly any research on the post-school educational trajectories of those qualified for higher education and on the influence of social origin up to the start of doctoral studies. The aim of the present work is therefore to examine how post-school educational trajectories unfold and to analyze the role of social origin from the Abitur (higher education entrance qualification) to the start of a doctorate. These two research questions were addressed in four sub-studies. Sub-study 1 examined the relevance of characteristics of the educational trajectory for taking up doctoral studies. The three subsequent sub-studies focused on the role of social origin in taking up a doctorate and on social inequality at the relevant selection stages of the post-school educational trajectory leading up to it. In this regard, sub-study 2 examined social origin effects in the choice of higher education institution type, which is consequential for taking up a doctorate; sub-study 3 analyzed the mechanisms behind social origin effects at the start of doctoral studies; and sub-study 4 compared social inequality at the start of undergraduate studies with that at the start of doctoral studies. The longitudinal study BIJU (Bildungsverläufe und psychosoziale Entwicklung im Jugend- und jungen Erwachsenenalter; Educational Trajectories and Psychosocial Development in Adolescence and Young Adulthood) served as the data basis. The findings of the dissertation point to the relevance of social inequalities from entry into higher education up to the transition into doctoral studies. Even though the effect of social origin decreases from the transition into undergraduate studies to the transition into a doctorate, social origin effects are still visible at this late educational transition. The findings also highlight the importance of path dependencies in educational trajectories and of performance differences for taking up doctoral studies.
While the underlying mechanisms of Parkinson’s disease (PD) are still insufficiently studied, a complex interaction between genetic and environmental factors is emphasized. Nevertheless, the role of the essential trace element zinc (Zn) in this regard remains controversial. In this study we altered Zn balance within PD models of the versatile model organism Caenorhabditis elegans (C. elegans) in order to examine whether a genetic predisposition in selected genes with relevance for PD affects Zn homeostasis. Protein-bound and labile Zn species act in various areas, such as enzymatic catalysis, protein stabilization pathways and cell signaling. Therefore, total Zn and labile Zn were quantitatively determined in living nematodes as individual biomarkers of Zn uptake and bioavailability with inductively coupled plasma tandem mass spectrometry (ICP-MS/MS) or a multi-well method using the fluorescent probe ZinPyr-1. Young and middle-aged deletion mutants of catp-6 and pdr-1, which are orthologues of mammalian ATP13A2 (PARK9) and parkin (PARK2), showed altered Zn homeostasis following Zn exposure compared to wildtype worms. Furthermore, age-specific differences in Zn uptake were observed in wildtype worms for total as well as labile Zn species. These data emphasize the importance of differentiation between Zn species as meaningful biomarkers of Zn uptake as well as the need for further studies investigating the role of dysregulated Zn homeostasis in the etiology of PD.
Hot-electron-induced reactions are increasingly recognized as critical and ubiquitous in heterogeneous catalysis. However, the kinetics of these reactions is still poorly understood, partly owing to the complexity of plasmonic nanostructures. We determined the rates of the hot-electron-mediated reaction of 4-nitrothiophenol (NTP) on gold nanoparticles (AuNPs) using fractal kinetics as a function of the laser wavelength and compared them with the plasmonic enhancement of the system. The reaction rates can only be partially explained by the plasmonic response of the NPs. Hence, synchrotron X-ray photoelectron spectroscopy (XPS) measurements of isolated NTP-capped AuNP clusters have been performed for the first time. In this way, it was possible to determine the work function and the accessible valence band states of the NP systems. The results show that, besides the plasmonic enhancement, the reaction rates are strongly influenced by the local density of the available electronic states of the system.
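Fractal (Kopelman-type) kinetics replace the constant rate coefficient of classical kinetics with a time-dependent one, k(t) = k0 * t**(-h) with 0 <= h < 1, reflecting reactions on heterogeneous, dimensionally constrained surfaces. Under a pseudo-first-order assumption this integrates to a closed form. The sketch below is a generic illustration of that standard form, not the authors' fitted model or parameters:

```python
import math

def fractal_first_order(c0, k0, h, t):
    """Remaining concentration under Kopelman fractal kinetics.

    Integrating dC/dt = -k0 * t**(-h) * C gives
    C(t) = c0 * exp(-k0 * t**(1 - h) / (1 - h)).
    For h = 0 this reduces to ordinary first-order decay."""
    return c0 * math.exp(-k0 * t ** (1.0 - h) / (1.0 - h))
```

The fractal exponent h quantifies how strongly the effective rate slows down over time, which is why a single classical rate constant cannot capture such surface reactions.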
Sensors composed of a porous silicon monolayer covered with a film of nanostructured gold layer, which provide two optical signal transduction methods, are fabricated and thoroughly characterized concerning their sensing performance. For this purpose, silicon substrates were electrochemically etched in order to obtain porous silicon monolayers, which were subsequently immersed in gold salt solution facilitating the formation of a porous gold nanoparticle layer on top of the porous silicon. The deposition process was monitored by reflectance spectroscopy, and the appearance of a dip in the interference pattern of the porous silicon layer was observed. This dip can be assigned to the absorption of light by the deposited gold nanostructures leading to localized surface plasmon resonance. The bulk sensitivity of these sensors was determined by recording reflectance spectra in media having different refractive indices and compared to sensors exclusively based on porous silicon or gold nanostructures. A thorough analysis of resulting shifts of the different optical signals in the reflectance spectra on the wavelength scale indicated that the optical response of the porous silicon sensor is not influenced by the presence of a gold nanostructure on top. Moreover, the adsorption of thiol-terminated polystyrene to the sensor surface was solely detected by changes in the position of the dip in the reflectance spectrum, which is assigned to localized surface plasmon resonance in the gold nanostructures. The interference pattern resulting from the porous silicon layer is not shifted to longer wavelengths by the adsorption indicating the independence of the optical response of the two nanostructures, namely porous silicon and nanostructured gold layer, to refractive index changes and pointing to the successful realization of two sensors in one spot.
This study investigates the effect of different anticonsumption constructs on consumer wellbeing. The study assumes that people will only lower their level of consumption if doing so does not also lower personal wellbeing. More precisely, this research investigates how specific subtypes of sustainable anticonsumption (e.g., voluntary simplicity, collaborative consumption, and debt-free living) relate to different states of consumers' wellbeing (e.g., financial, psychosocial, and subjective wellbeing). This work also examines whether consumer empowerment can improve personal wellbeing and strengthen the anticonsumption-wellbeing relationship. The results show that voluntarily foregoing consumption does not reduce wellbeing and that consumer empowerment plays a significant role in supporting sustainable pathways to consumer wellbeing. This study reasons that empowerment improves consumer sovereignty but may be detrimental for consumers heavily concerned about debt-free living. The present investigation concludes by proposing implications for public and consumer policymakers wishing to promote appropriate sustainable (anticonsumption) pathways to consumer wellbeing.
Quadruple-shape hydrogels
(2019)
The capability of directed movements by two subsequent shape changes could be implemented in shape-memory hydrogels by incorporating two types of crystallizable side chains. While even more directed movements could be realized in non-swollen polymer networks, the creation of multi-shape hydrogels is still a challenge. We hypothesize that a quadruple-shape effect in hydrogels can be realized when a swelling capacity almost independent of temperature is generated, whereby directed movements that are not related to swelling can be enabled. In this case, entropy-elastic recovery could be realized by hydrophilic segments, and the fixation of different macroscopic shapes could be achieved by means of three semi-crystalline side chains generating temporary crosslinks. Monomethacrylated semi-crystalline oligomers were connected as side chains in a hydrophilic polymer network via radical copolymerization. Computer-assisted modelling was utilized to design a demonstrator capable of complex shape shifts by creating a casting mold via 3D printing from polyvinyl alcohol. The demonstrator was obtained after copolymerization of the polymer-network-forming components within the mold, which was subsequently dissolved in water. A thermally-induced quadruple-shape effect was realized after equilibrium swelling of the polymer network in water. Three directed movements were successfully obtained when the temperature was continuously increased from 5 °C to 90 °C, with a recovery ratio of the original shape above 90%. Hence, a thermally-induced quadruple-shape effect, a new record for hydrogels, was realized. Here, the temperature range for the multi-shape effect was limited by water as the swelling medium (0 °C-100 °C), distinctly separated thermal transitions were simultaneously required, and the overall elasticity indispensable for successive deformations was reduced as a result of partial chain segment orientation induced by swelling in water. In conclusion, the challenges for penta- or hexa-shape gels are the design of systems enabling higher elastic deformability and covering a larger temperature range by switching to a different solvent.