Hybrid Open-Access
Deep metric learning employs deep neural networks to embed instances into a metric space such that distances between instances of the same class are small and distances between instances of different classes are large. In most existing deep metric learning techniques, the embedding of an instance is a feature vector produced by a deep neural network, and Euclidean distance or cosine similarity defines the distance between these vectors. This paper studies deep distributional embeddings of sequences, where the embedding of a sequence is given by the distribution of learned deep features across the sequence. The motivation is for the embedding to better capture statistical information about the distribution of patterns within the sequence. When embeddings are distributions rather than vectors, measuring distances between embeddings means comparing their respective distributions. The paper therefore proposes a distance metric based on Wasserstein distances between the distributions, together with a corresponding loss function for metric learning, which leads to a novel end-to-end trainable embedding model. We empirically observe that distributional embeddings outperform standard vector embeddings and that training with the proposed Wasserstein metric outperforms training with other distance functions.
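To illustrate the kind of distance involved (a sketch, not the paper's actual metric or model), the sliced Wasserstein distance compares two feature clouds by projecting them onto random directions and matching the quantile functions of the resulting one-dimensional distributions. The function name, the number of projections, and the quantile grid below are illustrative assumptions.

```python
# A sketch of the sliced Wasserstein distance between two distributional
# embeddings, each a set of per-step deep feature vectors of shape
# (n_steps, dim). Projections and grid size are illustrative choices.
import numpy as np

def sliced_wasserstein(feats_a, feats_b, n_projections=64, seed=0):
    rng = np.random.default_rng(seed)
    dim = feats_a.shape[1]
    # Random unit directions to project both feature clouds onto.
    dirs = rng.normal(size=(n_projections, dim))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    grid = np.linspace(0.0, 1.0, 128)   # shared quantile grid
    total = 0.0
    for d in dirs:
        # The 1-D Wasserstein-1 distance equals the mean absolute
        # difference between the two quantile functions.
        qa = np.quantile(feats_a @ d, grid)
        qb = np.quantile(feats_b @ d, grid)
        total += np.abs(qa - qb).mean()
    return total / n_projections

# Example: two sequences of different lengths with 256-d features.
a = np.random.randn(300, 256)
b = np.random.randn(450, 256)
print(sliced_wasserstein(a, b))
```

Because the two sequences may differ in length, comparing quantile functions on a shared grid sidesteps any need to align individual time steps.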
Sharing marketplaces have emerged as a new Holy Grail of value creation by enabling exchanges between strangers. Identity disclosure, encouraged by platforms, cuts both ways: while it induces pre-transaction confidence, it is suspected of backfiring on the information senders through its discriminative potential. This study employs a discrete choice experiment to explore the role of names as signifiers of discriminative peculiarities and the importance of accompanying cues in peer choices of a ridesharing offer. We quantify users' preferences for quality signals in monetary terms and find evidence of a comparative disadvantage for drivers and co-travelers with male names of Middle Eastern descent, which translates into a lower willingness to accept and to pay for an offer. Market simulations confirm the robustness of these findings. Further, we find that female users are choosier and include more signifiers of involuntary personal attributes in their decision-making. Price discounts and positive information only partly compensate for the initial disadvantage, and identity concealment is perceived negatively.
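As a minimal illustration of how a discrete choice experiment yields monetary valuations (hypothetical coefficients, not the study's estimates), willingness to pay for an attribute in a conditional logit model is the negative ratio of the attribute coefficient to the price coefficient:

```python
# Illustrative conversion of conditional-logit coefficients into
# willingness to pay: WTP = -beta_attribute / beta_price.
# All coefficient values below are made up for demonstration.
beta_price = -0.08            # utility per euro (negative: cheaper is better)
beta_name = -0.45             # utility shift for a stigmatized name cue
beta_rating = 0.30            # utility per extra star of driver rating

wtp = lambda beta: -beta / beta_price
print(f"name penalty: {wtp(beta_name):+.2f} EUR")    # compensating discount
print(f"rating value: {wtp(beta_rating):+.2f} EUR")  # value of one star
```

A negative WTP here reads as the price discount needed to offset the cue's disadvantage, which is how preferences can be "quantified in monetary terms."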
One for all, all for one
(2022)
We propose a conceptual model of acceptance of contact tracing apps based on the privacy calculus perspective. Moving beyond the duality of personal benefits and privacy risks, we theorize that users hold social considerations (i.e., social benefits and risks) that underlie their acceptance decisions. To test our propositions, we chose the context of COVID-19 contact tracing apps and conducted a qualitative pre-study and longitudinal quantitative main study with 589 participants from Germany and Switzerland. Our findings confirm the prominence of individual privacy calculus in explaining intention to use and actual behavior. While privacy risks are a significant determinant of intention to use, social risks (operationalized as fear of mass surveillance) have a notably stronger impact. Our mediation analysis suggests that social risks represent the underlying mechanism behind the observed negative link between individual privacy risks and contact tracing apps' acceptance. Furthermore, we find a substantial intention–behavior gap.
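As an illustration of the kind of mediation logic described above (a sketch on simulated data, not the authors' actual analysis or variables), a Baron-Kenny-style comparison of total and direct effects looks as follows; the effect sizes and variable names are placeholders.

```python
# Illustrative mediation check: does social risk (fear of mass
# surveillance) mediate the link between privacy risk and acceptance?
# Simulated data; all coefficients are placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 589                                   # sample size from the abstract
privacy_risk = rng.normal(size=n)
social_risk = 0.6 * privacy_risk + rng.normal(size=n)        # a-path
acceptance = -0.5 * social_risk - 0.1 * privacy_risk + rng.normal(size=n)

total = sm.OLS(acceptance, sm.add_constant(privacy_risk)).fit()   # c-path
X_full = sm.add_constant(np.column_stack([privacy_risk, social_risk]))
full = sm.OLS(acceptance, X_full).fit()                 # c'- and b-paths

print(f"total effect:  {total.params[1]:+.3f}")
print(f"direct effect: {full.params[1]:+.3f} (mediator included)")
print(f"mediator (b):  {full.params[2]:+.3f}")
```

If the direct effect shrinks toward zero once the mediator enters the model, the mediator carries the bulk of the association, which is the pattern the abstract reports for social risks.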
The severity of the harm inflicted on humankind by the COVID-19 pandemic has increased the need to develop ASSURED (Affordable, Sensitive, Specific, User-friendly, Rapid and robust, Equipment-free, and Deliverable) point-of-care testing (POCT) to overcome the current and any future pandemics. Considerable research and development effort is currently devoted to relieving the diagnostic pressure built up by emerging new pathogens. LAMP (loop-mediated isothermal amplification) is a well-researched isothermal technique for specific nucleic acid amplification that can be combined with a highly sensitive immunochromatographic readout via lateral flow assays (LFA). Here we discuss the robustness, sensitivity, and specificity of LAMP-LFA for SARS-CoV-2 N-gene detection in cDNA and clinical swab-extracted RNA samples. The LFA readout is designed to produce highly specific results by incorporating a biotin label via biotin-11-dUTP and an FITC label on the loop forward (LF) primer. The LAMP-LFA assay was established using cDNA for the N-gene with an accuracy of 95.65%. To validate the study, 82 SARS-CoV-2-positive RNA samples were tested. Reverse transcriptase (RT)-LAMP-LFA was positive for the RNA samples with an accuracy of 81.66%, and SARS-CoV-2 viral RNA was detected by RT-LAMP-LFA down to a Ct value of 33. Our method reduces the detection time to 15 min and therefore indicates that RT-LAMP combined with LFA represents a promising nucleic acid biosensing POCT platform that pairs well with smartphone-based semi-quantitative data analysis.
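The smartphone-based semi-quantitative analysis mentioned above is not specified further in the abstract; one common approach is to compare the darkness of the test line with that of the control line in a photo of the strip. The sketch below is a hypothetical version of such a readout; the band positions, widths, and background correction are all assumptions, and a synthetic profile stands in for a real image so the sketch is self-contained.

```python
# Illustrative semi-quantitative LFA readout: compare test- and
# control-line darkness along a strip's intensity profile. In practice
# the profile would come from a smartphone photo averaged across the
# strip width; here a synthetic profile stands in.
import numpy as np

def band_signal(profile, center, half_width=10):
    """Background-corrected darkness of one band window."""
    window = profile[center - half_width:center + half_width]
    background = np.median(profile)          # membrane baseline
    return max(background - window.min(), 0.0)

# Synthetic strip profile: bright membrane (1.0) with two dark bands.
profile = np.ones(360)
profile[110:130] -= 0.25                     # faint test line
profile[230:250] -= 0.60                     # strong control line

test = band_signal(profile, center=120)      # assumed test-line position
control = band_signal(profile, center=240)   # assumed control-line position
ratio = test / control if control > 0 else float("nan")
print(f"T/C ratio: {ratio:.2f}")             # ~0.42: weak positive
```

Normalizing by the control line compensates for variation in lighting and sample flow, which is what makes the readout semi-quantitative rather than merely binary.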
Sonority is a fundamental notion in phonetics and phonology, central to many descriptions of the syllable and to various useful predictions in phonotactics. Although widely accepted, sonority lacks a clear basis in speech articulation or perception, given that traditional formal principles in linguistic theory are often exclusively based on discrete units in symbolic representation and are typically not designed to be compatible with auditory perception, sensorimotor control, or general cognitive capacities. In addition, traditional sonority principles exhibit systematic gaps in empirical coverage. Against this backdrop, we propose combining symbol-based and signal-based models to adequately account for sonority in a complementary manner. We claim that sonority is primarily a perceptual phenomenon related to pitch, driving the optimization of syllables as pitch-bearing units in all language systems. We suggest a measurable acoustic correlate of sonority in terms of periodic energy, and we provide a novel principle that can account for syllabic well-formedness, the nucleus attraction principle (NAP). We present perception experiments that test our two NAP-based models against four traditional sonority models, using a Bayesian data analysis approach to test and compare them. Our symbolic NAP model outperforms all the other models we test, while our continuous bottom-up NAP model comes in second, together with the best-performing traditional models. We interpret the results as providing strong support for our proposals: (i) the designation of periodic energy as the acoustic correlate of sonority; (ii) the incorporation of continuous entities into phonological models of perception; and (iii) the dual-model strategy that separately analyzes symbol-based top-down processes and signal-based bottom-up processes in speech perception.
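Periodic energy, proposed here as the acoustic correlate of sonority, can be approximated as the signal's energy weighted by its degree of periodicity. The sketch below shows one simple way to compute such a curve (frame-wise RMS energy times an autocorrelation-based periodicity estimate); the frame sizes, lag range, and periodicity measure are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative periodic-energy curve: frame-wise RMS energy scaled by a
# normalized-autocorrelation periodicity estimate (~1 for voiced frames).
import numpy as np

def periodic_energy(signal, sr, frame_ms=40, hop_ms=10,
                    f0_min=60.0, f0_max=400.0):
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    lo, hi = int(sr / f0_max), int(sr / f0_min)  # candidate pitch lags
    curve = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame]
        x = x - x.mean()
        energy = np.sqrt(np.mean(x ** 2))        # RMS energy
        denom = np.sum(x ** 2)
        if denom == 0:
            curve.append(0.0)
            continue
        # Normalized autocorrelation over plausible pitch lags.
        ac = np.array([np.sum(x[:-lag] * x[lag:]) for lag in range(lo, hi)])
        periodicity = max(ac.max() / denom, 0.0)  # in [0, 1]
        curve.append(energy * periodicity)
    return np.array(curve)

sr = 16000
t = np.arange(sr) / sr
vowel_like = np.sin(2 * np.pi * 120 * t)       # strongly periodic signal
print(periodic_energy(vowel_like, sr).max())   # high periodic energy
```

On such a curve, vowel-like (periodic, loud) stretches peak while obstruent-like (aperiodic or quiet) stretches stay low, which is the gradient notion of sonority the NAP builds on.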
We revisited 10 known exoplanetary systems using publicly available data from the Transiting Exoplanet Survey Satellite (TESS). The sample presented in this work consists of short-period transiting exoplanets with inflated radii and large reported uncertainties on their planetary radii. The precise determination of these values is crucial for developing accurate evolutionary models and understanding the inflation mechanisms of these systems. To evaluate the planetary radius measurements, we made use of the planet-to-star radius ratio, a quantity that can be measured during a transit event. We fit the transit light curves of each target with a detrending model and a transit model, and we used emcee, a Markov chain Monte Carlo sampler, to obtain the posterior distributions of each system parameter of interest. We refined the planetary radius of WASP-140 b by approximately 12% and improved the precision of its reported asymmetric radius uncertainties by approximately 86% and 67%. We also refined the orbital parameters of WASP-120 b at the 2-sigma level. Moreover, using the high-cadence TESS datasets, we were able to resolve a discrepancy in the literature regarding the planetary radius of the exoplanet WASP-93 b. For all the other exoplanets in our sample, even though there is a tentative trend that the planetary radii of (near-)grazing systems have been slightly overestimated in the literature, the planetary radius estimates and orbital parameters were confirmed with independent observations from space, showing that TESS and ground-based observations are overall in good agreement.
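A minimal sketch of this kind of fit (not the authors' pipeline, which also includes detrending) pairs a batman transit model, with everything fixed except the planet-to-star radius ratio, with an emcee sampler under a Gaussian likelihood. All numerical values below are placeholders, not real system parameters.

```python
# Minimal transit-fitting sketch with batman + emcee: sample the
# posterior of the planet-to-star radius ratio rp from a light curve.
import numpy as np
import batman
import emcee

t = np.linspace(-0.1, 0.1, 500)                 # time from mid-transit [days]

params = batman.TransitParams()
params.t0, params.per = 0.0, 3.0                # mid-transit time, period
params.rp = 0.1                                 # radius ratio (to be fit)
params.a, params.inc = 10.0, 88.0               # scaled semi-major axis, inc.
params.ecc, params.w = 0.0, 90.0                # circular orbit
params.u, params.limb_dark = [0.3, 0.2], "quadratic"
model = batman.TransitModel(params, t)

rng = np.random.default_rng(0)
yerr = 5e-4
y = model.light_curve(params) + rng.normal(0, yerr, t.size)  # fake data

def log_prob(theta):
    (rp,) = theta
    if not 0.0 < rp < 0.5:                      # flat prior on rp
        return -np.inf
    params.rp = rp
    resid = y - model.light_curve(params)
    return -0.5 * np.sum((resid / yerr) ** 2)   # Gaussian log-likelihood

nwalkers, ndim = 16, 1
p0 = 0.1 + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
rp_post = sampler.get_chain(discard=500, flat=True)[:, 0]
print(f"rp = {rp_post.mean():.4f} +/- {rp_post.std():.4f}")
```

The posterior standard deviation of rp is exactly the kind of asymmetric radius uncertainty the abstract reports refining; a real analysis would sample the orbital and limb-darkening parameters jointly.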
Genetic engineering has provided humans with the ability to transform organisms by direct manipulation of genomes, within a broad range of applications including agriculture (e.g., GM crops) and the pharmaceutical industry (e.g., insulin production). Developments within the last 10 years have produced new tools for genome editing (e.g., CRISPR/Cas9) that can achieve much greater precision than previous forms of genetic engineering. Moreover, these tools could enable interventions on humans, for both clinical and non-clinical purposes, resulting in a broad scope of applicability. However, their promising abilities and potential uses (including their applicability in humans for either somatic or heritable genome editing interventions) greatly increase their potential societal impact and, as such, have brought urgency to ethical and regulatory discussions about the application of such technology in our society. In this article, we explore different arguments (pragmatic, sociopolitical, and categorical) that have been made in support of or in opposition to the new technologies of genome editing, and their impact on the debate over the permissibility or otherwise of human heritable genome editing interventions in the future. For this purpose, reference is made to discussions on genetic engineering that have taken place in the field of bioethics since the 1980s. Our analysis shows that the dominance of categorical arguments has been reversed in favour of pragmatic arguments such as safety concerns. However, when it comes to involving the public in ethical discourse, we consider it crucial to widen the debate beyond such pragmatic considerations. We therefore explore some of the key categorical as well as sociopolitical considerations raised by the potential uses of heritable genome editing interventions, as these considerations underlie many of the societal concerns and values crucial for public engagement. We also highlight that the growing prominence of pragmatic considerations in the work of recent authoritative sources is unlikely to be the result of progress on outstanding categorical issues; rather, it reflects the limited progress on these aspects and/or pressures in regulating the use of the technology.
Fragmentation of peptides leaves characteristic patterns in mass spectrometry data, which can be used to identify protein sequences, but this method is challenging for mutated or modified sequences for which limited information exists. Altenburg et al. use an ad hoc learning approach to learn relevant patterns directly from unannotated fragmentation spectra.
Mass spectrometry-based proteomics provides a holistic snapshot of the entire protein set of living cells on a molecular level. Currently, only a few deep learning approaches exist that involve peptide fragmentation spectra, which represent partial sequence information of proteins.
Commonly, these approaches lack the ability to characterize less studied or even unknown patterns in spectra because of their use of explicit domain knowledge.
Here, to enable unrestricted learning from spectra, we introduce 'ad hoc learning of fragmentation' (AHLF), a deep learning model that is end-to-end trained on 19.2 million spectra from several phosphoproteomic datasets. AHLF is interpretable, and we show that peak-level feature importance values and pairwise interactions between peaks are in line with corresponding peptide fragments.
We demonstrate our approach by detecting post-translational modifications, specifically protein phosphorylation based on only the fragmentation spectrum without a database search. AHLF increases the area under the receiver operating characteristic curve (AUC) by an average of 9.4% on recent phosphoproteomic data compared with the current state of the art on this task.
Furthermore, use of AHLF in rescoring search results increases the number of phosphopeptide identifications by a margin of up to 15.1% at a constant false discovery rate. To show the broad applicability of AHLF, we use transfer learning to also detect cross-linked peptides, as used in protein structure analysis, with an AUC of up to 94%.
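As a rough, hypothetical illustration of deep learning on fragmentation spectra (this is not AHLF's actual architecture, which the abstract does not specify), a spectrum can be binned along m/z into a fixed-length intensity vector and fed to a small 1-D convolutional classifier; all layer sizes, the binning resolution, and the training setup below are assumptions.

```python
# Illustrative 1-D CNN that classifies a binned fragmentation spectrum
# as phosphorylated or not (toy stand-in for spectrum-level models).
import torch
import torch.nn as nn

N_BINS = 4096          # m/z bins per spectrum (assumed resolution)

class SpectrumCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 1)      # logit: phospho vs. not

    def forward(self, x):                       # x: (batch, 1, N_BINS)
        return self.classifier(self.features(x).squeeze(-1))

model = SpectrumCNN()
spectra = torch.rand(8, 1, N_BINS)              # stand-in intensity vectors
labels = torch.randint(0, 2, (8, 1)).float()
loss = nn.BCEWithLogitsLoss()(model(spectra), labels)
loss.backward()                                 # one toy training step
```

The key property such models share with AHLF is that they consume raw peak intensities without a database search, so peak-level attributions (e.g., via saliency methods) can be compared against known fragment ions.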
Taxes and levies on products or consumption that entail social follow-on costs (external costs), so-called Pigouvian or corrective taxes, are a societal win-win instrument: they improve welfare while protecting the environment and the climate. This is achieved by putting a price on environmentally harmful activities that corresponds as closely as possible to the damage they cause. Consistently pricing external costs according to this principle could generate substantial additional revenue in Germany: based on existing studies of external costs, additional revenues on the order of 348 to 564 billion euros per year (44 to 71 percent of total tax revenue) would be possible. The authors caution, however, that quantifying external costs is subject to considerable uncertainty. Moreover, institutional reforms would be necessary for corrective taxes and levies to fully develop their positive steering and welfare effects.
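As a textbook-style illustration of the pricing principle described above (hypothetical numbers, not the study's estimates), a Pigouvian tax set equal to the marginal external cost moves a market from its private equilibrium to the welfare-optimal quantity:

```python
# Toy Pigouvian-tax example: linear demand and supply with a constant
# marginal external cost (MEC). All numbers are hypothetical.
demand = lambda q: 100 - q          # willingness to pay
supply = lambda q: 20 + q           # private marginal cost
MEC = 10.0                          # external damage per unit

# Market equilibrium ignores the MEC: demand(q) == supply(q).
q_market = (100 - 20) / 2           # -> 40 units
# Social optimum includes it: demand(q) == supply(q) + MEC.
q_optimal = (100 - 20 - MEC) / 2    # -> 35 units

tax = MEC                            # Pigouvian rule: tax = MEC
p_consumer = demand(q_optimal)       # 65: price consumers pay (incl. tax)
p_producer = p_consumer - tax        # 55: equals supply(q_optimal)
# Deadweight loss avoided: triangle between q_optimal and q_market.
dwl = 0.5 * (q_market - q_optimal) * MEC
print(f"tax={tax}, q: {q_market} -> {q_optimal}, welfare gain={dwl}")
```

The welfare gain here is precisely the triangle of units whose social cost exceeds buyers' willingness to pay, which is why such taxes can raise revenue and improve welfare at the same time.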
Instruments for measuring the absorbed dose and dose rate under radiation exposure, known as radiation dosimeters, are indispensable in space missions. They are composed of radiation sensors that generate a current or voltage response when exposed to ionizing radiation, and of processing electronics that compute the absorbed dose and dose rate. Among the wide range of existing radiation sensors, Radiation-Sensitive Field-Effect Transistors (RADFETs) offer unique advantages for absorbed dose measurement and a proven record of successful use in space missions. It has been shown that RADFETs can also be used for dose rate monitoring. In that regard, we propose a design concept that supports the simultaneous operation of a single RADFET as an absorbed dose and dose rate monitor. This reduces the implementation cost, since the need for other types of radiation sensors can be minimized or eliminated. To process the RADFET's response, we propose a readout system composed of an analog signal conditioner (ASC) and a self-adaptive multiprocessing system-on-chip (MPSoC). The soft-error rate of the MPSoC is monitored in real time with embedded sensors, allowing autonomous switching between three operating modes (high-performance, de-stress, and fault-tolerant) according to the application requirements and radiation conditions.
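The abstract does not detail the switching policy; the sketch below is a hypothetical illustration of how embedded soft-error-rate readings could drive autonomous selection among the three operating modes. The threshold values and the hysteresis rule are assumptions.

```python
# Illustrative mode-switching policy driven by a measured soft-error
# rate (SER). Thresholds and hysteresis are hypothetical.
from enum import Enum

class Mode(Enum):
    HIGH_PERFORMANCE = 1   # full throughput, least redundancy
    DE_STRESS = 2          # reduced clock/voltage to limit stress
    FAULT_TOLERANT = 3     # redundant execution, lowest throughput

SER_LOW, SER_HIGH = 1e-6, 1e-4   # hypothetical errors/s thresholds

def select_mode(ser, current):
    """Pick an operating mode from the observed soft-error rate."""
    if ser >= SER_HIGH:
        return Mode.FAULT_TOLERANT
    if ser >= SER_LOW:
        return Mode.DE_STRESS
    # Hysteresis: relax from fault-tolerant only via de-stress.
    if current is Mode.FAULT_TOLERANT and ser > SER_LOW / 10:
        return Mode.DE_STRESS
    return Mode.HIGH_PERFORMANCE

mode = Mode.HIGH_PERFORMANCE
for ser in (2e-7, 5e-5, 3e-4, 8e-7):   # example SER readings
    mode = select_mode(ser, mode)
    print(f"SER={ser:.1e} -> {mode.name}")
```

Stepping down through de-stress rather than jumping straight back to high performance avoids oscillating between modes when the radiation environment fluctuates around a threshold.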