Parkinson's disease (PD) shows high heterogeneity with regard to the underlying molecular pathogenesis, involving multiple pathways and mechanisms. Diagnosis is still challenging and rests entirely on clinical features. Thus, there is an urgent need for robust diagnostic biofluid markers. Untargeted metabolomics allows low-molecular-weight compound biomarkers to be established in a wide range of complex diseases by measuring various molecular classes in biofluids such as blood plasma, serum, and cerebrospinal fluid (CSF). Here, we applied untargeted high-resolution mass spectrometry to determine plasma and CSF metabolite profiles. We semiquantitatively determined small-molecule levels (<= 1.5 kDa) in the plasma and CSF from early PD patients (disease duration 0-4 years; n = 80 and 40, respectively) and sex- and age-matched controls (n = 76 and 38, respectively). We performed statistical analyses utilizing partial least squares and random forest analysis with a 70/30 training and testing split approach, leading to the identification of 20 promising plasma and 14 CSF metabolites. These metabolites differentiated the test set with an AUC of 0.8 (plasma) and 0.9 (CSF). Characteristics of the metabolites indicate perturbations in glycerophospholipid, sphingolipid, and amino acid metabolism in PD, which underscores the high power of metabolomic approaches. Further studies will enable the development of a metabolite-based biomarker panel specific for PD.
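The evaluation scheme described here (a 70/30 hold-out split scored by the area under the ROC curve) can be sketched in a few lines. This is an illustrative stand-in, not the study's pipeline: the subject data and the single "metabolite score" per subject are synthetic, and the PLS/random forest model-fitting step is omitted.

```python
import random

def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of the ROC AUC: the probability
    that a randomly chosen positive case outscores a negative one."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

random.seed(0)
# Synthetic stand-in: one "metabolite score" per subject,
# shifted upward in cases relative to controls (assumed effect size).
cases    = [random.gauss(1.0, 1.0) for _ in range(80)]   # n = 80, as in plasma cohort
controls = [random.gauss(0.0, 1.0) for _ in range(76)]   # n = 76 controls

# 70/30 split: the 70 % training portion would be used for model
# fitting (PLS / random forest in the study); here we only score
# the held-out 30 % test portion.
n_case_test, n_ctrl_test = int(0.3 * 80), int(0.3 * 76)
test_cases, test_controls = cases[:n_case_test], controls[:n_ctrl_test]

print(round(auc(test_cases, test_controls), 2))
```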
Primary progressive multiple sclerosis (PPMS) shows a highly variable disease progression with poor prognosis and a characteristic accumulation of disabilities in patients. These hallmarks of PPMS make it difficult to diagnose and currently impossible to efficiently treat. This study aimed to identify plasma metabolite profiles that allow diagnosis of PPMS and its differentiation from the relapsing remitting subtype (RRMS), primary neurodegenerative disease (Parkinson's disease, PD), and healthy controls (HCs) and that significantly change during the disease course and could serve as surrogate markers of multiple sclerosis (MS)-associated neurodegeneration over time. We applied untargeted high-resolution metabolomics to plasma samples to identify PPMS-specific signatures, validated our findings in independent sex- and age-matched PPMS and HC cohorts, and built discriminatory models by partial least squares discriminant analysis (PLS-DA). This signature was compared to sex- and age-matched RRMS patients, to patients with PD, and to HCs. Finally, we investigated these metabolites in a longitudinal cohort of PPMS patients over a 24-month period. PLS-DA yielded predictive models for classification along with a set of 20 PPMS-specific informative metabolite markers. These metabolites suggest disease-specific alterations in glycerophospholipid and linoleic acid pathways. Notably, the glycerophospholipid LysoPC(20:0) significantly decreased during the observation period. These findings show potential for diagnosis and disease course monitoring, and might serve as biomarkers to assess treatment efficacy in future clinical trials for neuroprotective MS therapies.
The spin probes 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO), 4-hydroxy-2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPOL), and 2,2,6,6-tetramethyl-4-trimethylammoniumpiperidine-1-oxyl iodide (CAT-1) are examined in a number of ionic liquids based on substituted imidazolium cations and tetrafluoroborate and hexafluorophosphate anions, respectively. The reorientation correlation times tau(R) of the spin probes in these systems have been determined by complete spectral simulation and, for rapid reorientation, by analysis of the intensities of the hyperfine lines of the electron spin resonance (ESR) spectra. The results are compared with those from the model system glycerol/water and from selected organic solvents. Addition of diamagnetic and paramagnetic ions allows the conclusion that salt effects and spin exchange are present, and that both are superimposed on motional effects. Specific interactions within the ionic liquids, as well as between the spin-probe molecules and the constituents of the ionic liquids, are reflected in the spectra of the spin probes, depending on their molecular structure.
And/Or reasoning graphs for determining prime implicants in multi-level combinational networks
(1997)
Syntheses of thiazolidine-fused heterocycles via exo-mode cyclizations of vinylogous N-acyliminium ions incorporating heteroatom-based nucleophiles have been examined and discussed. The formation of (5,6)-membered systems was feasible with all nucleophiles tried (O, S, and N), while closure of the five-membered ring was restricted to O- and S-nucleophiles. Closure of a four-membered ring failed; instead, the bicyclic (5,6)-membered acetal derivative and the tricyclic system with an eight-membered central ring were obtained from the substrates containing O and S nucleophilic moieties, respectively. The reaction outcome and stereochemistry are rationalized using quantum chemical calculations at the B3LYP/6-31G(d) level. The exclusive cis-stereoselectivity in the formation of the (5,6)- and (5,5)-membered systems results from thermodynamic control, whereas the formation of the eight-membered ring was kinetically controlled.
One of the key challenges in modern Facility Management (FM) is to digitally reflect the current state of the built environment, referred to as the as-is or as-built (as opposed to as-designed) representation. While the use of Building Information Modeling (BIM) can address the issue of digital representation, the generation and maintenance of BIM data require a considerable amount of manual work and domain expertise. Another key challenge is monitoring the current state of the built environment, which is used to provide feedback and enhance decision making. The need for an integrated solution for all data associated with the operational life cycle of a building is becoming more pronounced as practices from Industry 4.0 are currently being evaluated and adopted for FM use. This research presents an approach for digital representation of indoor environments in their current state within the life cycle of a given building. Such an approach requires the fusion of various sources of digital data. The key to solving this complex issue of digital data integration, processing, and representation lies in the use of a Digital Twin (DT). A DT is a digital duplicate of the physical environment, its states, and its processes. A DT fuses as-designed and as-built digital representations of the built environment with as-is data, typically in the form of floorplans, point clouds, and BIMs, with additional information layers pertaining to the current and predicted states of an indoor environment or a complete building (e.g., sensor data). The design, implementation, and initial testing of prototypical DT software services for indoor environments are presented and described. These DT software services are implemented within a service-oriented paradigm, and their feasibility is demonstrated through functioning and tested key software components within prototypical Service-Oriented System (SOS) implementations.
The main outcome of this research shows that key data related to the built environment can be semantically enriched and combined to enable digital representations of indoor environments, based on the concept of a DT. Furthermore, the outcomes of this research show that digital data, related to FM and Architecture, Construction, Engineering, Owner and Occupant (AECOO) activity, can be combined, analyzed and visualized in real-time using a service-oriented approach. This has great potential to benefit decision making related to Operation and Maintenance (O&M) procedures within the scope of the post-construction life cycle stages of typical office buildings.
The use of Building Information Modeling (BIM) for Facility Management (FM) in the Operation and Maintenance (O&M) stages of the building life-cycle is intended to bridge the gap between operations and digital data, but lacks the functionality of assessing the state of the built environment due to non-automated generation of associated semantics. 3D point clouds can be used to capture the physical state of the built environment, but also lack these associated semantics. A prototypical implementation of a service-oriented architecture for classification of indoor point cloud scenes of office environments is presented, using multiview classification. The multiview classification approach is tested using a retrained Convolutional Neural Network (CNN) model - Inception V3. The presented approach for classifying common office furniture objects (chairs, sofas and desks), contained in 3D point cloud scans, is tested and evaluated. The results show that the presented approach can classify common office furniture up to an acceptable degree of accuracy, and is suitable for quick and robust semantics approximation - based on RGB (red, green and blue color channel) cubemap images of the octree partitioned areas of the 3D point cloud scan. Additional methods for web-based 3D visualization, editing and annotation of point clouds are also discussed. Using the described approach, captured scans of indoor environments can be semantically enriched using object annotations derived from multiview classification results. Furthermore, the presented approach is suited for semantic enrichment of lower resolution indoor point clouds acquired using commodity mobile devices.
The rapid digitalization of the Facility Management (FM) sector has increased the demand for mobile, interactive analytics approaches concerning the operational state of a building. These approaches provide the key to increasing stakeholder engagement associated with Operation and Maintenance (O&M) procedures of living and working areas, buildings, and other built environment spaces. We present a generic and fast approach to process and analyze given 3D point clouds of typical indoor office spaces to create corresponding up-to-date approximations of classified segments and object-based 3D models that can be used to analyze, record and highlight changes of spatial configurations. The approach is based on machine-learning methods used to classify the scanned 3D point cloud data using 2D images. This approach can be used to primarily track changes of objects over time for comparison, allowing for routine classification, and presentation of results used for decision making. We specifically focus on classification, segmentation, and reconstruction of multiple different object types in a 3D point-cloud scene. We present our current research and describe the implementation of these technologies as a web-based application using a services-oriented methodology.
Electrosynthesized molecularly imprinted polyscopoletin nanofilms for human serum albumin detection
(2017)
Molecularly imprinted polymers (MIPs) rendered selective solely by imprinting with protein templates that lack distinctive properties to facilitate strong target-MIP interaction are likely to exhibit medium to low template binding affinities. While this prohibits the use of such MIPs for applications requiring the assessment of very low template concentrations, their implementation for the quantification of high-abundance proteins seems to have a clear niche in analytical practice. We investigated this opportunity by developing a polyscopoletin-based MIP nanofilm for the electrochemical determination of elevated human serum albumin (HSA) in urine. As a reference for a low-abundance protein, ferritin-MIPs were also prepared by the same procedure. Under optimal conditions, the imprinted sensors gave a linear response to HSA in the concentration range of 20-100 mg/dm³ and to ferritin in the range of 120-360 mg/dm³. While, as expected, the obtained limit of detection was not sufficient to determine endogenous ferritin in plasma, the HSA sensor was successfully employed to analyse urine samples of patients with albuminuria. The results suggest that MIP-based sensors may be applicable for quantifying high-abundance proteins in a clinical setting.
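A sensor with a linear response range like the one reported implies a simple calibration workflow: fit a least-squares line to signals measured at known standards, then invert it to read back unknowns. The sketch below uses invented signal values spanning the reported 20-100 mg/dm³ HSA range; the numbers are illustrative only, not the study's data.

```python
# Hypothetical calibration standards within the reported linear
# range (20-100 mg/dm3) and invented sensor signals (arbitrary units).
conc   = [20.0, 40.0, 60.0, 80.0, 100.0]
signal = [0.105, 0.198, 0.310, 0.395, 0.502]

# Ordinary least-squares fit of signal = slope * conc + intercept.
n = len(conc)
mx, my = sum(conc) / n, sum(signal) / n
slope = sum((x - mx) * (y - my) for x, y in zip(conc, signal)) \
        / sum((x - mx) ** 2 for x in conc)
intercept = my - slope * mx

def to_concentration(sig):
    """Invert the calibration line to estimate the HSA concentration
    of an unknown sample from its measured signal."""
    return (sig - intercept) / slope

print(round(to_concentration(0.30), 1))  # estimate for a measured signal of 0.30
```

In practice the read-back value is only trusted inside the calibrated range; signals mapping outside 20-100 mg/dm³ would require dilution or a different assay.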
Classical option pricing schemes assume that the value of a financial asset follows a geometric Brownian motion (GBM). However, a growing body of studies suggests that a simple GBM trajectory is not an adequate representation for asset dynamics, due to irregularities found when comparing its properties with empirical distributions. As a solution, we investigate a generalisation of GBM where the introduction of a memory kernel critically determines the behaviour of the stochastic process. We find the general expressions for the moments, log-moments, and the expectation of the periodic log returns, and then obtain the corresponding probability density functions using the subordination approach. In particular, we consider subdiffusive GBM (sGBM), tempered sGBM, a mix of GBM and sGBM, and a mix of sGBMs. We utilise the resulting generalised GBM (gGBM) in order to examine the empirical performance of a selected group of kernels in the pricing of European call options. Our results indicate that the performance of a kernel ultimately depends on the maturity of the option and its moneyness.
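The plain-GBM baseline that this work generalises can be made concrete with a short Monte Carlo pricer for a European call, checked against the closed-form Black-Scholes value. This sketch implements only the classical GBM case; the paper's gGBM variants would, under the subordination approach, replace calendar time with a random operational time drawn from the inverse subordinator of the chosen memory kernel.

```python
import math, random

def gbm_call_price_mc(s0, k, r, sigma, t, n_paths=100_000, seed=1):
    """Monte Carlo price of a European call under plain GBM:
    S_T = S_0 * exp((r - sigma^2/2) t + sigma sqrt(t) Z), Z ~ N(0,1)."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp((r - 0.5 * sigma * sigma) * t
                            + sigma * math.sqrt(t) * z)
        payoff_sum += max(s_t - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

def bs_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes benchmark for the same call."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

mc = gbm_call_price_mc(100, 100, 0.05, 0.2, 1.0)
exact = bs_call(100, 100, 0.05, 0.2, 1.0)
print(round(mc, 2), round(exact, 2))
```

The Monte Carlo estimate converges to the analytic price as the path count grows, which is what makes the simulation route attractive for the generalised kernels where no closed form exists.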
This paper employs a complex network approach to determine the topology and evolution of the network of extreme precipitation that governs the organization of extreme rainfall before, during, and after the Indian Summer Monsoon (ISM) season. We construct networks of extreme rainfall events during the ISM (June-September), post-monsoon (October-December), and pre-monsoon (March-May) periods from satellite-derived (Tropical Rainfall Measuring Mission, TRMM) and rain-gauge interpolated (Asian Precipitation Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources, APHRODITE) data sets. The structure of the networks is determined by the level of synchronization of extreme rainfall events between different grid cells throughout the Indian subcontinent. Through the analysis of various complex-network metrics, we describe typical repetitive patterns in North Pakistan (NP), the Eastern Ghats (EG), and the Tibetan Plateau (TP). These patterns appear during the pre-monsoon season, evolve during the ISM, and disappear during the post-monsoon season. These are important meteorological features that need further attention and that may be useful in ISM timing and strength prediction.
Forecasting the onset and withdrawal of the Indian summer monsoon is crucial for the life and prosperity of more than one billion inhabitants of the Indian subcontinent. However, accurate prediction of monsoon timing remains a challenge, despite numerous efforts. Here we present a method for predicting monsoon timing based on a critical transition precursor. We identify geographic regions (tipping elements of the monsoon) and use them as observation locations for predicting onset and withdrawal dates. Unlike most predictability methods, our approach does not rely on precipitation analysis but on air temperature and relative humidity, which are well represented both in models and observations. The proposed method allows prediction of the onset 2 weeks earlier and of withdrawal dates 1.5 months earlier than existing methods. In addition, it enables correct forecasting of monsoon duration for some anomalous years, often associated with the El Niño-Southern Oscillation.
The aim of this work is the study of silica Arrayed Waveguide Gratings (AWGs) in the context of applications in astronomy. The specific focus lies on the investigation of the feasibility and technology limits of customized silica AWG devices for high-resolution near-infrared spectroscopy. In a series of theoretical and experimental studies, AWG devices of varying geometry, footprint, and spectral resolution are constructed, simulated using a combination of a numerical beam propagation method and Fraunhofer diffraction, and the fabricated devices are characterized with respect to transmission efficiency, spectral resolution, and polarization sensitivity. The impact of effective index non-uniformities on the performance of high-resolution AWG devices is studied numerically. Characterization results of fabricated devices are used to extrapolate the technology limits of the silica platform. The important issues of waveguide birefringence and defocus aberration are discussed theoretically and addressed experimentally by selection of an appropriate aberration-minimizing anastigmatic AWG layout structure. The drawbacks of the anastigmatic AWG geometry are discussed theoretically. From the results of the experimental studies, it is concluded that fabrication-related phase errors and waveguide birefringence are the primary limiting factors for the growth of AWG spectral resolution. It is shown that, without post-processing, the spectral resolving power is phase-error-limited to R < 40,000 and, in the case of unpolarized light, birefringence-limited to R < 30,000 in the AWG devices presented in this work. Necessary measures, such as special waveguide geometries and post-fabrication phase error correction, are proposed for future designs. The elimination of defocus aberration using an anastigmatic AWG geometry is successfully demonstrated in experiment. Finally, a novel, non-planar dispersive in-fibre waveguide structure is proposed, discussed, and studied theoretically.
Professional GT endurance motorsport athletes (racing drivers) must meet high motor and cognitive demands without loss of performance during a race. At high speed they must remain focused and concentrated at all times, reacting to their car, the track, and their opponents. In addition, drivers are further challenged by the necessary in-car communication with the engineers and mechanics in the pit lane. Data on the actual physical strain and on frequently occurring complaints and/or injuries of professional athletes are scarce. To achieve the best possible in-car performance during a race, it is necessary to know not only the physical strain but also the common clinical conditions. On this basis, optimal prevention or the therapy needed for the fastest possible reintegration into the sport can be derived and developed. The present work uses regular health monitoring to record common complaints and/or injuries in GT endurance motorsport in order to derive a preventive (exercise-therapeutic) and therapeutic concept. Furthermore, based on the assessment of the athletes' physical capacity and the strain experienced in the race car, a possible season-dependent training concept is to be developed.
In total, over 15 years (2003-2017), 37 male GT endurance motorsport athletes were examined 353 times as part of a health-monitoring programme. Athletes received sports-medical care for at least 1 and at most 14 years. This twice-yearly examination essentially comprised a sports-medical assessment of fitness for the sport and a recording of physical capacity. Beyond the health monitoring, the athletes were also attended at the racetrack to further record complaints, illnesses, and injuries during their sport-specific exposure. In summary, the athletes show low prevalences and incidences of clinical conditions and complaints. Prevalences differ between the health examinations and the racetrack care. The most frequent complaints stem from orthopaedics and internal medicine: upper respiratory tract infections and allergies, alongside complaints of the lower extremity and the spine, are most common. Accordingly, primarily physiotherapeutic and exercise-therapeutic consequences are derived. Drug therapy takes place essentially during race support. To reduce orthopaedic and internal-medical complaints, preventive measures should be emphasised more strongly. Physical capacity shows essentially stable performance across the examination years for endurance, strength, and sensorimotor capacity. Endurance capacity can be rated as good to very good relative to the demands of the sport. Strength and sensorimotor capacity show sport-specific differences and should be considered relative to body weight.
A sports-medical and exercise-therapeutic concept would accordingly have to include regular medical examinations focusing on orthopaedics, internal medicine, and otolaryngology. In addition, regular assessment of physical capacity should be included to derive training content and preventive measures as effectively as possible. Given the extensive travel and the year-round season, a training camp held once or twice a year, in the sense of basic and build-up training to optimise capacity, could complement the concept. Medical race support also appears necessary.
Giuseppe Prezzolini
(2019)
The journalist Giuseppe Prezzolini (1882-1982) was one of the formative Italian intellectuals of the 20th century. The cultural journal »La Voce«, which he founded, provided a stage for influential voices of the time, including Giovanni Gentile, Benedetto Croce, and Benito Mussolini. Through his journalistic work he became a fixed intellectual reference point for conservative circles in Italy. His demands, among others, for a refounding of Italian conservatism apart from neo-fascist ideas established his controversial reputation as an anti-conformist.
Fluvial terraces, floodplains, and alluvial fans are the main landforms to store sediments and to decouple hillslopes from eroding mountain rivers. Such low-relief landforms are also preferred locations for humans to settle in otherwise steep and poorly accessible terrain. Abundant water and sediment as essential sources for buildings and infrastructure make these areas amenable places to live at. Yet valley floors are also prone to rare and catastrophic sedimentation that can overload river systems by abruptly increasing the volume of sediment supply, thus causing massive floodplain aggradation, lateral channel instability, and increased flooding. Some valley-fill sediments should thus record these catastrophic sediment pulses, allowing insights into their timing, magnitude, and consequences.
This thesis pursues this theme and focuses on a prominent ~150 km² valley fill in the Pokhara Valley just south of the Annapurna Massif in central Nepal. The Pokhara Valley is conspicuously broad and gentle compared to the surrounding dissected mountain terrain, and is filled with locally more than 70 m of clastic debris. The area's main river, the Seti Khola, descends from the Annapurna Sabche Cirque at 3500-4500 m asl down to 900 m asl, where it incises into this valley fill. Humans began to settle on this extensive fan surface in the 1750s, when the Trans-Himalayan trade route connected the Higher Himalayas, passing Pokhara city, with the subtropical lowlands of the Terai. High and unstable river terraces and steep gorges undermined by fast-flowing rivers with highly seasonal (monsoon-driven) discharge, a high earthquake risk, and a growing population make the Pokhara Valley an ideal place to study the recent geological and geomorphic history of its sediments and the implications for natural hazard appraisals.
The objective of this thesis is to quantify the timing, the sedimentologic and geomorphic processes, and the fluvial response to a series of strong sediment pulses. I report diagnostic sedimentary archives, lithofacies of the fan terraces, their geochemical provenance, radiocarbon ages, and the stratigraphic relationships between them. These various and independent lines of evidence consistently show that multiple sediment pulses filled the Pokhara Valley in medieval times, most likely in connection with, if not triggered by, strong seismic ground shaking. The geomorphic and sedimentary evidence is consistent with catastrophic fluvial aggradation tied to the timing of three medieval Himalayan earthquakes in ~1100, 1255, and 1344 AD. Sediment provenance and calibrated radiocarbon ages are the key to distinguishing three individual sediment pulses, as these are not evident from their sedimentology alone. I explore various measures of adjustment and fluvial response of the river system following these massive aggradation pulses. Using proxies such as net volumetric erosion, incision and erosion rates, clast provenance on active river banks, geomorphic markers such as re-exhumed tree trunks in growth position, and knickpoint locations in tributary valleys, I estimate the response of the river network in the Pokhara Valley to earthquake disturbance over several centuries. Estimates of the volumes removed since catastrophic valley filling began require average net sediment yields of up to 4200 t km⁻² yr⁻¹, rates that are consistent with those reported for Himalayan rivers. The lithological composition of active channel bedload differs from that of local bedrock material, confirming that rivers have adjusted 30-50%, depending on the tributary catchment, locally incising at rates of 160-220 mm yr⁻¹. In many tributaries of the Seti Khola, most of the contemporary river load comes from a Higher Himalayan source, thus excluding local hillslopes as sources. This imbalance in sediment provenance emphasizes how the medieval sediment pulses must have rapidly traversed up to 70 km downstream and invaded the downstream reaches of the tributaries up to 8 km upstream, thereby blocking the local drainage and reinforcing, or locally creating, floodplain lakes still visible in the landscape today.
Understanding the formation, origin, mechanisms, and geomorphic processes of this valley fill is crucial for understanding the landscape evolution and response to catastrophic sediment pulses. Several earthquake-triggered long-runout rock-ice avalanches or catastrophic dam bursts in the Higher Himalayas are the only plausible mechanisms to explain both the geomorphic and sedimentary legacy documented here. In any case, the Pokhara Valley was most likely hit by a cascade of extremely rare processes over some two centuries starting in the early 11th century. Nowhere else in the Himalayas do we find valley fills of comparable size and equally well documented depositional history, making the Pokhara Valley one of the most extensively dated valley fills in the Himalayas to date. Judging from the growing record of historic Himalayan earthquakes in Nepal that were traced and dated in fault trenches, this thesis shows that sedimentary archives can be used to directly aid reconstructions and predictions of both earthquake triggers and impacts from a sedimentary-response perspective. Knowledge of the timing, evolution, and response of the Pokhara Valley and its river system to earthquake-triggered sediment pulses is important for addressing the seismic and geomorphic risk for the city of Pokhara. This thesis demonstrates how geomorphic evidence of catastrophic valley infill can help to independently verify paleoseismological fault-trench records and may prompt a re-thinking of post-seismic hazard assessments in active mountain regions.
The use of topographic metrics for estimating the susceptibility to, and reconstructing the characteristics of, debris flows has a long research tradition, although largely devoted to humid mountainous terrain. The exceptional 2010 monsoonal rainstorms in the high-altitude mountain desert of Ladakh and Zanskar, NW India, were a painful reminder of how susceptible arid regions are to rainfall-triggered flash floods, landslides, and debris flows. The rainstorms of August 4-6 triggered numerous debris flows, killing 182 people, devastating 607 houses, and more than 10 bridges around Ladakh's capital of Leh. The lessons from this disaster motivated us to revisit methods of predicting (a) flow parameters such as peak discharge and maximum velocity from field and remote sensing data, and (b) the susceptibility to debris flows from catchment morphometry. We focus on quantifying uncertainties tied to these approaches. Comparison of high-resolution satellite images pre- and post-dating the 2010 rainstorm reveals the extent of damage and catastrophic channel widening. Computations based on these geomorphic markers indicate maximum flow velocities of 1.6-6.7 m s(-1) with runout of up to similar to 10 km on several alluvial fans that sustain most of the region's settlements. We estimate median peak discharges of 310-610 m(3) s(-1), which are largely consistent with previous estimates. Monte Carlo-based error propagation for a single given flow-reconstruction method returns a variance in discharge similar to one derived from juxtaposing several different flow reconstruction methods. We further compare discriminant analysis, classification tree modelling, and Bayesian logistic regression to predict debris-flow susceptibility from morphometric variables of 171 catchments in the Ladakh Range. 
These methods distinguish between fluvial and debris flow-prone catchments at similar success rates, but Bayesian logistic regression additionally allows uncertainties, and relationships between potential predictors, to be quantified. We conclude that, in order to be robust and reliable, morphometric reconstruction of debris-flow properties and susceptibility requires careful assessment and reporting of errors and uncertainties. (C) 2015 Elsevier B.V. All rights reserved.
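The morphometric classification described above can be illustrated with a toy logistic regression. This sketch uses a synthetic Melton-ratio predictor and a plain maximum-likelihood fit; the study's Bayesian variant would additionally place posterior distributions on the coefficients.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    if z < -60.0:          # guard against math.exp overflow
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic catchments: the Melton ratio (relief / sqrt(area)) is a classic
# morphometric predictor, and debris-flow catchments tend to score higher
def make_catchment(debris_flow):
    melton = random.gauss(0.9 if debris_flow else 0.4, 0.15)
    return [1.0, melton], 1 if debris_flow else 0   # leading 1.0 = intercept

data = [make_catchment(i % 2 == 0) for i in range(200)]

# Plain maximum-likelihood fit by batch gradient ascent
w = [0.0, 0.0]
for _ in range(2000):
    grad = [0.0, 0.0]
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for j in range(2):
            grad[j] += (y - p) * x[j]
    for j in range(2):
        w[j] += 0.1 * grad[j] / len(data)

def p_debris_flow(melton):
    """Predicted probability that a catchment is debris flow-prone."""
    return sigmoid(w[0] + w[1] * melton)

print(f"P(debris flow | Melton=1.0) = {p_debris_flow(1.0):.2f}")
print(f"P(debris flow | Melton=0.3) = {p_debris_flow(0.3):.2f}")
```

The fitted coefficient on the Melton ratio should come out positive, reproducing the expected direction of the relationship on this synthetic data.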
Mountain rivers respond to strong earthquakes by rapidly aggrading to accommodate excess sediment delivered by co-seismic landslides. Detailed sediment budgets indicate that rivers need several years to decades to recover from seismic disturbances, depending on how recovery is defined. We examine three principal proxies of river recovery after earthquake-induced sediment pulses around Pokhara, Nepal's second largest city. Freshly exhumed cohorts of floodplain trees in growth position indicate rapid and pulsed sedimentation that formed a fan covering 150 km² in a Lesser Himalayan basin with tens of metres of debris between the 11th and 15th centuries AD. Radiocarbon dates of buried trees are consistent with those of nearby valley deposits linked to major medieval earthquakes, such that we can estimate average rates of re-incision since. We combine high-resolution digital elevation data, geodetic field surveys, aerial photos, and dated tree trunks to reconstruct geomorphic marker surfaces. The volumes of sediment relative to these surfaces imply average net sediment yields of up to 4200 t km⁻² yr⁻¹ for the 650 years since the last inferred earthquake-triggered sediment pulse. The lithological composition of channel bedload differs from that of local bedrock, confirming that rivers are still mostly evacuating medieval valley fills, locally incising at rates of up to 0.2 m yr⁻¹. Pronounced knickpoints and epigenetic gorges at tributary junctions further illustrate the protracted fluvial response; only the distal portions of the earthquake-derived sediment wedges have been cut to near their base. Our results challenge the notion that mountain rivers recover speedily from earthquakes within years to decades. The valley fills around Pokhara show that even highly erosive Himalayan rivers may need more than several centuries to adjust to catastrophic perturbations.
Our results motivate some rethinking of post-seismic hazard appraisals and infrastructural planning in active mountain regions.
Low Earth orbiting geomagnetic satellite missions, such as the Swarm satellite mission, are the only means to monitor and investigate ionospheric currents on a global scale and to make in situ measurements of F region currents. High-precision geomagnetic satellite missions are also able to detect ionospheric currents during quiet-time geomagnetic conditions, whose magnetic signatures have amplitudes of only a few nanotesla. An efficient method to isolate the ionospheric signals from satellite magnetic field measurements has been the use of residuals between the observations and predictions from empirical geomagnetic models for other geomagnetic sources, such as the core and lithospheric field or signals from the quiet-time magnetospheric currents. This study aims at highlighting the importance of high-resolution magnetic field models that are able to predict the lithospheric field and that consider the quiet-time magnetosphere for reliably isolating signatures from ionospheric currents during geomagnetically quiet times. The effects on the detection of ionospheric currents arising from neglecting the lithospheric and magnetospheric sources are discussed using the example of four Swarm orbits during very quiet times. The respective orbits show a broad range of typical scenarios, such as strong and weak ionospheric signals (during day- and nighttime, respectively) superimposed over strong and weak lithospheric signals. If predictions from the lithosphere or magnetosphere are not properly considered, the amplitude of the ionospheric currents, such as the midlatitude Sq currents or the equatorial electrojet (EEJ), is modulated by 10-15 % in the examples shown. An analysis of several orbits above the African sector, where the lithospheric field is significant, showed that the peak value of the signatures of the EEJ is in error by 5 % on average when lithospheric contributions are not considered, which is in the range of uncertainties of present empirical models of the EEJ.
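The residual approach described above can be sketched on synthetic data. All amplitudes below are invented for illustration, not Swarm values: a Gaussian EEJ dip, fast-oscillating few-nT lithospheric anomalies, and a constant quiet-time magnetospheric offset. Omitting the lithospheric model from the subtraction biases the recovered EEJ peak.

```python
import math

# Synthetic along-track signal [nT] versus latitude (core field already removed)
def eej(lat):   return -20.0 * math.exp(-(lat / 3.0) ** 2)   # ionospheric target
def litho(lat): return 4.0 * math.cos(0.8 * lat)             # few-nT crustal anomalies
MAGSPH = -12.0                                               # quiet ring-current offset

lats = [0.5 * k for k in range(-120, 121)]                   # -60 to +60 deg latitude
observed = [eej(l) + litho(l) + MAGSPH for l in lats]

# Residuals: full source model vs. a model that neglects the lithosphere
resid_full = [o - litho(l) - MAGSPH for o, l in zip(observed, lats)]
resid_no_litho = [o - MAGSPH for o in observed]

peak_full = min(resid_full)          # EEJ appears as a negative peak
peak_no_litho = min(resid_no_litho)
bias = 100.0 * (abs(peak_no_litho) - abs(peak_full)) / abs(peak_full)
print(f"EEJ peak: {peak_full:.1f} nT (full model) vs {peak_no_litho:.1f} nT "
      f"({bias:+.0f}% bias without lithospheric model)")
```

With the full model the residual recovers the EEJ exactly; without the lithospheric term the peak is biased by a double-digit percentage, mirroring the order of effect reported in the abstract.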
Taking a new perspective
(2016)
Network analysis has attracted significant attention when researching the phenomenon of transnational terrorism, particularly Al Qaeda. While many scholars have made valuable contributions to mapping Al Qaeda, several problems remain due to a lack of data and the omission of data provided by international organizations such as the UN. Thus, this article applies a social network analysis and subsequent mappings of the data gleaned from the Security Council's consolidated sanctions list, and asks what they can demonstrate about the structure and organizational characteristics of Al Qaeda. The study maps the Al Qaeda network on a large scale using a newly compiled data set. The analysis reveals that the Al Qaeda network consists of several hundred individual and group nodes connected across almost the entire globe. Several major nodes are crucial for the network structure, while many other nodes connect to the network only weakly and primarily regionally. The article concludes that the findings tie in well with the latest research pointing to local and simultaneously global elements of Al Qaeda, and that the new data set is a valuable source for further analyses, potentially in combination with other data.
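The distinction above between a few structurally crucial nodes and many weakly connected ones can be illustrated with a minimal degree-centrality computation. The edge list is a hypothetical hub-and-spoke toy (names invented; the study's data come from the UN consolidated sanctions list).

```python
from collections import defaultdict

# Hypothetical undirected edge list in the style of a sanctions-list network
edges = [
    ("hub_A", "cell_1"), ("hub_A", "cell_2"), ("hub_A", "cell_3"),
    ("hub_A", "hub_B"), ("hub_B", "cell_4"), ("hub_B", "cell_5"),
    ("cell_1", "cell_2"), ("cell_6", "cell_7"),
]

adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

n = len(adj)
# Normalized degree centrality: fraction of other nodes a node links to
centrality = {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

for v, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{v:8s} {c:.2f}")
```

Even this toy reproduces the qualitative pattern: the hub nodes dominate the centrality ranking, while the peripheral cells connect only weakly, and one pair forms an isolated regional component.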
There are numerous situations in which people ask for something or make a request, e.g. asking a favor, asking for help, or requesting compliance with specific norms. For this reason, how to ask for something in order to increase people's willingness to fulfill such requests is one of the most important questions for people working in fields such as charitable giving, marketing, management, or policy making.
This dissertation consists of four chapters that deal with the effects of small changes in the decision-making environment on altruistic decision-making and compliance behavior. Most notably, written communication as an influencing factor is the focus of the first three chapters. The starting point was the question of how to devise a request in order to maximize its chance of success (Chapter 1). The results of the first chapter gave rise to the ideas for the second and third chapters. Chapter 2 analyzes how communication by a neutral third party, i.e. a text from the experimenters that either reminds potential benefactors of their responsibility or highlights their freedom of choice, affects altruistic decision-making. Chapter 3 elaborates on the effect of thanking people in advance when asking them for help. Although not as closely related to the first three chapters, the fourth chapter likewise deals with how compliance (here: with norms and rules) is affected by subtle manipulations of the environment in which decisions are made. This chapter analyzes the effect of default settings in a tax return on tax compliance.
In order to study the research questions outlined above, controlled experiments were conducted. Chapter 1, which analyzes the effect of text messages on the decision to give something to another person, employs a mini-dictator game. The recipient sends a free-form text message to the dictator before the latter makes a binary decision whether or not to give part of her or his endowment to the recipient. We find that putting effort into the message by writing a long note without spelling mistakes increases dictators’ willingness to give. Moreover, writing in a humorous way and mentioning reasons why the money is needed pays off. Furthermore, men and women seem to react differently to some message categories. Only men react positively to efficiency arguments, while only women react to messages that emphasize the dictator’s power and responsibility.
Building on this last result, Chapter 2 attempts to disentangle the effect of reminding potential benefactors of their responsibility for the potential beneficiary and the effect of highlighting their decision power and freedom of choice on altruistic decision-making by studying the effects of two different texts on giving in a dictator game. We find that only men react positively to a text that stresses their responsibility for the recipient by giving more to her or him, whereas only women seem to react positively to a text that emphasizes their decision power and freedom of choice.
Chapter 3 focuses on compliance with a request. In the experiment, participants are asked to provide a detailed answer to an open question. Compliance is measured by the effort participants spend on answering the question. The treatment variable is whether or not they see the text "thanks in advance." We find that participants react negatively to the phrase "thanks in advance" by putting less effort into complying with the request.
Chapter 4 studies the effect of prefilled tax returns with mostly inaccurate default values on tax compliance. In a laboratory experiment, participants earn income by performing a real-effort task and must subsequently file a tax return for three consecutive rounds. In the main treatment, the tax return is prefilled with a default value, resulting from participants’ own performance in previous rounds, which varies in its relative size. The results suggest that there is no lasting effect of a default value on tax honesty, neither for relatively low nor relatively high defaults. However, participants who face a default that is lower than their true income in the first round evade significantly and substantially more taxes in this round than participants in the control treatment without a default.
The competition between charge extraction and nongeminate recombination critically determines the current-voltage characteristics of organic solar cells (OSCs) and their fill factor. As a measure of this competition, several figures of merit (FOMs) have been put forward; however, the impact of space charge effects has been either neglected, or not specifically addressed. Here we revisit recently reported FOMs and discuss the role of space charge effects on the interplay between recombination and extraction. We find that space charge effects are the primary cause for the onset of recombination in so-called non-Langevin systems, which also depends on the slower carrier mobility and recombination coefficient. The conclusions are supported with numerical calculations and experimental results of 25 different donor/acceptor OSCs with different charge transport parameters, active layer thicknesses or composition ratios. The findings represent a conclusive understanding of bimolecular recombination for drift dominated photocurrents and allow one to minimize these losses for given device parameters.
Charge transport layers (CTLs) are key components of diffusion controlled perovskite solar cells, however, they can induce additional non-radiative recombination pathways which limit the open circuit voltage (V_OC) of the cell. In order to realize the full thermodynamic potential of the perovskite absorber, both the electron and hole transport layer (ETL/HTL) need to be as selective as possible. By measuring the photoluminescence yield of perovskite/CTL heterojunctions, we quantify the non-radiative interfacial recombination currents in pin- and nip-type cells including high efficiency devices (21.4%). Our study comprises a wide range of commonly used CTLs, including various hole-transporting polymers, spiro-OMeTAD, metal oxides and fullerenes. We find that all studied CTLs limit the V_OC by inducing an additional non-radiative recombination current that is in most cases substantially larger than the loss in the neat perovskite and that the least-selective interface sets the upper limit for the V_OC of the device. Importantly, the V_OC equals the internal quasi-Fermi level splitting (QFLS) in the absorber layer only in high efficiency cells, while in poor performing devices, the V_OC is substantially lower than the QFLS. Using ultraviolet photoelectron spectroscopy and differential charging capacitance experiments we show that this is due to an energy level misalignment at the p-interface. The findings are corroborated by rigorous device simulations which outline important considerations to maximize the V_OC. This work highlights that the challenge to suppress non-radiative recombination losses in perovskite cells on their way to the radiative limit lies in proper energy level alignment and in suppression of defect recombination at the interfaces.
Perovskite photovoltaic (PV) cells have demonstrated power conversion efficiencies (PCE) that are close to those of monocrystalline silicon cells; however, in contrast to silicon PV, perovskites are not limited by Auger recombination under 1-sun illumination. Nevertheless, compared to GaAs and monocrystalline silicon PV, perovskite cells have significantly lower fill factors due to a combination of resistive and non-radiative recombination losses. This necessitates a deeper understanding of the underlying loss mechanisms and in particular the ideality factor of the cell. By measuring the intensity dependence of the external open-circuit voltage and the internal quasi-Fermi level splitting (QFLS), the transport resistance-free efficiency of the complete cell as well as the efficiency potential of any neat perovskite film with or without attached transport layers are quantified. Moreover, intensity-dependent QFLS measurements on different perovskite compositions allow the impact of the interfaces and the perovskite surface on the non-radiative fill factor and open-circuit voltage loss to be disentangled. It is found that potassium-passivated triple cation perovskite films stand out by their exceptionally high implied PCEs > 28%, which could be achieved with ideal transport layers. Finally, strategies are presented to reduce both the ideality factor and transport losses to push the efficiency to the thermodynamic limit.
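The intensity-dependent open-circuit voltage measurement mentioned above can be sketched numerically: under the standard suns-Voc relation V_OC ≈ const + n(kT/q)ln(I), the ideality factor n follows from the slope of V_OC versus ln(intensity). The data below are synthetic, generated with an assumed n = 1.3, purely to show the fitting step.

```python
import math

kT_q = 0.02585  # thermal voltage at 300 K [V]

# Synthetic suns-Voc data generated with an assumed ideality factor of 1.3
n_true = 1.3
intensities = [0.01, 0.03, 0.1, 0.3, 1.0]  # illumination [suns]
voc = [1.10 + n_true * kT_q * math.log(i) for i in intensities]

# Least-squares slope of Voc versus ln(intensity) equals n * kT/q
x = [math.log(i) for i in intensities]
mean_x = sum(x) / len(x)
mean_y = sum(voc) / len(voc)
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, voc))
         / sum((xi - mean_x) ** 2 for xi in x))
n_fit = slope / kT_q
print(f"fitted ideality factor: {n_fit:.2f}")
```

On real devices the same fit applied to the internal QFLS instead of the external V_OC separates bulk/interface recombination from transport-resistance effects, which is the comparison the abstract describes.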
Flexible all-perovskite tandem photovoltaics open up new opportunities for application compared to rigid devices, yet their performance lags behind. Now, researchers show that molecule-bridged interfaces mitigate charge recombination and crack formation, improving the efficiency and mechanical reliability of flexible devices.
Optimizing the photoluminescence (PL) yield of a solar cell has long been recognized as a key principle to maximize the power conversion efficiency. While PL measurements are routinely applied to perovskite films and solar cells under open circuit conditions (V_OC), it remains unclear how the emission depends on the applied voltage. Here, we performed PL(V) measurements on perovskite cells with different hole transport layer thicknesses and doping concentrations, resulting in remarkably different fill factors (FFs). The results reveal that PL(V) mirrors the current-voltage (JV) characteristics in the power-generating regime, which highlights an interesting correlation between radiative and nonradiative recombination losses. In particular, high FF devices show a rapid quenching of PL(V) from open-circuit to the maximum power point. We conclude that, while the PL has to be maximized at V_OC, at lower biases (< V_OC) the PL must be rapidly quenched as charges need to be extracted prior to recombination.
Perovskite solar cells now compete with their inorganic counterparts in terms of power conversion efficiency, not least because of their small open-circuit voltage (V_OC) losses. A key to surpass traditional thin-film solar cells is the fill factor (FF). Therefore, more insights into the physical mechanisms that define the bias dependence of the photocurrent are urgently required. In this work, we studied charge extraction and recombination in efficient triple cation perovskite solar cells with undoped organic electron/hole transport layers (ETL/HTL). Using integral time of flight we identify the transit time through the HTL as the key figure of merit for maximizing the fill factor and efficiency. Complementarily, intensity dependent photocurrent and V_OC measurements elucidate the role of the HTL on the bias dependence of non-radiative and transport-related loss channels. We show that charge transport losses can be completely avoided under certain conditions, yielding devices with FFs of up to 84%. Optimized cells exhibit power conversion efficiencies of above 20% for 6 mm² sized pixels and 18.9% for a device area of 1 cm². These are record efficiencies for hybrid perovskite devices with dopant-free transport layers, highlighting the potential of this device technology to avoid charge-transport limitations and to approach the Shockley-Queisser limit.
The performance of perovskite solar cells is predominantly limited by non-radiative recombination, either through trap-assisted recombination in the absorber layer or via minority carrier recombination at the perovskite/transport layer interfaces. Here, we use transient and absolute photoluminescence imaging to visualize all non-radiative recombination pathways in planar pin-type perovskite solar cells with undoped organic charge transport layers. We find significant quasi-Fermi-level splitting losses (135 meV) in the perovskite bulk, whereas interfacial recombination results in an additional free energy loss of 80 meV at each individual interface, which limits the open-circuit voltage (V_OC) of the complete cell to ~1.12 V. Inserting ultrathin interlayers between the perovskite and transport layers leads to a substantial reduction of these interfacial losses at both the p and n contacts. Using this knowledge and approach, we demonstrate reproducible dopant-free 1 cm² perovskite solar cells surpassing 20% efficiency (19.83% certified) with stabilized power output, a high V_OC (1.17 V) and record fill factor (>81%).
Magmatic-hydrothermal systems form a variety of ore deposits at different proximities to upper-crustal hydrous magma chambers, ranging from greisenization in the roof zone of the intrusion, porphyry mineralization at intermediate depths to epithermal vein deposits near the surface. The physical transport processes and chemical precipitation mechanisms vary between deposit types and are often still debated.
The majority of magmatic-hydrothermal ore deposits are located along the Pacific Ring of Fire, whose eastern part is characterized by the Mesozoic to Cenozoic orogenic belts of western North and South America, namely the American Cordillera. Major magmatic-hydrothermal ore deposits along the American Cordillera include (i) porphyry Cu(-Mo-Au) deposits (along the western cordilleras of Mexico, the western U.S., Canada, Chile, Peru, and Argentina); (ii) Climax- (and sub-)type Mo deposits (Colorado Mineral Belt and northern New Mexico); and (iii) porphyry and IS-type epithermal Sn(-W-Ag) deposits of the Central Andean Tin Belt (Bolivia, Peru and northern Argentina).
The individual studies presented in this thesis primarily focus on the formation of different styles of mineralization located at different proximities to the intrusion in magmatic-hydrothermal systems along the American Cordillera. This includes (i) two individual geochemical studies on the Sweet Home Mine in the Colorado Mineral Belt (potential endmember of peripheral Climax-type mineralization); (ii) one numerical modeling study setup in a generic porphyry Cu-environment; and (iii) a numerical modeling study on the Central Andean Tin Belt-type Pirquitas Mine in NW Argentina.
Microthermometric data of fluid inclusions trapped in greisen quartz and fluorite from the Sweet Home Mine (Detroit City Portal) suggest that the early-stage mineralization precipitated from low- to medium-salinity (1.5-11.5 wt.% equiv. NaCl), CO2-bearing fluids at temperatures between 360 and 415°C and at depths of at least 3.5 km. Stable isotope and noble gas isotope data indicate that greisen formation and base metal mineralization at the Sweet Home Mine were related to fluids of different origins. Early magmatic fluids were the principal source for mantle-derived volatiles (CO2, H2S/SO2, noble gases), which subsequently mixed with significant amounts of heated meteoric water. Mixing of magmatic fluids with meteoric water is constrained by δ²Hw-δ¹⁸Ow relationships of fluid inclusions. The deep hydrothermal mineralization at the Sweet Home Mine shows features similar to deep hydrothermal vein mineralization at Climax-type Mo deposits or on their periphery. This suggests that fluid migration and the deposition of ore and gangue minerals in the Sweet Home Mine were triggered by a deep-seated magmatic intrusion.
The second study on the Sweet Home Mine presents Re-Os molybdenite ages of 65.86±0.30 Ma from a Mo-mineralized major normal fault, namely the Contact Structure, and multimineral Rb-Sr isochron ages of 26.26±0.38 Ma and 25.3±3.0 Ma from gangue minerals in greisen assemblages. The age data imply that mineralization at the Sweet Home Mine formed in two separate events: Late Cretaceous (Laramide-related) and Oligocene (Rio Grande Rift-related). Thus, the age of Mo mineralization at the Sweet Home Mine clearly predates that of the Oligocene Climax-type deposits elsewhere in the Colorado Mineral Belt. The Re-Os and Rb-Sr ages also constrain the latest deformation along the Contact Structure, which was exploited and/or crosscut by Late Cretaceous and Oligocene fluids, to between 62.77±0.50 Ma and 26.26±0.38 Ma. Along the Contact Structure, Late Cretaceous molybdenite is spatially associated with Oligocene minerals in the same vein system, a feature that precludes molybdenite recrystallization or reprecipitation by Oligocene ore fluids.
Ore precipitation in porphyry copper systems is generally characterized by metal zoning (Cu-Mo to Zn-Pb-Ag), which is suggested to be variably related to solubility decreases during fluid cooling, fluid-rock interactions, partitioning during fluid phase separation and mixing with external fluids. The numerical modeling study setup in a generic porphyry Cu-environment presents new advances of a numerical process model by considering published constraints on the temperature- and salinity-dependent solubility of Cu, Pb and Zn in the ore fluid. This study investigates the roles of vapor-brine separation, halite saturation, initial metal contents, fluid mixing, and remobilization as first-order controls of the physical hydrology on ore formation. The results show that the magmatic vapor and brine phases ascend with different residence times but as miscible fluid mixtures, with salinity increases generating metal-undersaturated bulk fluids. The release rates of magmatic fluids affect the location of the thermohaline fronts, leading to contrasting mechanisms for ore precipitation: higher rates result in halite saturation without significant metal zoning, lower rates produce zoned ore shells due to mixing with meteoric water. Varying metal contents can affect the order of the final metal precipitation sequence. Redissolution of precipitated metals results in zoned ore shell patterns in more peripheral locations and also decouples halite saturation from ore precipitation.
The epithermal Pirquitas Sn-Ag-Pb-Zn mine in NW Argentina is hosted in a domain of metamorphosed sediments without geological evidence for volcanic activity within a distance of about 10 km from the deposit. However, recent geochemical studies of ore-stage fluid inclusions indicate a significant contribution of magmatic volatiles. This study tested different formation models by applying an existing numerical process model for porphyry-epithermal systems with a magmatic intrusion located either at a distance of about 10 km underneath the nearest active volcano or hidden underneath the deposit. The results show that the migration of the ore fluid over a 10-km distance results in metal precipitation by cooling before the deposit site is reached. In contrast, simulations with a hidden magmatic intrusion beneath the Pirquitas deposit are in line with field observations, which include mineralized hydrothermal breccias in the deposit area.
Deep hydrothermal Mo, W, and base metal mineralization at the Sweet Home mine (Detroit City portal) formed in response to magmatic activity during the Oligocene. Microthermometric data of fluid inclusions trapped in greisen quartz and fluorite suggest that the early-stage mineralization at the Sweet Home mine precipitated from low- to medium-salinity (1.5-11.5 wt% equiv. NaCl), CO2-bearing fluids at temperatures between 360 and 415 °C and at depths of at least 3.5 km. Stable isotope and noble gas isotope data indicate that greisen formation and base metal mineralization at the Sweet Home mine were related to fluids of different origins. Early magmatic fluids were the principal source for mantle-derived volatiles (CO2, H2S/SO2, noble gases), which subsequently mixed with significant amounts of heated meteoric water. Mixing of magmatic fluids with meteoric water is constrained by δ²Hw-δ¹⁸Ow relationships of fluid inclusions. The deep hydrothermal mineralization at the Sweet Home mine shows features similar to deep hydrothermal vein mineralization at Climax-type Mo deposits or on their periphery. This suggests that fluid migration and the deposition of ore and gangue minerals in the Sweet Home mine was triggered by a deep-seated magmatic intrusion. The findings of this study are in good agreement with the results of previous fluid inclusion studies of the mineralization of the Sweet Home mine and from Climax-type Mo porphyry deposits in the Colorado Mineral Belt.
The Schwarzenberg mining district in the western Erzgebirge hosts numerous skarn-hosted tin-polymetallic deposits, such as Breitenbrunn. The St. Christoph mine is located in the Breitenbrunn deposit and is the locus typicus of christophite, an iron-rich sphalerite variety, which can be associated with indium enrichment. This study presents a revised paragenetic scheme, a contribution on the behavior and potential of indium, and a discussion of the origin of the sulfur. This was achieved through reflected light microscopy, SEM-based MLA, EPMA, and bulk mineral sulfur isotope analysis on 37 sulfide-rich skarn samples from a mineral collection. The paragenetic scheme includes: a pre-mineralization stage of anhydrous calc-silicates and hydrous minerals; an oxide stage, dominated by magnetite; and a sulfide stage of predominantly sphalerite, with minor pyrite, chalcopyrite, arsenopyrite, and galena. Some sphalerite samples present elevated indium contents of up to 0.44 wt%. Elevated iron contents (4-10 wt%) in sphalerite can be tentatively linked to increased indium incorporation, but further analyses are required. Analyzed sulfides exhibit homogeneous δ³⁴S values (-1 to +2 ‰ VCDT), assumed to be post-magmatic. They correlate with other Fe-Sn-Zn-Cu-In skarn deposits in the western Erzgebirge, and Permian vein-hosted associations throughout the Erzgebirge region.
The bitter taste warns the organism of potentially spoiled or toxic food and is thus an important control mechanism. In the mouse, the initial detection of the numerous bitter compounds is carried out by 35 bitter taste receptors (Tas2rs) located in the tongue tissue. The taste information is then relayed from the tongue via the peripheral nervous system (PNS) to the central nervous system (CNS), where it is processed. The processing of taste information has not yet been fully elucidated. Recent studies point to an expression of Tas2rs also in the PNS and CNS along the gustatory pathway. Little is known so far about the occurrence and functions of these receptors and receptor cells in the nervous system.
In this thesis, Tas2r expression was investigated in various mouse models, Tas2r-expressing cells were identified, and their functions in the transmission of taste information were analyzed. Expression analyses by qRT-PCR demonstrated the expression of 25 of the 35 known bitter taste receptors in the central nervous system of the mouse. The expression patterns in the PNS and CNS moreover suggest functions in various regions of the nervous system. Based on the results of the expression analyses, it was possible to visualize strongly expressed Tas2rs in different cell types by in situ hybridization. Furthermore, immunohistochemical staining using a genetically modified mouse model confirmed the results of the expression analyses. Taking the Tas2r131 receptor as an example, it revealed expression of Tas2rs in cholinergic, dopaminergic, GABAergic, noradrenergic, and glycinergically innervated projection neurons as well as in interneurons. The results of this work therefore demonstrate for the first time the presence of Tas2rs in various neuronal cell types across large parts of the CNS. This suggests that Tas2r-expressing cells potentially serve multiple functions. Behavioral experiments in genetically modified mice were used to investigate the possible function of Tas2r131-expressing neurons (Tas2r131 neurons) in taste perception. The results point to an involvement of Tas2r131 neurons in the signal transmission and processing of taste information for a subset of bitter substances. The analyses further show that Tas2r131 neurons are not involved in the perception of other bitter compounds or of taste stimuli of other qualities (sweet, umami, sour, salty).
A specific "Tas2r131 bitter-taste pathway", whose signaling routes and processing areas partly overlap with and are partly independent of other potential "bitter pathways", provides a possible cellular basis for discriminating between bitter compounds. The hypothesis of a potential discrimination of bitter compounds arising from this work should therefore be tested in follow-up studies by establishing a behavioral assay in mice.
A large body of research now supports the presence of both syntactic and lexical predictions in sentence processing. Lexical predictions, in particular, are considered to indicate a deep level of predictive processing that extends past the structural features of a necessary word (e.g. noun), right down to the phonological features of the lexical identity of a specific word (e.g. /kite/; DeLong et al., 2005). However, evidence for lexical predictions typically focuses on predictions in very local environments, such as the adjacent word or words (DeLong et al., 2005; Van Berkum et al., 2005; Wicha et al., 2004). Predictions in such local environments may be indistinguishable from lexical priming, which is transient and uncontrolled, and as such may prime lexical items that are not compatible with the context (e.g. Kukona et al., 2014). Predictive processing has been argued to be a controlled process, with top-down information guiding preactivation of plausible upcoming lexical items (Kuperberg & Jaeger, 2016). One way to distinguish lexical priming from prediction is to demonstrate that preactivated lexical content can be maintained over longer distances.
In this dissertation, separable German particle verbs are used to demonstrate that preactivation of lexical items can be maintained over multi-word distances. A self-paced reading time experiment and an eye tracking experiment provide some support for the idea that particle preactivation triggered by a verb and its context can be observed by holding the sentence context constant and manipulating the predictability of the particle. Although evidence of an effect of particle predictability was only seen in eye tracking, this is consistent with previous evidence suggesting that predictive processing facilitates only some eye tracking measures, to which the self-paced reading modality may not be sensitive (Staub, 2015; Rayner, 1998). Interestingly, manipulating the distance between the verb and the particle did not affect reading times, suggesting that the surprisal-predicted faster reading times at long distance may only occur when the additional distance is created by material that adds information about the lexical identity of a distant element (Levy, 2008; Grodner & Gibson, 2005). Furthermore, the results support models proposing that temporal decay is not a major influence on word processing (Lewandowsky et al., 2009; Vasishth et al., 2019).
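The surprisal account referenced above (Levy, 2008) ties a word's processing cost to its conditional probability in context. A minimal sketch of the relationship, with probability values invented purely for illustration (they are not estimates from the experiments or any corpus):

```python
import math

def surprisal(p):
    """Surprisal in bits for a word with conditional probability p in context."""
    return -math.log2(p)

# Hypothetical values: a highly predictable particle vs. an unpredictable one.
# Higher surprisal predicts slower reading at the particle site.
predictable = surprisal(0.8)     # ~0.32 bits
unpredictable = surprisal(0.05)  # ~4.32 bits
print(predictable, unpredictable)
```

Under this account, material intervening between verb and particle can lower the particle's surprisal only if it adds information about the particle's identity, which is consistent with the null distance effect reported above.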
In the third and fourth experiments, event-related potentials were used as a method for detecting specific lexical predictions. In the initial ERP experiment, we found some support for the presence of lexical predictions when the sentence context constrained the number of plausible particles to a single particle. This was suggested by a frontal post-N400 positivity (PNP) that was elicited when a lexical prediction had been violated, but not by violations when more than one particle had been plausible. The results of this study were highly consistent with previous research suggesting that the PNP might be a much sought-after ERP marker of prediction failure (DeLong et al., 2011; DeLong et al., 2014; Van Petten & Luka, 2012; Thornhill & Van Petten, 2012; Kuperberg et al., 2019). However, a second experiment with a larger sample failed to replicate the effect, suggesting that the relationship of the PNP to predictive processing is not yet fully understood. Evidence for long-distance lexical predictions was inconclusive.
The conclusion drawn from the four experiments is that preactivation of the lexical entries of plausible upcoming particles did occur and was maintained over long distances. The facilitatory effect of this preactivation at the particle site therefore did not appear to be the result of transient lexical priming. However, the question of whether this preactivation can also lead to lexical predictions of a specific particle remains unanswered. Of particular interest to future research on predictive processing is further characterisation of the PNP. Implications for models of sentence processing may be the inclusion of long-distance lexical predictions, or the possibility that preactivation of lexical material can facilitate reading times and ERP amplitude without commitment to a specific lexical item.
Much work has shown that differences in the timecourse of language processing are central to comparing native (L1) and non-native (L2) speakers. However, estimating the onset of experimental effects in timecourse data presents several statistical problems including multiple comparisons and autocorrelation. We compare several approaches to tackling these problems and illustrate them using an L1-L2 visual world eye-tracking dataset. We then present a bootstrapping procedure that allows not only estimation of an effect onset, but also of a temporal confidence interval around this divergence point. We describe how divergence points can be used to demonstrate timecourse differences between speaker groups or between experimental manipulations, two important issues in evaluating L2 processing accounts. We discuss possible extensions of the bootstrapping procedure, including determining divergence points for individual speakers and correlating them with individual factors like L2 exposure and proficiency. Data and an analysis tutorial are available at https://osf.io/exbmk/.
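The bootstrapping procedure described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code from the OSF repository: synthetic per-participant fixation proportions stand in for real visual world data, and the divergence criterion (a run of consecutive time bins with |t| above a threshold) is one common choice, not necessarily the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_bins = 40, 100

# Synthetic data (assumption): condition A rises above condition B from bin 40.
base = rng.normal(0.5, 0.05, size=(n_subj, n_bins))
effect = np.where(np.arange(n_bins) >= 40, 0.08, 0.0)
cond_a = base + effect + rng.normal(0, 0.05, size=(n_subj, n_bins))
cond_b = base + rng.normal(0, 0.05, size=(n_subj, n_bins))

def divergence_point(a, b, threshold=2.0, run=10):
    """First time bin where |t| exceeds threshold for `run` consecutive bins."""
    diff = a - b  # within-subject difference per bin
    t = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(diff.shape[0]))
    above = np.abs(t) > threshold
    for i in range(len(above) - run + 1):
        if above[i:i + run].all():
            return i
    return None

# Bootstrap over participants: resample with replacement, re-estimate the onset.
points = []
for _ in range(1000):
    idx = rng.integers(0, n_subj, n_subj)
    dp = divergence_point(cond_a[idx], cond_b[idx])
    if dp is not None:
        points.append(dp)

points = np.array(points)
estimate = np.median(points)
ci = np.percentile(points, [2.5, 97.5])  # temporal confidence interval
print(f"divergence point ≈ bin {estimate:.0f}, 95% CI [{ci[0]:.0f}, {ci[1]:.0f}]")
```

Resampling participants rather than trials preserves the within-subject time-course structure, and the percentile interval over bootstrap divergence points gives the temporal confidence interval around the onset described above.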
Intuitively, strongly constraining contexts should lead to stronger probabilistic representations of sentences in memory. Encountering unexpected words could therefore be expected to trigger costlier shifts in these representations than expected words. However, psycholinguistic measures commonly used to study probabilistic processing, such as the N400 event-related potential (ERP) component, are sensitive to word predictability but not to contextual constraint. Some research suggests that constraint-related processing cost may be measurable via an ERP positivity following the N400, known as the anterior post-N400 positivity (PNP). The PNP is argued to reflect update of a sentence representation and to be distinct from the posterior P600, which reflects conflict detection and reanalysis. However, constraint-related PNP findings are inconsistent. We sought to conceptually replicate Federmeier et al. (2007) and Kuperberg et al. (2020), who observed that the PNP, but not the N400 or the P600, was affected by constraint at unexpected but plausible words. Using a pre-registered design and statistical approach maximising power, we demonstrated a dissociated effect of predictability and constraint: strong evidence for predictability but not constraint in the N400 window, and strong evidence for constraint but not predictability in the later window. However, the constraint effect was consistent with a P600 and not a PNP, suggesting increased conflict between a strong representation and unexpected input rather than greater update of the representation. We conclude that either a simple strong/weak constraint design is not always sufficient to elicit the PNP, or that previous PNP constraint findings could be an artifact of smaller sample size.