- Institut für Umweltwissenschaften und Geographie (89)
Reducing greenhouse gas emissions in food systems is becoming more challenging as food is increasingly consumed away from producer regions, highlighting the need for agricultural emissions accounting to consider the emissions embodied in trade.
To address this, our study explores recent trends in trade-adjusted agricultural emissions of food items at the global, regional, and national levels.
We find that a country's emissions depend largely on its consumption patterns and on its agricultural emission intensities relative to those of its trading partners.
The absolute differences between the production-based and trade-adjusted emissions accounting approaches are especially apparent for major agricultural exporters and importers and where large shares of emission-intensive items such as ruminant meat, milk products and rice are involved.
In relative terms, some low-income, emerging, and developing economies that consume food products with high emission intensities show large differences between the two approaches.
Similar trends are also found under various specifications that account for trade and re-exports differently.
These findings could serve as an important element towards constructing national emissions reduction targets that consider trading partners, leading to more effective emissions reductions overall.
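The trade adjustment described above amounts to starting from production-based emissions, subtracting the emissions embodied in exports, and adding those embodied in imports. A minimal sketch with invented numbers for two hypothetical countries A and B (illustrative only, not the study's data or method):

```python
# Toy illustration of trade-adjusted (consumption-based) emissions
# accounting.  All figures are made up for two hypothetical countries.
production = {"A": 100.0, "B": 60.0}   # emissions from domestic agriculture
exports    = {"A": 30.0,  "B": 5.0}    # emissions embodied in exported food
imports    = {"A": 5.0,   "B": 30.0}   # emissions embodied in imported food

def trade_adjusted(country):
    """Production-based emissions minus exported, plus imported emissions."""
    return production[country] - exports[country] + imports[country]

for c in production:
    print(c, "production-based:", production[c],
          "trade-adjusted:", trade_adjusted(c))
```

Note that the adjustment redistributes emissions between trading partners without changing the global total, which is why the two accounting approaches diverge most for major exporters and importers.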
Flood risk management in Germany follows an integrative approach in which both private households and businesses can make an important contribution to reducing flood damage by implementing property-level adaptation measures. While the flood adaptation behavior of private households has already been widely researched, comparatively less attention has been paid to the adaptation strategies of businesses. However, their ability to cope with flood risk plays an important role in the social and economic development of a flood-prone region. Therefore, using quantitative survey data, this study aims to identify different strategies and adaptation drivers of 557 businesses damaged by a riverine flood in 2013 and 104 businesses damaged by pluvial or flash floods between 2014 and 2017. Our results indicate that a low perceived self-efficacy may be an important factor that can reduce the motivation of businesses to adapt to flood risk. Furthermore, property-owners tended to act more proactively than tenants. In addition, high experience with previous flood events and low perceived response costs could strengthen proactive adaptation behavior. These findings should be considered in business-tailored risk communication.
Fragmentation of peptides leaves characteristic patterns in mass spectrometry data, which can be used to identify protein sequences, but this method is challenging for mutated or modified sequences for which limited information exists. Altenburg et al. use an ad hoc learning approach to learn relevant patterns directly from unannotated fragmentation spectra.
Mass spectrometry-based proteomics provides a holistic snapshot of the entire protein set of living cells on a molecular level. Currently, only a few deep learning approaches exist that involve peptide fragmentation spectra, which represent partial sequence information of proteins.
Commonly, these approaches lack the ability to characterize less studied or even unknown patterns in spectra because of their use of explicit domain knowledge.
Here, to elevate unrestricted learning from spectra, we introduce 'ad hoc learning of fragmentation' (AHLF), a deep learning model that is end-to-end trained on 19.2 million spectra from several phosphoproteomic datasets. AHLF is interpretable, and we show that peak-level feature importance values and pairwise interactions between peaks are in line with corresponding peptide fragments.
We demonstrate our approach by detecting post-translational modifications, specifically protein phosphorylation based on only the fragmentation spectrum without a database search. AHLF increases the area under the receiver operating characteristic curve (AUC) by an average of 9.4% on recent phosphoproteomic data compared with the current state of the art on this task.
Furthermore, use of AHLF in rescoring search results increases the number of phosphopeptide identifications by a margin of up to 15.1% at a constant false discovery rate. To show the broad applicability of AHLF, we use transfer learning to also detect cross-linked peptides, as used in protein structure analysis, with an AUC of up to 94%.
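The AUC metric on which AHLF is evaluated can be illustrated with a minimal, self-contained sketch (the scores below are invented for illustration, not the study's data). The AUC equals the probability that a randomly chosen positive example scores higher than a randomly chosen negative one:

```python
# Minimal ROC-AUC via the rank (Mann-Whitney U) formulation.
# Ties between a positive and a negative score count as half a win.
def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of positives and negatives gives AUC = 1.0;
# random scoring hovers around 0.5.
print(roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```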
Dielectrophoresis (DEP) is an AC electrokinetic effect mainly used to manipulate cells. Smaller particles, like virions, antibodies, enzymes, and even dye molecules, can be immobilized by DEP as well. It has been shown in principle that enzymes remain active after immobilization by DEP, but the retained activity had not previously been quantified. In this study, the activity of the enzyme horseradish peroxidase (HRP) is quantified after immobilization by DEP. For this, HRP is immobilized on regular arrays of titanium nitride ring electrodes of 500 nm diameter and 20 nm width. The activity of HRP on the electrode chip is measured with a limit of detection of 60 fg HRP by observing the enzymatic turnover of Amplex Red and H2O2 to fluorescent resorufin by fluorescence microscopy. The initial activity of the permanently immobilized HRP equals up to 45% of the activity expected for an ideal monolayer of HRP molecules covering all electrodes of the array. Localization of the immobilizate on the electrodes is accomplished by staining with the fluorescent product of the enzyme reaction. The high residual activity of enzymes after AC-field-induced immobilization shows the method's suitability for biosensing and research applications.
Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established hypothesis is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with considerable heterogeneity among existing studies on the hypothesis and causal evidence still limited, a final verdict on its robustness is still pending. To contribute to this ongoing debate, we conducted a week-long randomized controlled trial with N = 381 adult Instagram users recruited via Prolific. Specifically, we tested how active SNS use, operationalized as picture posting on Instagram, affects different dimensions of well-being. The results showed a positive effect on users' positive affect but null findings for other well-being outcomes. The findings broadly align with the recent criticism of the active use hypothesis and support the call for a more nuanced view of the impact of SNSs.

Lay Summary: Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established assumption is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with great diversity among the studies conducted on the hypothesis and a lack of causal evidence, a final verdict on its viability is still pending. To contribute to this ongoing debate, we conducted a week-long experimental investigation with 381 adult Instagram users. Specifically, we tested how posting pictures on Instagram affects different aspects of well-being. The results showed a positive effect of posting Instagram pictures on users' experienced positive emotions but no effects on other aspects of well-being.
The findings broadly align with the recent criticism against the active use hypothesis and support the call for a more nuanced view on the impact of SNSs on users.
People explore successful communication throughout their lives. To convey information to others effectively, speakers employ various linguistic tools, such as word order, prosodic cues, and lexical choices. The study of these linguistic cues is known as the study of information structure (IS), and an important issue in child language acquisition is how children acquire IS. This thesis seeks to improve our understanding of how children acquire different means of focus marking (prosodic cues, syntactic cues, and the focus particle only) from a cross-linguistic perspective.
In the first study, following Szendrői and colleagues (2017), a sentence-picture verification task was used to investigate whether three- to five-year-old Mandarin-speaking children, as well as Mandarin-speaking adults, can apply prosodic information to recognize focus in sentences. The second study included not only Mandarin-speaking adults and children but also German-speaking adults and children, to test the assumption that children can show adult-like performance in understanding sentence focus by identifying the language-specific cues of their mother tongue from early on. This study employed the same sentence-picture verification paradigm as the first study, combined with eye tracking. The final study examined whether five-year-old Mandarin-speaking children can understand pre-subject only sentences and, again, whether prosodic information helps them to better understand such sentences.
The overall results suggest that Mandarin-speaking children can make use of the specific linguistic cues of their ambient language from early on. In Mandarin, a topic-prominent tone language, word order plays a more important role than prosodic information, and even three-year-old Mandarin-speaking children can follow word order cues. Although German-speaking children appeared to follow prosodic information, they did not show adult-like performance in the object-accented condition. A plausible reason is that German offers more ways of marking focus, such as flexible word order, prosodic information, and focus particles, so German-speaking children may need more time to master these linguistic tools. Another important empirical finding concerning syntactically marked focus in German is that the cleft construction does not appear to be a valid focus construction, corroborating previous observations (Dufter, 2009). Furthermore, the eye-tracking method helped to uncover how the parser directs attention when recognizing focus. The final study showed that, given an explicit verbal context, Mandarin-speaking children can understand pre-subject only sentences, providing a better understanding of how Mandarin-speaking children acquire the focus particle only.
This study examines access to healthcare for children and adolescents with three common chronic diseases (type-1 diabetes (T1D), obesity, or juvenile idiopathic arthritis (JIA)) within the 4th (Delta), 5th (Omicron), and beginning of the 6th (Omicron) wave (June 2021 until July 2022) of the COVID-19 pandemic in Germany, in a cross-sectional study using three national patient registries. A paper-and-pencil questionnaire was given to parents of pediatric patients (<21 years) during routine check-ups. The questionnaire contained self-constructed items assessing the frequency of healthcare appointments and cancellations, remote healthcare, and satisfaction with healthcare. In total, 905 parents participated in the T1D sample, 175 in the obesity sample, and 786 in the JIA sample. In general, satisfaction with healthcare (scale: 0–10; 10 reflecting the highest satisfaction) was quite high (median values: T1D 10, JIA 10, obesity 8.5). The proportion of children and adolescents with canceled appointments was relatively small (T1D 14.1%, JIA 11.1%, obesity 20%), with a median of one missed appointment in each group. Only a few parents (T1D 8.6%; obesity 13.1%; JIA 5%) reported obstacles regarding health services during the pandemic. To conclude, it seems that access to healthcare was largely preserved for children and adolescents with chronic health conditions during the COVID-19 pandemic in Germany.
This paper aims to contribute to exploring the design possibilities of robots for use in human-robot interaction. In an experiment, we investigate the influence of the human's personality and the robot's design, especially its humanization, on its acceptance. We use the Almere model, the Big 5 personality traits, and anthropomorphic gestalt variants as the foundation for our investigation. The assumption that an anthropomorphized robot variant would, in principle, be preferred to the standard variant in a forced choice could not be confirmed in our experiment. This allows for the interpretation that anthropomorphism does not necessarily lead to intentional perception and, consequently, does not automatically generate acceptance.
The use of alternating current (AC) electrokinetic forces, like dielectrophoresis and AC electroosmosis, as a simple and fast method to immobilize sub-micrometer objects onto nanoelectrode arrays is presented. Due to its medical relevance, the influenza virus is chosen as a model organism. One of the outstanding features is that the immobilization of viral material on the electrodes can be achieved permanently, allowing subsequent handling independently of the electrical setup. Thus, by using merely electric fields, we demonstrate that prior chemical surface modification could become obsolete. The accumulation of viral material over time is observed by fluorescence microscopy. The influence of side effects such as electrothermal fluid flow, which causes fluid motion above the electrodes and an intensity gradient within the electrode array, is discussed. Owing to the improved resolution obtained by combining fluorescence microscopy with deconvolution, it is shown that the viral material is mainly drawn to the electrode edge and, to a lesser extent, to the electrode surface. Finally, areas of application for this functionalization technique are presented.
Abzug unter Beobachtung
(2022)
For more than four decades, the armed forces and military intelligence services of the NATO states observed the Soviet troops in the GDR. In the Federal Republic of Germany, the Bundesnachrichtendienst (BND) was responsible for military foreign intelligence, using intelligence-service means and methods. The Bundeswehr, by contrast, conducted tactical signals and electronic intelligence and, above all, intercepted the radio traffic of the "Group of Soviet Forces in Germany" (GSSD). By establishing a central agency for military intelligence, the Amt für Nachrichtenwesen der Bundeswehr, the Federal Ministry of Defence consolidated and expanded its analytical capacities in the 1980s. The BND's monopoly on military foreign intelligence was thereby increasingly challenged by the Bundeswehr.
After German reunification on 3 October 1990, more than 300,000 Soviet soldiers were still stationed on German territory. The GSSD, renamed the Western Group of Forces (WGT) in 1989, was to withdraw completely by 1994 under the Two Plus Four Treaty. The treaty also prohibited the three Western powers from engaging in military activity in the new federal states. The Western powers' military liaison missions, until then indispensable for military intelligence, had to cease their services. But what happened to this "Allied legacy"? Who took over surveillance of the Soviet troops on the German side, and who monitored the troop withdrawal?
The study examines the role of the Bundeswehr and the BND during the withdrawal of the WGT between 1990 and 1994, asking about cooperation and competition between armed forces and intelligence services. What military and intelligence means and capabilities did the Federal Government provide to manage the troop withdrawal after the Western military liaison missions were dissolved? How did the demands on the BND's military foreign intelligence change? To what extent did competition and cooperation between the Bundeswehr and the BND continue during the withdrawal? What role did the former Western powers play? The work is intended as a contribution not only to military history but also to the history of the German intelligence services.
There is a longstanding and widely held misconception about the relative remoteness of abstract concepts from concrete experiences. This review examines the current evidence for external influences and internal constraints on the processing, representation, and use of abstract concepts, such as truth, friendship, and number. We highlight the theoretical benefit of distinguishing between grounded and embodied cognition and then ask what roles perception, action, language, and social interaction play in acquiring, representing, and using abstract concepts. Reviewing several studies, we show that abstract concepts are, contrary to the accepted definition, not detached from perception and action. Focusing on magnitude-related concepts, we also discuss evidence for cultural influences on abstract knowledge and explore how internal processes such as inner speech, metacognition, and inner bodily signals (interoception) influence the acquisition and retrieval of abstract knowledge. Finally, we discuss some methodological developments. Specifically, we focus on the importance of studies that investigate the time course of conceptual processing, and we argue that, because of the paramount role of sociality for abstract concepts, new methods are necessary to study concepts in interactive situations. We conclude that bodily, linguistic, and social constraints provide important theoretical limitations for our theories of conceptual knowledge.
We study the diffusive motion of a particle in a subharmonic potential of the form U(x) = |x|^c (0 < c < 2) driven by long-range correlated, stationary fractional Gaussian noise ξ_α(t) with 0 < α ≤ 2. In the absence of the potential the particle exhibits free fractional Brownian motion with anomalous diffusion exponent α. While for a harmonic external potential the dynamics converges to a Gaussian stationary state, extensive numerical analysis here demonstrates that stationary states for shallower-than-harmonic potentials exist only as long as the relation c > 2(1 − 1/α) holds. We analyse the motion in terms of the mean squared displacement and (when it exists) the stationary probability density function. Moreover, we discuss analogies to the non-stationarity of Lévy flights in shallow external potentials.
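The relations stated in the abstract can be collected in one place (a restatement of the quantities above, not a derivation):

```latex
% Free fractional Brownian motion: anomalous mean squared displacement
\langle x^2(t) \rangle \propto t^{\alpha}, \qquad 0 < \alpha \le 2,
% confined by a subharmonic potential
U(x) = |x|^{c}, \qquad 0 < c < 2,
% with a stationary state existing only if
c > 2\left(1 - \tfrac{1}{\alpha}\right).
```

Note that for normal diffusion (α = 1) the condition is satisfied for every c > 0, so the constraint only bites in the superdiffusive regime α > 1.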
This final report presents the results of the scientific evaluation of the workshop "Schule leiten" ("Leading Schools"). The workshop is an 18-month professional development programme for school leaders, designed by the Deutsche Schulakademie and conducted in cooperation with the Saarland Ministry of Education and Culture and the Saarland State Institute for Pedagogy and Media. Between 2016 and 2020, two members each of the leadership teams of general-education schools completed the workshop's various offerings, for the first time across a total of three cohorts. Participants were also tasked with planning, developing, and, under the workshop's guidance, implementing an individual school development project at their school. The University of Potsdam, research group of Prof. Dr. Dirk Richter, was commissioned to assess the perceived quality and the effectiveness of the programme. This report presents the evaluation results for cohorts 2 and 3.
The evaluation centres on the following research questions: (1) How do participants rate the quality of the workshop "Schule leiten"? (2) To what extent has the workshop helped strengthen participants' leadership competencies (including attitudes and leadership practice)? (3) How have school structures and processes for promoting school development changed in the participating schools as a result of the workshop? To answer these questions, a series of written surveys were conducted with the participating school leaders and with the teachers of the participating schools. The surveys took place both alongside the programme (after completion of the individual offerings) and in a pre-post-follow-up design. In addition, as part of a qualitative accompanying study, various persons (the principal, members of the school leadership team, teachers) at a total of five schools were interviewed at three points in time about how the planning, development, and implementation of the school development projects proceeded.
Overall, the findings indicate that the quality of the workshop "Schule leiten", both as a whole programme and in its individual offerings, is rated very positively. Participants of cohort 2 perceived the workshop more positively on a number of characteristics than participants of cohort 3. The results further suggest that certain facets of the participants' leadership practice changed positively over time; evidence for this comes both from the participants themselves and from the teachers at their schools. By contrast, participants' motivational characteristics, and aspects of leadership practice relating to the support of within-school cooperation, remained largely constant from the participants' perspective. Regarding changes in school structures for school development, the results point to a positive development in the perceived openness to cooperation among teaching staff, as seen by the teachers. The findings of the qualitative accompanying study provide further information on the perceived quality of the workshop and on changes among the participants and in school structures.
This final report presents the results of the BMBF-funded joint project "E-LANE: E-Learning in der Lehrerfortbildung: Angebote, Nutzung und Erträge" (e-learning in teacher professional development: offerings, use, and outcomes), conducted jointly by the University of Potsdam (Prof. Dr. Dirk Richter) and Leuphana University Lüneburg (Prof. Dr. Marc Kleinknecht). The aim of the project was to investigate the supply of digital and digitally supported professional development courses for teachers in the federal states of Berlin, Brandenburg, and Schleswig-Holstein. In four sub-studies, database analyses of the professional development offerings in the respective states were conducted, along with written surveys of facilitators and participants of online courses. In addition, an online course for teachers on the topic of feedback was designed and delivered specifically for the project.
Different lake systems may reflect different elements of climate change, and their responses are likewise diverse and not yet completely understood. A comparison of lakes in different climate zones during the high-amplitude and abrupt climate fluctuations of the Last Glacial to Holocene transition therefore provides an exceptional opportunity to investigate distinct natural lake system responses to different abrupt climate changes. The aim of this doctoral thesis was to reconstruct climatic and environmental fluctuations down to (sub-)annual resolution in two different lake systems during the Last Glacial-Interglacial transition (~17 and 11 ka). Lake Gościąż, situated in temperate central Poland, developed in the Allerød after recession of the Last Glacial ice sheets. The Dead Sea is located in the Levant (eastern Mediterranean) within a steep gradient from sub-humid to hyper-arid climate, and formed in the mid-Miocene. Despite their differences in sedimentation processes, both lakes form annual laminations (varves), which are crucial for studies of abrupt climate fluctuations. This doctoral thesis was carried out within the DFG project PALEX-II (Paleohydrology and Extreme Floods from the Dead Sea ICDP Core), which investigates extreme hydro-meteorological events in the ICDP core in relation to climate changes, and ICLEA (Virtual Institute of Integrated Climate and Landscape Evolution Analyses), which aims to improve the understanding of climate dynamics and landscape evolution in north-central Europe since the Last Glacial. Further, it contributes to the Helmholtz Climate Initiative REKLIM (Regional Climate Change and Humans) Research Theme 3 "Extreme events across temporal and spatial scales", which investigates extreme events using climate data, paleo-records, and model-based simulations.
The three main aims were to (1) establish robust chronologies of the lakes, (2) investigate how major and abrupt climate changes affect the lake systems, and (3) to compare the responses of the two varved lakes to these hemispheric-scale climate changes.
Robust chronologies are a prerequisite for highly resolved climate and environmental reconstructions, as well as for comparisons between archives. Thus, addressing the first aim, the novel chronology of Lake Gościąż was established by microscopic varve counting and, for a non-varved section, Bayesian age-depth modelling in Bacon, and was corroborated by independent age constraints from 137Cs activity concentration measurements, AMS radiocarbon dating, and pollen analysis. The varve chronology reaches from the late Allerød until AD 2015, revealing more Holocene varves than a previous study of Lake Gościąż suggested. Varve formation throughout the complete Younger Dryas (YD) even allowed the identification of annually- to decadally-resolved leads and lags in proxy responses at the YD transitions.
The lateglacial chronology of the Dead Sea (DS) was thus far based mainly on radiocarbon and U/Th dating. In the unique ICDP core from the deep lake centre, a continuous search for cryptotephra was carried out in lateglacial sediments between two prominent gypsum deposits, the Upper and Additional Gypsum Units (UGU and AGU, respectively). Two cryptotephras were identified whose glass analyses correlate with tephra deposits from the Süphan and Nemrut volcanoes, indicating that the AGU is ~1000 years younger than previously assumed. This shifts the AGU into the YD and the underlying varved interval into the Bølling/Allerød, contradicting previous assumptions.
Using microfacies analyses, stable isotopes and temperature reconstructions, the second aim was achieved at Lake Gościąż. The YD lake system was dynamic, characterized by higher aquatic bioproductivity, more re-suspended material and less anoxia than during the Allerød and Early Holocene, mainly influenced by stronger water circulation and catchment erosion due to stronger westerly winds and less lake sheltering. Cooling at the YD onset was ~100 years longer than the final warming, while environmental proxies lagged the onset of cooling by ~90 years, but occurred contemporaneously during the termination of the YD. Chironomid-based temperature reconstructions support recent studies indicating mild YD summer temperatures. Such a comparison of annually-resolved proxy responses to both abrupt YD transitions is rare, because most European lake archives do not preserve varves during the YD.
To accomplish the second aim at the DS, microfacies analyses were performed between the UGU (~17 ka) and the Holocene onset (~11 ka) in shallow-water (Masada) and deep-water (ICDP core) environments. This time interval is marked by a large but fluctuating lake-level drop, and therefore the complete transition into the Holocene is only recorded in the deep-basin ICDP core. In this thesis, this transition was investigated continuously and in detail for the first time. The final two pronounced lake-level drops, recorded by deposition of the UGU and AGU, were interrupted by one millennium of relative depositional stability and a positive water budget, as recorded by aragonite varve deposition interrupted by only a few event layers. Further, the intercalation of aragonite varves between the gypsum beds of the UGU and AGU shows that these generally dry intervals were also marked by decadal- to centennial-long rises in lake level. While continuous aragonite varves indicate decadal-long stable phases, the occurrence of thicker and more frequent event layers suggests generally greater instability during the gypsum units. These results point to a complex and variable hydroclimate at different time scales during the Lateglacial at the DS.
The third aim was accomplished on the basis of the individual studies above, which jointly provide an integrated picture of different lake responses to different climate elements of hemispheric-scale abrupt climate changes during the Last Glacial-Interglacial transition. In general, climatically driven facies changes are more dramatic in the DS than at Lake Gościąż. Further, Lake Gościąż is characterized by continuous varve formation throughout nearly the complete profile, whereas the DS record is widely characterized by extreme event layers, hampering the establishment of a continuous varve chronology. The lateglacial sedimentation in Lake Gościąż is mainly influenced by westerly winds and to a minor degree by changes in catchment vegetation, whereas the DS is primarily influenced by changes in winter precipitation, which are caused by temperature variations in the Mediterranean. Interestingly, sedimentation in both archives is more stable during the Bølling/Allerød and more dynamic during the YD, even though the sedimentation processes differ.
In summary, this doctoral thesis presents seasonally resolved records from two lake archives during the Lateglacial (ca 17-11 ka) to investigate the impact of abrupt climate changes on different lake systems. New age constraints from the identification of volcanic glass shards in the lateglacial sediments of the DS allowed the first lithology-based interpretation of the YD in the DS record and its comparison to Lake Gościąż. This highlights the importance of constructing a robust chronology and provides a first step towards synchronization of the DS with other eastern Mediterranean archives. Further, climate reconstructions from the lake sediments showed variability on different time scales in the different archives, i.e. decadal- to millennial-scale fluctuations in the lateglacial DS, and even annual variations and sub-decadal leads and lags in proxy responses during the rapid YD transitions in Lake Gościąż. This demonstrates the importance of comparing different lake archives to better understand the regional and local impacts of hemispheric-scale climate variability. An unprecedented example is given here of how different lake systems respond differently, and to different climate elements, during abrupt climate changes. This further highlights the importance of understanding the respective lake system for climate reconstructions.
This article asks whether signs of a transmedial aesthetic can be found in texts that Josef Winkler published around the turn of the millennium. The first part works out thematic and structural commonalities between Natura morta (1998) and Wenn es soweit ist (2001). The analysis focuses on the device of repetition and, connected with it, on continuities between the two texts. Second, the organizational and structural principles of the texts are analyzed and compared by means of two spatial models, a model of sedimentation and a model of accumulation. Third, starting from the principle of repetition underlying both the sedimentation and the accumulation model, performative aspects of Winkler's narration are demonstrated and first traces of a transmedial aesthetic in Winkler's œuvre are uncovered.
A task-based parallel elliptic solver for numerical relativity with discontinuous Galerkin methods
(2022)
Elliptic partial differential equations are ubiquitous in physics. In numerical relativity---the study of computational solutions to the Einstein field equations of general relativity---elliptic equations govern the initial data that seed every simulation of merging black holes and neutron stars. In the quest to produce detailed numerical simulations of these most cataclysmic astrophysical events in our Universe, numerical relativists resort to the vast computing power offered by current and future supercomputers. To leverage these computational resources, numerical codes for the time evolution of general-relativistic initial value problems are being developed with a renewed focus on parallelization and computational efficiency. Their capability to solve elliptic problems for accurate initial data must keep pace with the increasing detail of the simulations, but elliptic problems are traditionally hard to parallelize effectively.
In this thesis, I develop new numerical methods to solve elliptic partial differential equations on computing clusters, with a focus on initial data for orbiting black holes and neutron stars. I develop a discontinuous Galerkin scheme for a wide range of elliptic equations, and a stack of task-based parallel algorithms for their iterative solution. The resulting multigrid-Schwarz preconditioned Newton-Krylov elliptic solver proves capable of parallelizing over 200 million degrees of freedom to at least a few thousand cores, and already solves initial data for a black hole binary about ten times faster than the numerical relativity code SpEC. I also demonstrate the applicability of the new elliptic solver across physical disciplines, simulating the thermal noise in thin mirror coatings of interferometric gravitational-wave detectors to unprecedented accuracy. The elliptic solver is implemented in the new open-source SpECTRE numerical relativity code, and set up to support simulations of astrophysical scenarios for the emerging era of gravitational-wave and multimessenger astronomy.
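The solver architecture described above combines a Newton iteration with Krylov-subspace linear solves. As a rough, toy-scale illustration of that Newton-Krylov pattern (not the SpECTRE discontinuous Galerkin scheme, and without the multigrid-Schwarz preconditioner), one can solve a small 1D nonlinear elliptic problem with GMRES inside each Newton step:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Toy Newton-Krylov solve of -u'' + u^3 = f on (0, 1), u(0) = u(1) = 0.
# Illustrative only; unrelated to the actual SpECTRE discretization.
n = 99
h = 1.0 / (n + 1)
f = np.ones(n)  # arbitrary source term

# Second-order finite-difference Laplacian with Dirichlet boundaries
L = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2

u = np.zeros(n)
for _ in range(20):                      # outer Newton iteration
    residual = L @ u + u**3 - f
    if np.linalg.norm(residual) < 1e-10:
        break
    J = L + diags(3.0 * u**2)            # Jacobian of the nonlinear residual
    du, info = gmres(J, -residual)       # inner Krylov (GMRES) linear solve
    u += du

print(np.linalg.norm(L @ u + u**3 - f))  # small final residual
```

In a production solver the inner linear solve is where preconditioning (e.g. multigrid or Schwarz methods) and parallelization pay off, since GMRES only needs matrix-vector products.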
Little is known about the current state of research on the involvement of young people in hate speech. This systematic review therefore presents findings on a) the prevalence of hate speech among children and adolescents, and the hate speech definitions that guide prevalence assessments for this population; and b) the theoretical and empirical overlap of hate speech with related concepts. The review followed the Cochrane approach. To be included, publications had to deal with real-life experiences of hate speech, provide empirical prevalence data for samples aged 5 to 21 years, and be published in academic formats. Included publications were full-text coded by two raters (kappa = .80) and their quality was assessed. The string-guided electronic search (ERIC, SocInfo, Psycinfo, Psyndex) yielded 1,850 publications; eighteen publications based on 10 studies met the inclusion criteria, and their findings were systematized. Twelve publications were of medium quality due to minor deficiencies in their theoretical or methodological foundations. All studies used samples of adolescents; none included younger children. Nine of the 10 studies applied quantitative methodologies. Results showed that frequencies of hate speech exposure were higher than those of victimization and perpetration. Definitions of hate speech and assessment instruments were heterogeneous. Empirical evidence for an often-theorized overlap between hate speech and bullying was found. The paper concludes by presenting a definition of hate speech and discussing implications for practice, policy, and research.
Groundwater recharge (GWR) is one of the most challenging water fluxes to estimate, as it relies on observed data that are often limited in many developing countries.
This study developed an innovative water budget method using satellite products for estimating the spatially distributed GWR at monthly and annual scales in tropical wet sedimentary regions despite cloudy conditions.
The distinctive features proposed in this study include the capacity to address 1) evapotranspiration estimations in tropical wet regions frequently overlaid by substantial cloud cover; and 2) seasonal root-zone water storage estimations in sedimentary regions prone to monthly variations.
The method also utilises satellite-based information on precipitation and surface runoff. The GWR was estimated and validated for the hydrologically contrasting years 2016 and 2017 over a tropical wet sedimentary region in North-eastern Brazil, which has substantial potential for groundwater abstraction.
This study showed that applying a cloud-cleaning procedure based on monthly compositions of biophysical data yields a reasonable proxy for evapotranspiration, enabling groundwater recharge to be estimated by the water budget method.
The resulting GWR rates were 219 (2016) and 302 (2017) mm yr(-1), showing good correlations (CC = 0.68 to 0.83) and slight underestimations (PBIAS = -13% to -9%) when compared with reference estimates obtained by the water table fluctuation method for 23 monitoring wells. Sensitivity analysis shows that water storage changes account for +19% to -22% of our monthly evaluation.
The satellite-based approach consistently demonstrated that the consideration of cloud-cleaned evapotranspiration and root-zone soil water storage changes are essential for a proper estimation of spatially distributed GWR in tropical wet sedimentary regions because of their weather seasonality and cloudy conditions.
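The water budget logic behind the method can be condensed to a per-month balance, GWR = P - ET - Q - ΔS. The sketch below uses made-up monthly values in mm (the study derives P, ET and Q from satellite products and ΔS from root-zone storage estimates), so it illustrates only the bookkeeping, not the cloud-cleaning or validation steps:

```python
import numpy as np

# Monthly water-budget sketch: GWR = P - ET - Q - dS, all in mm/month.
# Six months of invented values for illustration only.
precip  = np.array([180.0, 220.0, 150.0,  60.0,  10.0,   5.0])  # P
evapo   = np.array([ 90.0, 100.0,  95.0,  80.0,  70.0,  60.0])  # ET (cloud-cleaned)
runoff  = np.array([ 20.0,  35.0,  15.0,   5.0,   1.0,   0.0])  # surface runoff Q
storage = np.array([ 30.0,  40.0, -10.0, -40.0, -50.0, -45.0])  # root-zone dS

gwr = precip - evapo - runoff - storage
gwr = np.clip(gwr, 0.0, None)  # in this simple sketch, recharge cannot be negative

annual_gwr = gwr.sum()         # mm over the six synthetic months
print(annual_gwr)
```

Negative storage change (soil drying) adds to the balance, which is why the dry-season months still close to zero recharge rather than going negative.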
A rigorous construction of the supersymmetric path integral associated to a compact spin manifold
(2022)
We give a rigorous construction of the path integral in N = 1/2 supersymmetry as an integral map for differential forms on the loop space of a compact spin manifold. It is defined on the space of differential forms which can be represented by extended iterated integrals in the sense of Chen and Getzler-Jones-Petrack. Via the iterated integral map, we compare our path integral to the non-commutative loop space Chern character of Guneysu and the second author. Our theory provides a rigorous background to various formal proofs of the Atiyah-Singer index theorem for twisted Dirac operators using supersymmetric path integrals, as investigated by Alvarez-Gaume, Atiyah, Bismut and Witten.
Starch is a complex carbohydrate polymer produced by plants, and especially by crops, in huge amounts. It consists of amylose and amylopectin, which are built from alpha-1,4- and alpha-1,6-linked glucose units. Despite this simple chemistry, the entire starch metabolism is complex, involving various (iso)enzymes/proteins whose interplay is not yet fully understood. Starch is essential for humans and animals as a source of nutrition and energy. Nowadays, starch is also commonly used in non-food industrial sectors for a variety of purposes. However, native starches do not always satisfy the needs of a wide range of (industrial) applications. This review summarizes the structural properties of starch, analytical methods for starch characterization, and in planta starch modifications.
A review of source models to further the understanding of the seismicity of the Groningen field
(2022)
The occurrence of felt earthquakes due to gas production in Groningen has initiated numerous studies and model attempts to understand and quantify induced seismicity in this region. The whole bandwidth of available models spans the range from fully deterministic models to purely empirical and stochastic models. In this article, we summarise the most important model approaches, describing their main achievements and limitations. In addition, we discuss remaining open questions and potential future directions of development.
Manufacturing companies still have relatively few points of contact with the circular economy. In particular, extending the lifetime of whole products or parts via remanufacturing is a promising approach to reducing waste. However, the necessary cost-efficient assessment of the condition of individual parts is challenging, and the assessment procedures are technically complex (e.g., scanning and testing procedures). Furthermore, these assessment procedures are usually only available after the disassembly process has been completed. This is where conceptualization, data acquisition and simulation of remanufacturing processes can help. One major constraint on remanufacturing is the need to reduce logistic efforts, since these also have negative external effects on the environment. Regionalization is thus an additional but ultimately consequential challenge for remanufacturing. This article aims to fill a gap by providing a regional remanufacturing approach, in particular the design of local remanufacturing chains. A further focus lies on modeling and simulating alternative courses of action, including a feasibility study and economic assessment.
Fluctuating asymmetries (FA) are small stress-induced random deviations from perfect symmetry that arise during the development of bilaterally symmetrical traits. One of the factors that can reduce developmental stability of the individuals and cause FA at a population level is the loss of genetic variation. Populations of founding colonists frequently have lower genetic variation than their ancestral populations that could be reflected in a higher level of FA. The European starling (Sturnus vulgaris) is native to Eurasia and was introduced successfully in the USA in 1890 and Argentina in 1983. In this study, we documented the genetic diversity and FA of starlings from England (ancestral population), USA (primary introduction) and Argentina (secondary introduction). We predicted the Argentinean starlings would have the highest level of FA and lowest genetic diversity of the three populations. We captured wild adult European starlings in England, USA, and Argentina, measured their mtDNA diversity and allowed them to molt under standardized conditions to evaluate their FA of primary feathers. For genetic analyses, we extracted DNA from blood samples of individuals from Argentina and USA and from feather samples from individuals from England and sequenced the mitochondrial control region. Starlings in Argentina showed the highest composite FA and exhibited the lowest haplotype and nucleotide diversity. The USA population showed a level of FA and genetic diversity similar to the native population. Therefore, the level of asymmetry and genetic diversity found among these populations was consistent with our predictions based on their invasion history.
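The core quantity behind such analyses, the unsigned left-right difference, is simple to compute. The sketch below uses invented feather-length measurements and a plain mean-of-|L - R| composite, which may differ from the exact composite index used in the study:

```python
import numpy as np

# Unsigned fluctuating-asymmetry (FA) sketch from paired left/right
# measurements (e.g. primary feather lengths in mm). Data are invented.
left  = np.array([[70.2, 68.5, 66.1],    # individual 1, traits 1-3
                  [71.0, 69.3, 65.8]])   # individual 2
right = np.array([[70.6, 68.1, 66.3],
                  [70.5, 69.9, 65.6]])

fa_per_trait = np.abs(left - right)       # unsigned asymmetry per trait
composite_fa = fa_per_trait.mean(axis=1)  # simple composite FA per individual
print(composite_fa)
```

Population-level comparisons, as in the study, would then contrast the distribution of such composite values between the ancestral and introduced populations.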
As the use of free electron laser (FEL) sources increases, so do reports of non-linear phenomena occurring in these experiments, such as saturable absorption, induced transparency and scattering breakdowns. These are well known in the laser community, but are still rarely understood or anticipated in the X-ray community, which to date lacks tools and theories to accurately predict the respective experimental parameters and results. We present a simple theoretical framework to access the light-matter interactions induced by the intense short X-ray pulses available at FEL sources. Our approach makes it possible to investigate effects such as saturable absorption, induced transparency and scattering suppression, stimulated emission, and transmission spectra, while including the density-of-states influence relevant to soft X-ray spectroscopy in, for example, transition metal complexes or functional materials. This computationally efficient, rate-model-based approach is intuitively adaptable to most solid-state sample systems in the soft X-ray spectrum, with the potential to be extended to liquid and gas sample systems as well. The feasibility of the model to estimate the named effects and the influence of the density of states is demonstrated using the example of CoPd transition metal systems at the Co edge. We believe this work is an important contribution to the preparation, performance, and understanding of FEL-based high-intensity and short-pulse experiments, especially on functional materials in the soft X-ray spectrum.
The digital transformation imposes new requirements on all classes of enterprise systems in companies. ERP systems in particular, which represent the dominant class of enterprise systems, are struggling to meet the new requirements at all levels of the architecture. Therefore, there is an urgent need to reconsider the overall architecture of these systems and address the root of the related issues. Given that many of the restrictions ERP systems place on adaptability are related to the standardization of data, the database layer of ERP systems is addressed. Since databases serve as the foundation for data storage and retrieval, they limit the flexibility of enterprise systems and their ability to adapt to new requirements. So far, relational databases are widely used. Using a systematic literature approach, recent requirements for ERP systems were identified, and prominent database approaches were assessed against the 23 requirements identified. The results reveal the strengths and weaknesses of recent database approaches and highlight the need to combine multiple database approaches to fulfill recent business requirements. From a conceptual point of view, this paper supports the idea of interoperable federated databases to fulfill future requirements and support business operation. This research forms the basis for a renewal of the current generation of ERP systems and proposes that ERP vendors use different database concepts in the future.
Hegel's many remarks that seem to imply that philosophy should proceed completely a priori pose a problem for his philosophy of nature since, on this reading, Hegel offers an a priori derivation of empirical results of natural sciences. We show how this perception can be mitigated by interpreting Hegel's remarks as broadly in line with the pre-Kantian rationalist notion of a priori and offer reasons for doing so. We show that, rather than being a peculiarity of Hegel's philosophy, the practice of demonstrating a priori the results of empirical sciences was widespread in the pre-Kantian rationalist tradition. We argue that this practice was intelligible in light of the notion of a priori that was still quite prominent during Hegel's life. This notion of a priori differs from Kant's in that, while the latter's notion concerns propositions, the former concerned only their demonstration. According to it, the same proposition could be demonstrated both a posteriori and a priori. Post-Kantian idealists likewise developed projects of demonstrating specific scientific contents a priori. We then make our discussion more concrete by examining a particular case of an a priori derivation of a natural law, namely the law of fall, by both Leibniz and Hegel.
A novel approach for estimating precipitation patterns is developed here and applied to generate a new hydrologically corrected daily precipitation dataset, called RAIN4PE (Rain for Peru and Ecuador), at 0.1 degrees spatial resolution for the period 1981-2015 covering Peru and Ecuador. It is based on the application of 1) the random forest method to merge multisource precipitation estimates (gauge, satellite, and reanalysis) with terrain elevation, and 2) observed and modeled streamflow data to first detect biases and second further adjust gridded precipitation by inversely applying the simulated results of the ecohydrological model SWAT (Soil and Water Assessment Tool). Hydrological results using RAIN4PE as input for the Peruvian and Ecuadorian catchments were compared against the ones when feeding other uncorrected (CHIRP and ERA5) and gauge-corrected (CHIRPS, MSWEP, and PISCO) precipitation datasets into the model. For that, SWAT was calibrated and validated at 72 river sections for each dataset using a range of performance metrics, including hydrograph goodness of fit and flow duration curve signatures. Results showed that gauge-corrected precipitation datasets outperformed uncorrected ones for streamflow simulation. However, CHIRPS, MSWEP, and PISCO showed limitations for streamflow simulation in several catchments draining into the Pacific Ocean and the Amazon River. RAIN4PE provided the best overall performance for streamflow simulation, including flow variability (low, high, and peak flows) and water budget closure. The overall good performance of RAIN4PE as input for hydrological modeling provides a valuable criterion of its applicability for robust countrywide hydrometeorological applications, including hydroclimatic extremes such as droughts and floods. 
Significance StatementWe developed a novel precipitation dataset RAIN4PE for Peru and Ecuador by merging multisource precipitation data (satellite, reanalysis, and ground-based precipitation) with terrain elevation using the random forest method. Furthermore, RAIN4PE was hydrologically corrected using streamflow data in watersheds with precipitation underestimation through reverse hydrology. The results of a comprehensive hydrological evaluation showed that RAIN4PE outperformed state-of-the-art precipitation datasets such as CHIRP, ERA5, CHIRPS, MSWEP, and PISCO in terms of daily and monthly streamflow simulations, including extremely low and high flows in almost all Peruvian and Ecuadorian catchments. This underlines the suitability of RAIN4PE for hydrometeorological applications in this region. Furthermore, our approach for the generation of RAIN4PE can be used in other data-scarce regions.
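The merging step described here, regressing gauge observations on satellite and reanalysis estimates plus elevation with a random forest, can be sketched with scikit-learn on synthetic data (all arrays below are invented stand-ins, not RAIN4PE inputs):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins for the covariates merged in the study
rng = np.random.default_rng(0)
n = 500
elevation  = rng.uniform(0, 4000, n)               # terrain elevation, m a.s.l.
satellite  = rng.gamma(2.0, 3.0, n)                # satellite estimate, mm/day
reanalysis = satellite + rng.normal(0, 1.0, n)     # correlated reanalysis estimate
gauge = (0.7 * satellite + 0.2 * reanalysis
         + 0.001 * elevation + rng.normal(0, 0.5, n))  # synthetic gauge "truth"

# Random forest learns gauge precipitation from the multisource predictors
X = np.column_stack([satellite, reanalysis, elevation])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, gauge)
merged = model.predict(X)  # merged precipitation estimate
print(np.corrcoef(merged, gauge)[0, 1])
```

In the real workflow the model is trained at gauge locations and then predicted on the full 0.1-degree grid; the subsequent hydrological bias correction via SWAT is a separate step not shown here.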
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that using extreme value statistics to practically model the tail of the frequency-magnitude distribution of earthquakes can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by data. Using synthetic data makes it possible to quantify this bias in detail. The implicit assumption that the true M-max is close to the maximum observed magnitude M-max,observed restricts the class of potential models a priori to those with M-max = M-max,observed + Delta M, with an increment Delta M of approximately 0.5 to 1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009), labeled "M-max equals M-obs plus an increment." This incomplete consideration of the entire model family for the frequency-magnitude distribution neglects, however, the scenario of a large, so far unobserved earthquake.
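The sampling bias described here is easy to reproduce with synthetic data: draw magnitudes from a Gutenberg-Richter law truncated at a known true M-max and compare with the maximum observed magnitude. The b-value, catalogue size, and magnitude bounds below are illustrative choices:

```python
import numpy as np

# Sample a doubly truncated Gutenberg-Richter magnitude distribution and
# show that the largest observed magnitude falls short of the true M_max.
rng = np.random.default_rng(42)
b, m0, m_max_true = 1.0, 4.0, 8.0   # b-value, completeness magnitude, true M_max
n = 1000                            # catalogue size

# Inverse-CDF sampling: F(m) = (1 - 10^(-b (m - m0))) / c
u = rng.uniform(size=n)
c = 1.0 - 10.0 ** (-b * (m_max_true - m0))
m = m0 - np.log10(1.0 - c * u) / b

m_max_obs = m.max()
print(m_max_true - m_max_obs)  # typically several tenths of a magnitude unit
```

Because the tail is so sparsely sampled, the observed maximum systematically underestimates the true M-max, which is exactly why anchoring models to M-max,observed plus a fixed increment can be misleading.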
We discuss Neumann problems for self-adjoint Laplacians on (possibly infinite) graphs. Under the assumption that the heat semigroup is ultracontractive we discuss the unique solvability for non-empty subgraphs with respect to the vertex boundary and provide analytic and probabilistic representations for Neumann solutions. A second result deals with Neumann problems on canonically compactifiable graphs with respect to the Royden boundary and provides conditions for unique solvability and analytic and probabilistic representations.
In postsocialist Potsdam, religious diversity has risen surprisingly in public life since 1990 although more than 80% of the residents have no religious affiliation. City and state authorities have actively embraced issues around immigration and integration as well as the promotion of religious diversity and interreligious dialogue and have linked this to the agenda of rejuvenating the city’s religious heritage. For years, negotiations have been going on about the need of a mosque, the reconstructions of a synagogue and the so-called “Garrison Church,” a landmark military church building. These initiatives have been dominating the public space for different reasons. They implied, beyond religion, questions of memory, identity, immigration, and culture. This article puts these three cases into perspective to offer a nuanced understanding of the importance of religious spaces in secular contexts considering city politics.
Van Allen Probes measurements revealed the presence of the most unusual structures in the ultra-relativistic radiation belts. Detailed modeling, analysis of pitch angle distributions, analysis of the difference between relativistic and ultra-relativistic electron evolution, along with theoretical studies of the scattering and wave growth, all indicate that electromagnetic ion cyclotron (EMIC) waves can produce a very efficient loss of the ultra-relativistic electrons in the heart of the radiation belts. Moreover, a detailed analysis of the profiles of phase space densities provides direct evidence for localized loss by EMIC waves. The evolution of multi-MeV fluxes shows dramatic and very sudden enhancements of electrons for selected storms. Analysis of phase space density profiles reveals that growing peaks at different values of the first invariant are formed at approximately the same radial distance from the Earth and show the sequential formation of the peaks from lower to higher energies, indicating that local energy diffusion is the dominant source of the acceleration from MeV to multi-MeV energies. Further simultaneous analysis of the background density and ultra-relativistic electron fluxes shows that the acceleration to multi-MeV energies only occurs when plasma density is significantly depleted outside of the plasmasphere, which is consistent with the modeling of acceleration due to chorus waves.
Quantifying the extremeness of heavy precipitation allows for the comparison of events. Conventional quantitative indices, however, typically neglect the spatial extent or the duration, while both are important to understand potential impacts. In 2014, the weather extremity index (WEI) was suggested to quantify the extremeness of an event and to identify the spatial and temporal scale at which the event was most extreme. However, the WEI does not account for the fact that one event can be extreme at various spatial and temporal scales. To better understand and detect the compound nature of precipitation events, we suggest complementing the original WEI with a “cross-scale weather extremity index” (xWEI), which integrates extremeness over relevant scales instead of determining its maximum.
Based on a set of 101 extreme precipitation events in Germany, we outline and demonstrate the computation of both WEI and xWEI. We find that the choice of the index can lead to considerable differences in the assessment of past events but that the most extreme events are ranked consistently, independently of the index. Even then, the xWEI can reveal cross-scale properties which would otherwise remain hidden. This also applies to the disastrous event from July 2021, which clearly outranks all other analyzed events with regard to both WEI and xWEI.
While demonstrating the added value of xWEI, we also identify various methodological challenges along the required computational workflow: these include the parameter estimation for the extreme value distributions, the definition of maximum spatial extent and temporal duration, and the weighting of extremeness at different scales. These challenges, however, also represent opportunities to adjust the retrieval of WEI and xWEI to specific user requirements and application scenarios.
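The difference between the two indices can be reduced to a one-line contrast: WEI keeps the maximum extremeness over scales, while xWEI integrates extremeness across scales. The toy values and the trapezoidal integration in log-area space below are illustrative simplifications of the actual definitions:

```python
import numpy as np

# Toy extremeness values evaluated at nested spatial scales
areas = np.array([10.0, 100.0, 1000.0, 10000.0])   # km^2
extremeness = np.array([3.0, 5.0, 4.0, 2.0])       # extremeness per scale (invented)

# WEI: extremeness at the single most extreme scale
wei = extremeness.max()

# xWEI: trapezoidal integration of extremeness across log10-spaced scales
log_a = np.log10(areas)
xwei = np.sum(0.5 * (extremeness[1:] + extremeness[:-1]) * np.diff(log_a))

print(wei, xwei)
```

Two events with the same WEI can thus receive very different xWEI values if one is extreme only at a single scale while the other is extreme across many, which is the cross-scale property the index is designed to capture.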
A new evidence-based diet score to capture associations of food consumption and chronic disease risk
(2022)
Previously, attempts to compile the German dietary guidelines into a diet score were largely unsuccessful with regard to preventing chronic diseases in the EPIC-Potsdam study. Current guidelines were therefore supplemented by the latest evidence from systematic reviews and expert papers published between 2010 and 2020 on the prevention potential of food groups for chronic diseases such as type 2 diabetes, cardiovascular diseases and cancer. A diet score was developed by scoring the food groups according to a recommended low, moderate or high intake. The relative validity and reliability of the diet score, assessed by a food frequency questionnaire, were investigated. The consideration of current evidence identified 10 key food groups as preventive of the chronic diseases of interest. They served as components in the diet score and were scored from 0 to 1 point, depending on their recommended intake, resulting in a maximum of 10 points. Both the reliability (r = 0.53) and relative validity (r = 0.43) were deemed sufficient to consider the diet score a stable construct in future investigations. This new diet score can be a promising tool to investigate dietary intake in etiological research by concentrating on 10 key dietary determinants with evidence-based prevention potential for chronic diseases.
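The scoring logic, components each contributing 0 to 1 point toward a 10-point maximum, can be sketched as follows. The sketch uses only a handful of hypothetical components; the food groups, intake directions, and gram cut-offs are invented for illustration and are not the published component definitions:

```python
# Hypothetical components: "high" groups reward intake at or above the cut-off,
# "low" groups reward intake at or below it. Cut-offs in g/day are invented.
RECOMMENDED_HIGH = {"vegetables": 200.0, "fruit": 150.0, "whole_grains": 50.0}
RECOMMENDED_LOW = {"red_meat": 50.0, "sugar_drinks": 100.0}

def score_component(intake_g, cutoff_g, direction):
    """Return 0-1 points: full point when the recommendation is met,
    otherwise a proportional partial point."""
    if direction == "high":          # more is better
        return min(intake_g / cutoff_g, 1.0)
    # less is better: zero intake earns the full point
    return min(cutoff_g / intake_g, 1.0) if intake_g > 0 else 1.0

def diet_score(intakes_g):
    total = 0.0
    for group, cutoff in RECOMMENDED_HIGH.items():
        total += score_component(intakes_g.get(group, 0.0), cutoff, "high")
    for group, cutoff in RECOMMENDED_LOW.items():
        total += score_component(intakes_g.get(group, 0.0), cutoff, "low")
    return total

example = {"vegetables": 250, "fruit": 75, "whole_grains": 50,
           "red_meat": 100, "sugar_drinks": 0}
print(diet_score(example))
```

With all 10 published components this construction caps at 10 points; the proportional partial scoring is one plausible choice, since the abstract does not specify how intermediate intakes are graded.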
A multidimensional and analytical perspective on Open Educational Practices in the 21st century
(2022)
Participatory approaches to teaching and learning are experiencing a new lease on life in the 21st century as a result of rapid technological development. Knowledge, practices, and tools can be shared across spatial and temporal boundaries in higher education by means of Open Educational Resources, Massive Open Online Courses, and open-source technologies. In this context, the Open Education Movement calls for new didactic approaches that encourage greater learner participation in formal higher education. Based on a representative literature review and focus group research, in this study an analytical framework was developed that enables researchers and practitioners to assess the form of participation in formal, collaborative teaching and learning practices. The analytical framework is focused on the micro-level of higher education, in particular on the interaction between students and lecturers when organizing the curriculum. For this purpose, the research reflects anew on the concept of participation, taking into account existing stage models for participation in the educational context. These are then brought together with the dimensions of teaching and learning processes, such as methods, objectives, and content. This paper aims to make a valuable contribution to the opening up of learning and teaching, and expands the discourse around possibilities for interpreting Open Educational Practices.
In this study, we model a sequence of a confined and a full eruption, employing the relaxed end state of the confined eruption of a kink-unstable flux rope as the initial condition for the ejective one. The full eruption, a model of a coronal mass ejection, develops as a result of converging motions imposed at the photospheric boundary, which drive flux cancellation. In this process, parts of the positive and negative external flux converge toward the polarity inversion line, reconnect, and cancel each other. Flux of the same amount as the canceled flux transfers to a flux rope, increasing the free magnetic energy of the coronal field. With sustained flux cancellation and the associated progressive weakening of the magnetic tension of the overlying flux, we find that a flux reduction of approximately 11% initiates the torus instability of the flux rope, which leads to a full eruption. These results demonstrate that a homologous full eruption, following a confined one, can be driven by flux cancellation.
An important goal in biotechnology and (bio)medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required. This precise differentiation poses a challenge even for the growing number of available methods.
Since the physiological properties of cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, this is a non-addressable parameter, so new methods are required for the identification and isolation of target cells. Consequently, a variety of new flow-based methods utilising 2D imaging data to identify target cells within a sample have been developed and presented in recent years. As these methods aim for high throughput, the devices developed typically require highly complex fluid handling techniques, making them expensive while offering limited image quality.
In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making it both affordable and user-friendly.
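For context, the time-averaged dielectrophoretic force on a spherical particle is commonly written in the standard textbook form below. This is general background, not an expression taken from this thesis; here $\varepsilon_m$ is the medium permittivity, $r$ the particle radius, $\mathbf{E}$ the electric field, and $K(\omega)$ the Clausius-Mossotti factor built from the complex permittivities $\varepsilon_p^{*}$ and $\varepsilon_m^{*}$ of particle and medium:

```latex
\mathbf{F}_{\mathrm{DEP}} = 2\pi \varepsilon_m r^{3}\,
  \mathrm{Re}\!\left[K(\omega)\right] \nabla |\mathbf{E}|^{2},
\qquad
K(\omega) = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}
                 {\varepsilon_p^{*} + 2\varepsilon_m^{*}}
```

The dependence on $\nabla |\mathbf{E}|^{2}$ is why inhomogeneous fields are required, and the sign of $\mathrm{Re}[K(\omega)]$ determines whether cells are attracted to or repelled from field maxima.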
With the developed cell sorting system, cells could be sorted reliably and efficiently according to their cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95% and about 85% of the sorted cells could be recovered from the system. Good agreement was achieved between the results obtained and theoretical considerations. The achieved throughput of the system was up to 12,000 cells per hour. Cell viability studies indicated a high biocompatibility of the system.
The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.
Industry 4.0 is transforming how businesses innovate and, as a result, companies are spearheading the movement towards 'Digital Transformation'. While some scholars advocate the use of design thinking to identify new innovative behaviours, cognition experts emphasise the importance of top managers in supporting employees to develop these behaviours. However, there is a dearth of research in this domain and companies are struggling to implement the required behaviours. To address this gap, this study aims to identify and prioritise behavioural strategies conducive to design thinking to inform the creation of a managerial mental model. We identify 20 behavioural strategies from 45 interviews with practitioners and educators and combine them with the concepts of 'paradigm-mindset-mental model' from cognition theory. The paper contributes to the body of knowledge by identifying and prioritising specific behavioural strategies to form a novel set of survival conditions aligned to the new industrial paradigm of Industry 4.0.
Older adults with amnestic mild cognitive impairment (aMCI) who, in addition to their memory deficits, also suffer from frontal-executive dysfunctions have a higher risk of developing dementia later in life than older adults with aMCI without executive deficits and older adults with non-amnestic MCI (naMCI). Handgrip strength (HGS) is also correlated with the risk of cognitive decline in older adults. Hence, the current study aimed to investigate the associations between HGS and executive functioning in individuals with aMCI, naMCI and healthy controls. Older, right-handed adults with amnestic MCI (aMCI), non-amnestic MCI (naMCI), and healthy controls (HC) underwent a handgrip strength measurement with a handheld dynamometer. Executive functions were assessed with the Trail Making Test (TMT A&B). Normalized handgrip strength (nHGS, normalized to Body Mass Index (BMI)) was calculated, and its associations with executive functions (operationalized through z-scores of the TMT B/A ratio) were investigated through partial correlation analyses (i.e., accounting for age, sex, and severity of depressive symptoms). A positive, low-to-moderate correlation between right nHGS (rp (22) = 0.364; p = 0.063) and left nHGS (rp (22) = 0.420; p = 0.037) and executive functioning was observed in older adults with aMCI but not in those with naMCI or HC. Our results suggest that higher levels of nHGS are linked to better executive functioning in aMCI but not in naMCI or HC. This relationship is perhaps driven by alterations in the integrity of the hippocampal-prefrontal network occurring in older adults with aMCI. Further research is needed to provide empirical evidence for this assumption.
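The partial-correlation approach described in this abstract can be sketched via residualization: regress both variables on the covariates and correlate the residuals. This is a generic illustration with simulated data, not the study's analysis code; the variable names and single covariate are assumptions.

```python
# Sketch of a partial correlation via residualization, using numpy only:
# correlate normalized handgrip strength (HGS / BMI) with an executive-
# function score after removing the linear effect of covariates.
import numpy as np

def residualize(y, covariates):
    """Residuals of y after linear regression on covariates (+ intercept)."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y with covariate effects removed."""
    return np.corrcoef(residualize(x, covariates),
                       residualize(y, covariates))[0, 1]

# Simulated example: both measures depend on age (the covariate).
rng = np.random.default_rng(0)
n = 25
age = rng.normal(70, 5, n)
nhgs = 0.02 * age + rng.normal(0, 0.1, n)   # handgrip strength / BMI
ef = 0.01 * age + rng.normal(0, 0.1, n)     # executive-function z-score
cov = age.reshape(-1, 1)
print(round(partial_corr(nhgs, ef, cov), 3))
```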
We use the prolonged Greek crisis as a case study to understand how a lasting economic shock affects the innovation strategies of firms in economies with moderate innovation activities. Adopting the 3-stage CDM model, we explore the link between R&D, innovation, and productivity for different size groups of Greek manufacturing firms during the prolonged crisis. At the first stage, we find that the continuation of the crisis harmed the R&D engagement of smaller firms while it increased the willingness to engage in R&D among larger ones. At the second stage, among smaller firms knowledge production remains unaffected by R&D investment, while among larger firms the R&D decision is positively correlated with the probability of producing innovation, although this relationship weakens as the crisis continues. At the third stage, innovation output benefits only larger firms in terms of labor productivity, while the innovation-productivity nexus is insignificant for smaller firms during the lasting crisis.