We would like to inform the readers and editors of the journal that we have discovered some errors in the references of our paper. These errors were brought to our attention by a reader who noticed inconsistencies between the citations in the text and the bibliography. Upon further investigation, we realized that our literature management software had mistakenly linked some of the references to wrong or non-existent sources. We apologize for this oversight and assure you that it did not affect the validity or quality of our arguments and results, which were based on the correct sources. Below you will find a list of the incorrect references along with their corresponding correct ones. We hope that this correction statement will clarify any confusion or misunderstanding that may have arisen from this mistake. The authors apologize for any inconvenience caused.
Semi-parliamentarism describes the system of government in which the government is elected by, and can be dismissed by, one part of the parliament, but is independent of another part. Both chambers must approve legislation. This system, classified by Steffen Ganghof, complements common typologies of government systems such as those used by David Samuels and Matthew Shugart. Semi-parliamentarism is the logical counterpart to semi-presidentialism: in the latter, only part of the executive depends on the legislature, whereas in semi-parliamentarism the executive depends on only part of the legislature. Semi-parliamentarism thus embodies a system of separation of powers without the executive personalism produced in presidentialism by the direct election and independence of the head of government. This makes semi-parliamentarism suitable for attributing differences between parliamentarism and presidentialism to the separate influences of the separation of powers and of executive personalism. The study of semi-parliamentarism is therefore relevant to the literature on government systems as a whole. Semi-parliamentarism is not a purely theoretical construct: it exists at the Australian federal level, in the Australian states, and in Japan.
This dissertation is the first comprehensive study of legislation in semi-parliamentary states as such. The focus is on the second chambers, since their independence from the government makes them the actual site of legislation. Legislation in parliamentarism and presidentialism differs particularly in party unity, coalition formation, and the legislative success of governments. These aspects are therefore of particular interest in the analysis of semi-parliamentarism. The semi-parliamentary states also differ considerably among themselves in institutional design, such as their electoral systems or the available means of resolving legislative deadlock. Describing and analyzing the effects of these differences on legislation is, alongside the comparison of semi-parliamentarism with other systems, the second main goal of this work.
As the foundation of the analysis, I compiled an extensive dataset covering all legislative periods of the Australian states between 1997 and 2019. Its main components are all recorded (division) votes of both chambers, all government bills introduced and passed, and party positions in the relevant policy fields at the substate level, collected via an expert survey.
Mainly using mixed-effects and fractional-response analyses, I show that semi-parliamentarism resembles parliamentary rather than presidential systems in many respects. Only coalition formation is considerably more flexible and thus differs from typical parliamentary coalition formation. The analyses suggest that key differences between parliamentarism and presidentialism are attributable to executive personalism rather than to the separation of powers.
Among the semi-parliamentary states, the decisive differences in legislation appear to stem above all from whether the government controls the median of both parliamentary chambers and whether it can dissolve the second chamber along with the first. Control of the median enables flexible coalition formation and leads to higher legislative success rates. Likewise, the easier it is to dissolve the second chamber, the higher the legislative success rates. Independent of these aspects, party unity is very high in both chambers of the semi-parliamentary parliaments.
Purpose
This study investigates the communication behavior of public health organizations on Twitter during the COVID-19 vaccination campaign in Brazil. It contributes to the understanding of the organizational framing of health communication by showcasing several instances of framing devices that borrow from (Brazilian) internet culture. The investigation of this case extends existing knowledge by providing a rich description of the organizational framing of health communication to combat misinformation in a politically charged environment.
Design/methodology/approach
The authors collected a Twitter dataset of 77,527 tweets and analyzed a purposeful subsample of 536 tweets that contained information provided by Brazilian public health organizations about COVID-19 vaccination campaigns. The data analysis was carried out quantitatively and qualitatively by combining social media analytics techniques and frame analysis.
Findings
The analysis showed that Brazilian health organizations used several framing devices that have been identified by previous literature such as hashtags, links, emojis or images. However, the analysis also unearthed hitherto unknown visual framing devices for misinformation prevention and debunking that borrow from internet culture such as “infographics,” “pop culture references” and “internet-native symbolism.”
Research limitations/implications
First, the identification of framing devices relating to internet culture adds to our understanding of the so far little-addressed framing of misinformation combat messages. The case of Brazilian health organizations offers a novel perspective through its notion of internet-native symbols (e.g. humor, memes) and popular culture references for misinformation combat, including misinformation prevention. Second, this study introduces a frontier of political contextualization to misinformation research that relates not to the partisanship of the spreaders but to the political dilemmas of public organizations with a commitment to provide accurate information to citizens.
Practical implications
The findings inform decision-makers and public health organizations about framing devices that are tailored to internet-native audiences and can guide strategies to carry out information campaigns in misinformation-laden social media environments.
Social implications
The findings of this case study expose the often-overlooked cultural peculiarities of framing information campaigns on social media. The report of this study from a country in the Global South helps to contrast several assumptions and strategies that are prevalent in (health) discourses in Western societies and scholarship.
Originality/value
This study uncovers unconventional and barely addressed framing devices of health organizations operating in Brazil, which provides a novel perspective to the body of research on misinformation. It contributes to existing knowledge about frame analysis and broadens the understanding of frame devices borrowing from internet culture. It is a call for a frontier in misinformation research that deals with internet culture as part of organizational strategies for successful misinformation combat.
Purpose
Kettle holes are small inland water bodies known to be dominated by terrigenous material; however, the processes and structures that drive the enrichment and depletion of specific geochemical elements in the water column and kettle hole sediment remain unclear. We hypothesized that the mobile elements (Ca, Fe, K, P) behave differently from each other in their transport, intermediate retention in soil, and final accumulation in the kettle hole sediment.
Methods
Topsoils from transects spanning topographic positions from erosional to depositional areas, sediment cores, shallow groundwater, and kettle hole water of two glacial kettle holes in NE Germany (Rittgarten (RG) and Kraatz (KR)) were collected. The Fe, Ca, K, and total P (TP) concentrations were quantified, as were the major anions in shallow groundwater and kettle hole water. The element-specific mobilization, relocation, and final accumulation in the sediment were investigated using enrichment factors. Furthermore, a Piper diagram was used to estimate groundwater flow directions and pond-internal processes.
Results
At KR only, the upper 10 cm of the kettle hole sediment reflected the relative element composition of the eroded terrestrial soils. The sediment from both kettle holes was enriched in Ca, Fe, K, and P compared to topsoils, indicating several possible processes: the input of clay- and silt-sized particles enriched in these elements, fertilizer input, and pond-internal processes including biogenic calcite and hydroxyapatite precipitation, Fe-P binding (KR), FeSx formation (RG), and elemental fixation and deposition via floating macrophytes (RG). High Ca concentrations in the kettle hole water indicated a high input of Ca from shallow groundwater inflow, while Ca precipitation in the kettle hole water led to lower Ca concentrations in the groundwater outflow.
Conclusions
The considerable element losses in the surrounding soils and the inputs into the kettle holes should be addressed by comprehensive soil and water protection measures, i.e., avoiding tillage, fertilizing conservatively, and creating buffer zones.
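Enrichment factors of the kind used above are conventionally computed by double-normalizing an element's concentration to a conservative reference element (often Al or Ti). A minimal sketch with entirely hypothetical concentrations; the study's actual reference element and values are not given in the abstract:

```python
def enrichment_factor(c_elem_sample, c_ref_sample, c_elem_background, c_ref_background):
    """EF = (element/reference) in sample divided by (element/reference) in
    background material; EF > 1 indicates enrichment relative to background."""
    return (c_elem_sample / c_ref_sample) / (c_elem_background / c_ref_background)

# Hypothetical concentrations (mg/kg): P in kettle hole sediment vs. topsoil,
# normalized to an assumed conservative reference element such as Al.
ef_p = enrichment_factor(1200, 65000, 600, 70000)
print(round(ef_p, 2))  # → 2.15, i.e. sediment enriched in P relative to topsoil
```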
Digital media are becoming increasingly important for the design of teaching and learning processes in the classroom (KMK, 2021; Scheiter, 2021). The successful integration of digital media and the high-quality design of digitally supported instruction depend on the digital competencies of the teachers involved (KMK, 2021; Lachner et al., 2020). Teacher professional development on digital media topics is therefore highly relevant. Participation in training on digital topics can help foster (self-assessed) digital competencies as well as digitally supported teaching (KMK, 2021; SWK, 2022). The relationships between teacher training, teacher competencies, and teaching practice are described theoretically in the model of determinants and consequences of teachers' professional competence by Kunter et al. (2011). However, it remains unclear to what extent the relationships formulated for general teacher training also transfer to the digital context. So far, only a few empirical findings indicate that digital-related teacher training is associated with self-assessed digital competencies (e.g., Mayer et al., 2021; Ning et al., 2022; Reisoğlu, 2022) and with digitally supported teaching practice (e.g., Alt, 2018; Gisbert Cervera & Lázaro Cantabrana, 2015). Teachers' professional competencies play a central role in high-quality teaching (Kunter et al., 2011). In the digital context, too, teachers' (self-assessed) competencies are relevant for classroom practice with digital media (e.g., Hatlevik, 2017; Spiteri & Rundgren, 2020). The European Framework for the Digital Competence of Educators (DigCompEdu; Redecker & Punie, 2017) provides a systematic account of teachers' competencies for the instructional use of digital media.
However, only a few empirical studies have so far validated this framework (e.g., Antonietti et al., 2022). Compared with other competence models, such as the Technological Pedagogical Content Knowledge model (TPACK; Mishra & Koehler, 2006), the DigCompEdu model offers a differentiated view of distinct competence dimensions.
Addressing these research gaps, this dissertation examines the factors that contribute to high instructional quality in digitally supported teaching. Its three empirical studies approach, from different perspectives, the relationships between participation in digital-related teacher training, teachers' self-assessed digital competencies, and the self-reported quality of digitally supported instruction. The studies are theoretically grounded in the assumptions of the model of determinants and consequences of teachers' professional competence by Kunter et al. (2011).
Study 1 addresses the question of how participation in digital-related training and teacher cooperation in the digital context relate to self-assessed digital competencies, interest in digitally supported teaching, and high-quality teaching with digital media. The results of manifest path models show that participation in digital-related training and cooperation was associated with high digital competencies, high interest in digitally supported teaching, and self-reported frequent use of digital media to implement high-quality teaching (cognitive activation and individualization). Previous empirical studies measured digitally supported teaching primarily via the frequency of digital media use in class, which permits no conclusions about the quality of that use (Lachner et al., 2020; Scheiter, 2021). All three studies of this dissertation therefore consider the quality of digital media use along the three generic basic dimensions of instructional quality (Klieme et al., 2009). Study 1 also showed that self-assessed digital competencies in the TPACK domain mediate the cross-sectional relationships between teachers' frequency of participation in digital-related training and the frequency of using digital media to implement cognitive activation and individualization.
Study 2 developed and tested scales for measuring self-assessed digital competencies based on the DigCompEdu model, focusing on the competence dimension of learner orientation with the subdimensions of differentiation and active engagement of students. The results of structural equation modeling suggest a bifactor structure that represents the two theoretically assumed subdimensions as well as a general factor that can be interpreted as an overarching learner orientation. Self-assessed digital competencies in the area of learner orientation were significantly positively related to the self-reported use of digital media to implement high-quality teaching (classroom management, cognitive activation, and constructive support).
Study 3 brings together the topics of training and competencies in the digital context and examines the relationship between training topics and digital competencies. Results of path models show that participation in digital-related training on the technological-pedagogical-content topics "computer-supported student support" and "subject-specific instructional development with digital media" was associated with the self-reported high-quality use of digital media for cognitive activation and constructive support. These findings strengthen the assumption that teachers need both technological and pedagogical-didactic competencies for the high-quality use of digital media (Lipowsky & Rzejak, 2021; Mishra & Koehler, 2006; Scheiter & Lachner, 2019) and that training should consequently combine technological with practice-oriented content (Bonnes et al., 2022). Moreover, based on the theoretical assumptions of Kunter et al. (2011), the study shows that teachers' self-assessed digital competencies mediated the relationships between the frequency of participation in digital-related training and the self-reported quality of digitally supported instruction.
The concluding general discussion of the dissertation situates the findings within the presented state of research and the identified research gaps, and derives implications for research and practice based on the findings of the three studies.
To mark the thirtieth anniversary of the Kommunalwissenschaftliches Institut at the University of Potsdam, this commemorative volume brings together short essays by former and current board members, honorary board members, long-standing research staff of the institute, and current academic cooperation partners. The twelve contributions deal with local government studies and the history of the Kommunalwissenschaftliches Institut, with current questions in local government research, and with the KWI's academic cooperations. The volume, edited by the KWI board, is intended to offer a broad view of 30 years of local government studies in Brandenburg and at the University of Potsdam, and an outlook on future research in the field.
Examining the dissemination of evidence on social media, we analyzed the discourse around eight visible scientists in the context of COVID-19. Using manual (N = 1,406) and automated coding (N = 42,640) on an account-based tracked Twitter/X dataset capturing scientists’ activities and eliciting reactions over six 2-week periods, we found that visible scientists’ tweets included more scientific evidence. However, public reactions contained more anecdotal evidence. Findings indicate that evidence can be a message characteristic leading to greater tweet dissemination. Implications for scientists, including explicitly incorporating scientific evidence in their communication and examining evidence in science communication research, are discussed.
Simple Summary
Gliomas are heterogeneous types of cancer; therapy should therefore be personalized and targeted toward specific pathways. We developed a methodology that corrected strong batch effects in The Cancer Genome Atlas datasets and estimated glioma grade-specific co-enrichment mechanisms using machine learning. Our findings generated hypotheses about annotations, e.g., pathways, that should be considered as therapeutic targets.
Gliomas develop and grow in the brain and central nervous system. Examining glioma grading processes is valuable for addressing therapeutic challenges. One of the most extensive repositories storing transcriptomics data for gliomas is The Cancer Genome Atlas (TCGA). However, such big cohorts should be processed with caution and evaluated thoroughly, as they can contain batch and other effects. Furthermore, the biological mechanisms of cancer involve interactions among biomarkers. We therefore applied an interpretable machine learning approach to discover such relationships. This type of transparent learning not only provides good predictability but also reveals co-predictive mechanisms among features. In this study, we corrected the strong and confounded batch effect in the TCGA glioma data. We then used the corrected datasets to perform a comprehensive machine learning analysis of single-sample gene set enrichment scores based on collections from the Molecular Signatures Database. Using rule-based classifiers, we displayed networks of co-enrichment related to glioma grades. Moreover, we validated our results using external glioma cohorts. We believe that utilizing the corrected glioma cohorts from TCGA may improve the application and validation of future studies. Finally, the co-enrichment and survival analyses provide detailed explanations for glioma progression and should consequently support targeted treatment.
We present the discovery of a new double-detonation progenitor system consisting of a hot subdwarf B (sdB) binary with a white dwarf companion with an orbital period of P_orb = 76.34179(2) minutes. Spectroscopic observations are consistent with an sdB star during helium core burning residing on the extreme horizontal branch. Chimera light curves are dominated by ellipsoidal deformation of the sdB star and a weak eclipse of the companion white dwarf. Combining spectroscopic and light curve fits, we find a low-mass sdB star, M_sdB = 0.383 ± 0.028 M_⊙, with a massive white dwarf companion, M_WD = 0.725 ± 0.026 M_⊙. From the eclipses we find a blackbody temperature for the white dwarf of 26,800 K, resulting in a cooling age of ≈25 Myr, whereas our MESA model predicts an sdB age of ≈170 Myr. We conclude that the sdB formed first through stable mass transfer, followed by a common envelope phase that led to the formation of the white dwarf companion ≈25 Myr ago. Using the MESA stellar evolution code we find that the sdB star will start mass transfer in ≈6 Myr, and in ≈60 Myr the white dwarf will reach a total mass of 0.92 M_⊙ with a thick helium layer of 0.17 M_⊙. This will lead to a detonation that will likely destroy the white dwarf in a peculiar thermonuclear supernova. PTF1 J2238+7430 is only the second confirmed candidate for a double-detonation thermonuclear supernova. Using both systems we estimate that at least ≈1% of white dwarf thermonuclear supernovae originate from sdB+WD binaries with thick helium layers, consistent with the small number of observed peculiar thermonuclear explosions.
The West Burma Terrane (WBT) is a small terrane bounded to the east by the Asian Sibumasu Block and to the west by the Indo-Burman Ranges (IBR), the latter being an exhumed accretionary prism that formed during subduction of Indian oceanic lithosphere beneath Asia. Understanding the geological history of the WBT is important for reconstructing the closure history of the Tethys Ocean and the India-Asia collision. Currently, there are major discrepancies in the proposed timings of collision between the WBT and both India and Asia; whether the WBT collided with India or Asia first is debated, and the proposed collision timings stretch from the Mesozoic to the Cenozoic. We undertook a multi-technique provenance study involving petrography, detrital zircon U-Pb and Hf analyses, rutile U-Pb analyses, and Sr-Nd bulk rock analyses on sediments of the Central Myanmar Basins of the WBT. We determined that the first arrival of Asian material in the basin occurred after the earliest late Eocene and by the early Oligocene, thus placing a minimum constraint on the timing of WBT-Asia collision. Our low-temperature thermochronological study of the IBR records two periods of exhumation: in the early-middle Eocene, and at the Oligo-Miocene boundary. The Eocene event may be associated with the collision of the WBT with India. The later event at the Oligo-Miocene boundary may be associated with changes in wedge dynamics resulting from increased sediment supply to the system; however, a number of other possible causes provide equally plausible explanations for both events.
The aim of this study was to investigate the effects of listening to preferred music during a warm-up or exercise on performance during a 6-min all-out exercise test (6-MT) in young adult males. Twenty-five healthy males volunteered to participate in this study. Following a within-subject design, participants performed three test conditions (MDT: music during the test; MDW: music during the warm-up; WM: without music) in random order. Outcomes included mean running speed over the 6-min test (MRS6), total distance covered (TDC), heart rate responses (HRpeak, HRmean), blood lactate (3 min after the test), and the rating of perceived exertion (RPE); additionally, feeling scale scores were recorded. Listening to preferred music during running resulted in significant improvements in TDC (Δ↑10%, p=0.006, ES=0.80) and MRS6 (Δ↑14%, p=0.012, ES=1.02) during the 6-MT; improvement was also noted for the warm-up with music condition (TDC: Δ↑8%, p=0.028, ES=0.63; MRS6: Δ↑8%, p=0.032, ES=0.61). A similar reverse "J-shaped" pacing profile was detected in all three conditions. Blood lactate was lower in the MDT condition by 8% (p=0.01, ES=1.10), but not in the MDW condition, compared to WM. In addition, no statistically significant differences were found between the test sessions for HR, RPE, and feeling scale scores. In conclusion, listening to music during exercise testing appears more beneficial for TDC and MRS6 performance than listening to music during the warm-up only or no music at all.
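The reported effect sizes (ES) are consistent with Cohen's d. A small sketch of the pooled-SD variant with hypothetical distance values; the study's raw means and standard deviations are not given in the abstract:

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d using a pooled standard deviation."""
    pooled = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled

# Hypothetical total distances (m) covered in 6 minutes with vs. without music
d = cohens_d(mean_a=1650, mean_b=1500, sd_a=180, sd_b=195, n_a=25, n_b=25)
print(round(d, 2))  # → 0.8, a large effect by conventional thresholds
```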
This study focuses on three key aspects: (a) crude throat swab samples in a viral transport medium (VTM) as templates for RT-LAMP reactions; (b) a biotinylated DNA probe with enhanced specificity for LFA readouts; and (c) a digital semi-quantification of LFA readouts. Throat swab samples from SARS-CoV-2 positive and negative patients were used in their crude (no cleaning or pre-treatment) forms for the RT-LAMP reaction. The samples were heat-inactivated but not treated for any kind of nucleic acid extraction or purification. The RT-LAMP (20 min processing time) product was read out by an LFA approach using two labels: FITC and biotin. FITC was enzymatically incorporated into the RT-LAMP amplicon with the LF-LAMP primer, and biotin was introduced using biotinylated DNA probes, specifically for the amplicon region after RT-LAMP amplification. This assay setup with biotinylated DNA probe-based LFA readouts of the RT-LAMP amplicon was 98.11% sensitive and 96.15% specific. The LFA result was further analysed by a smartphone-based IVD device, wherein the T-line intensity was recorded. The LFA T-line intensity was then correlated with the qRT-PCR Ct value of the positive swab samples. A digital semi-quantification of RT-LAMP-LFA was reported with a correlation coefficient of R² = 0.702. The overall RT-LAMP-LFA assay time was recorded to be 35 min with an LoD of three RNA copies/µL (Ct-33). With these three advancements, the nucleic acid testing point-of-care technique (NAT-POCT) is exemplified as a versatile biosensor platform with great potential and applicability for the detection of pathogens without the need for sample storage, transportation, or pre-processing.
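The reported sensitivity and specificity follow directly from confusion-matrix counts. The counts below are hypothetical, chosen only to reproduce rates close to those reported for the assay:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity (true positive rate) and specificity (true negative rate)
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts: 52 of 53 positives detected, 50 of 52 negatives cleared
sens, spec = diagnostic_metrics(tp=52, fp=2, tn=50, fn=1)
print(f"sensitivity={sens:.2%}, specificity={spec:.2%}")
# → sensitivity=98.11%, specificity=96.15%
```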
Urbanization promotes specific bacteria in freshwater microbiomes including potential pathogens
(2022)
Freshwater ecosystems are characterized by complex and highly dynamic microbial communities that are strongly structured by their local environment and biota. Accelerating urbanization and growing city populations detrimentally alter freshwater environments. To determine differences in freshwater microbial communities associated with urbanization, full-length 16S rRNA gene PacBio sequencing was performed in a case study of surface waters and sediments from a wastewater treatment plant and from urban and rural lakes in the Berlin-Brandenburg region, Northeast Germany. Water samples exhibited highly habitat-specific bacterial communities, with multiple genera showing clear urban signatures. We identified potentially harmful bacterial groups associated with environmental parameters specific to urban habitats, such as Alistipes, Escherichia/Shigella, Rickettsia and Streptococcus. We demonstrate that urbanization alters natural microbial communities in lakes and, via simultaneous warming and eutrophication, creates favourable conditions that promote specific bacterial genera, including potential pathogens. Our findings suggest an increased potential for long-term health risks in urbanized waterbodies at a time of rapidly expanding global urbanization. The results highlight the urgency of mitigation measures such as targeted lake restoration projects and sustainable water management efforts.
The physiological dependence of animals on dietary intake of vitamins, amino acids, and fatty acids is ubiquitous. Sharp differences in the availability of these vital dietary biomolecules among different resources mean that consumers must adopt a range of strategies to meet their physiological needs. We review the emerging work on omega-3 long-chain polyunsaturated fatty acids, focusing predominantly on predator-prey interactions, to illustrate that the trade-off between the capacity to consume resources rich in vital biomolecules and the capacity for internal synthesis drives differences in the phenotype and fitness of consumers. This can then feed back to impact ecosystem functioning. We outline how a focus on vital dietary biomolecules in eco-evo-devo dynamics can improve our understanding of anthropogenic changes across multiple levels of biological organization.
This study addresses the question of whether observed changes in Arctic-midlatitude linkages during winter are driven by Arctic sea ice decline alone or whether the increase of global sea surface temperatures plays an additional role. We compare atmosphere-only model experiments with ECHAM6 to ERA-Interim reanalysis data. The model sensitivity experiment is implemented as a set of four combinations of sea ice and sea surface temperature boundary conditions. Atmospheric circulation regimes are determined and evaluated in terms of their cyclone and blocking characteristics and changes in frequency during winter. As a prerequisite, ECHAM6 reproduces the general features of circulation regimes very well. Tropospheric changes induced by the change of boundary conditions are revealed, and further impacts on the large-scale circulation up into the stratosphere are investigated. In early winter, the observed increase of atmospheric blocking in the region between Scandinavia and the Urals is primarily related to the changes in sea surface temperatures. During late winter, we find a weakened polar stratospheric vortex in the reanalysis that further impacts the troposphere. In the model sensitivity study, a climatologically weakened polar vortex occurs only if sea ice is reduced and sea surface temperatures are increased together. This response is delayed compared to the reanalysis. The tropospheric response during late winter is inconclusive in the model, which is potentially related to the weak and delayed response in the stratosphere. The model experiments do not reproduce the connection between early and late winter as interpreted from the reanalysis. Potentially explaining this mismatch, we identify a discrepancy in ECHAM6's ability to reproduce the weakening of the stratospheric polar vortex through blocking-induced upward propagation of planetary waves.
This paper deals with the long-term behavior of positive operator semigroups on spaces of bounded functions and of signed measures, which have applications to parabolic equations with unbounded coefficients and to stochastic analysis. The main results are a Tauberian-type theorem characterizing the convergence to equilibrium of strongly Feller semigroups and a generalization of a classical convergence theorem of Doob. Neither of these results requires any kind of time regularity of the semigroup.
In real-world scene perception, human observers generate sequences of fixations to move image patches into the high-acuity center of the visual field. Models of visual attention developed over the last 25 years aim to predict two-dimensional probabilities of gaze positions for a given image via saliency maps. Recently, progress has been made on models for the generation of scan paths under the constraints of saliency as well as attentional and oculomotor restrictions. Experimental research has demonstrated that task constraints can have a strong impact on viewing behavior. Here, we propose a scan-path model for both fixation positions and fixation durations, which includes influences of task instructions and interindividual differences. Based on an eye-movement experiment with four different task conditions, we estimated model parameters for each individual observer and task condition within a fully Bayesian dynamical modeling framework, using a joint spatial-temporal likelihood approach with sequential estimation. The resulting parameter values demonstrate that model properties such as the attentional span are adjusted to task requirements. Posterior predictive checks indicate that our dynamical model can reproduce task differences in scan-path statistics across individual observers.
INTRODUCTION:
We investigated the impact of changes in lifestyle habits on colorectal cancer (CRC) risk in a multicountry European cohort.
METHODS:
We used baseline and follow-up questionnaire data from the European Prospective Investigation into Cancer cohort to assess changes in lifestyle habits and their associations with CRC development. We calculated a healthy lifestyle index (HLI) score based on smoking status, alcohol consumption, body mass index, and physical activity collected at the 2 time points. HLI ranged from 0 (most unfavorable) to 16 (most favorable). We estimated the association between HLI changes and CRC risk using Cox regression models and reported hazard ratios (HR) with 95% confidence intervals (CI).
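The HLI construction can be sketched in code. The component cut-points and scoring below are illustrative assumptions (the abstract only specifies the four components and the 0-16 range), not the study's actual scoring scheme:

```python
def bin_score(value, cuts):
    """Count how many of the ordered cut-points `value` meets or exceeds."""
    return sum(value >= c for c in cuts)

def hli_score(smoking, alcohol_g_day, bmi, activity_h_week):
    """Healthy lifestyle index: four components scored 0-4 each, summed to
    0 (most unfavorable) .. 16 (most favorable). Cut-points are illustrative."""
    smoke_pts = {"current": 0, "former": 2, "never": 4}[smoking]
    alcohol_pts = 4 - bin_score(alcohol_g_day, [1, 6, 12, 24])  # less is better
    bmi_pts = 4 - bin_score(bmi, [22, 25, 30, 35])              # leaner is better
    activity_pts = bin_score(activity_h_week, [0.5, 2, 4, 7])   # more is better
    return smoke_pts + alcohol_pts + bmi_pts + activity_pts

# Change in HLI between the baseline and follow-up questionnaires:
baseline = hli_score("current", 20, 31, 0.0)   # unfavorable profile
followup = hli_score("former", 5, 27, 3.0)     # improved profile
delta = followup - baseline
```

Each unit of `delta` would then enter the Cox regression as the exposure of interest, with the reported HR of roughly 0.97 per unit increase.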
RESULTS:
Among 295,865 participants, 2,799 CRC cases were observed over a median of 7.8 years. The median time between questionnaires was 5.7 years. Each unit increase in HLI from the baseline to the follow-up assessment was associated with a statistically significant 3% lower CRC risk. Among participants in the top tertile at baseline (HLI > 11), those in the bottom tertile at follow-up (HLI <= 9) had a higher CRC risk (HR 1.34; 95% CI 1.02-1.75) than those remaining in the top tertile. Among individuals in the bottom tertile at baseline, those in the top tertile at follow-up had a lower risk (HR 0.77; 95% CI 0.59-1.00) than those remaining in the bottom tertile.
DISCUSSION:
Improving adherence to a healthy lifestyle was inversely associated with CRC risk, while worsening adherence was positively associated with CRC risk. These results justify and support recommendations for healthy lifestyle changes and healthy lifestyle maintenance for CRC prevention.
We consider a system of noninteracting particles on a line with initial positions distributed uniformly with density ρ on the negative half-line. We consider two different models: (i) each particle performs independent Brownian motion with stochastic resetting to its initial position with rate r, and (ii) each particle performs run-and-tumble motion, and with rate r its position gets reset to its initial value and simultaneously its velocity gets randomized. We study the effects of resetting on the distribution P(Q, t) of the integrated particle current Q up to time t through the origin (from left to right). We study both the annealed and the quenched current distributions, and in both cases we find that resetting induces a stationary limiting distribution of the current at long times. However, we show that the approach to the stationary state of the current distribution in the annealed and the quenched cases is drastically different for both models. In the annealed case, the whole distribution P_an(Q, t) approaches its stationary limit uniformly for all Q. In contrast, the quenched distribution P_qu(Q, t) attains its stationary form for Q < Q_crit(t), while it remains time dependent for Q > Q_crit(t). We show that Q_crit(t) increases linearly with t for large t. On the scale where Q ~ Q_crit(t), we show that P_qu(Q, t) has an unusual large deviation form with a rate function that has a third-order phase transition at the critical point. We have computed the associated rate functions analytically for both models. Using an importance sampling method that allows one to probe probabilities as tiny as 10^-14000, we were able to compute this nonanalytic rate function numerically for the resetting Brownian dynamics and found excellent agreement with our analytical prediction.
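The annealed setting of model (i) lends itself to a minimal Monte Carlo sketch: particles start uniformly on a finite segment of the negative half-line, diffuse, and reset to their own initial positions at rate r; the current Q(t) is simply the number of particles found past the origin at time t. All parameter values below are illustrative, and the finite box length L truncates the ideal infinite half-line:

```python
import math
import random

def simulate_current(rho=1.0, L=50.0, D=0.5, r=0.5, t=10.0, dt=0.01, seed=1):
    """One annealed realization of the integrated current Q(t): the number of
    particles, initially uniform on [-L, 0) with density rho, found to the
    right of the origin at time t. Each particle diffuses (diffusion constant
    D) and resets to its own initial position with rate r (model (i))."""
    rng = random.Random(seed)
    n = int(rho * L)
    starts = [-L * rng.random() for _ in range(n)]
    xs = list(starts)
    sigma = math.sqrt(2.0 * D * dt)
    for _ in range(int(t / dt)):
        for i in range(n):
            if rng.random() < r * dt:      # Poissonian reset event
                xs[i] = starts[i]
            else:
                xs[i] += rng.gauss(0.0, sigma)
    # Particles are noninteracting, so the net current through the origin
    # equals the number of particles currently at x > 0.
    return sum(x > 0.0 for x in xs)
```

Histogramming `simulate_current` over many seeds approximates P_an(Q, t); the quenched case would instead fix one draw of `starts` and average only over the dynamics.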
The capillary-venous pathology cerebral cavernous malformation (CCM) is caused by loss of CCM1/Krev interaction trapped protein 1 (KRIT1), CCM2/MGC4607, or CCM3/PDCD10 in some endothelial cells. Mutations of CCM genes within the brain vasculature can lead to recurrent cerebral hemorrhages. Pharmacological treatment options are urgently needed when lesions are located in deep-seated and inoperable regions of the central nervous system. Previous pharmacological suppression screens in disease models of CCM led to the discovery that treatment with retinoic acid improved CCM phenotypes. This finding raised a need to investigate the involvement of retinoic acid in CCM and to test whether it has a curative effect in preclinical mouse models. Here, we show that components of the retinoic acid synthesis and degradation pathway are transcriptionally misregulated across disease models of CCM. We complemented this analysis by pharmacologically modifying retinoic acid levels in zebrafish and human endothelial cell models of CCM, and in acute and chronic mouse models of CCM. Our pharmacological intervention studies in CCM2-depleted human umbilical vein endothelial cells (HUVECs) and krit1 mutant zebrafish showed positive effects when retinoic acid levels were increased. However, therapeutic approaches to prevent the development of vascular lesions in adult chronic murine models of CCM were drug regimen-sensitive, possibly due to adverse developmental effects of this hormone. A treatment with high doses of retinoic acid even worsened CCM lesions in an adult chronic murine model of CCM. This study provides evidence that retinoic acid signaling is impaired in the CCM pathophysiology and suggests that modification of retinoic acid levels can alleviate CCM phenotypes.
Fetal alcohol spectrum disorder (FASD) is underdiagnosed and often misdiagnosed as attention-deficit/hyperactivity disorder (ADHD). Here, we develop a screening tool for FASD in youth with ADHD symptoms. To develop the prediction model, medical record data from a German university outpatient unit are assessed, including 275 patients aged 0-19 years with FASD (with or without ADHD) and 170 patients aged 0-19 years with ADHD without FASD. We train 6 machine learning models based on 13 selected variables and evaluate their performance. Random forest models yield the best prediction models, with a cross-validated AUC of 0.92 (95% confidence interval [0.84, 0.99]). Follow-up analyses indicate that a random forest model with 6 variables - body length and head circumference at birth, IQ, socially intrusive behaviour, poor memory, and sleep disturbance - yields equivalent predictive accuracy. We implement the prediction model in a web-based app called FASDetect - a user-friendly, clinically scalable FASD risk calculator that is freely available at https://fasdetect.dhc-lab.hpi.de.
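The AUC used to compare the models is the probability that a randomly chosen FASD case receives a higher risk score than a randomly chosen ADHD-only case. A minimal rank-based sketch (the scores below are toy values, not the study's data):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    the probability that a randomly chosen positive case is scored above a
    randomly chosen negative case, with ties counting one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: risk scores for 4 FASD (label 1) and 4 ADHD-only (label 0) cases.
y = [1, 1, 1, 1, 0, 0, 0, 0]
s = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
# auc(y, s) -> 0.9375: one negative case outranks one positive case.
```

An AUC of 0.92, as reported for the cross-validated random forest, means roughly nine out of ten such random case pairs are ranked correctly.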
Purpose
Due to the increasing application of genome analysis and interpretation in medical disciplines, professionals require adequate education. Here, we present the implementation of personal genotyping as an educational tool in two genomics courses targeting Digital Health students at the Hasso Plattner Institute (HPI) and medical students at the Technical University of Munich (TUM).
Methods
We compared and evaluated the courses and the students' perceptions of the course setup using questionnaires.
Results
During the course, students changed their attitudes towards genotyping (HPI: 79% [15 of 19], TUM: 47% [25 of 53]). Predominantly, students became more critical of personal genotyping (HPI: 73% [11 of 15], TUM: 72% [18 of 25]) and most students stated that genetic analyses should not be allowed without genetic counseling (HPI: 79% [15 of 19], TUM: 70% [37 of 53]). Students found the personal genotyping component useful (HPI: 89% [17 of 19], TUM: 92% [49 of 53]) and recommended its inclusion in future courses (HPI: 95% [18 of 19], TUM: 98% [52 of 53]).
Conclusion
Students perceived the personal genotyping component as valuable in the described genomics courses. The implementation described here can serve as an example for future courses in Europe.
In this essay I argue that while research in Jewish studies over the last several decades has done much to erode the historical narrative of Jewish/non-Jewish separation and detachment, it has also raised various questions pertaining to the outcome of Jewish/non-Jewish interactions and coexistence as well as the contours of Jewish difference. I contend that employing the concepts of conviviality, ethnic/religious/national indifference, and similarity will greatly facilitate answering these questions.
Habsburg Central Europe
(2024)
Central Europe is characterized by linguistic and cultural density as well as by endogenous and exogenous cultural influences. These constellations were especially visible in the former Habsburg Empire, where they influenced the formation of individual and collective identities. This led not only to continual crises and conflicts but also to enormous creative potential, as became apparent in the culture of the fin de siècle.
At the junction of greenhouse and icehouse climate states, the Eocene-Oligocene Transition (EOT) is a key moment in Cenozoic climate history. While it is associated with severe extinctions and biodiversity turnovers on land, the role of terrestrial climate evolution remains poorly resolved, especially the associated changes in seasonality. Some paleobotanical and geochemical continental records in parts of the Northern Hemisphere suggest the EOT is associated with a marked cooling in winter, leading to the development of more pronounced seasons (i.e., an increase in the mean annual range of temperature, MATR). However, the MATR increase has barely been studied with climate models, and large uncertainties remain regarding its origin, geographical extent, and impact. In order to better understand and describe temperature seasonality changes between the middle Eocene and the early Oligocene, we use the Earth system model IPSL-CM5A2 and a set of simulations reconstructing the EOT through three major climate forcings: pCO2 decrease (1120, 840, and 560 ppm), the Antarctic ice-sheet (AIS) formation, and the associated sea-level decrease. Our simulations suggest that pCO2 lowering alone is not sufficient to explain the seasonality evolution described by the data through the EOT; rather, the combined effects of pCO2, AIS formation, and increased continentality provide the best data-model agreement. The pCO2 decrease induces a zonal pattern of alternating bands of increasing and decreasing seasonality, particularly strong in the northern high latitudes (up to 8 degrees C MATR increase) due to sea-ice and surface-albedo feedbacks. Conversely, the onset of the AIS is responsible for a surface albedo that is more constant over the year, which leads to a strong decrease in seasonality in the southern midlatitudes to high latitudes (> 40 degrees S).
Finally, continental areas that emerged due to the sea-level lowering cause the largest increase in seasonality and explain most of the global heterogeneity in MATR change (ΔMATR) patterns. The ΔMATR patterns we reconstruct are generally consistent with the variability of the intensity of the EOT biotic crisis across the Northern Hemisphere and provide insights into its underlying mechanisms.
We present a detailed spectroscopic and timing analysis of X-ray observations of the bright pulsar PSR B0656+14. The observations were obtained simultaneously with eROSITA and XMM-Newton during the calibration and performance verification phase of the Spektrum-Roentgen-Gamma (SRG) mission. The analysis of the 100 ks deep eROSITA observation is supported by archival observations of the source, including XMM-Newton, NuSTAR, and NICER. Using XMM-Newton and NICER, we first established an X-ray ephemeris for the time interval 2015 to 2020, which connects all X-ray observations in this period without cycle-count aliases or phase shifts. The mean eROSITA spectrum clearly reveals an absorption feature originating from the star at 570 eV with a Gaussian sigma of about 70 eV that was tentatively identified in a previous long XMM-Newton observation. A second, previously discussed absorption feature occurs at 260-265 eV and is described here as an absorption edge. It could be of atmospheric or of instrumental origin. These absorption features are superposed on various emission components that are phenomenologically described here as the sum of hot (120 eV) and cold (65 eV) blackbody components, both of photospheric origin, and a power law with photon index Gamma = 2 from the magnetosphere. We created energy-dependent light curves and phase-resolved spectra with a high signal-to-noise ratio. The phase-resolved spectroscopy reveals that the Gaussian absorption line at 570 eV is clearly present throughout about 60% of the spin cycle but is otherwise undetected. Likewise, its parameters were found to depend on phase. The phase of maximum line strength coincides with the maximum flux of the hot blackbody. If the line originates from the stellar surface, it nevertheless likely originates from a different location than the hot polar cap. We also present three families of model atmospheres: a magnetized atmosphere, a condensed surface, and a mixed model.
They were applied to the mean observed spectrum and fit the observed continuum well. The atmosphere model, however, predicts distances that are too short. For the mixed model, the Gaussian absorption may be interpreted as proton cyclotron absorption in a field as high as 10^14 G, which is significantly higher than the field derived from the moderate observed spin-down.
Am Ende der Globalisierung
(2021)
Globalization has become an omnipresent certainty. But how accurate is the concept of "globalization" when national borders are being strengthened at the same time as transnational free-trade zones are being expanded, and when, on different scales, territories are being transcended while territorial demarcations are simultaneously being redrawn? Understanding current changes as a re-figuration of spaces enables the analysis and discussion of contradictory, tension-laden, and conflictual spatial processes and their everyday experience. The interdisciplinary contributions of this volume provide theoretical and empirical analyses of political, digital, and everyday spaces within the concept of re-figuration.
Zimzum
(2023)
Zimzum is the kabbalistic idea that God created the world by limiting his omnipresence. Zimzum originated in the teachings of the sixteenth-century Jewish mystic Isaac Luria and here, Christoph Schulte follows its traces across the Jewish and Christian intellectual history of Europe and North America over four centuries.
The Hebrew word zimzum originally means “contraction,” “withdrawal,” “retreat,” “limitation,” and “concentration.” In Kabbalah, zimzum is a term for God’s self-limitation, undertaken before creation in order to make the creation of the world possible. The Jewish mystic Isaac Luria coined this term in Galilee in the sixteenth century, positing that the God who was “Ein-Sof,” unlimited and omnipresent before creation, must concentrate himself in the zimzum and withdraw in order to make room for the creation of the world in God’s own center. At the same time, God also limits his infinite omnipotence to allow the finite world to arise. Without the zimzum there is no creation, making zimzum one of the basic concepts of Judaism.
The Lurianic doctrine of zimzum has been considered an intellectual showpiece of the Kabbalah and of Jewish philosophy. The teaching of zimzum has appeared in Kabbalistic literature across Central and Eastern Europe, perhaps most famously in Hasidic literature up to the present day, and in philosopher and historian Gershom Scholem’s epoch-making research on Jewish mysticism. Zimzum has fascinated Jewish and Christian theologians, philosophers, and writers like no other Kabbalistic teaching. This can be seen across the philosophy and cultural history of the twentieth century, as it gained prominence among such diverse authors and artists as Franz Rosenzweig, Hans Jonas, Isaac Bashevis Singer, Harold Bloom, Barnett Newman, and Anselm Kiefer.
This book follows the traces of the zimzum across the Jewish and Christian intellectual history of Europe and North America over more than four centuries, where Judaism and Christianity, theosophy and philosophy, divine and human, mysticism and literature, Kabbalah and the arts encounter, mix, and cross-fertilize the interpretations and appropriations of this doctrine of God’s self-entanglement and limitation.
Deriving mechanism-based pharmacodynamic models by reducing quantitative systems pharmacology models
(2023)
Quantitative systems pharmacology (QSP) models integrate comprehensive qualitative and quantitative knowledge about pharmacologically relevant processes. We previously proposed a first approach to leverage the knowledge in QSP models to derive simpler, mechanism-based pharmacodynamic (PD) models. Their complexity, however, is typically still too large for use in the population analysis of clinical data. Here, we extend the approach beyond state reduction to also include the simplification of reaction rates, the elimination of reactions, and analytic solutions. We additionally ensure that the reduced model maintains a prespecified approximation quality not only for a reference individual but also for a diverse virtual population. We illustrate the extended approach for the warfarin effect on blood coagulation. Using the model-reduction approach, we derive a novel small-scale warfarin/international normalized ratio model and demonstrate its suitability for biomarker identification. Due to the systematic nature of the approach compared with empirical model building, the proposed model-reduction algorithm provides an improved rationale for building PD models from QSP models in other applications as well.
Deep learning has seen widespread application in many domains, mainly for its ability to learn data representations from raw input data. Nevertheless, its success has so far been coupled with the availability of large annotated (labelled) datasets. This is a requirement that is difficult to fulfil in several domains, such as in medical imaging. Annotation costs form a barrier in extending deep learning to clinically-relevant use cases. The labels associated with medical images are scarce, since the generation of expert annotations of multimodal patient data at scale is non-trivial, expensive, and time-consuming. This substantiates the need for algorithms that learn from the increasing amounts of unlabeled data. Self-supervised representation learning algorithms offer a pertinent solution, as they allow solving real-world (downstream) deep learning tasks with fewer annotations. Self-supervised approaches leverage unlabeled samples to acquire generic features about different concepts, enabling annotation-efficient downstream task solving subsequently.
Nevertheless, medical images present multiple unique and inherent challenges for existing self-supervised learning approaches, which we seek to address in this thesis: (i) medical images are multimodal, and their multiple modalities are heterogeneous in nature and imbalanced in quantities, e.g. MRI and CT; (ii) medical scans are multi-dimensional, often in 3D instead of 2D; (iii) disease patterns in medical scans are numerous, and their incidence exhibits a long-tail distribution, so it is oftentimes essential to fuse knowledge from different data modalities, e.g. genomics or clinical data, to capture disease traits more comprehensively; (iv) medical scans usually exhibit more uniform color density distributions, e.g. in dental X-rays, than natural images. Our proposed self-supervised methods meet these challenges, besides significantly reducing the amounts of required annotations.
We evaluate our self-supervised methods on a wide array of medical imaging applications and tasks. Our experimental results demonstrate the obtained gains in both annotation-efficiency and performance; our proposed methods outperform many approaches from the related literature. Additionally, in the case of fusion with genetic modalities, our methods also allow for cross-modal interpretability. In this thesis, we not only show that self-supervised learning is capable of mitigating manual annotation costs, but our proposed solutions also demonstrate how to better utilize it in the medical imaging domain. Progress in self-supervised learning has the potential to extend the application of deep learning algorithms to clinical scenarios.
Arctic climate change is marked by intensified warming compared to global trends and a significant reduction in Arctic sea ice which can intricately influence mid-latitude atmospheric circulation through tropo- and stratospheric pathways. Achieving accurate simulations of current and future climate demands a realistic representation of Arctic climate processes in numerical climate models, which remains challenging.
Model deficiencies in replicating observed Arctic climate processes often arise due to inadequacies in representing turbulent boundary layer interactions that determine the interactions between the atmosphere, sea ice, and ocean. Many current climate models rely on parameterizations developed for mid-latitude conditions to handle Arctic turbulent boundary layer processes.
This thesis focuses on modified representation of the Arctic atmospheric processes and understanding their resulting impact on large-scale mid-latitude atmospheric circulation within climate models. The improved turbulence parameterizations, recently developed based on Arctic measurements, were implemented in the global atmospheric circulation model ECHAM6. This involved modifying the stability functions over sea ice and ocean for stable stratification and changing the roughness length over sea ice for all stratification conditions. Comprehensive analyses are conducted to assess the impacts of these modifications on ECHAM6's simulations of the Arctic boundary layer, overall atmospheric circulation, and the dynamical pathways between the Arctic and mid-latitudes.
Through a step-wise implementation of the mentioned parameterizations into ECHAM6, a series of sensitivity experiments revealed that the combined impacts of the reduced roughness length and the modified stability functions are non-linear. Nevertheless, it is evident that both modifications consistently lead to a general decrease in the heat transfer coefficient, in close agreement with the observations.
Additionally, compared to the reference observations, the ECHAM6 model falls short in accurately representing unstable and strongly stable conditions.
The less frequent occurrence of strong stability restricts the influence of the modified stability functions by reducing the affected sample size. However, when focusing solely on the specific instances of a strongly stable atmosphere, the sensible heat flux approaches near-zero values, which is in line with the observations. Models employing commonly used surface turbulence parameterizations were shown to have difficulties replicating the near-zero sensible heat flux in strongly stable stratification.
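The near-zero sensible heat flux under strongly stable stratification can be illustrated with a bulk-aerodynamic sketch. The formula and the stability factor below are a generic textbook-style illustration with assumed constants, not the parameterizations implemented in ECHAM6 or developed from the Arctic measurements:

```python
import math

RHO_AIR = 1.3          # kg m^-3, near-surface air density (illustrative)
CP_AIR = 1005.0        # J kg^-1 K^-1, specific heat of air
C_H_NEUTRAL = 1.3e-3   # neutral-limit heat transfer coefficient (illustrative)

def stability_factor(ri_b):
    """Dimensionless reduction of the transfer coefficient under stable
    stratification (bulk Richardson number ri_b > 0). The exponential decay
    drives fluxes toward zero for strong stability; the form and the constant
    10 are illustrative assumptions."""
    if ri_b <= 0:
        return 1.0
    return math.exp(-10.0 * ri_b)

def sensible_heat_flux(wind, t_sfc, t_air, ri_b):
    """Bulk-aerodynamic sensible heat flux (W m^-2), positive upward."""
    ch = C_H_NEUTRAL * stability_factor(ri_b)
    return RHO_AIR * CP_AIR * ch * wind * (t_sfc - t_air)
```

With such a formulation, a strongly stable case (large bulk Richardson number) yields a flux of essentially zero, whereas the same wind and temperature contrast under neutral conditions yields tens of W m^-2, which is the qualitative behavior the observations demand.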
I also found that these limited changes in surface layer turbulence parameterizations have a statistically significant impact on the temperature and wind patterns across multiple pressure levels, including the stratosphere, in both the Arctic and mid-latitudes. These significant signals vary in strength, extent, and direction depending on the specific month or year, indicating a strong reliance on the background state.
Furthermore, this research investigates how the modified surface turbulence parameterizations may influence the response of both stratospheric and tropospheric circulation to Arctic sea ice loss.
The most suitable parameterizations for accurately representing Arctic boundary layer turbulence were identified from the sensitivity experiments. Subsequently, the model's response to sea ice loss is evaluated through extended ECHAM6 simulations with different prescribed sea ice conditions.
The simulation with adjusted surface turbulence parameterizations better reproduced the observed Arctic tropospheric warming in its vertical extent, demonstrating improved alignment with the reanalysis data. Additionally, unlike the control experiments, this simulation successfully reproduced specific circulation patterns linked to the stratospheric pathway for Arctic-mid-latitude linkages. Specifically, an increased occurrence of the Scandinavian-Ural blocking regime (negative phase of the North Atlantic Oscillation) in early (late) winter is observed. Overall, it can be inferred that improving turbulence parameterizations in the surface layer can improve ECHAM6's response to sea ice loss.
Quantifying the resilience of vegetated ecosystems is key to constraining both present-day and future global impacts of anthropogenic climate change. Here we apply both empirical and theoretical resilience metrics to remotely-sensed vegetation data in order to examine the role of water availability and variability in controlling vegetation resilience at the global scale. We find a concise global relationship where vegetation resilience is greater in regions with higher water availability. We also reveal that resilience is lower in regions with more pronounced inter-annual precipitation variability, but find less concise relationships between vegetation resilience and intra-annual precipitation variability. Our results thus imply that the resilience of vegetation responds differently to water deficits at varying time scales. In view of projected increases in precipitation variability, our findings highlight the risk of ecosystem degradation under ongoing climate change.
Vegetation dynamics depend on both the amount of precipitation and its variability over time. Here, the authors show that vegetation resilience is greater where water availability is higher and where precipitation is more stable from year to year.
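Empirical resilience in studies of this kind is often quantified via the lag-1 autocorrelation of deseasonalized vegetation anomalies: the slower perturbations decay, the higher the autocorrelation and the lower the resilience. The abstract does not name the paper's exact metric, so the following is a generic sketch of that common indicator:

```python
def lag1_autocorrelation(series):
    """Lag-1 autocorrelation of a time series: a standard early-warning /
    resilience indicator. Values near 1 imply slow recovery from
    perturbations (low resilience); values near 0 imply fast recovery."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + 1] - mean) for i in range(n - 1))
    return cov / var

# A persistent (slowly recovering) anomaly series vs. a rapidly fluctuating one:
slow = [0.9 ** i for i in range(50)]    # high AC1 -> low resilience
fast = [(-1) ** i for i in range(50)]   # strongly negative AC1
```

Relating such per-pixel AC1 values to water availability and precipitation variability is then a matter of regressing the indicator against the hydroclimate covariates.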
On his journey to the 'Orient' in 1856, the Viennese cultural entrepreneur Ludwig August Frankl (1810–94) discussed the recent Hatt-ı Hümayun, the new constitution promulgated by Sultan Abdülmecid I for the Ottoman Empire, with a Turkish state official. Frankl said that the European nations wondered whether the Ottoman Empire would be able to enact this revolutionary legislation, especially given the fact that they themselves had not yet implemented the full emancipation of religious minorities in their own countries. 'Equal rights for all religions,' he exclaimed. 'While England orders this legislation for an, Your Mightiness will excuse the common expression, uncivilized nation, they do not comply with it in their own Parliament' (Ludwig August Frankl, Nach Jerusalem! (1858), i, 191). While criticizing England's hypocritical policy, Frankl, as an Austrian Jew, was actually referring to the discriminatory legislation against Jews in his own country, the Habsburg Monarchy. European Jews, whose legal emancipation had been postponed since the eighteenth century, were in awe of the Ottoman reforms that fundamentally reversed the relationship between Muslims and non-Muslims with the stroke of a pen. The chequered relationship between the Ottoman Empire and the European powers, or more precisely the Habsburg Monarchy, from the nineteenth century until the First World War is the topic of Barbara Haider-Wilson's comprehensive study Österreichs friedlicher Kreuzzug 1839–1917.
The systematic acquisition of skills in working with sources in Jewish languages remains a desideratum in academia. This book provides a practical introduction. The selected handwritten and printed sources document Jewish history from the early modern period to the twentieth century in four Jewish languages: Hebrew, Yiddish, Judeo-German, and Judeo-Spanish. Each source is reproduced in facsimile and accompanied by a transcription and a German translation. The book offers not only an introduction to source studies, in particular palaeography, but also, through short descriptions of the texts, insight into the history of the Jews in the Holy Roman Empire and its successor states. The textbook is now available in a revised new edition.
Zunz in Prag
(2021)
The paper addresses an under-researched chapter in the history of the Jewish Reform movement, which is at the same time a commonly overlooked period in the biography of Leopold Zunz (1794–1886), one of the founding members of Wissenschaft des Judentums. By placing his eight-month appointment as a preacher at the Reform synagogue in Prague in its socio-political and biographical contexts, the article sheds new light on Zunz’s commitment to the religious renewal of Judaism. A schematic comparison between the development of the Reform movement in the German lands and in the Habsburg Monarchy at the beginning of the nineteenth century highlights the role of state involvement in internal Jewish affairs. Finally, the analysis of Zunz’s Synagogenordnung from 1836, based on the original manuscript held at the National Library of Israel, allows a re-evaluation of the (Reform) synagogue as an institution for the social disciplining of its members.
§ 70 Hohe See
(2024)
Die Reform der Haskala
(2021)
The first programmatic text of the Haskalah, Naphtali Herz Weisel's (= Hartwig Wessely, 1725–1805) treatise Divre Schalom we-Emet, inspired by the Josephinian Edicts of Toleration, dealt with the reorganization of Jewish education. In this work, which broke with fundamental ideals of traditional Jewish education, Weisel deliberately took as his starting point the biblical verse "חנך לנער על פי דרכו גם כי יזקין לא יסור ממנה" ("Train up a child in the way he should go, and even when he is old he will not depart from it," Proverbs 22:6). In contrast to the traditional emphasis on limmud, that is, the study of the religious writings of Judaism, Weisel placed chinnuch, that is, education, at the center of his reflections.
Der Judenstaat Ararat
(2022)
Finger-based representation of numbers is a high-level cognitive strategy to assist numerical and arithmetic processing in children and adults. It is unclear whether this paradigm builds on simple perceptual features or comprises several attributes through embodiment. Here we describe the development and initial testing of an experimental setup for studying embodiment during a finger-based numerical task using virtual reality (VR) and a low-cost tactile stimulator that is easy to build. Using VR allows us to create new ways to study finger-based numerical representation with a virtual hand that can be manipulated in ways our own hand cannot, such as by decoupling tactile and visual stimuli. The goal is to present a new methodology that allows researchers to study embodiment through this approach, perhaps shedding new light on the cognitive strategy behind the finger-based representation of numbers. A critical methodological requirement here is delivering precisely targeted sensory stimuli to specific effectors while simultaneously recording their behavior and engaging the participant in a simulated experience. We tested the device's capability by stimulating users in different experimental configurations. Results indicate that our device delivers reliable tactile stimulation to all fingers of a participant's hand without losing motion-tracking quality during an ongoing task. This is reflected by an accuracy of over 95% in detecting stimulation of a single finger or of multiple fingers in sequential stimulation, as shown in experiments with sixteen participants. We discuss possible application scenarios, explain how to apply our methodology to study the embodiment of finger-based numerical representations and other high-level cognitive functions, and discuss potential further developments of the device based on the data obtained in our testing.
In the High Middle Ages, narratives emerge that combine established literary forms and traditions in new ways: they are vernacular, allegorical, and use first-person narration, and in this combination, which solidifies into a narrative format transcending the boundaries of individual languages, they take up the most diverse themes. This format, first realized in the Old French Roman de la Rose, would shape European literature well into the modern era with texts such as Dante's Divina Commedia, Guillaume de Deguileville's Pèlerinage de la Vie Humaine, William Langland's Piers Plowman, and Christine de Pizan's Le Livre de la mutation de Fortune. The contribution introducing this volume examines whether the narrative format is used universally or whether, for example within love poetry, it exhibits specific particularities.
Perovskite semiconductors are an attractive option to overcome the limitations of established silicon-based photovoltaic (PV) technologies due to their exceptional opto-electronic properties and their successful integration into multijunction cells. However, the performance of single- and multijunction cells is largely limited by significant nonradiative recombination at the perovskite/organic electron transport layer junctions. In this work, the cause of interfacial recombination at the perovskite/C-60 interface is revealed via a combination of photoluminescence, photoelectron spectroscopy, and first-principles numerical simulations. It is found that the most significant contribution to the total C-60-induced recombination loss occurs within the first monolayer of C-60, rather than in the bulk of C-60 or at the perovskite surface. The experiments show that the C-60 molecules act as deep trap states when in direct contact with the perovskite. It is further demonstrated that by reducing the surface coverage of C-60, the radiative efficiency of the bare perovskite layer can be retained. The findings of this work pave the way toward overcoming one of the most critical remaining performance losses in perovskite solar cells.
Cosmic-ray neutron sensing (CRNS) allows for the estimation of root-zone soil water content (SWC) at the scale of several hectares. In this paper, we present the data recorded by a dense CRNS network operated from 2019 to 2022 at an agricultural research site in Marquardt, Germany, the first multi-year CRNS cluster. Consisting, at its core, of eight permanently installed CRNS sensors, the cluster was supplemented by a wealth of complementary measurements: data from seven additional temporary CRNS sensors, partly co-located with the permanent ones; 27 SWC profiles (mostly permanent); two groundwater observation wells; meteorological records; and Global Navigation Satellite System reflectometry (GNSS-R). Complementary to these continuous measurements, numerous campaign-based activities provided data by mobile CRNS roving, hyperspectral imagery via UASs, intensive manual sampling of soil properties (SWC, bulk density, organic matter, texture, soil hydraulic properties), and observations of biomass and snow (cover, depth, and density). The unique temporal coverage of 3 years entails a broad spectrum of hydro-meteorological conditions, including exceptional drought periods and extreme rainfall but also episodes of snow coverage, as well as a dedicated irrigation experiment. Apart from serving to advance CRNS-related retrieval methods, this data set is expected to be useful for various disciplines, for example, soil and groundwater hydrology, agriculture, or remote sensing. Hence, we show exemplary features of the data set in order to highlight the potential for such subsequent studies. The data are available at doi.org/10.23728/b2share.551095325d74431881185fba1eb09c95 (Heistermann et al., 2022b).
Cell-level systems biology model to study inflammatory bowel diseases and their treatment options
(2023)
To help understand the complex and therapeutically challenging inflammatory bowel diseases (IBDs), we developed a systems biology model of the intestinal immune system that is able to describe the main aspects of IBD and different treatment modalities thereof. The model, including key cell types and processes of the mucosal immune response, compiles a large number of isolated experimental findings from the literature into a larger context and allows for simulations of different inflammation scenarios based on the underlying data and assumptions. In the context of a large and diverse virtual IBD population, we characterized the patients based on their phenotype (in contrast to healthy individuals, they developed persistent inflammation after a trigger event) rather than on a priori assumptions about parameter differences from a healthy individual. This made it possible to reproduce the enormous diversity of predispositions known to lead to IBD. Analyzing different treatment effects, the model provides insight into characteristics of individual drug therapy. We illustrate for anti-TNF-alpha therapy how the model can be used (i) to decide for alternative treatments with the best prospects in the case of nonresponse, and (ii) to identify promising combination therapies with other available treatment options.
The color red has been implicated in a variety of social processes, including those involving mating. While previous research suggests that women sometimes wear red strategically to increase their attractiveness, the replicability of this literature has been questioned. The current research is a reasonably powered conceptual replication designed to strengthen this literature by testing whether women are more inclined to display the color red (1) during fertile (as compared with less fertile) days of the menstrual cycle, and (2) when expecting to interact with an attractive man (as compared with a less attractive man and with a control condition). Analyses controlled for a number of theoretically relevant covariates (relationship status, age, and current weather). Only the latter hypothesis received mixed support (mainly among women on hormonal birth control), whereas results concerning the former hypothesis did not reach significance: women (N = 281) displayed more red when expecting to interact with an attractive man, but the findings did not support the prediction that women would increase their display of red on fertile days of the cycle. Findings thus suggested only mixed replicability for the link between the color red and psychological processes involving romantic attraction. They also illustrate the importance of further investigating the boundary conditions of color effects on everyday social processes.
Background: Patients with subjective cognitive decline (SCD) report memory deterioration and are at an increased risk of converting to Alzheimer's disease (AD) although psychophysical testing does not reveal any cognitive deficit.
Objective: Here, gustatory function is investigated as a potential predictor for an increased risk of progressive cognitive decline indicating higher AD risk in SCD.
Methods: Measures of smell and taste perception as well as neuropsychological data were assessed in patients with subjective cognitive decline (SCD). Subgroups with an increased likelihood of progression to preclinical AD (SCD+) and those with a lower likelihood (SCD-) were compared to healthy controls (HC), patients with mild cognitive impairment, and AD patients. The Sniffin' Sticks test comprised 12 odor items of different qualities, and taste was measured with 32 taste strips (sweet, salty, bitter, sour) of different concentrations.
Results: Only taste was able to distinguish between HC/SCD- and SCD+ patients.
Conclusion: This study provides a first hint that taste may be a more sensitive marker than smell for detecting preclinical AD in SCD. Longitudinal observation of cognition and pathology is necessary to further evaluate taste perception as a predictor of pathological objective decline in cognition.
Shams et al. report that glioma patients' motor status is predicted accurately by diffusion MRI metrics along the corticospinal tract using a support vector machine, reaching an overall accuracy of 77%. They show that these metrics are more effective than demographic and clinical variables.
Along-tract statistics enables white matter characterization using various diffusion MRI metrics. These diffusion models reveal detailed insights into white matter microstructural changes with development, pathology and function. Here, we aim at assessing the clinical utility of diffusion MRI metrics along the corticospinal tract, investigating whether motor glioma patients can be classified with respect to their motor status. We retrospectively included 116 brain tumour patients suffering from either left or right supratentorial, unilateral World Health Organization Grades II, III and IV gliomas with a mean age of 53.51 +/- 16.32 years. Around 37% of patients presented with preoperative motor function deficits according to the Medical Research Council scale. At group-level comparison, the highest non-overlapping diffusion MRI differences were detected in the superior portion of the tracts' profiles. Fractional anisotropy and fibre density decrease, while apparent diffusion coefficient, axial diffusivity and radial diffusivity increase. To predict motor deficits, we developed a method based on a support vector machine using histogram-based features of diffusion MRI tract profiles (e.g. mean, standard deviation, kurtosis and skewness), following a recursive feature elimination method. Our model achieved high performance (74% sensitivity, 75% specificity, 74% overall accuracy and 77% area under the curve). We found that apparent diffusion coefficient, fractional anisotropy and radial diffusivity contributed more than other features to the model. Incorporating patient demographics and clinical features such as age, tumour World Health Organization grade, tumour location, gender and resting motor threshold did not affect the model's performance, revealing that these features were not as effective as microstructural measures. These results shed light on the potential patterns of tumour-related microstructural white matter changes in the prediction of functional deficits.
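The pipeline described in this abstract (histogram-based summary features of along-tract profiles, recursive feature elimination, and a support vector machine) can be sketched roughly as follows. This is a minimal illustration using scikit-learn and synthetic data: the profile values, dimensions, and parameter choices (e.g. the number of retained features) are assumptions for demonstration, not the authors' actual implementation.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def profile_features(profile):
    # Histogram-based summary statistics of one along-tract metric profile,
    # as named in the abstract: mean, standard deviation, kurtosis, skewness.
    return [profile.mean(), profile.std(), kurtosis(profile), skew(profile)]

# Synthetic stand-in: 116 "patients", 4 diffusion metrics (e.g. FA, ADC, AD, RD)
# sampled at 100 points along the corticospinal tract. Real values would come
# from tractography-based along-tract profiling of diffusion MRI data.
n_patients, n_metrics, n_points = 116, 4, 100
y = rng.integers(0, 2, n_patients)  # 0 = intact motor status, 1 = deficit
X = np.empty((n_patients, n_metrics * 4))
for i in range(n_patients):
    feats = []
    for m in range(n_metrics):
        # Inject a small group difference into the first metric only.
        shift = 0.3 * y[i] * (m == 0)
        profile = rng.normal(loc=1.0 + shift, scale=0.2, size=n_points)
        feats.extend(profile_features(profile))
    X[i] = feats

# RFE needs feature weights, so a linear-kernel SVM serves as its estimator;
# the selected features then feed a second linear SVM for classification.
selector = RFE(SVC(kernel="linear"), n_features_to_select=8)
model = make_pipeline(StandardScaler(), selector, SVC(kernel="linear"))
acc = cross_val_score(model, X, y, cv=5).mean()
```

Cross-validated accuracy here reflects only the synthetic group difference injected above; the point is the structure (feature extraction, scaling, RFE, SVM), which mirrors the approach the abstract outlines.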