It is not only the course of history that changes, but also historical scholarship and history teaching. Over time, the rote memorization of prescribed historical narratives has increasingly given way to the ability to deconstruct such narratives. This, however, requires an understanding of the conditions under which (historical-)scholarly knowledge comes into being. Open-world games such as HORIZON ZERO DAWN (2017) offer a way to do this playfully, and on an equal footing with teachers and historians.
„Gut, besser, Vermesser.“
(2021)
The history of microbiology as a laboratory science of the late nineteenth century has no place for Ehrenberg. The findings of this "Humboldt en miniature" seem outdated, even erroneous: he examined the animate content of water and air under the microscope, collected samples from around the world, and thus described numerous microbes he called "infusoria" and their effects, for instance in blood miracles, while denying their involvement in infectious diseases. At the same time, his ecological view of the microcosm speaks to the present in surprising ways, on the one hand because microbes remain an omnipresent source of fascination and threat, on the other because many of the premises of Pasteur and Koch appear obsolete in the age of genomics. Starting from Ehrenberg's ambivalent position, this article asks why he may be, perhaps for precisely that reason, more interesting than historiography has so far made him appear.
“Chunking” spoken language
(2021)
In this introductory paper to the special issue on "Weak cesuras in talk-in-interaction", we aim to guide the reader into current work on the "chunking" of naturally occurring talk. This work is conducted in the methodological frameworks of Conversation Analysis and Interactional Linguistics, two approaches that consider the interactional aspect of humans talking with each other to be a crucial starting point for its analysis. In doing so, we will (1) lay out the background of this special issue (what is problematic about "chunking" talk-in-interaction, the characteristics of the methodological approach chosen by the contributors, the cesura model), (2) highlight what can be gained from such a revised understanding of "chunking" in talk-in-interaction by referring to previous work with this model as well as the findings of the contributions to this special issue, and (3) indicate further directions such work could take, starting from the papers in this special issue. We hope to stimulate a fruitful exchange on the phenomena discussed, across methodological divides.
The European Union's current policy toward refugees has turned the Mediterranean into a mass grave. That people continue to die at the EU's external borders every day in 2021 is linked to the expansion of security mechanisms for the purpose of reinforced border protection. Through security mechanisms such as the expansion of Frontex and the electronic recording of entry and exit data, the EU seals itself off ever further, while at the same time the issue of flight and migration undergoes an increasing 'securitization'.
This thesis proceeds from the assumption that the basic features of the securitization of flight and migration are already embedded in the EU's liberal conception of the state. Using a Foucauldian discourse analysis, it therefore interrogates the historically evolved assumptions about non-European people inherent in liberalism and their continuation into today's EU policies. The thesis asks, first, how the increasing securitization of migration and the associated treatment of non-Europeans at the EU's external borders can be explained. It then asks, in greater depth, to what extent the patterns of knowledge about the European 'we' and the non-European 'others' constructed in liberalism reappear in today's EU policy.
Building on the works of Michel Foucault, the thesis introduces the development of liberal statehood since the seventeenth century. These accounts are supplemented by a postcolonial perspective that conveys how liberal thought conceived of Europe's 'outside'. Together, the two perspectives expose the structures of liberal thinking that are later recognized in the analysis of current EU documents. Six agendas, regulations, and strategies published by the EU, covering the thematic intersection of security and migration, serve as the documents for analysis.
The results show that an 'othering' - the historically evolved group formation of a homogeneously conceived European 'we' as opposed to the non-European 'others' - is reproduced in today's EU policy at its external borders. The security thinking of the liberal state that emerged in the seventeenth century is transferred to today's EU external borders through the reproduction of particular patterns of knowledge in the form of 'stories'. Internally, the EU acts according to a principle of the 'common strength' of the European states, or the EU member states, while externally it pursues a purpose-rational cooperation with third countries. Then as now, the primary concern is not the protection of human lives but the advantage of Europe, or of the EU. On the basis of these findings, the increase in the securitization of flight and migration at the EU's external borders is explained by the reproduction of this historically evolved security thinking.
This thesis deals with early Christianity and the stages of its development in the first century CE. Its emphasis is on the interrelations between Christianity and the Holy Land. The study proposes an alternative reading of the early Hebrew sources, according to which the movement of the historical Jesus is understood as a product of the theological and political discourse of that period in Galilee and Judea. The proposed reading challenges the prevailing tendency in the research literature to separate not only Jesus from his religious context, but also the religious from the political elements in early Christianity. The aim of this study is to make possible a local view of first-century Christianity as an inseparable part of the broader Palestinian religious and political picture.
The study is divided into five chapters. The first chapter presents the historical background of the sects and currents of Judaism in the Holy Land in the first century from a religious-political point of view, deepening the understanding of the shades of historical consciousness among the people of the Holy Land in the first century CE. The second chapter presents the history of the Christianity of the historical Jesus thematically, illuminating the complex process that Jesus and the Palestinian-Jewish people went through in this period. The three themes of the second chapter are: the halakha of Jesus (kashrut and Shabbat); the quiet revolt (the political meaning of the Passover festival in Jerusalem for the Jews of the Second Temple); and redemption on earth according to the Jesus depicted in the Gospels. The third chapter presents the history of Christianity according to Paul. It focuses on the transfer of Jesus from Jerusalem to Rome and discusses the consequences of this transfer, emphasizing its implications for the first Christian generation and for the Holy Land.
The fourth chapter, 'Palestinian Christianity', summarizes the three preceding chapters and offers a postcolonial reading of the Palestinian Christianity of the historical (Jewish) Jesus as against the Roman Christianity that was later founded on the theology of Paul. This chapter explains why it is important to reconnect the religious and the political, and stresses that the well-known universalist outlook of Christianity was not part of Jesus's movement, since the historical Jesus was driven by his Jewishness, with a strong sense of belonging to his Palestinian-Jewish community, and not by being the Son of God. The fifth and concluding chapter presents a brief summary, a review of the theoretical implications of this study, and recommendations for future research.
This scholarly-practical commentary and translation of the Criminal Code of the Federal Republic of Germany is not limited to rendering the text of Germany's principal criminal statute into Russian. Guided by the principles of functional translation, the author conveys the meaning of the criminal-law provisions to the reader through a precise and systematic translation. The commentary is an article-by-article commentary that takes into account the position of the legislator as well as the case law of the Federal Court of Justice and the higher regional courts of Germany, and German legal doctrine on the principal problems of criminal law. Due attention has also been paid to supplementary criminal law and to questions of criminal procedure. The aim of this edition is thus to enable the reader to understand the language of the German criminal-law provisions correctly and to interpret them legally.
The introductory essay, 'An Introduction to German Criminal Law', provides a general overview of German criminal law, with particular attention to the development and sources of criminal legislation and to the doctrine of criminal law in the Federal Republic of Germany.
The book will be of interest to practitioners and legal scholars, as well as to anyone who encounters German criminal law in their professional work or studies.
This thesis presents the first systematic investigation of ethyl vinylsulfonate (1a), phenyl vinyl sulfone (1b), and N-benzyl-N-methylethenesulfonamide (1c) in the Fujiwara-Moritani reaction (also referred to as the dehydrogenative Heck reaction, DHR). In this transition-metal-catalyzed reaction, a new C-C bond is formed via twofold C-H bond activation, allowing an atom-economical construction of molecules, since no by-products in the form of salts are generated. Acetanilides (2) were used as the aromatic reactants so that regiospecific coupling is directed by the acetamide group acting as a catalyst-directing group (CDG). The Pd-catalyzed DHR was extensively optimized, after which nine differently substituted acetanilides 2 could be functionalized with 1a and seven with 1b. Since 1c did not react, a Ru-catalyzed DHR method was adopted instead. With this method, 1c could be functionalized with acetanilides, and the scope of 2 could be extended to substrates bearing deactivating substituents.
The sulfalkenylated acetanilides were then examined in follow-up reactions. A deacetylation-diazotization-coupling sequence was used to convert the acetamide group into a leaving group and subsequently couple it in a Matsuda-Heck reaction. Several 1,2-dialkenylbenzenes were obtained in this way, putting the CDG to use a second time. Besides its conversion into a leaving group, the CDG could also be integrated into the synthesis of various heterocycles. To this end, a 1,3-cycloaddition of deprotonated tosylmethyl isocyanide onto the electron-poor sulfalkenyl group first furnished pyrroles. Subsequent coupling of the pyrrole unit with the CDG by cyclocondensation then gave quinolines. These syntheses provided sulfur analogues of the natural product marinoquinoline A.
A further transition-metal-catalyzed C-H activation reaction, the Matsuda-Heck reaction, was used to arylate 1b with variously substituted diazonium salts, affording numerous styrenyl sulfones. The successful use of the vinylsulfonyl compounds in cross-metathesis could not be achieved within this work. Various dialkenylated sulfonamides were therefore synthesized, varying the chain length of the alkenyl group between two and three carbons at sulfur and between three and four at nitrogen. The dialkenylated sulfonamides were then employed in the C-H activation methods examined before.
N-Allyl-N-phenylethenesulfonamide (3) could be successfully functionalized in both the DHR and the Heck reaction. Coupling was method-specific, depending on the electron density of the respective alkenyl group: the DHR led to selective arylation of the vinyl group, while the Heck reaction arylated the allyl group. No mixed products were obtained. For the other diolefins, complex product mixtures were obtained. The diolefins were furthermore examined in ring-closing metathesis, and the corresponding sultams were obtained in very good yields. The use of the sultams in C-H activation was unsuccessful; presumably, the existing reaction conditions would have to be optimized for these doubly substituted sulfonamides.
Finally, various enantiomerically pure olefins were prepared starting from levoglucosenone. Levoglucosenone was first treated with an allyl and a 3-butenyl Grignard reagent; the corresponding products were obtained in moderate yields. Another route began with the reduction of levoglucosenone to levoglucosenol. This alcohol was successfully etherified with allyl bromide. Besides the studies on ether synthesis, levoglucosenol was esterified with various sulfonyl chlorides to give the corresponding sulfonate esters. These olefins were examined in a domino metathesis reaction; starting from the allyl levoglucosenyl ether, a dihydrofuran was prepared.
Bienaymé-Galton-Watson processes can be used to study particular evolving populations. These populations comprise individuals that reproduce identically, randomly, autonomously, and independently of one another, and that each live for only one generation. The n-th generation arises as a random sum over the individuals of the (n-1)-th generation. The relevance of these processes rests on their history and on their significance both inside and outside mathematics. The history of Bienaymé-Galton-Watson processes is traced through the development of the concept up to the present day, naming the scientists from various disciplines who contributed insights to the topic and applied the concept in their fields; this establishes the extra-mathematical significance. The intra-mathematical significance, in turn, derives from the concept of branching processes, which goes back to the Bienaymé-Galton-Watson processes. Branching processes are among the most expressive models for describing population growth. Moreover, their current importance stems from the applicability of branching processes and Bienaymé-Galton-Watson processes in epidemiology; the Ebola and Corona pandemics are cited as fields of application. The processes serve as a decision aid for policy-makers and allow statements about the effects of pandemic countermeasures. Alongside the processes themselves, the conditional expectation with respect to discrete random variables, the probability generating function, and the random sum are introduced. These concepts simplify the description of the processes and thus form the basis of the analysis. In addition, the required and further-reaching properties of the underlying topics and of the processes are stated and proved.
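In standard notation (a sketch of the usual formalism, not quoted from the thesis itself), the random-sum construction and the probability generating function fit together as follows:

```latex
% Offspring distribution (p_k), its pgf, and the BGW recursion
\[
  f(s) = \sum_{k \ge 0} p_k s^k, \qquad
  Z_0 = 1, \qquad
  Z_{n+1} = \sum_{i=1}^{Z_n} X_{n,i},
\]
% where the X_{n,i} are i.i.d. offspring counts with distribution (p_k).
% The pgf of a random sum composes, so the pgf f_n of Z_n satisfies
\[
  f_{n+1}(s) = f_n\bigl(f(s)\bigr), \qquad
  f_1 = f, \qquad
  \mathbb{E}[Z_n] = m^n \quad \text{with } m = f'(1).
\]
```

The composition rule is exactly the "random sum" property: conditioning on $Z_n$ and using the independence of the offspring counts turns the expectation of $s^{Z_{n+1}}$ into an iterated pgf.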
The chapter culminates in the proof of the criticality theorem, which yields a statement about the extinction of the process in the various cases and hence about the extinction probability. The cases are distinguished by the expected number of offspring per individual: a process with expected offspring number at most one dies out with certainty, whereas for an expected number greater than one the population need not die out. Individual examples are then presented, such as the linear fractional case, a population of mouse fibroblasts (connective-tissue cells), and the question that originally gave rise to the processes. These are analyzed using the results obtained, and selected random dynamics are simulated in the following chapter. The simulations are carried out by a program written in Python and realized by means of the inversion method. They illustrate, by way of example, the developments in the different criticality cases of the processes, and the frequencies of the individual population sizes are presented as histograms. The difference between the cases is thereby confirmed, and the applicability of Bienaymé-Galton-Watson processes to more complex problems becomes apparent. The histograms corroborate that each individual population size occurs only finitely often, a statement raised by Galton and used in the extinction-explosion dichotomy. The account of the topic and the examination of the concept conclude with a didactic analysis that takes into account the Fundamental Ideas, the Fundamental Ideas of stochastics, and the guiding idea 'data and chance'.
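The thesis's Python program is not reproduced in this abstract; as an illustration, a minimal sketch of a Bienaymé-Galton-Watson simulation based on the inversion method (inverse-CDF sampling of the offspring distribution) could look like this. All names are hypothetical, not taken from the thesis.

```python
import random

def sample_offspring(pmf, u):
    """Inversion method: return the smallest k with cumulative probability F(k) >= u."""
    cum = 0.0
    for k, p in enumerate(pmf):
        cum += p
        if u <= cum:
            return k
    return len(pmf) - 1  # guard against floating-point round-off

def simulate_bgw(pmf, generations, z0=1, rng=None):
    """Simulate generation sizes Z_0, ..., Z_n of a Bienaymé-Galton-Watson process.

    pmf[k] is the probability that one individual has k offspring.
    """
    rng = rng or random.Random()
    sizes = [z0]
    for _ in range(generations):
        # Z_{n+1} is a random sum of i.i.d. offspring counts over the Z_n individuals.
        z = sum(sample_offspring(pmf, rng.random()) for _ in range(sizes[-1]))
        sizes.append(z)
        if z == 0:  # extinction: all later generations stay at 0
            sizes.extend([0] * (generations - len(sizes) + 1))
            break
    return sizes
```

With an offspring mean above one (e.g. `pmf = [0.25, 0.25, 0.5]`, mean 1.25, supercritical), a positive fraction of runs survives; with mean at most one, every run eventually hits zero, matching the criticality theorem.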
It emerges that, depending on the perspective chosen, using Bienaymé-Galton-Watson processes in school is plausible and can benefit students. As an example, the framework curriculum for Berlin and Brandenburg is analyzed and compared with the core curriculum of North Rhine-Westphalia. The design of the Berlin-Brandenburg curriculum does not support the conclusion that Bienaymé-Galton-Watson processes should be used; the underlying guiding idea turns out not to be fully compatible with some of the Fundamental Ideas of stochastics. A modification of the curriculum toward a stronger orientation on the Fundamental Ideas would thus make the use of the processes possible. This assessment is supported by examining a North Rhine-Westphalian lesson plan for stochastic processes and transferring it to Bienaymé-Galton-Watson processes. In addition, a concept map and a Vernetzungspentagraph after von der Bank are designed to highlight this aspect.
Despite their great importance for innovation policy, non-university research institutions (AUF) have rarely been the subject of empirical studies. None of the existing work focuses on the collaboration of scientists in research teams, even though scientific collaboration is a largely unexplored field. This is surprising, since innovative and complex tasks such as those found in research require both the creative potential of individuals and well-functioning cooperation among them. Collaboration among scientists in the AUF takes place in a competitive environment. On the one hand, the AUF compete with one another at the organizational level for research funding and scientific staff. On the other hand, the competitive acquisition of third-party funding is essential for scientists in order to produce the achievements, measured in high-ranking publications and funding quotas, needed for their own careers. A growing share of third-party funding in the institutions also affects personnel policy and the number of fixed-term contracts. At the same time, research funding is frequently tied to collaborations between scientists, and studies show that publications and research results are predominantly the work of several people. This tension between collaboration and competition is intensified by the lack of opportunities for early-career researchers to remain in science. Even though the federal government is responding to these challenges, each individual must find their own way between collaboration and competition.
The aim of this thesis is to answer the following research questions:
1. How can natural-science research teams in AUF be characterized?
2. How does the individual researcher act in the field of tension between cooperation and competition?
3. Which potentials and obstacles for the successful work of research teams in AUF can be identified at the individual, team, and environmental level?
To answer these research questions, an empirical study in a mixed-methods design was conducted, consisting of a Germany-wide online survey of 574 natural scientists in AUF and qualitative interviews with 122 team members from 20 natural-science research teams in AUF.
The results show that the teams are better described as working groups, since, especially in basic research, there is not so much a common goal as a shared thematic frame within which the researchers pursue their individual goals. Work in the team is described as predominantly positive and cooperative and is characterized above all by mutual support with problems rather than by a joint thematic process of scientific discovery; the latter takes place instead in small subgroups within the working group and, above all, in close coordination with the team leader (TL). Organizational conditions such as fixed-term contracts and the career bottleneck are cited as the main factors intensifying competition.
The TL occupies the central role in the team, bears scientific, financial, and personnel responsibility, and must meet the demands of the organization. Doctoral candidates concentrate almost exclusively on their qualification work. For postdocs, a tension is apparent, since they pursue projects and goals of their own alongside the demands of the TL. The gatekeeping function of the TL is reinforced by her role in passing on career-relevant information within the team, e.g. about upcoming conferences. The TL holds the important contacts, connects the team, and maintains its network, and early-career researchers rely heavily on her support in their tasks and in career-relevant matters. Non-scientific staff deserve greater attention, both in their function within the teams and in the organization as a whole: they are the central contact persons for the scientific staff and ensure continuity in the storage and transfer of knowledge. The organizations, for their part, need to create supportive framework, working, and task conditions for the TL and to support early-career researchers in taking on responsibility for scientific and career-relevant tasks at an early stage, which requires improved personnel development concepts and offerings. Beyond this, opportunities for cooperation within the institution and between the groups should be created, e.g. through open spaces and networking opportunities, and innovative working environments should be promoted in order to establish new forms of an innovation-friendly scientific culture.
The present study deals with how actors plan and carry out their learning process, with the main focus on the use of learning strategies. It asks which strategies these professional learners employ to attain the command of their lines required for practicing their profession, not how to optimize learning outcomes.
The literature review made clear that current studies on adult learning are located above all in job-specific contexts and concern the acquisition of competencies, problem-solving strategies, and social participation. Actors' learning, however, is not driven by any intention of behavioral change or of a concrete gain in knowledge.
For actors, performing is part of their professional culture. Given that precise factual knowledge is crucial as the basis for competent, convincing presentation, the study's results are also relevant for professions that must appear in public, such as priests, lawyers, and teachers, as well as for pupils and students who have to give talks and/or present their work.
For the empirical investigation, twelve renowned actors are interviewed using problem-centered interviews, followed by a qualitative content analysis.
The analysis of the data demonstrates a clear connection between the body and speaking practice, and likewise shows how important movement is for the learning process. Results are obtained concerning cognitive, metacognitive, and resource-oriented strategies, with the learning environment and learning with colleagues proving to be of decisive importance.
On the influence of adaptivity on the perception of complexity in human-technology interaction
(2021)
We live in a society shaped by a constant desire for innovation and progress. Its consequences are the ever-advancing digitalization and informational networking of all areas of life, which lead to increasingly complex socio-technical systems. The goals of these systems include supporting people, improving their living situation or quality of life, and extending human capabilities. Yet new complex technical systems do not only have positive social and societal effects. There are often undesired side effects that only become visible in use, and both the designers and the users of complex networked technologies frequently feel disoriented. The consequences can range from declining acceptance to a complete loss of trust in networked software systems. As complex applications, and with them ever more complex human-technology interactions, gain in relevance, it becomes all the more important to regain orientation. To do so, we must first identify the elements that contribute to complexity in the interaction with networked socio-technical systems and thus create a need for orientation.
This thesis aims to contribute to enabling structured reflection on the complexity of networked socio-technical systems throughout the design process. To this end, a definition of complexity and of complex systems is first developed that goes beyond the computer-science understanding of complexity (i.e. the complicatedness of problems, algorithms, or data) and instead foregrounds socio-technical interaction with and within complex networked systems. Based on this definition, an analysis tool is then developed that makes the complexity in the interaction with socio-technical systems visible and describable.
One area into which networked socio-technical systems are increasingly finding their way is that of digital educational technologies. Adaptive educational technologies in particular have been ascribed great potential in recent decades. Two adaptive teaching and training systems are therefore examined as examples using the analysis tool developed in this thesis, with particular attention to the influence of adaptivity on the complexity of human-technology interaction situations. Empirical studies examine the experiences of designers and users of these adaptive systems in order to determine the decisive criteria for complexity. In this way, recurring questions of orientation in the development of adaptive educational technologies can be uncovered, and interaction situations perceived as complex can be identified. These situations show where, owing to the system's complexity, users' established everyday routines no longer suffice to fully grasp the consequences of interacting with the system. This knowledge can help both designers and users deal better with the inherent complexity of modern educational technologies in the future.
With 52 texts, this documentation assembles all reviews known to us today of Theodor Fontane's "Der Krieg gegen Frankreich 1870–1871", which appeared in two volumes, consisting of four half-volumes in total, between March 1873 and September 1876 with the Verlag der Königlichen Geheimen Ober-Hofbuchdruckerei (R. v. Decker). The text of each review is reproduced character-faithfully from its first printing in newspapers or journals. Research on Fontane's account of the Franco-Prussian War is thus provided for the first time with an important body of reception-history material in the form of an edition.
The present work deals with the variation in the linearisation of German infinitival complements from a diachronic perspective. Based on the observation that in present-day German the position of infinitival complements is restricted by properties of the matrix verb (Haider, 2010, Wurmbrand, 2001), whereas this appears much more liberal in older stages of German (Demske, 2008, Maché and Abraham, 2011, Demske, 2015), this dissertation investigates the emergence of those restrictions and the factors that have led to a reduced, yet still existing variability. The study contrasts infinitival complements of two types of matrix verbs, namely raising and control verbs. In present-day German, these show different syntactic behaviour and opposite preferences as far as the position of the infinitive is concerned: while infinitival complements of raising verbs build a single clausal domain with the matrix verb and occur obligatorily intraposed, infinitival complements of control verbs can form clausal constituents and occur predominantly extraposed. This correlation is not attested in older stages of German, at least not until Early New High German.
Drawing on diachronic corpus data, the present work provides a description of the changes in the linearisation of infinitival complements from Early New High German to present-day German which aims at finding out when the correlation between infinitive type and word order emerged and further examines their possible causes. The study shows that word order change in German infinitival complements is not a case of syntactic change in the narrow sense, but that the diachronic variation results from the interaction of different language-internal and language-external factors and that it reflects, on the one hand, the influence of language modality on the emerging standard language and, on the other hand, a process of specialisation.
In the GDR, geography was one of the school subjects most heavily loaded with political topics in the spirit of Marxism-Leninism. Another aspect is the socialist educational goals that ranked high in GDR schooling, centered on raising children to become socialist personalities. This thesis attempts to take a clear look at this situation in order to learn what was demanded of teachers and how it was to be implemented in school.
Durch den Fall der Mauer war natürlich auch eine Umstrukturierung des Bildungssystems im Osten unausweichlich. Hier will die Arbeit Einblicke geben, wie die Geographielehrkräfte diese Transformation mitgetragen und umgesetzt haben. Welche Wesenszüge aus der Sozialisierung in der DDR haben sich bei der Gestaltung des Unterrichtes und dessen Ausrichtung auf die neuen Erziehungsziele erhalten?
Hierzu wurden Geographielehrkräfte befragt, die sowohl in der DDR als auch im geeinten Deutschland unterrichtet haben. Die Fragen bezogen sich in erster Linie auf die Art und Weise des Unterrichtens vor, während und nach der Wende und der daraus entstandenen Systemtransformation.
Die Befragungen kommen zu dem Ergebnis, dass sich der Geographieunterricht in der DDR thematisch von dem in der BRD nicht sonderlich unterschied. Von daher bedurfte es keiner umfangreichen inhaltlichen Veränderung des Geographieunterrichts. Schon zu DDR-Zeiten wurden durch die Lehrkräfte offenbar eigenmächtig ideologiefreie physisch-geographische Themen oft ausgedehnt, um die Ideologie des Faches zu reduzieren. So fiel den meisten eine Anpassung ihres Unterrichts an das westdeutsche System relativ leicht. Die humanistisch geprägte Werteerziehung des DDR-Bildungssystems wurde unter Ausklammerung des sozialistischen Aspektes ebenso fortgeführt, da es auch hier viele Parallelen zum westdeutschen System gegeben hat. Deutlich wird eine Charakterisierung des Faches als Naturwissenschaft von Seiten der ostdeutschen Lehrkräfte, obwohl das Fach an den Schulen den Gesellschaftswissenschaften zugeordnet wird und auch in der DDR eine starke wirtschaftsgeographische Ausrichtung hatte.
Von der Verantwortung sozialistische Persönlichkeiten zu erziehen, wurden die Lehrkräfte mit dem Ende der DDR entbunden und die in dieser Arbeit aufgeführten Interviewauszüge lassen keinen Zweifel daran, dass es dem Großteil der Befragten darum nicht leidtat, sie sich aber bis heute an der Werteorientierung aus DDR-Zeiten orientieren.
This is a reply to Frank Holl's article "La cooperación inolvidable de Aimé Bonpland y Alexander von Humboldt", published in the Argentine journal Bonplandia, volume 29, no. 2 (2020). The aim is not only to defuse the serious accusations Holl makes against Avé-Lallemant, for instance of practising a "tendentious historiography" and of having gravely damaged Bonpland's reputation over the centuries, but also to show that they are contradicted within Holl's own article. The discussion is based above all on the seven pages that Avé-Lallemant devotes to Bonpland, and to visiting him at his estancia Santa Ana fourteen days before Bonpland's death, in his two-volume travel account "Reise durch Süd-Brasilien" (1859), as well as on a short obituary in the second volume following Bonpland's death in 1858.
Electrical muscle stimulation (EMS) is an increasingly popular training method and has become a focus of research in recent years. New EMS devices offer a wide range of mobile applications for whole-body EMS (WB-EMS) training, e.g., the intensification of dynamic low-intensity endurance exercises through WB-EMS. The present study aimed to determine the differences in exercise intensity between WB-EMS-superimposed and conventional walking, and between WB-EMS-superimposed and conventional Nordic walking, during a treadmill test. Eleven participants (52.0 ± years; 85.9 ± 7.4 kg, 182 ± 6 cm, BMI 25.9 ± 2.2 kg/m2) performed a 10 min treadmill test at a given velocity (6.5 km/h) in four different test situations: walking (W) and Nordic walking (NW), each in conventional and WB-EMS-superimposed form. Oxygen uptake in absolute (VO2) and relative to body weight (rel. VO2), lactate, and the rate of perceived exertion (RPE) were measured before and after the test. WB-EMS intensity was adjusted individually according to the feedback of the participant. Descriptive statistics are given as mean ± SD. For the statistical analyses, a one-factorial ANOVA for repeated measures and a two-factorial ANOVA [factors: EMS, W/NW, and their combination (EMS*W/NW)] were performed (α = 0.05). Significant effects were found for the EMS and W/NW factors for the outcome variables VO2 (EMS: p = 0.006, r = 0.736; W/NW: p < 0.001, r = 0.870), relative VO2 (EMS: p < 0.001, r = 0.850; W/NW: p < 0.001, r = 0.937), and lactate (EMS: p = 0.003, r = 0.771; W/NW: p = 0.003, r = 0.764), with both factors producing higher results. However, the difference in VO2 and relative VO2 is within the range of biological variability of ± 12%. The factor combination EMS*W/NW is statistically non-significant for all three variables. WB-EMS resulted in higher RPE values (p = 0.035, r = 0.613); RPE differences for W/NW and EMS*W/NW were not significant.
The results indicate that WB-EMS influences parameters of exercise intensity; however, given the marginal differences in the outcome variables, the practical impact and clinical relevance of WB-EMS-superimposed walking (WB-EMS-W) exercise remain questionable.
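The two-factorial design described above can be made concrete with a small sketch that computes the two main effects and their interaction from a 2 × 2 table of cell means. The numbers below are hypothetical illustrative values, not data from the study:

```python
# Hypothetical cell means of relative VO2 (ml/min/kg) for a 2x2 design:
# factor A = EMS (off/on), factor B = gait (walking W / Nordic walking NW).
cell_means = {
    ("off", "W"): 22.0,
    ("off", "NW"): 24.0,
    ("on", "W"): 23.5,
    ("on", "NW"): 25.5,
}

def main_effect_ems(m):
    """Difference between the EMS-on and EMS-off row averages."""
    on = (m[("on", "W")] + m[("on", "NW")]) / 2
    off = (m[("off", "W")] + m[("off", "NW")]) / 2
    return on - off

def main_effect_gait(m):
    """Difference between the NW and W column averages."""
    nw = (m[("off", "NW")] + m[("on", "NW")]) / 2
    w = (m[("off", "W")] + m[("on", "W")]) / 2
    return nw - w

def interaction(m):
    """Difference of the EMS effect between the two gaits."""
    ems_effect_nw = m[("on", "NW")] - m[("off", "NW")]
    ems_effect_w = m[("on", "W")] - m[("off", "W")]
    return ems_effect_nw - ems_effect_w

print(main_effect_ems(cell_means))   # 1.5: EMS raises the mean
print(main_effect_gait(cell_means))  # 2.0: NW raises the mean
print(interaction(cell_means))       # 0.0: the effects are additive
```

With these made-up means, both factors raise oxygen uptake while the interaction is zero, mirroring the reported pattern of two significant main effects and a non-significant EMS*W/NW combination.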
This study investigates the relationship between teacher quality and teachers’ engagement in professional development (PD) activities using data on 229 German secondary school mathematics teachers. We assessed different aspects of teacher quality (e.g. professional knowledge, instructional quality) using a variety of measures, including standardised tests of teachers’ content knowledge, to determine what characteristics are associated with high participation in PD. The results show that teachers with higher scores for teacher quality variables take part in more content-focused PD than teachers with lower scores for these variables. This suggests that teacher learning may be subject to a Matthew effect, whereby more proficient teachers benefit more from PD than less proficient teachers.
Semi-natural habitats (SNHs) are becoming increasingly scarce in modern agricultural landscapes. This may reduce natural ecosystem services such as pest control, with its putatively positive effect on crop production. In agreement with other studies, we recently reported wheat yield reductions at field borders which were linked to the type of SNH and the distance to the border. In this experimental landscape-wide study, we asked whether these yield losses have a biotic origin, analyzing fungal seed and fungal leaf pathogens, herbivory by cereal leaf beetles, and weed cover as hypothesized mediators between SNHs and yield. We established experimental winter wheat plots of a single variety within conventionally managed wheat fields at fixed distances either to a hedgerow or to an in-field kettle hole. For each plot, we recorded the fungal infection rate on seeds, fungal infection and herbivory rates on leaves, and weed cover. Using several generalized linear mixed-effects models as well as a structural equation model, we tested the effects of SNHs at a field scale (SNH type and distance to SNH) and at a landscape scale (percentage and diversity of SNHs within a 1000-m radius). In the dry year of 2016, we detected one putative biotic culprit: weed cover was negatively associated with yield at a 1-m and 5-m distance from the field border with an SNH. None of the fungal and insect pests, however, significantly affected yield, either alone or in interaction with the type of, or distance to, an SNH. However, the pest groups themselves responded differently to SNHs at the field scale and at the landscape scale. Our findings highlight that crop losses at field borders may be caused by biotic culprits; however, their negative impact seems weak and is putatively reduced by conventional farming practices.
Clustering in education is important for identifying groups of objects in order to find linked patterns of correlations in educational datasets. As such, MOOCs provide a rich source of educational datasets which enable a wide selection of options to carry out clustering and an opportunity for cohort analyses. In this experience paper, five research studies on clustering in MOOCs are reviewed, drawing out the rationales, methods, and student clusters that reflect certain kinds of learning behaviour. The collection of the varied clusters shows that each study identifies and defines clusters according to distinctive engagement patterns. Implications and a summary are provided at the end of the paper.
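As a toy illustration of the kind of cohort clustering these studies perform, the sketch below groups learners by two hypothetical engagement features (share of videos watched, share of quizzes attempted) using a plain k-means pass; the reviewed studies use richer feature sets and a variety of algorithms:

```python
import math

# Hypothetical per-learner engagement features:
# (fraction of videos watched, fraction of quizzes attempted).
learners = [
    (0.95, 0.90), (0.88, 0.85), (0.92, 0.95),   # "completing" behaviour
    (0.70, 0.10), (0.65, 0.05), (0.80, 0.15),   # "auditing" behaviour
    (0.05, 0.02), (0.10, 0.00), (0.08, 0.05),   # "sampling" behaviour
]

def kmeans(points, centroids, iterations=10):
    """Plain Lloyd's algorithm; returns final centroids and labels."""
    labels = []
    for _ in range(iterations):
        # assign each point to its nearest centroid
        labels = [min(range(len(centroids)),
                      key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # move each centroid to the mean of its assigned points
        for c in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, labels

centroids, labels = kmeans(learners,
                           centroids=[(1.0, 1.0), (0.7, 0.1), (0.0, 0.0)])
print(labels)  # each learner is assigned to one of three behaviour clusters
```

The three seed centroids and cluster names are assumptions for the example; in practice the number of clusters and their interpretation as engagement patterns are exactly what each reviewed study derives from its data.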
First come, first served: Critical choices between alternative actions are often made based on events external to an organization, and reacting promptly to their occurrence can be a major advantage over the competition. In Business Process Management (BPM), such deferred choices can be expressed in process models, and they are an important aspect of process engines. Blockchain-based process execution approaches are no exception to this, but are severely limited by the inherent properties of the platform: The isolated environment prevents direct access to external entities and data, and the non-continual runtime based entirely on atomic transactions impedes the monitoring and detection of events. In this paper we provide an in-depth examination of the semantics of deferred choice, and transfer them to environments such as the blockchain. We introduce and compare several oracle architectures able to satisfy certain requirements, and show that they can be implemented using state-of-the-art blockchain technology.
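Stripped of the blockchain specifics, the deferred-choice semantics examined here can be sketched as a tiny state machine: an oracle reports external events, the first relevant event commits the choice, and all other branches are disabled. This is an illustrative sketch of the pattern only, not of the paper's oracle architectures:

```python
class DeferredChoice:
    """Waits on alternative external events; the first relevant one wins."""

    def __init__(self, branches):
        # branches: mapping from event name to the follow-up activity.
        self.branches = dict(branches)
        self.chosen = None

    def on_event(self, event):
        """Called by an oracle when an external event occurs."""
        if self.chosen is not None:
            return None                  # choice already committed
        if event not in self.branches:
            return None                  # irrelevant event, ignore
        self.chosen = event              # commit: disables the other branches
        return self.branches[event]

# Hypothetical process fragment: react to whichever event occurs first.
choice = DeferredChoice({
    "price_drop": "restock_inventory",
    "competitor_launch": "start_campaign",
})
print(choice.on_event("competitor_launch"))  # 'start_campaign'
print(choice.on_event("price_drop"))         # None: choice already made
```

On a blockchain, the difficulty the paper addresses is precisely that `on_event` cannot be "called" by the outside world directly; an oracle architecture must deliver the event inside a transaction.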
The goal of this paper is to study the demand factors driving enrollment in massive open online courses. Using course-level data from a French MOOC platform, we study the course-, teacher-, and institution-related characteristics that influence the enrollment decision of students, in a setting where enrollment is open to all students without administrative barriers. Coverage of the course in social and traditional media is a key driver. In addition, the language of instruction and the (estimated) amount of work needed to complete the course also have a significant impact. The data also suggest that the presence of same-side externalities is limited. Finally, preferences of national and international students tend to differ on several dimensions.
The main goal of this bachelor's thesis is a theoretical examination of water habituation in the child's own home. Building on this, the author produces, as a theory-to-practice transfer, a handout for parents and guardians containing the most relevant information of her thesis in compressed form. So that parents can proactively support their children, the handout is meant to be audience-appropriate and concise without withholding essential details. Parents receive a handout containing the most important information on water habituation at home. Among other things, they learn about the maximum time children should spend in the water and the optimal temperature of the bath water. They also receive important information about bodily reactions that can occur in or through water, for example the eyelid-closure reflex or the cold stimulus. They are informed about essential safety aspects and given a compact overview of rules of conduct, the so-called do's and don'ts. The exercises and games are selected according to the current guidelines of the DGUV (2019) for the content of water habituation and structured according to the conditions at home. The handout also includes exercises and games in which no properties or effects of the water are explored, as well as breathing and submersion exercises. Fear of water, once it has taken hold, is known to be the greatest obstacle for non-swimmers (DGUV, 2019).
The author therefore aims, through the discussion of this fear in her thesis and the handout, to enable parents to take away children's fear of water, or to preserve their freedom from fear, and subsequently to allow children to enjoy movement in the water. "The more fun small children have bathing, and the less fear they associate with the medium, the faster they will later learn to swim" (DGUV, 2016, p. 6).
The theoretical foundations of the handout are the central aspects and goals of water habituation, taken from the 2019 publication of the German Social Accident Insurance (Deutsche Gesetzliche Unfallversicherung), which is authoritative in the school context. These concern perceiving the specific properties of water and approaching and becoming accustomed to them. Children experience the element's density, pressure, and temperature, and the influence of water on the body: water resistance, buoyancy, and hydrodynamic force. Exercises in which children get to know the water, or come into intensive contact with it for the first time, are therefore presented first. These are followed by exercises, mostly in the form of games, intended to awaken enjoyment. The final phase comprises exercises requiring specific handling of the water. This structure follows the first three phases of Baumeister's (1984) methodology of water habituation; the methodological principle of moving from the simple to the complex is also used as a theoretical basis. Legahn (2007) describes several learning models that can be applied in water habituation depending on age and developmental stage. The author draws on these in the handout and sets out suitable learning techniques, for example learning from a model (imitating people, animals, or puppets) or active learning (a playful build-up of movement improves skills). The materials required are listed in the handout under the heading of each exercise or game and serve as initial information. Next to the heading, the properties and effects of the water that can be explored in that specific exercise are named, for example pressure and buoyancy for water pressure and water buoyancy. The exercise itself is described below.
As visualisation, the author provides her own drawings. Below these images there is often a matching game variant to add extra enjoyment to the exercise, and suitable exercise variations or tips are mentioned several times as well.
Was ist HipHop?
(2021)
This dissertation is an investigative research study of the dynamically changing phenomenon of HipHop. The author accounts for the lasting attractiveness of the cultural phenomenon HipHop and seeks to explain more precisely its constant reproducibility. He therefore begins with a historical discourse analysis of HipHop culture, analysing its forms, protagonists, and discourses in order to understand it better. By working out HipHop's genuine property of multiple codability, common explanatory patterns from scholarship and the media are relativised and criticised. In his study, the author combines literature from cultural studies and educational science with diverse current and historical representations and images. Above all, image-based self-stagings of HipHoppers and self-testimonies from narrative interviews that he himself conducted with various HipHoppers in Germany are evaluated. Alongside the narrative interviews, image interpretation following Bohnsack serves as a principal source for the thesis of multiple codability: two images of the HipHoppers Lady Bitch Ray and Kollegah are interpreted following Bohnsack (2014), showing how HipHop is staged and produced visually in addition to its lyrical and sonic components. From this it is concluded that HipHop makes it possible to present and convey contrary points of view while simultaneously applying typical cultural practices such as boasting. HipHop's constant openness becomes evident in practices such as sampling or the battle, and the author explains that these techniques produce the generative property of multiple codability.
He thus advocates a kind of modular-kit theory: in principle, anyone can help themselves from the HipHop kit according to preference, interest, and affinity. The variety of opinions on HipHop that the author obtains by coding the narrative interviews illustrates this thesis and makes clear that HipHop is more than a fashion. Through the openness it carries within itself, HipHop has the fundamental capacity to reinvent itself continually and thus to grow in popularity. This work extends the ever-growing research in HipHop studies and sets important accents for further research and for making HipHop better understood.
Virtualizing physical space
(2021)
The true cost of virtual reality is not the hardware, but the physical space it requires, as a one-to-one mapping of physical space to virtual space allows for the most immersive way of navigating in virtual reality. Such “real-walking” requires physical space of the same size and shape as the virtual world represented. This generally prevents real-walking applications from running in any space they were not designed for.
To reduce virtual reality’s demand for physical space, creators of such applications let users navigate virtual space by means of a treadmill, altered mappings of physical to virtual space, hand-held controllers, or gesture-based techniques. While all of these solutions succeed at reducing virtual reality’s demand for physical space, none of them reach the same level of immersion that real-walking provides.
Our approach is to virtualize physical space: instead of accessing physical space directly, we allow applications to express their need for space in an abstract way, which our software systems then map to the physical space available. We allow real-walking applications to run in spaces of different size, different shape, and in spaces containing different physical objects. We also allow users immersed in different virtual environments to share the same space.
Our systems achieve this by using a tracking-volume-independent representation of real-walking experiences — a graph structure that expresses the spatial and logical relationships between virtual locations, virtual elements contained within those locations, and user interactions with those elements. When run in a specific physical space, this graph representation is used to define a custom mapping between the elements of the virtual reality application and the physical space by parsing the graph using a constraint solver. To re-use space, our system splits virtual scenes and overlaps virtual geometry. The system derives this split by hierarchically clustering the virtual objects, which form the nodes of a bipartite directed graph representing the logical ordering of events in the experience. We let applications express their demands for physical space and use pre-emptive scheduling between applications to have them share space. We present several application examples enabled by our system. They all enable real-walking, despite being mapped to physical spaces of different size and shape, containing different physical objects or other users.
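The mapping idea can be caricatured in a few lines: virtual locations declare the floor area they need and the scene they belong to, and locations from different scenes, which are never active at the same time, may share a physical zone. The greedy assignment below is a deliberately simplified stand-in for the constraint solving the system actually performs, with made-up areas and zone names:

```python
# Virtual locations with required floor area (m^2) and the "scene" they
# belong to; locations in different scenes are never active together,
# so they may share a physical zone (space reuse via scene splitting).
locations = [
    ("lobby", 6.0, "scene1"),
    ("corridor", 4.0, "scene1"),
    ("vault", 6.0, "scene2"),   # scene2 runs after scene1: may reuse space
]

physical_zones = {"zoneA": 6.5, "zoneB": 4.5}   # available areas (m^2)

def assign(locations, zones):
    """Greedy mapping; a zone may be reused by different scenes,
    but holds at most one location per scene."""
    used = set()     # (zone, scene) pairs already taken
    mapping = {}
    for name, area, scene in sorted(locations, key=lambda l: -l[1]):
        for zone, capacity in sorted(zones.items()):
            if capacity >= area and (zone, scene) not in used:
                used.add((zone, scene))
                mapping[name] = zone
                break
        else:
            raise ValueError(f"no physical zone fits {name}")
    return mapping

print(assign(locations, physical_zones))
# "lobby" and "vault" end up in the same zone because they belong
# to different scenes, illustrating the space reuse described above.
```

A real solver must additionally respect the spatial and logical relationships encoded in the graph (adjacency, shape, contained objects, interactions), which this sketch ignores.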
We see substantial real-world impact in our systems. Today's commercial virtual reality applications are generally designed to be navigated using less immersive solutions, as this allows them to be operated on any tracking volume. While this is a commercial necessity for the developers, it misses out on the higher immersion offered by real-walking. We let developers overcome this hurdle by allowing experiences to bring real-walking to any tracking volume, thus potentially bringing real-walking to consumers.
Background:
Research into the application of virtual reality technology in the health care sector has rapidly increased, resulting in a large body of research that is difficult to keep up with.
Objective:
We provide an overview of the annual publication numbers in this field; of the most productive and influential countries, journals, and authors; and of the most used, most co-occurring, and most recent keywords.
Methods:
Based on a data set of 356 publications and 20,363 citations derived from Web of Science, we conducted a bibliometric analysis using BibExcel, HistCite, and VOSviewer.
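At its core, the keyword co-occurrence analysis that such tools perform is a count of keyword pairs across records. A minimal sketch over hypothetical bibliographic records:

```python
from itertools import combinations
from collections import Counter

# Hypothetical keyword lists of four bibliographic records.
records = [
    ["virtual reality", "training", "patients"],
    ["virtual reality", "training", "education"],
    ["virtual reality", "patients", "trial"],
    ["exergames", "fitness", "virtual reality"],
]

cooccurrence = Counter()
for keywords in records:
    # each unordered keyword pair within a record counts once
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("training", "virtual reality")])  # 2
print(cooccurrence[("patients", "virtual reality")])  # 2
```

Tools such as VOSviewer build on counts like these to lay out and cluster keyword networks; the records above are invented for illustration, not drawn from the analysed data set.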
Results:
The strongest growth in publications occurred in 2020, accounting for 29.49% of all publications so far. The most productive countries are the United States, the United Kingdom, and Spain; the most influential countries are the United States, Canada, and the United Kingdom. The most productive journals are the Journal of Medical Internet Research (JMIR), JMIR Serious Games, and the Games for Health Journal; the most influential journals are Patient Education and Counselling, Medical Education, and Quality of Life Research. The most productive authors are Riva, del Piccolo, and Schwebel; the most influential authors are Finset, del Piccolo, and Eide. The most frequently occurring keywords other than “virtual” and “reality” are “training,” “trial,” and “patients.” The most relevant research themes are communication, education, and novel treatments; the most recent research trends are fitness and exergames.
Conclusions:
The analysis shows that the field has left its infant state and its specialization is advancing, with a clear focus on patient usability.
Viper
(2021)
Key-value stores (KVSs) have found wide application in modern software systems. For persistence, their data resides in slow secondary storage, which requires KVSs to employ various techniques to increase their read and write performance from and to the underlying medium. Emerging persistent memory (PMem) technologies offer data persistence at close-to-DRAM speed, making them a promising alternative to classical disk-based storage. However, simply replacing existing storage with PMem as a drop-in does not yield good results, as block-based access behaves differently in PMem than on disk and ignores PMem's byte addressability, layout, and unique performance characteristics. In this paper, we propose three PMem-specific access patterns and implement them in a hybrid PMem-DRAM KVS called Viper. We employ a DRAM-based hash index and a PMem-aware storage layout to utilize the random-write speed of DRAM and the efficient sequential-write performance of PMem. Our evaluation shows that Viper significantly outperforms existing KVSs for core KVS operations while providing full data persistence. Moreover, Viper outperforms existing PMem-only, hybrid, and disk-based KVSs by 4-18x for write workloads, while matching or surpassing their get performance.
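The core design, a volatile hash index in DRAM pointing into a sequentially written persistent log, can be sketched as follows. An in-memory bytearray stands in for PMem here, and Viper's actual record layout, concurrency, and recovery machinery are omitted; this is a structural illustration only:

```python
class HybridKVS:
    """Toy hybrid store: DRAM dict index + append-only 'persistent' log.
    Writes go sequentially to the log (PMem favours sequential writes);
    reads use the random-access-fast DRAM index to locate the record."""

    def __init__(self):
        self.index = {}          # DRAM: key -> (offset, length) in the log
        self.log = bytearray()   # stand-in for a PMem-resident log

    def put(self, key, value):
        record = value.encode()
        offset = len(self.log)
        self.log += record                       # sequential append
        self.index[key] = (offset, len(record))  # volatile index update

    def get(self, key):
        offset, length = self.index[key]
        return self.log[offset:offset + length].decode()

kvs = HybridKVS()
kvs.put("user:1", "alice")
kvs.put("user:2", "bob")
print(kvs.get("user:1"))  # alice
```

In a real PMem KVS, the index would be rebuilt after a crash by scanning the persistent log, which requires keys and lengths to be stored alongside the values; the sketch stores only the values for brevity.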
Vienna
(2021)
This book explores and debates the urban transformations that have taken place in Vienna over the past 30 years and their consequences in policy fields such as labour and housing, political and social participation and the environment. Historically, European cities have been characterised by a strong association between social cohesion, quality of life, economic ambition and a robust State. Vienna is an excellent example for that. In more recent years, however, cities were pressured to change policy principles and mechanisms in the context of demographic shifts, post-industrial transformations and welfare recalibration which have led to worsened social conditions in many cities. Each chapter in this volume discusses Vienna's responses to these pressures in key policy arenas, looking at outcomes from the context-specific local arrangements. Against a theoretical framework debating the European city as a model of inclusion and social justice, authors explore the local capacity to innovate urban policies and to address new social risks, while paying attention to potential trade-offs.
The book questions and assesses the city's resilience using time series and an institutional analysis of four key dimensions that characterise the European city model within the context of post-industrial transition: redistribution, recognition, representation and sustainability. It offers a multiscalar perspective of urban governance through labour, housing, participatory and environmental policies, bringing together different levels and public policy types.
This article focuses on the current state of integrating digital media, and video games in particular, into teacher education. It proceeds in three steps: first, a general look is taken at the current state of digital equipment at schools and universities. Next, the formal requirements of the module examination regulations are addressed, and the didactically oriented concept of Matthis Kepser (2012) is presented. Building on this, a concept for the seminar "Narrative Computerspiele im Deutschunterricht" ("Narrative computer games in German lessons") is presented, which provides for the use of the RPG Maker. After these theoretical preliminaries, the article presents the results produced in the seminar, which was held annually at the University of Kassel from 2016 to 2019, and uses an example to illustrate the opportunities and challenges such a seminar format entails both for teacher education and for German lessons.
Rheology describes the flow of matter under the influence of stress and, related to solids, investigates how solids subjected to stresses deform. As the deformation of the Earth's outer layers, the lithosphere and the crust, is a major focus of rheological studies, rheology in the geosciences describes how strain evolves in rocks of variable composition and temperature under tectonic stresses. It is here that deformation processes shape ocean basins and mountain belts, which ultimately result from the complex interplay between lithospheric plate motion and the susceptibility of rocks to plate-tectonic forces. A rigorous study of the strength of the lithosphere and of deformation phenomena thus requires in-depth studies of the rheological characteristics of the materials involved and of the temporal framework of deformation processes.
This dissertation aims at analyzing the influence of the physical configuration of the lithosphere on the present-day thermal field and the overall rheological characteristics of the lithosphere to better understand variable expressions in the formation of passive continental margins and the behavior of strike-slip fault zones. The main methodological approach chosen is to estimate the present-day thermal field and the strength of the lithosphere by 3-D numerical modeling. The distribution of rock properties is provided by 3-D structural models, which are used as the basis for the thermal and rheological modeling. The structural models are based on geophysical and geological data integration, additionally constrained by 3-D density modeling. More specifically, to decipher the thermal and rheological characteristics of the lithosphere in both oceanic and continental domains, sedimentary basins in the Sea of Marmara (continental transform setting), the SW African passive margin (old oceanic crust), and the Norwegian passive margin (young oceanic crust) were selected for this study.
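In such rheological models, lithospheric strength at a given depth is conventionally taken as the minimum of a brittle (frictional) and a ductile (creep) yield stress. The sketch below uses generic, illustrative parameter values (a Byerlee-type friction law and quartzite-like dislocation-creep parameters on a linear geotherm), not the values or laws used in this work:

```python
import math

R = 8.314              # gas constant, J/(mol K)
rho, g = 2800.0, 9.81  # crustal density (kg/m^3), gravity (m/s^2)
f = 0.6                # friction coefficient (Byerlee-type)
# Illustrative wet-quartzite-like creep parameters:
A, n, Q = 1e-28, 4.0, 223e3    # Pa^-n s^-1, stress exponent, J/mol
strain_rate = 1e-15            # tectonic strain rate, s^-1
T0, grad = 283.0, 0.025        # surface temperature (K), geotherm (K/m)

def brittle(z):
    """Frictional yield stress (Pa), increasing linearly with depth z (m)."""
    return f * rho * g * z

def ductile(z):
    """Dislocation-creep flow stress (Pa), dropping rapidly as T rises."""
    T = T0 + grad * z
    return (strain_rate / A) ** (1 / n) * math.exp(Q / (n * R * T))

def strength(z):
    """Lithospheric strength: the weaker of the two deformation mechanisms."""
    return min(brittle(z), ductile(z))

# Friction limits strength at shallow depth, creep at greater depth:
print(strength(10_000), strength(30_000))
```

Integrating `strength(z)` over depth gives the integrated lithospheric strength mapped in studies like this one; temperature enters through the geotherm, which is why the thermal field controls the rheological result.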
The Sea of Marmara, in northwestern Turkey, is located where the dextral North Anatolian Fault zone (NAFZ) accommodates the westward escape of the Anatolian Plate toward the Aegean. Geophysical observations indicate that the crust is heterogeneous beneath the Marmara basin, but a detailed characterization of the lateral crustal heterogeneities is presented for the first time in this study. Here, I use different gravity datasets and exploit the general non-uniqueness of potential-field modeling to propose three possible end-member scenarios of crustal configuration. The models suggest that pronounced gravitational anomalies in the basin originate from significant density heterogeneities within the crust. The rheological modeling reveals that associated variations in lithospheric strength control the mechanical segmentation of the NAFZ. Importantly, a strong crust that is mechanically coupled to the upper mantle spatially correlates with aseismic patches where the fault bends and changes its strike in response to the presence of high-density lower crustal bodies. Between the bends, mechanically weaker crustal domains that are decoupled from the mantle are characterized by creep.
For the passive margins of SW Africa and Norway, two previously published 3-D conductive and lithospheric-scale thermal models were analyzed. These 3-D models differentiate various sedimentary, crustal, and mantle units and integrate different geophysical data, such as seismic observations and the gravity field. Here, the rheological modeling suggests that the present-day lithospheric strength across the oceanic domain is ultimately affected by the age and past thermal and tectonic processes as well as the depth of the thermal lithosphere-asthenosphere boundary, while the configuration of the crystalline crust dominantly controls the rheological behavior of the lithosphere beneath the continental domains of both passive margins.
The thermal and rheological models show that the variations of lithospheric strength are fundamentally influenced by the temperature distribution within the lithosphere. Moreover, as the composition of the lithosphere significantly influences the present-day thermal field, it also affects the rheological characteristics of the lithosphere. Overall, my studies add to our understanding of regional tectonic deformation processes and the long-term behavior of sedimentary basins; they confirm other analyses that have pointed out that crustal heterogeneities in the continents result in diverse lithospheric thermal characteristics, which in turn result in greater complexity and variation of rheological behavior compared to oceanic domains with their thinner, more homogeneous crust.
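Rheological modeling of lithospheric strength of the kind described above typically evaluates, at each depth, the weaker of the frictional (brittle) and creep (ductile) strengths, i.e., a yield strength envelope. The sketch below is a generic illustration with textbook order-of-magnitude parameter values (friction coefficient, olivine-like creep constants), not the specific laws or parameters used in this dissertation:

```python
import math

def brittle_strength(depth_m, rho=2800.0, g=9.81, friction=0.6, pf_ratio=0.36):
    """Frictional (Byerlee-type) strength in Pa at a given depth,
    assuming a hydrostatic pore-fluid pressure ratio pf_ratio."""
    lithostatic = rho * g * depth_m
    return friction * lithostatic * (1.0 - pf_ratio)

def ductile_strength(temp_K, strain_rate=1e-15, A=2.5e-17, n=3.5, Q=5.32e5, R=8.314):
    """Dislocation-creep flow stress in Pa for an olivine-like rheology:
    sigma = (strain_rate / A)**(1/n) * exp(Q / (n * R * T))."""
    return (strain_rate / A) ** (1.0 / n) * math.exp(Q / (n * R * temp_K))

def lithospheric_strength(depth_m, temp_K):
    # At each depth the rock fails by whichever mechanism is weaker.
    return min(brittle_strength(depth_m), ductile_strength(temp_K))
```

Integrating this minimum over depth, using the temperature profile from the thermal model, yields the total lithospheric strength that the thesis maps in 3-D.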
Plastic pollution is an increasing environmental problem, but a comprehensive understanding of its effects in the environment is still missing. The wide variety of sizes, shapes, and polymer compositions of plastics impedes an adequate risk assessment. We investigated the effect of differently sized polystyrene beads (1-, 3-, 6-µm; PS) and polyamide fragments (5–25 µm, PA), as well as non-plastic items such as silica beads (3-µm, SiO2), on the population growth, reproduction (egg ratio), and survival of two common aquatic microinvertebrates: the rotifer species Brachionus calyciflorus and Brachionus fernandoi. The MP treatments were combined with different food quantities (limiting and saturating food concentrations) and with food of different quality. We found variable fitness responses, with a significant effect of 3-µm PS on the population growth rate of both rotifer species depending on food quantity. An interaction between food quality and the MP treatments was found for the reproduction of B. calyciflorus. PA and SiO2 beads had no effect on the fitness responses. This study provides further evidence of the indirect effect of MPs on planktonic rotifers and underlines the importance of testing different environmental conditions that could influence the effect of MPs.
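Population growth rate in such rotifer experiments is commonly computed as the per-capita intrinsic rate of increase from start and end counts. A minimal sketch of that standard formula (the counts and duration below are invented for illustration, and the study's exact procedure may differ):

```python
import math

def population_growth_rate(n_start, n_end, days):
    """Per-capita intrinsic growth rate r (per day) from population counts:
    r = ln(N_t / N_0) / t."""
    return math.log(n_end / n_start) / days

# e.g. a hypothetical culture growing from 10 to 80 rotifers in 6 days:
r = population_growth_rate(10, 80, 6)  # ln(8)/6, roughly 0.35 per day
```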
The sharing economy is gaining momentum and developing a major economic impact on traditional markets and firms. However, only rudimentary theoretical and empirical insights exist on how sharing networks, i.e., focal firms, shared-goods providers, and customers, create and capture value in their sharing-based business models. We conduct a qualitative study to identify key differences in sharing-based business models that are decisive for their value configurations. Our results show that (1) customization versus standardization of shared goods and (2) centralization versus particularization of property rights over the shared goods are two important dimensions for distinguishing value configurations. A second, quantitative study confirms the visibility and relevance of these dimensions to customers. We discuss strategic options for focal firms to design value configurations along the two dimensions to optimize value creation and value capture in sharing networks. Firms can use this two-dimensional search grid to explore untapped opportunities in the sharing economy.
Background
Depression is one of the key factors contributing to a reduced ability to work, and it is one of the major reasons why employees apply for psychotherapy and receive insurance subsidization of treatments. Hence, a growing number of studies rely on workability assessment scales as their primary outcome measure. The Work and Social Adjustment Scale (WSAS) has been documented as one of the most psychometrically reliable and valid tools specifically developed to assess workability and social functioning in patients with mental health problems. Yet, the application of the WSAS in Germany has been limited by the lack of a validated questionnaire in the German language. The objective of the present study was to translate the WSAS, a brief and easily administrable tool, into German and to test its psychometric properties in a sample of adults with depression.
Methods
Two hundred seventy-seven patients (M = 48.3 years, SD = 11.1) with mild to moderately severe depression were recruited. A multistep translation from English into the German language was performed and the factorial validity, criterion validity, convergent validity, discriminant validity, internal consistency, and floor and ceiling effects were examined.
Results
Confirmatory factor analysis confirmed the one-factor structure of the WSAS. Significant correlations with the WHODAS 2.0 questionnaire, a measure of functionality, demonstrated good convergent validity. Significant correlations with depression and quality of life demonstrated good criterion validity. The WSAS also demonstrated strong internal consistency (α = .89), and the absence of floor and ceiling effects indicated good sensitivity of the instrument.
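The reported internal consistency (α = .89) is Cronbach's alpha, which can be computed directly from an item-score matrix. A self-contained sketch of the textbook formula (the item columns below are invented for illustration and are not WSAS data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns of equal length:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of the total score)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        # Sample variance (denominator n - 1).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

Perfectly covarying items yield alpha = 1; items that cancel each other can push alpha below zero, which is why scales are checked for reverse-keyed items before scoring.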
Conclusions
The results of the present study demonstrate that the German version of the WSAS has good psychometric properties, comparable to other international versions of this scale. The findings support using the WSAS sum score for a global assessment of psychosocial functioning.
MOOCs have been produced using a variety of instructional design approaches and frameworks. This paper presents experiences with an instructional approach based on the ADDIE model, applied to designing and producing MOOCs in the Erasmus+ strategic partnership on Open Badge Ecosystem for Research Data Management (OBERRED). Specifically, this paper describes the case study of the production of the MOOC “Open Badges for Open Science”, delivered on the European MOOC platform EMMA. The key goal of this MOOC is to help learners develop a capacity to use Open Badges in the field of Research Data Management (RDM). To produce the MOOC, the ADDIE model was applied as a generic instructional design model and a systematic approach to design and development, following the five design phases: Analysis, Design, Development, Implementation, Evaluation. This paper outlines the MOOC production, including the methods, templates, and tools used in the process, such as the interactive micro-content created with H5P in the form of Open Educational Resources and the digital credentials created with Open Badges and issued to MOOC participants upon successful completion of MOOC levels. The paper also outlines the results of a qualitative evaluation, which applied the cognitive walkthrough methodology to elicit user requirements. The paper ends with conclusions about the pros and cons of using the ADDIE model in MOOC production and formulates recommendations for further work in this area.
Background
Artificial intelligence (AI) is one of the most promising areas in medicine with many possibilities for improving health and wellness. Already today, diagnostic decision support systems may help patients to estimate the severity of their complaints. This fictional case study aimed to test the diagnostic potential of an AI algorithm for common sports injuries and pathologies.
Methods
Based on a literature review and clinical expert experience, five fictional “common” cases of acute, and subacute injuries or chronic sport-related pathologies were created: Concussion, ankle sprain, muscle pain, chronic knee instability (after ACL rupture) and tennis elbow. The symptoms of these cases were entered into a freely available chatbot-guided AI app and its diagnoses were compared to the pre-defined injuries and pathologies.
Results
The app asked between 25 and 36 questions per patient, with optional explanations of certain questions or illustrative photos available on demand. The app stressed that its symptom analysis would not replace a doctor's consultation. A 23-yr-old male patient case with a mild concussion was correctly diagnosed. An ankle sprain in a 27-yr-old female without ligament or bony lesions was also detected, and an ER visit was suggested. Muscle pain in the thigh of a 19-yr-old male was correctly diagnosed. In the case of a 26-yr-old male with chronic ACL instability, the algorithm did not sufficiently cover the chronic aspect of the pathology, but the recommendation it gave to see a doctor would have helped the patient. Finally, the chronic epicondylitis of a 41-yr-old male was correctly detected.
Conclusions
All chosen injuries and pathologies were either correctly diagnosed or at least accompanied by appropriate advice on when an urgent visit to a medical specialist is warranted. However, the quality of AI-based results presumably depends on the data-driven experience of these programs as well as on the understanding of their users. Further studies should compare existing AI programs and their diagnostic accuracy for medical injuries and pathologies.
This dissertation covers three thematic focal points. The results section centers on the chemical synthesis of so-called (1,7)-naphthalenophanes, which belong to the cyclophane class of compounds. While numerous synthetic methods pursue strategies for constructing ring systems (such as naphthalenophanes) from an already existing aromatic structure in the starting compound, only a few approaches use reactions that establish the ring closure to the desired product in the course of the synthesis itself. One benzannulation that has received particular attention in our group is the dehydro-Diels-Alder reaction (DDA reaction). In this work, it was shown that twelve selected (1,7)-naphthalenophanes, some of them ring-strained and macrocyclic, can be accessed via a photochemical variant of the DDA reaction (PDDA reaction). Attempts to prepare (1,7)-naphthalenophanes thermally (TDDA reaction) failed. The exceptional reactivity of the photoreactants could be explained by quantum-chemical calculations, which revealed a folded ground-state geometry. Furthermore, ring strains and structural strain indicators of the relevant photoproducts were determined, and trends as a function of linker length were identified and discussed in the NMR spectra of the target compounds. In addition, varying the chromophore of the photoreactants (acyl, carboxylic acid, and carboxylic ester) showed comparable photokinetics and photoreactivity upon irradiation in dichloromethane. The second part of this dissertation is devoted to the design and development of two photoreactors for UV applications in continuous flow, since photochemical transformations are known to be limited in their scalability.
With the first prototype, product quantities of up to n = 188 mmol were achieved for a selected test case by efficiently operating up to three UV lamps (λ = 254, 310, and 355 nm) in parallel. In the constructionally much simplified second photoreactor, all quartz-containing elements were replaced with cheaper PLEXIGLAS®. The result was identical space-time yields for the previously chosen synthetic example. Continuous-flow UV photochemistry thus offers advantages over traditional irradiation in an immersion-well reactor: with respect to reaction time, product yields, and solvent consumption, it is synthetically far superior. In the final part of the work, these insights were used to prepare biomedically and pharmacologically promising 1-arylnaphthalene lignans via an intramolecular PDDA reaction (IMPDDA reaction) as the key step. For this purpose, three concepts were developed and realized in the total synthesis of three selected target structures based on the 1-arylnaphthalene scaffold.
A rise in mean temperatures due to climate change and the accompanying increase in heat waves prompted the State Agency for Nature, Environment and Consumer Protection of North Rhine-Westphalia (LANUV) to publish a guideline for protecting the positive climate function of urban soils. Building on this, the cooling capacity of urban soils was quantified at the regional level for the city of Düsseldorf in order to identify areas particularly worthy of protection. Within the ExTrass project, the cooling capacity of urban soils within Remscheid was now to be quantified, but on the basis of freely available data. Such a data basis rules out modeling the soil water balance, which was the foundation of the quantification in Düsseldorf. However, the approach presented here makes it possible to carry out such a study in other German municipalities with relatively little effort.
The cooling capacity of the soils was estimated via the plant-available water capacity (nutzbare Feldkapazität, nFK), which indicates the water storage volume of the uppermost rooted soil zone. This is the soil water reservoir that supplies water for evapotranspiration and thus largely defines a soil's cooling capacity, i.e., through direct evaporation of soil water and through transpiration of water by plants. The map was compiled from: (a) the soil map of North Rhine-Westphalia (BK50), to determine the nFK per parcel; (b) the land-use dataset UrbanAtlas 2012, combined with a literature review, to derive the influence of land use on nFK values, particularly with regard to sealing and compaction; and (c) OpenStreetMap (OSM), to determine the share of sealed surfaces more precisely than would have been possible on the basis of the UrbanAtlas alone.
This approach proved suitable for investigating the spatial distribution of the potential soil cooling function within a city. Note that the influence of groundwater in Remscheid could not be taken into account: due to the geological and topographical situation, groundwater conditions in Remscheid are expected to vary on a small scale, so that there is no continuous, mapped aquifer.
Allotment gardens, parks, and cemeteries in the inner-city area, and the land-use classes forest and grassland in general, were identified as areas with particularly high potential soil cooling capacity. Such areas are especially worthy of protection. An analysis of the storage levels of the upper soil zone, based on the compiled map of the potential soil cooling function and the climatic water balance, showed that inner-city areas with a small soil water reservoir in particular lose their cooling function early in the summer of a dry year and thus have a reduced positive climate function during heat waves. This finding is supported by an evaluation of the normalized difference vegetation index (NDVI), which was used to examine changes in plant vitality before and after a heat period in June/July 2018.
Measurements with meteobikes, a setup suited to continuously measuring temperature during a bicycle ride, support the finding that inner-city green spaces such as parks have a positive effect on the urban microclimate. These measurements further show that the topography within the study area presumably co-determines the heating of individual areas and the temperature distribution. The map of the potential cooling function for Remscheid presented here should be incorporated as a supplement into the climate function map for Remscheid, replacing the existing layer "flächenhafte Klimafunktion", which considers land use only.
The Big Five personality traits play a major role in student achievement. There is consistent evidence that more conscientious students receive better teacher-assigned grades in secondary school. However, research often does not support the claim that more conscientious students similarly achieve higher scores on domain-specific standardized achievement tests. Based on the Invest-and-Accrue Model, we argue that conscientiousness explains to some extent why certain students receive better grades despite similar academic accomplishments (i.e., similar scores on domain-specific standardized achievement tests). The present study therefore examines to what extent the relationship between student personality and teacher-assigned grades consists of direct as opposed to indirect associations (via subject-specific standardized test scores). We used a representative sample of 14,710 ninth-grade students to estimate these direct and indirect pathways in mathematics and German. Structural equation models showed that test scores explained between 8 and 11% of the variance in teacher-assigned grades in mathematics and German. The Big Five personality traits additionally explained between 8 and 10% of the variance in grades. Finally, the personality-grade relationship consisted of direct (0.02 ≤ |β| ≤ 0.27) and indirect associations via test scores (0.01 ≤ |β| ≤ 0.07). Conscientiousness explained discrepancies between teacher-assigned grades and students' scores on domain-specific standardized tests to a greater extent than any of the other Big Five personality traits. Our findings suggest that more conscientious students may invest more effort to accomplish classroom goals, but fall short of mastery.
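The direct/indirect decomposition estimated above can be illustrated with a minimal product-of-coefficients mediation on observed, standardized variables: the indirect effect is the path personality → test score (a) times test score → grade (b), and in ordinary least squares the total effect equals direct + indirect exactly. This numpy sketch is a simplification of the latent-variable structural equation models used in the study, and the simulated data are invented:

```python
import numpy as np

def mediation_effects(x, m, y):
    """Product-of-coefficients mediation on standardized variables:
    a: x -> m; b and c' from regressing y on [m, x]; indirect = a*b."""
    z = lambda v: (v - v.mean()) / v.std()
    x, m, y = map(z, (np.asarray(x, float), np.asarray(m, float), np.asarray(y, float)))
    a = np.polyfit(x, m, 1)[0]                       # slope of mediator on predictor
    coeffs = np.linalg.lstsq(np.column_stack([m, x]), y, rcond=None)[0]
    b, c_direct = coeffs                              # mediator slope, direct effect
    return {"direct": c_direct, "indirect": a * b, "total": c_direct + a * b}
```

With `x` as conscientiousness, `m` as the standardized test score, and `y` as the teacher-assigned grade, a large `direct` relative to `indirect` mirrors the paper's finding that grades reward conscientiousness beyond measured achievement.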
Universitat Politècnica de València’s Experience with EDX MOOC Initiatives During the Covid Lockdown
(2021)
In March 2020, when massive lockdowns started to be enforced around the world to contain the spread of the COVID-19 pandemic, edX launched two initiatives to help students around the world by providing free certificates for its courses: RAP, for member institutions, and OCE, for any accredited academic institution. In this paper we analyze how Universitat Politècnica de València contributed its courses to both initiatives, providing almost 14,000 free certificate codes in total, and how UPV used the RAP initiative as a customer, describing the mechanism used to distribute more than 22,000 codes for free certificates to more than 7,000 UPV community members, which led to the achievement of more than 5,000 free certificates. We also report the results of a post-initiative survey answered by 1,612 UPV members about 3,241 edX courses, in which they reported a satisfaction of 4.69 out of 5 with the initiative.
The Earth's electron radiation belts exhibit a two-zone structure, with the outer belt being highly dynamic due to the constant competition between a number of physical processes, including acceleration, loss, and transport. The flux of electrons in the outer belt can vary over several orders of magnitude, reaching levels that may disrupt satellite operations. Therefore, understanding the mechanisms that drive these variations is of high interest to the scientific community.
In particular, the important role played by loss mechanisms in controlling relativistic electron dynamics has become increasingly clear in recent years. It is now widely accepted that radiation belt electrons can be lost either by precipitation into the atmosphere or by transport across the magnetopause, called magnetopause shadowing. Precipitation of electrons occurs due to pitch-angle scattering by resonant interaction with various types of waves, including whistler mode chorus, plasmaspheric hiss, and electromagnetic ion cyclotron waves. In addition, the compression of the magnetopause due to increases in solar wind dynamic pressure can substantially deplete electrons at high L shells where they find themselves in open drift paths, whereas electrons at low L shells can be lost through outward radial diffusion. Nevertheless, the role played by each physical process during electron flux dropouts still remains a fundamental puzzle.
Differentiation between these processes and quantification of their relative contributions to the evolution of radiation belt electrons requires high-resolution profiles of phase space density (PSD). However, such profiles of PSD are difficult to obtain due to restrictions of spacecraft observations to a single measurement in space and time, which is also compounded by the inaccuracy of instruments. Data assimilation techniques aim to blend incomplete and inaccurate spaceborne data with physics-based models in an optimal way. In the Earth's radiation belts, it is used to reconstruct the entire radial profile of electron PSD, and it has become an increasingly important tool in validating our current understanding of radiation belt dynamics, identifying new physical processes, and predicting the near-Earth hazardous radiation environment.
In this study, sparse measurements from Van Allen Probes A and B and Geostationary Operational Environmental Satellites (GOES) 13 and 15 are assimilated into the three-dimensional Versatile Electron Radiation Belt (VERB-3D) diffusion model, by means of a split-operator Kalman filter over a four-year period from 01 October 2012 to 01 October 2016. In comparison to previous works, the 3D model accounts for more physical processes, namely mixed pitch angle-energy diffusion, scattering by EMIC waves, and magnetopause shadowing. It is shown how data assimilation, by means of the innovation vector (the residual between observations and model forecast), can be used to account for missing physics in the model. This method is used to identify the radial distances from the Earth and the geomagnetic conditions where the model is inconsistent with the measured PSD for different values of the adiabatic invariants mu and K. As a result, the Kalman filter adjusts the predictions in order to match the observations, and this is interpreted as evidence of where and when additional source or loss processes are active.
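The analysis step behind the innovation-vector diagnostic can be sketched in a few lines. This is the generic textbook Kalman update on small dense matrices; the study itself uses a split-operator Kalman filter coupled to the VERB-3D diffusion model, so the state, operators, and dimensions below are illustrative only:

```python
import numpy as np

def kalman_update(x_forecast, P_forecast, y_obs, H, R):
    """One Kalman analysis step blending a model forecast with observations.
    The innovation d = y - H x measures model-observation disagreement;
    persistent innovations hint at physics missing from the model."""
    d = y_obs - H @ x_forecast                  # innovation vector
    S = H @ P_forecast @ H.T + R                # innovation covariance
    K = P_forecast @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_analysis = x_forecast + K @ d
    P_analysis = (np.eye(len(x_forecast)) - K @ H) @ P_forecast
    return x_analysis, P_analysis, d
```

When forecast and observation are equally uncertain, the analysis lands halfway between them, and the returned innovation `d` is exactly the residual that the statistical analysis in this thesis accumulates to locate missing source and loss processes.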
Furthermore, two distinct loss mechanisms responsible for the rapid dropouts of radiation belt electrons are investigated: EMIC wave-induced scattering and magnetopause shadowing. The innovation vector is inspected for values of the invariant mu ranging from 300 to 3000 MeV/G, and a statistical analysis is performed to quantitatively assess the effect of both processes as a function of various geomagnetic indices, solar wind parameters, and radial distance from the Earth. The results of this work are in agreement with previous studies that demonstrated the energy dependence of these two mechanisms. EMIC wave scattering dominates loss at lower L shells, and it may amount to between 10%/hr and 30%/hr of the maximum value of PSD over all L shells for fixed first and second adiabatic invariants. On the other hand, magnetopause shadowing is found to deplete electrons across all energies, mostly at higher L shells, resulting in losses from 50%/hr to 70%/hr of the maximum PSD. Nevertheless, during times of enhanced geomagnetic activity, both processes can operate beyond these locations and encompass the entire outer radiation belt.
The results of this study are two-fold. Firstly, it demonstrates that the 3D data assimilative code provides a comprehensive picture of the radiation belts and is an important step toward performing reanalysis using observations from current and future missions. Secondly, it achieves a better understanding and provides critical clues of the dominant loss mechanisms responsible for the rapid dropouts of electrons at different locations over the outer radiation belt.
Digital research data are becoming increasingly important and pose new challenges for academic institutions and their researchers. The term research data management covers all activities associated with the preparation, storage, archiving, and publication of research data. Since the handling of research data involves generic, disciplinary, legal, and technical aspects, researchers need to be supported by a broad spectrum of services, from information and advice to discipline-specific standards and IT infrastructures.
This report first clarifies the starting situation and the terminology around research data management and then presents the most important national and international strategies and developments. Guidelines and recommendations for research data (management) provide the frame of action for all parties on the way toward sustainable research data management. Federal-state initiatives lay the groundwork and support the cultural shift toward open data.
A research data strategy for Brandenburg must foreground the importance of digital research data as a scholarly asset, both by raising awareness and by agreeing on concrete requirements and guidelines at the state and institutional levels. Good scientific practice is supported by a suitable infrastructure that takes into account the heterogeneous needs and preconditions of all parties. The goals should be the institutionalization of research data management at the universities and cooperation among Brandenburg's institutions.
In the frame of a world fighting a dramatic global warming caused by human-related activities, research towards the development of renewable energies plays a crucial role. Solar energy is one of the most important clean energy sources and its role in the satisfaction of the global energy demand is set to increase. In this context, a particular class of materials captured the attention of the scientific community for its attractive properties: halide perovskites. Devices with perovskite as light-absorber saw an impressive development within the last decade, reaching nowadays efficiencies comparable to mature photovoltaic technologies like silicon solar cells. Yet, there are still several roadblocks to overcome before a wide-spread commercialization of this kind of devices is enabled. One of the critical points lies at the interfaces: perovskite solar cells (PSCs) are made of several layers with different chemical and physical features. In order for the device to function properly, these properties have to be well-matched.
This dissertation deals with some of the challenges related to interfaces in PSCs, with a focus on the interface between the perovskite material itself and the subsequent charge transport layer. In particular, molecular assemblies with specific properties are deposited on the perovskite surface to functionalize it. The functionalization results in adjusted energy level alignment, reduced interfacial losses, and improved stability.
First, a strategy to tune the perovskite's energy levels is introduced: self-assembled monolayers of dipolar molecules are used to functionalize the surface, simultaneously obtaining a shift in the vacuum level position and a saturation of the dangling bonds at the surface. A shift in the vacuum level corresponds to an equal change in work function, ionization energy, and electron affinity. The direction of the shift depends on the direction of the collective interfacial dipole. The magnitude of the shift can be tailored by controlling the deposition parameters, such as the concentration of the solution used for the deposition. The shift for different molecules is characterized by several non-invasive techniques, in particular Kelvin probe. Overall, it is shown that the perovskite energy levels can be shifted in both directions by several hundred meV. Moreover, interesting insights into the deposition dynamics of the molecules are revealed.
Secondly, the application of this strategy in perovskite solar cells is explored. Devices with different perovskite compositions (“triple cation perovskite” and MAPbBr3) are prepared. The two resulting model systems present different energetic offsets at the perovskite/hole-transport layer interface. Upon tailored perovskite surface functionalization, the devices show a stabilized open-circuit voltage (Voc) enhancement of approximately 60 mV on average for devices with MAPbBr3, while the impact on triple-cation solar cells is limited. This suggests that the proposed energy level tuning method is valid, but that its effectiveness depends on factors such as the significance of the energetic offset compared to the other losses in the devices.
Finally, the method presented above is further developed by incorporating the ability to interact with the perovskite surface directly into a novel hole-transport material (HTM), named PFI. The HTM can anchor to the perovskite halide ions via halogen bonding (XB). Its behaviour is compared to that of another HTM (PF) with the same chemical structure and properties, except for the ability to form XB. The interaction of the perovskite with PFI and PF is characterized through UV-Vis spectroscopy, atomic force microscopy, and Kelvin probe measurements combined with simulations. Compared to PF, PFI exhibits enhanced resilience against solvent exposure and improved energy level alignment with the perovskite layer. As a consequence, devices comprising PFI show enhanced Voc and operational stability during maximum-power-point tracking, in addition to reduced hysteresis. By anchoring to the halide ions and forming a stable and ordered interfacial layer, XB promotes the formation of a high-quality interface, making it a particularly interesting candidate for the development of tailored charge transport materials in PSCs.
Overall, the results presented in this dissertation introduce and discuss a versatile tool to functionalize the perovskite surface and tune its energy levels. The application of this method in devices is explored, and insights into its challenges and advantages are given. Within this frame, the results shed light on XB as an ideal interaction for enhancing stability and efficiency in perovskite-based devices.
Inertial measurement units (IMUs) enable easy-to-operate and low-cost data recording for gait analysis. When combined with treadmill walking, a large number of steps can be collected in a controlled environment without the need for a dedicated gait analysis laboratory. In order to evaluate existing and novel IMU-based gait analysis algorithms for treadmill walking, a reference dataset that includes IMU data as well as reliable ground truth measurements for multiple participants and walking speeds is needed. This article provides a reference dataset consisting of 15 healthy young adults who walked on a treadmill at three different speeds. Data were acquired using seven IMUs placed on the lower body, two different reference systems (Zebris FDMT-HQ and OptoGait), and two RGB cameras. Additionally, in order to validate an existing IMU-based gait analysis algorithm using the dataset, an adaptable modular data analysis pipeline was built. Our results show agreement between the pressure-sensitive Zebris and the photoelectric OptoGait system (r = 0.99), demonstrating the quality of our reference data. As a use case, the performance of an algorithm originally designed for overground walking was tested on treadmill data using the data pipeline. The accuracy of stride length and stride time estimations was comparable to that reported in other studies with overground data, indicating that the algorithm is equally applicable to treadmill data. The Python source code of the data pipeline is publicly available, and the dataset will be provided by the authors upon request, enabling future evaluations of IMU gait analysis algorithms without the need to record new data.
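Once gait events have been detected from the IMU signals (the hard part, and the job of the algorithm validated here), temporal parameters such as stride time reduce to simple interval arithmetic between consecutive events of the same foot. A minimal sketch with invented event timestamps, not code from the published pipeline:

```python
import numpy as np

def stride_times(heel_strike_times):
    """Stride time = interval between consecutive heel strikes of the same foot."""
    return np.diff(np.asarray(heel_strike_times, float))

def stride_stats(heel_strike_times):
    """Mean stride time and coefficient of variation (a common variability metric)."""
    st = stride_times(heel_strike_times)
    return {"mean_s": float(st.mean()), "cv_percent": float(100 * st.std() / st.mean())}
```

Treadmill walking at fixed speed should yield a low coefficient of variation, which is one reason treadmill datasets are attractive for validating such estimators.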
With reference to Rainer E. Zimmermann's book "Metaphysik als Grundlegung von Naturdialektik. Zum Sagbaren und Unsagbaren im spekulativen Denken", the approach of a transcendental materialism developed by Zimmermann is discussed, standing in the tradition of Schelling's dialectics on the one hand and drawing on the spin-foam approach of quantum gravity theory on the other. The reduction of structures of reality to mathematical structures, that is, to the processing of numbers, is problematized.
TransPipe
(2021)
Online learning environments, such as Massive Open Online Courses (MOOCs), often rely on videos as a major component to convey knowledge. However, these videos exclude potential participants who do not understand the lecturer’s language, whether due to language unfamiliarity or hearing impairments. Subtitles and/or interactive transcripts solve this issue, ease navigation based on the content, and enable indexing and retrieval by search engines. Although there are several automated speech-to-text converters and translation tools, their quality varies and the process of integrating them can be quite tedious. Thus, in practice, many videos on MOOC platforms only receive subtitles after the course is already finished (if at all) due to a lack of resources. This work describes an approach to tackle this issue by providing a dedicated tool that closes this gap between MOOC platforms and transcription and translation tools and offers a simple workflow that can easily be handled by users with less technical backgrounds. The proposed method is designed and evaluated through qualitative interviews with three major MOOC providers.
Transient permeability in porous and fractured sandstones mediated by fluid-rock interactions
(2021)
Understanding the fluid transport properties of subsurface rocks is essential for a large number of geotechnical applications, such as hydrocarbon (oil/gas) exploitation, geological storage (CO2/fluids), and geothermal reservoir utilization. To date, the hydromechanically dependent fluid flow patterns in porous media and single macroscopic rock fractures have been investigated extensively and are relatively well understood. In contrast, fluid-rock interactions, which may permanently affect rock permeability by reshaping the structure and changing the connectivity of pore throats or fracture apertures, need to be further elaborated. This is of significant importance for improving the knowledge of the long-term evolution of rock transport properties and evaluating a reservoir's sustainability. The thesis focuses on geothermal energy utilization, e.g., seasonal heat storage in aquifers and enhanced geothermal systems, where single-phase fluid flow in porous rocks and rock fracture networks under various pressure and temperature conditions dominates.
In this experimental study, outcrop samples (i.e., Flechtinger sandstone, an illite-bearing Lower Permian rock, and Fontainebleau sandstone, consisting of pure quartz) were used for flow-through experiments under simulated hydrothermal conditions. The themes of the thesis are (1) the investigation of clay particle migration in intact Flechtinger sandstone and the coincident permeability damage upon cyclic temperature and fluid salinity variations; (2) the determination of hydro-mechanical properties of self-propping fractures in Flechtinger and Fontainebleau sandstones with different fracture features and contrasting mechanical properties; and (3) the investigation of the time-dependent fracture aperture evolution of Fontainebleau sandstone induced by fluid-rock interactions (i.e., predominantly pressure solution). Overall, the thesis aims to unravel the mechanisms of the instantaneous reduction (i.e., direct responses to thermo-hydro-mechanical-chemical (THMC) conditions) and progressively-cumulative changes (i.e., time-dependence) of rock transport properties.
Permeability of intact Flechtinger sandstone samples was measured under each constant condition, with temperature (room temperature up to 145 °C) and fluid salinity (NaCl, 0–2 mol/l) changed stepwise. Mercury intrusion porosimetry (MIP), electron microprobe analysis (EMPA), and scanning electron microscopy (SEM) were performed to investigate changes in local porosity, microstructure, and clay element contents before and after the experiments. The results indicate that the permeability of illite-bearing Flechtinger sandstone is impaired by heating and by exposure to low-salinity pore fluids. The chemically induced permeability variations prove to be path-dependent with respect to the applied succession of fluid salinity changes. The permeability decay induced by a temperature increase and that induced by a fluid salinity reduction operate by relatively independent mechanisms, i.e., thermo-mechanical and thermo-chemical effects, respectively.
Further, the hydro-mechanical investigations of single macroscopic fractures (aligned and mismatched tensile fractures, and smooth saw-cut fractures) illustrate that a relative fracture wall offset can significantly increase fracture aperture and permeability, although the degree of increase depends on fracture surface roughness. X-ray computed tomography (CT) demonstrates that the contact area ratio after the pressure cycles is inversely correlated with the fracture offset. Moreover, the rock mechanical properties that determine the strength of contact asperities are crucial: the relatively harder rock (i.e., Fontainebleau sandstone) has a higher self-propping potential for sustaining permeability during pressurization. This implies that self-propping rough fractures with a sufficient displacement are efficient pathways for fluid flow if the rock matrix is mechanically strong.
Finally, two long-term flow-through experiments with Fontainebleau sandstone samples containing single fractures were conducted with an intermittent flow (~140 days) and continuous flow (~120 days), respectively. Permeability and fluid element concentrations were measured throughout the experiments. Permeability reduction occurred at the beginning stage when the stress was applied, while it converged at later stages, even under stressed conditions. Fluid chemistry and microstructure observations demonstrate that pressure solution governs the long-term fracture aperture deformation, with remarkable effects of the pore fluid (Si) concentration and the structure of contact grain boundaries. The retardation and the cessation of rock fracture deformation are mainly induced by the contact stress decrease due to contact area enlargement and a dissolved mass accumulation within the contact boundaries. This work implies that fracture closure under constant (pressure/stress and temperature) conditions is likely a spontaneous process, especially at the beginning stage after pressurization when the contact area is relatively small. In contrast, a contact area growth yields changes of fracture closure behavior due to the evolution of contact boundaries and concurrent changes in their diffusive properties. Fracture aperture and thus permeability will likely be sustainable in the long term if no other processes (e.g., mineral precipitations in the open void space) occur.
River flooding poses a threat to numerous cities and communities all over the world. The detection, quantification and attribution of changes in flood characteristics is key to assessing changes in flood hazard and helps affected societies to mitigate and adapt to emerging risks in time. The Rhine River is one of the major European rivers, and numerous large cities lie on its banks. Runoff from several large tributaries superimposes in the main channel, shaping the complex flow regime. Rainfall, snowmelt as well as ice melt are important runoff components. The main objective of this thesis is the investigation of a possible transient merging of nival and pluvial Rhine flood regimes under global warming. Rising temperatures cause snowmelt to occur earlier in the year and rainfall to be more intense. The superposition of snowmelt-induced floods originating from the Alps with more intense rainfall-induced runoff from pluvial-type tributaries might create a new flood type with potentially disastrous consequences.
To introduce the topic of changing hydrological flow regimes, an interactive web application is presented that enables the investigation of runoff timing and runoff seasonality observed at river gauges all over the world. The exploration and comparison of a great diversity of river gauges in the Rhine River Basin and beyond indicate that river systems around the world are undergoing fundamental changes. In hazard and risk research, providing background as well as real-time information to residents and decision-makers in an easily accessible way is of great importance. Future studies need to further harness the potential of scientifically engineered online tools to improve the communication of hazard- and risk-related information.
A next step is the development of a cascading sequence of analytical tools to investigate long-term changes in hydro-climatic time series. The combination of quantile sampling with moving-average trend statistics and empirical mode decomposition allows for the extraction of high-resolution signals and the identification of mechanisms driving changes in river runoff. Results point out that the construction and operation of large reservoirs in the Alps is an important factor redistributing runoff from summer to winter, and hint at more (intense) rainfall in recent decades, particularly during winter, in turn increasing high runoff quantiles. The development and application of the analytical sequence represents a further step in the scientific quest to disentangle natural variability, climate change signals and direct human impacts.
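As a rough illustration of the quantile-sampling and moving-average steps of such a sequence, the following sketch extracts an annual high quantile from synthetic daily runoff and smooths it with a centred moving average. The data, function names and window length are hypothetical, and the empirical mode decomposition step of the actual sequence is omitted:

```python
import numpy as np

def annual_quantile_series(daily_runoff, q=0.99, days_per_year=365):
    """Sample one high quantile (e.g. the 99th percentile) of runoff
    per year -- a simplified stand-in for the quantile-sampling step."""
    n_years = len(daily_runoff) // days_per_year
    years = np.reshape(daily_runoff[:n_years * days_per_year],
                       (n_years, days_per_year))
    return np.quantile(years, q, axis=1)

def moving_average(series, window=11):
    """Centred moving-average trend of the sampled quantile series."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

# Synthetic runoff with a weak upward trend in the extremes
rng = np.random.default_rng(0)
daily = rng.gamma(2.0, 50.0, size=60 * 365) * np.linspace(1.0, 1.3, 60 * 365)
trend = moving_average(annual_quantile_series(daily), window=11)
```

Smoothing the annual quantile series rather than the raw runoff suppresses year-to-year sampling noise, so that slow changes in the high quantiles become visible.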
The in-depth analysis of in situ snow measurements and simulations of the Alpine snow cover using a physically based snow model enable the quantification of changes in snowmelt in the sub-basin upstream of gauge Basel. Results confirm previous investigations indicating that rising temperatures result in a decrease in maximum melt rates. Extending these findings to a catchment perspective, a threefold effect of rising temperatures can be identified: snowmelt becomes weaker, occurs earlier and forms at higher elevations. Furthermore, results indicate that, due to the wide range of elevations in the basin, snowmelt does not occur simultaneously at all elevations; instead, elevation bands melt together in blocks. The beginning and end of the release of meltwater seem to be determined by the passage of warm air masses, and the respective elevation range affected by the accompanying temperatures and snow availability. Following those findings, a hypothesis describing elevation-dependent compensation effects in snowmelt is introduced: In a warmer world with similar sequences of weather conditions, snowmelt is moved upward to higher elevations, i.e., the block of elevation bands providing most water to the snowmelt-induced runoff is located at higher elevations. This upward shift across the elevation range makes snowmelt in individual elevation bands occur earlier. The timing of the snowmelt-induced runoff, however, stays the same. Meltwater from higher elevations, at least partly, replaces meltwater from elevations below.
The insights on past and present changes in river runoff, snow cover and the underlying mechanisms form the basis for investigations of potential future changes in Rhine River runoff. The mesoscale Hydrological Model (mHM), forced with an ensemble of climate projection scenarios, is used to analyse future changes in streamflow, snowmelt, precipitation and evapotranspiration at 1.5, 2.0 and 3.0 °C of global warming. Simulation results suggest that future changes in flood characteristics in the Rhine River Basin are controlled by increased precipitation amounts on the one hand, and reduced snowmelt on the other. Rising temperatures deplete seasonal snowpacks. At no time during the year does a warming climate result in an increased risk of snowmelt-driven flooding. Counterbalancing effects between snowmelt and precipitation often result in only small and transient changes in streamflow peaks. Although investigations point to changes in both rainfall- and snowmelt-driven runoff, there are no indications of a transient merging of nival and pluvial Rhine flood regimes due to climate warming. Flooding in the main tributaries of the Rhine, such as the Moselle River, as well as in the High Rhine, is controlled by both precipitation and snowmelt. Caution has to be exercised in labelling sub-basins such as the Moselle catchment as purely pluvial-type or the Rhine River Basin at Basel as purely nival-type. Results indicate that such (over-)simplifications can entail misleading assumptions with regard to flood-generating mechanisms and changes in flood hazard. In the framework of this thesis, some progress has been made in detecting, quantifying and attributing past, present and future changes in Rhine flow and flood characteristics. However, further studies are necessary to pin down future changes in the genesis of Rhine floods, particularly of very rare events.
The optical properties of chromophores, especially organic dyes and optically active inorganic molecules, are determined by their chemical structures, surrounding media, and excited-state behavior. The classical go-to techniques for spectroscopic investigations are absorption and luminescence spectroscopy. While both are powerful and easy-to-apply methods, the limited time resolution of luminescence spectroscopy and its reliance on luminescent properties can make its application complex or even impossible in certain cases. This can be the case when the investigated molecules no longer luminesce due to quenching effects, or when they were never luminescent in the first place. In those cases, transient absorption spectroscopy is an excellent and much more sophisticated technique for investigating such systems. This pump-probe laser-spectroscopic method is well suited for mechanistic investigations of luminescence quenching phenomena and photoreactions, owing to its extremely high time resolution in the femto- and picosecond ranges, in which many intermediate or transient species of a reaction can be identified and their kinetic evolution observed. Furthermore, it does not rely on the samples being luminescent, since the sample is actively probed after excitation. In this work it is shown that transient absorption spectroscopy made it possible to identify the luminescence quenching mechanisms, and thus the luminescence quantum yield losses, of the organic dye classes O4-DBD, S4-DBD, and the pyridylanthracenes. The population of their triplet states could be identified as the mechanism competing with luminescence. While the good luminophores of the O4-DBD class showed only minor losses, the luminescence of the S4-DBD dyes was almost entirely quenched by this process. For the pyridylanthracenes, this phenomenon is present in both the protonated and unprotonated forms and moderately affects the luminescence quantum yield.
Moreover, the majority of the quenching losses in the protonated forms are caused by additional non-radiative processes introduced by the protonation of the pyridyl rings. Furthermore, transient absorption spectroscopy was applied to investigate the quenching mechanisms of uranyl(VI) luminescence by chloride and bromide. The reduction of the halides by excited uranyl(VI) leads to the formation of dihalide radical anions X₂•−. This excited-state redox process is thus identified as the quenching mechanism for both halides; being diffusion-limited, it can be suppressed by cryogenically freezing the samples or by observing these interactions in media with a lower dielectric constant, such as ACN and acetone.
Trait means or variance
(2021)
One of the few laws in ecology is that communities consist of few common and many rare taxa. Functional traits may help to identify the underlying mechanisms of this community pattern, since they correlate with different niche dimensions. However, comprehensive studies investigating the effects of species mean traits (niche position) and intraspecific trait variability (ITV, niche width) on species abundance are missing. In this study, we investigated fragmented dry grasslands to reveal trait-occurrence relationships in plants at local and regional scales. We predicted that (a) at the local scale, species occurrence is highest for species with intermediate traits, (b) at the regional scale, habitat specialists have a lower species occurrence than generalists, and thus traits associated with stress tolerance have a negative effect on species occurrence, and (c) ITV increases species occurrence irrespective of scale. We measured three plant functional traits (SLA = specific leaf area, LDMC = leaf dry matter content, and plant height) in 21 local dry grassland communities (10 m × 10 m) and analyzed the effect of these traits and their variation on species occurrence. At the local scale, mean LDMC had a positive effect on species occurrence, indicating that stress-tolerant species, rather than species with intermediate traits, are the most abundant (hypothesis 1). We found limited support for lower specialist occurrence at the regional scale (hypothesis 2). Further, ITV of LDMC and plant height had a positive effect on local occurrence, supporting hypothesis 3. In contrast, at the regional scale, plants with a higher ITV of plant height were less frequent. We found no evidence that the consideration of phylogenetic relationships in our analyses influenced our findings. In conclusion, both species mean traits (in particular LDMC) and ITV were related to species occurrence in different ways depending on spatial scale.
Therefore, our study underlines the strong scale-dependency of trait-abundance relationships.
Information technology and digital solutions as enablers in the tourism sector require continuous development of skills, as digital transformation is characterized by fast change, complexity and uncertainty. This research investigates how a cMOOC concept could support the tourism industry. A consortium of three universities, a tourism association, and a tourist attraction investigated the online learning needs and habits of tourism industry stakeholders in the field of digitalization in a cross-border study in the Baltic Sea region. The multi-national survey (n = 244) reveals a high interest in participating in an online learning community, with two-thirds of respondents seeing opportunities to contribute to such a community beyond merely consuming knowledge. The paper demonstrates preferred ways of learning, motivational and hampering aspects, as well as types of possible contributions.
In our daily life, recurrence plays an important role on many spatial and temporal scales and in different contexts. It is the foundation of learning, be it in an evolutionary or in a neural context. It therefore seems natural that recurrence is also a fundamental concept in theoretical dynamical systems science. The way in which states of a system recur, or develop in a similar way from similar initial states, makes it possible to infer information about the underlying dynamics of the system. The mathematical space in which we define the state of a system (the state space) is often high-dimensional, especially in complex systems that can also exhibit chaotic dynamics. The recurrence plot (RP) enables us to visualize the recurrences of any high-dimensional system in a two-dimensional, binary representation. Certain patterns in RPs can be related to physical properties of the underlying system, making the qualitative and quantitative analysis of RPs an integral part of nonlinear systems science. The presented work has a methodological focus and further develops recurrence analysis (RA) by addressing current research questions related to the increasing amount of available data and advances in machine learning techniques. By automating a central step in RA, namely the reconstruction of the state space from measured experimental time series, and by investigating the impact of important free parameters, this thesis aims to make RA more accessible to researchers outside of physics.
The first part of this dissertation is concerned with the reconstruction of the state space from time series. To this end, a novel idea is proposed which automates the reconstruction problem in the sense that there is no need to preprocess the data or estimate parameters a priori. The key idea is that the goodness of a reconstruction can be evaluated by a suitable objective function, which is minimized in the embedding process. In addition, the new method can process multivariate input time series. This is particularly important because multi-channel, sensor-based observations are ubiquitous in many research areas and continue to grow in volume. Building on this, the described minimization of the objective function is then carried out using a machine learning approach.
In the second part, technical and methodological aspects of RA are discussed. First, we mathematically justify the idea of setting the most influential free parameter in RA, the recurrence threshold ε, in relation to the distribution of all pairwise distances in the data. This is especially important when comparing different RPs and their quantification statistics, and is fundamental to any comparative study. Second, some aspects of recurrence quantification analysis (RQA) are examined. As correction schemes for biased RQA statistics based on diagonal lines, we propose a simple method for dealing with border effects of an RP in RQA and a skeletonization algorithm for RPs. This results in less biased (diagonal-line-based) RQA statistics for flow-like data. Third, a novel type of RQA characteristic is developed, which can be viewed as a generalized nonlinear power spectrum of high-dimensional systems. The spike power spectrum transforms a spike-train-like signal into the frequency domain. When the diagonal-line-dependent recurrence rate (τ-RR) of an RP is transformed in this way, characteristic periods visible in the state space representation of the system can be unraveled. This is not the case when τ-RR is Fourier transformed directly.
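The idea of tying ε to the distribution of all pairwise distances can be sketched as follows. This is illustrative code under our own naming, not the thesis implementation: the threshold is chosen as a quantile of the distance distribution, which fixes the recurrence rate of the resulting RP at roughly that quantile and makes RPs of different systems comparable.

```python
import numpy as np

def recurrence_plot(x, eps_quantile=0.1):
    """Binary recurrence plot: R[i, j] = 1 if states i and j are closer
    than eps. The threshold eps is set relative to the distribution of
    all pairwise distances, here as its eps_quantile-quantile."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:                      # univariate series -> column vector
        x = x[:, None]
    # Full pairwise Euclidean distance matrix (N x N)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    eps = np.quantile(d, eps_quantile)
    return (d <= eps).astype(np.uint8)

# Toy example: a trajectory on the unit circle revisits its states
t = np.linspace(0, 4 * np.pi, 200)
rp = recurrence_plot(np.column_stack([np.sin(t), np.cos(t)]), eps_quantile=0.1)
```

For this periodic toy trajectory the RP shows the characteristic diagonal line structure; the main diagonal is always filled, since every state recurs to itself.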
Finally, RA and RQA are applied to climate science in the third part and to neuroscience in the fourth part. To the best of our knowledge, this is the first time RPs and RQA have been used to analyze lake sediment data in a paleoclimate context. Therefore, we first elaborate on the basic formalism and the interpretation of visually discernible patterns in RPs in relation to the underlying proxy data. We show that these patterns can be used to classify certain types of variability and transitions in the Potassium record from six short (<17 m) sediment cores collected during the Chew Bahir Drilling Project. Building on this, the long core (∼ m composite) from the same site is analyzed, and two types of variability and transitions are identified and compared with the ODP Site wetness index from the eastern Mediterranean. The first type of variability likely reflects the influence of precessional forcing in the lower latitudes at times of maximum values of the long eccentricity cycle ( kyr) of the Earth's orbit around the Sun, with a tendency towards extreme events. The second type of variability appears to be related to the minimum values of this cycle and corresponds to fairly rapid transitions between relatively dry and relatively wet conditions.
In contrast, RQA has been applied in the neuroscientific context for almost two decades. In the final part, RQA statistics are used to quantify the complexity in a specific frequency band of multivariate EEG (electroencephalography) data. By analyzing experimental data, it can be shown that the complexity of the signal measured in this way across the sensorimotor cortex decreases as motor tasks are performed. The results are consistent with and complement the well-known concepts of motor-related brain processes. We assume that the thus discovered features of neuronal dynamics in the sensorimotor cortex, together with the robust RQA methods for identifying and classifying them, contribute to the non-invasive EEG-based development of brain-computer interfaces (BCIs) for motor control and rehabilitation.
The present work is an important step towards a robust analysis of complex systems based on recurrence.
Root water uptake is an essential process for terrestrial plants that strongly affects the spatiotemporal distribution of water in vegetated soil. Fast neutron tomography is a recently established non-invasive imaging technique capable of capturing the 3D architecture of root systems in situ; it even allows three-dimensional water flow in soil and roots to be tracked. We present an in vivo analysis of local water uptake and transport by roots of soil-grown maize plants, measured for the first time in a three-dimensional, time-resolved manner. Using deuterated water as a tracer in infiltration experiments, we visualized soil imbibition and local root uptake, and tracked the transport of deuterated water throughout the fibrous root system for day and night situations. This revealed significant differences in water transport between different root types. The primary root was the preferred water transport path in the 13-day-old plants, while seminal roots of comparable size and length contributed little to plant water supply. The results underline the unique potential of fast neutron tomography to provide time-resolved 3D in vivo information on the water uptake and transport dynamics of plant root systems, thus contributing to a better understanding of the complex interactions of plant, soil and water.
The Central Andes region in South America is characterized by a complex and heterogeneous deformation system. Recorded seismic activity and mapped neotectonic structures indicate that most of the intraplate deformation is located along the margins of the orogen, in the transitions to the foreland and the forearc. Furthermore, the actively deforming provinces of the foreland exhibit distinct deformation styles that vary along strike, as well as characteristic distributions of seismicity with depth. The style of deformation transitions from thin-skinned in the north to thick-skinned in the south, and the thickness of the seismogenic layer increases to the south. Based on geological/geophysical observations and numerical modelling, the most commonly invoked causes for the observed heterogeneity are variations in sediment thickness and composition, the presence of inherited structures, and changes in the dip of the subducting Nazca plate. However, there are still no comprehensive investigations of the relationship between the lithospheric composition of the Central Andes, its rheological state and the observed deformation processes. The central aim of this dissertation is therefore to explore the link between the nature of the lithosphere in the region and the location of active deformation. The study of the lithospheric composition by means of independent-data integration establishes a strong basis for assessing the thermal and rheological state of the Central Andes and its adjacent lowlands, which in turn provides new foundations for understanding the complex deformation of the region. Along these lines, the general workflow of the dissertation consists of the construction of a 3D data-derived and gravity-constrained density model of the Central Andean lithosphere, followed by the simulation of the steady-state conductive thermal field and the calculation of the strength distribution.
Additionally, the dynamic response of the orogen-foreland system to intraplate compression is evaluated by means of 3D geodynamic modelling.
The results of the modelling approach suggest that the inherited heterogeneous composition of the lithosphere controls the present-day thermal and rheological state of the Central Andes, which in turn influences the location and depth of active deformation processes. Most of the seismic activity and neotectonic structures are spatially correlated with regions of modelled high strength gradients, in the transition from the felsic, hot and weak orogenic lithosphere to the more mafic, cooler and stronger lithosphere beneath the forearc and the foreland. Moreover, the results of the dynamic simulation show a strong localization of the second invariant of the deviatoric strain rate in the same region, suggesting that shortening is accommodated at the transition zones between weak and strong domains. The vertical distribution of seismic activity appears to be influenced by the rheological state of the lithosphere as well. The depth at which the frequency distribution of hypocenters starts to decrease in the different morphotectonic units correlates with the position of the modelled brittle-ductile transitions; accordingly, a fraction of the seismic activity is located within the ductile part of the crust. An exhaustive analysis shows that practically all the seismicity in the region is restricted to above the 600 °C isotherm, coinciding with the upper temperature limit for brittle behavior of olivine. Therefore, the occurrence of earthquakes below the modelled brittle-ductile transition could be explained by the presence of strong residual mafic rocks from past tectonic events. Another potential cause of deep earthquakes is the existence of inherited shear zones in which brittle behavior is favored through a decrease in the friction coefficient. This hypothesis is particularly suitable for the broken foreland provinces of the Santa Barbara System and the Pampean Ranges, where geological studies indicate successive reactivation of structures through time.
Particularly in the Santa Barbara System, the results indicate that both mafic rocks and a reduction in friction are required to account for the observed deep seismic events.
The survey of the prevalence of chronic ankle instability in elite Taiwanese basketball athletes
(2021)
BACKGROUND: Ankle sprains are common in basketball. They can develop into Chronic Ankle Instability (CAI), causing decreased quality of life, impaired functional performance, early osteoarthritis, and an increased risk of other injuries. To develop a strategy for CAI prevention, localized epidemiological data and a valid, reliable tool are essential. However, the epidemiological data on CAI from previous studies are not conclusive, and the prevalence of CAI in Taiwanese basketball athletes is not clear. In addition, a valid and reliable Taiwan-Chinese instrument for evaluating ankle instability is missing.
PURPOSE: The aims were to have an overview of the prevalence of CAI in sports population using a systematic review, to develop a valid and reliable cross-cultural adapted Cumberland Ankle Instability Tool Questionnaire (CAIT) in Taiwan-Chinese (CAIT-TW), and to survey the prevalence of CAI in elite basketball athletes in Taiwan using CAIT-TW.
METHODS: First, a systematic search was conducted. Research articles applying CAI-related questionnaires to survey the prevalence of CAI were included in the review. Second, the English version of the CAIT was translated and cross-culturally adapted into the CAIT-TW. The construct validity, test-retest reliability, internal consistency, and cutoff score of the CAIT-TW were evaluated in an athletic population (N=135). Finally, cross-sectional data on CAI prevalence in 388 elite Taiwanese basketball athletes are presented. Demographics, the presence of CAI, and differences in prevalence between genders, competitive levels and playing positions were evaluated.
RESULTS: The prevalence of CAI was 25%, ranging between 7% and 53%. The prevalence of CAI among participants with a history of ankle sprains was 46%, ranging between 9% and 76%. In addition, the cross-culturally adapted CAIT-TW showed moderate to strong construct validity, excellent test-retest reliability, good internal consistency, and a cutoff score of 21.5 for the Taiwanese athletic population. Finally, 26% of Taiwanese basketball athletes had unilateral CAI, while 50% had bilateral CAI. In addition, women athletes in the investigated cohort had a higher prevalence of CAI than men. There was no difference in prevalence between competitive levels or among playing positions.
CONCLUSION: The systematic review shows that the prevalence of CAI varies widely among the included studies. This could be due to differing exclusion criteria, age ranges, sports disciplines, or other factors. Future studies should apply standardized criteria to investigate the epidemiology of CAI, use prospective designs, and investigate and describe the factors affecting the prevalence of CAI. The translated CAIT-TW is a valid and reliable tool to differentiate between stable and unstable ankles in athletes and may be further applied in research or daily practice in Taiwan. In the Taiwanese basketball population, CAI is highly prevalent; this might relate to the research method, preexisting ankle instability, and training-related issues. Women showed a higher prevalence of CAI than men, so gender should be taken into consideration when applying preventive measures.
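The reported cutoff can be used as a simple screening rule: CAIT scores range from 0 to 30, and lower scores indicate greater perceived instability, so scores at or below the cutoff flag a potentially unstable ankle. A minimal sketch (only the 21.5 cutoff comes from the abstract; the function name and example scores are purely illustrative):

```python
CUTOFF = 21.5  # CAIT-TW cutoff reported in the abstract (score range 0-30)

def classify_ankle(cait_score):
    """Classify an ankle as 'unstable' (score <= cutoff) or 'stable'.

    Lower CAIT scores indicate greater perceived instability.
    """
    return "unstable" if cait_score <= CUTOFF else "stable"

# hypothetical scores for the two ankles of one athlete
print(classify_ankle(18))  # prints "unstable"
print(classify_ankle(27))  # prints "stable"
```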
Economists worry that the lack of property rights to natural capital goods jeopardizes the sustainability of the economic growth miracle that has persisted since industrialization. This article questions their position. A vertical innovation model with a portfolio of technologies for abatement, adaptation, and general (Harrod-neutral) technology reveals that environmental damage spillovers have an effect on research profits comparable to that of technology spillovers, so that the social costs of depleting public natural capital are internalized. As long as there is free access to information and technology, growth is sustainable and the allocation of research efforts among alternative technologies is socially optimal. While there is still a need to address externalities from monopolistic research markets, no environmental policy is necessary. These results suggest that environmental externalities may originate in restricted access to information and technology, demonstrating that (i) information has an effect similar to that of an environmental tax and (ii) knowledge and technology transfers have an impact comparable to that of subsidies for research in green technology.
Different forms of methodological and ontological naturalism constitute the current near-orthodoxy in analytic philosophy. Many prominent figures have called naturalism a (scientific) image (Sellars, W. 1962. “Philosophy and the Scientific Image of Man.” In Wilfrid Sellars, Science, Perception, Reality, 1–40. Ridgeview Publishing), a Weltanschauung (Loewer, B. 2001. “From Physics to Physicalism.” In Physicalism and its Discontents, edited by C. Gillett, and B. Loewer. Cambridge: Cambridge University Press; Stoljar, D. 2010. Physicalism. Routledge), or even a “philosophical ideology” (Kim, J. 2003. “The American Origins of Philosophical Naturalism.” Journal of Philosophical Research 28: 83–98). This suggests that naturalism is indeed something over-and-above an ordinary philosophical thesis (e.g. in contrast to the justified true belief-theory of knowledge). However, these thinkers fail to tease out the host of implications this idea – naturalism being a worldview – presents. This paper draws on (somewhat underappreciated) remarks of Dilthey and Jaspers on the concept of worldviews (Weltanschauung, Weltbild) in order to demonstrate that naturalism as a worldview is a presuppositional background assumption which is left untouched by arguments against naturalism as a thesis. The concluding plea is (in order to make dialectical progress) to re-organize the existing debate on naturalism in a way that treats naturalism not as a first-order philosophical claim, but rather shifts its focus on naturalism’s status as a worldview.
Flooding is a vast problem in many parts of the world, including Europe. It occurs mainly due to extreme weather conditions (e.g. heavy rainfall and snowmelt), and the consequences of flood events can be devastating. Flood risk is generally defined as a combination of the probability of an event and its potential adverse impacts. It therefore covers three major dynamic components: hazard (the physical characteristics of a flood event), exposure (the people and physical environment exposed to flooding), and vulnerability (the susceptibility of the elements at risk). Floods are natural phenomena and cannot be fully prevented, but their risk can be managed and mitigated. Sound flood risk management and mitigation require a proper risk assessment, which in turn requires a clear understanding of flood risk dynamics. For instance, human activity may contribute to an increase in flood risk: anthropogenic climate change causes more intense rainfall and sea level rise, and therefore an increase in the scale and frequency of flood events. Inappropriate risk management and structural protection measures, in turn, may not be very effective for risk reduction. Additionally, risk increases with the growing number of assets and people within flood-prone areas. To address these issues, the first objective of this thesis is to perform a sensitivity analysis to understand the impact of changes in each flood risk component on overall risk, and their mutual interactions. A multitude of changes along the risk chain are simulated with a regional flood model (RFM) in which all processes, from the atmosphere through the catchment and river system to the damage mechanisms, are taken into consideration. The impacts of changes in the risk components are explored through plausible change scenarios for the mesoscale Mulde catchment (a sub-basin of the Elbe) in Germany.
A proper risk assessment requires a reasonable representation of real-world flood events. Traditionally, flood risk is assessed by assuming homogeneous return periods of flood peaks throughout the considered catchment. In reality, however, flood events are spatially heterogeneous, so the traditional assumption misestimates flood risk, especially for large regions. In this thesis, two studies investigate the importance of spatial dependence in large-scale flood risk assessment at different spatial scales. In the first, the "real" spatial dependence of the return periods of flood damages is represented by a continuous risk modelling approach in which spatially coherent patterns of hydrological and meteorological controls (i.e. soil moisture and weather patterns) are included. The risk estimates under this modelled dependence assumption are then compared with those under two other assumptions on the spatial dependence of the return periods of flood damages: complete dependence (homogeneous return periods) and independence (randomly generated heterogeneous return periods), for the Elbe catchment in Germany. The second study represents the "real" spatial dependence with multivariate dependence models. As in the first study, the three assumptions on the spatial dependence of the return periods of flood damages are compared, but at national (United Kingdom and Germany) and continental (Europe) scales. Furthermore, the impacts of the different models, of tail dependence, and of the structural flood protection level on flood risk under the different spatial dependence assumptions are investigated.
The outcomes of the sensitivity analysis framework suggest that flood risk can vary dramatically as a result of plausible change scenarios. Risk components that have received little attention (e.g. changes in dike systems and in vulnerability) may mask the influence of climate change, the most frequently investigated component.
The results of the spatial dependence research in this thesis further show that the damage under the false assumption of complete dependence is 100% larger than the damage under the modelled dependence assumption for events with return periods greater than approximately 200 years in the Elbe catchment. The complete dependence assumption overestimates the 200-year flood damage, a benchmark indicator for the insurance industry, by 139%, 188% and 246% for the UK, Germany and Europe, respectively. The misestimation of risk under the different assumptions can vary from upstream to downstream within a catchment. In addition, tail dependence in the model and the flood protection level in the catchments affect the risk estimates and the differences between the spatial dependence assumptions.
In conclusion, a broader consideration of the risk components that can affect flood risk, together with consideration of the spatial dependence of flood return periods, is strongly recommended for a better understanding of flood risk and, consequently, for sound flood risk management and mitigation.
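The role of the spatial dependence assumption can be illustrated with a toy Monte Carlo sketch (purely illustrative: the damage function, number of regions and distributions are invented, not taken from the thesis). Under complete dependence, every region in the catchment shares a single event return period; under independence, each region draws its own, which thins out the tail of the aggregate damage distribution:

```python
import random

random.seed(42)

def damage(return_period):
    """Toy damage curve: damage grows with the square root of the return period."""
    return 10.0 * return_period ** 0.5

def simulate(n_events=10000, n_regions=10, dependent=True):
    """Sample aggregate catchment damage under one dependence assumption."""
    totals = []
    for _ in range(n_events):
        if dependent:
            # complete dependence: one return period for the whole catchment
            t = random.paretovariate(1.0)
            total = n_regions * damage(t)
        else:
            # independence: every region draws its own return period
            total = sum(damage(random.paretovariate(1.0))
                        for _ in range(n_regions))
        totals.append(total)
    totals.sort()
    return totals

dep = simulate(dependent=True)
ind = simulate(dependent=False)

# the extreme (99th-percentile) aggregate damage is larger when complete
# dependence is assumed, i.e. the homogeneous-return-period assumption
# overestimates tail risk in this toy setting
q = int(0.99 * 10000)
print(dep[q] > ind[q])  # prints True
```

The sketch only demonstrates the mechanism; the thesis quantifies it with a continuous risk model and multivariate dependence models rather than these invented distributions.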
The COVID-19 pandemic emergency has forced a profound reshaping of our lives. Our ways of working and studying have been disrupted, accelerating the shift to the digital world. To adapt properly to this change, we need to outline and implement new strategies and approaches that put learning at the center, supporting workers and students in further developing "future-proof" skills. Universities and educational institutions have recently demonstrated that they can play an important role in this context, also leveraging the potential of Massive Open Online Courses (MOOCs), which proved to be an important vehicle of flexibility and adaptation in a general context characterised by several constraints. Since March 2020, we have witnessed exponential growth in MOOC enrollment numbers, with "traditional" students interested in topics not necessarily integrated into their curricular studies. To support students and faculty during the pandemic, Politecnico di Milano focused on one main dimension: faculty development for a better integration of digital tools and contents into the e-learning experience. The current discussion focuses on how to improve the integration of MOOCs into in-presence activities to create meaningful learning and teaching experiences, leveraging blended learning approaches to engage both students and external stakeholders and to equip them with skills relevant to future jobs.
The Role of Interoceptive Sensibility and Emotional Conceptualization for the Experience of Emotions
(2021)
The theory of constructed emotions suggests that different psychological components, including core affect (mental and neural representations of bodily changes), and conceptualization (meaning-making based on prior experiences and semantic knowledge), are involved in the formation of emotions. However, little is known about their role in experiencing emotions. In the current study, we investigated how individual differences in interoceptive sensibility and emotional conceptualization (as potential correlates of these components) interact to moderate three important aspects of emotional experiences: emotional intensity (strength of emotion felt), arousal (degree of activation), and granularity (ability to differentiate emotions with precision). To this end, participants completed a series of questionnaires assessing interoceptive sensibility and emotional conceptualization and underwent two emotion experience tasks, which included standardized material (emotion differentiation task; ED task) and self-experienced episodes (day reconstruction method; DRM). Correlational analysis showed that individual differences in interoceptive sensibility and emotional conceptualization were related to each other. Principal Component Analysis (PCA) revealed two independent factors that were referred to as sensibility and monitoring. The Sensibility factor, interpreted as beliefs about the accuracy of an individual in detecting internal physiological and emotional states, predicted higher granularity for negative words. The Monitoring factor, interpreted as the tendency to focus on the internal states of an individual, was negatively related to emotional granularity and intensity. Additionally, Sensibility scores were more strongly associated with greater well-being and adaptability measures than Monitoring scores. 
Our results indicate that independent processes underlying individual differences in interoceptive sensibility and emotional conceptualization contribute to emotion experiencing.
Proteins of halophilic organisms, which accumulate molar concentrations of KCl in their cytoplasm, have a much higher content of acidic amino acids than proteins of mesophilic organisms. It has been proposed that this excess is necessary to keep proteins hydrated in an environment with low water activity: either via direct interactions between water and the carboxylate groups of acidic amino acids, or via cooperative interactions between acidic amino acids and hydrated cations, which would stabilize the folded protein. In the course of this Ph.D. study, we investigated these possibilities using atomistic molecular dynamics simulations and classical force fields. High-quality parameters describing the interaction between K+ and the carboxylate groups of acidic amino acids are indispensable for this study. We first evaluated the default parameters for these ions within the widely used AMBER ff14SB force field for proteins and found that they perform poorly. We propose new parameters, which reproduce solution activity derivatives of potassium acetate solutions up to 2 mol/kg as well as the distances between potassium ions and carboxylate groups observed in X-ray structures of proteins. To understand the role of acidic amino acids in protein hydration, we compared 5 halophilic proteins with 5 mesophilic ones. Our results do not support the notion that acidic amino acids are necessary to keep folded proteins hydrated. Proteins with a larger fraction of acidic amino acids do have higher hydration levels; however, the hydration level of each protein is identical at low (b_KCl = 0.15 mol/kg) and high (b_KCl = 2 mol/kg) KCl concentrations. It has also been proposed that cooperative interactions between acidic amino acids and nearby hydrated cations stabilize the folded protein and slow down its solvation shell; according to this theory, the cations would be preferentially excluded from the unfolded structure.
We investigate this possibility through extensive free energy calculations. We find that cooperative interactions between neighboring acidic amino acids exist and are mediated by the ions in solution, but they are present in both the folded and unfolded structures of halophilic proteins. The translational dynamics of the solvation shell is barely distinguishable between halophilic and mesophilic proteins; therefore, this cooperative effect does not result in unusually slow solvent dynamics, as has been suggested.
The spread of antibiotic-resistant bacteria poses a globally increasing threat to public health care. The excessive use of antibiotics in animal husbandry can foster resistance in the barns. Transmission through direct contact with animals and contamination of food has already been proven. The animals' excrement, combined with a bedding material, enables a further potential path of spread into the environment if it is used as organic manure on agricultural land. As most airborne bacteria are attached to particulate matter, this work focuses on atmospheric dispersal via the dust fraction.
Field measurements on arable lands in Brandenburg, Germany and wind erosion studies in a wind tunnel were conducted to investigate the risk of a potential atmospheric dust-associated spread of antibiotic-resistant bacteria from poultry manure fertilized agricultural soils. The focus was to (i) characterize the conditions for aerosolization and (ii) qualify and quantify dust emissions during agricultural operations and wind erosion.
PM10 (PM, particulate matter with an aerodynamic diameter smaller than 10 µm) emission factors and bacterial fluxes for poultry manure application and incorporation have not been reported before. The contribution to dust emissions depends on the water content of the manure, which is affected by the manure pretreatment (fresh, composted, stored, dried), as well as on the intensity of manure spreading by the manure spreader. During poultry manure application, PM10 emissions ranged between 0.05 kg ha-1 and 8.37 kg ha-1. For comparison, the subsequent land preparation contributed 0.35 – 1.15 kg ha-1 of PM10 emissions. Manure particles were still part of the dust emissions, but they accounted for less than 1% of total PM10 emissions due to the dilution of the poultry manure in the soil after incorporation. Bacterial emissions of fecal origin were more relevant during manure application than during the subsequent incorporation, although PM10 emissions from manure incorporation were larger than those from manure application for the non-dried manure variants.
Wind erosion leads to preferential detachment of manure particles from sandy soils when poultry manure has recently been incorporated. Sorting effects between the low-density organic particles of manure origin and the mineral soil particles were determined just above the threshold wind speed of 7 m s-1. Depending on the wind speed, potential erosion rates between 101 and 854 kg ha-1 were identified when 6 t ha-1 of poultry manure were applied. Microbial investigation showed that manure bacteria were detached more easily from the soil surface during wind erosion due to their attachment to manure particles.
Although antibiotic-resistant bacteria (ESBL-producing E. coli) were still found in the poultry barns, no contamination with them could be detected in the manure, the fertilized soils, or the dust generated by manure application, land preparation, or wind erosion. Parallel studies within this project showed that storing poultry manure for a few days (36 – 72 h) is sufficient to inactivate ESBL-producing E. coli. Other antibiotic-resistant bacteria, i.e. MRSA and VRE, were found only sporadically in the barns and not at all in the dust. Based on the results of this work, the risk of a potential infection by dust-associated antibiotic-resistant bacteria can therefore be considered low.
In Germany, the productivity of professional services, a sector dominated by micro and small firms, declined by 40 percent between 1995 and 2014. This productivity decline also holds for professional services in other European countries. Using a German firm-level dataset with 700,000 observations between 2003 and 2017, we analyze this largely unexplored phenomenon in professional services, the 4th largest sector in the EU15 business economy, which provides important intermediate services for the rest of the economy. We show that changes in the value chain explain about half of the decline, and that the increase in part-time employment accounts for a further, minor part of it. Contrary to expectations, the entry of micro and small firms, despite their lower productivity levels, is not responsible for the decline. Nor can we confirm the conjecture that weakening competition allows unproductive firms to remain in the market.
Background
High prevalence rates have been reported for physical inactivity, mobility limitations, and falls in older adults. Home-based exercise might be an adequate means to increase physical activity by improving health-related (e.g., muscle strength) and skill-related (e.g., balance) components of physical fitness, particularly in times of restricted physical activity due to pandemics.
Objective
The objective of this study was to examine the effects of home-based balance exercises conducted during daily tooth brushing on measures of balance and muscle strength in healthy older adults.
Methods
Fifty-one older adults were randomly assigned to a balance exercise group (n = 27; age: 65.1 ± 1.1 years) or a passive control group (n = 24; age: 66.2 ± 3.3 years). The intervention group performed balance exercises twice daily for three minutes each during their daily tooth brushing routine over a period of eight weeks. Pre- and post-intervention, tests were conducted to assess static steady-state balance (i.e., Romberg test), dynamic steady-state balance (i.e., 10-m single- and dual-task walk test using a cognitive and a motor interference task), proactive balance (i.e., Timed-Up-and-Go Test [TUG], Functional-Reach-Test [FRT]), and muscle strength (i.e., Chair-Rise-Test [CRT]).
Results
Irrespective of group, the statistical analysis revealed significant main effects for time (pre vs. post) for dual-task gait speed (p < .001, 1.12 ≤ d ≤ 2.65), TUG (p < .001, d = 1.17), FRT (p = .002, d = 0.92), and CRT (p = .002, d = 0.94) but not for single-task gait speed and for the Romberg-Test. No significant group × time interactions were found for any of the investigated variables.
Conclusions
The applied lifestyle balance training program conducted twice daily during tooth brushing routines appears not to be sufficient in terms of exercise dosage and difficulty level to enhance balance and muscle strength in healthy adults aged 60–72 years. Consequently, structured balance training programs using higher exercise dosages and/or more difficult balance tasks are recommended for older adults to improve balance and muscle strength.
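The d values reported in the results above are Cohen's d effect sizes. A minimal sketch of how such an effect size is computed from two sets of scores (the pooled-SD formula is standard, but the sample data below are invented for illustration and are not from the study):

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d for two groups of scores, using the pooled standard deviation."""
    n1, n2 = len(group_a), len(group_b)
    s1, s2 = statistics.stdev(group_a), statistics.stdev(group_b)
    # pooled SD weights each group's variance by its degrees of freedom
    pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group_b) - statistics.mean(group_a)) / pooled

# hypothetical TUG times (seconds) before and after training
pre  = [8.1, 7.9, 8.4, 8.0, 8.3, 7.8]
post = [7.2, 7.0, 7.5, 7.1, 7.4, 7.3]
print(round(abs(cohens_d(pre, post)), 2))
```

By the usual convention, |d| around 0.2 is a small effect, 0.5 medium, and 0.8 or more large, which is why the reported values (d = 0.92 to 2.65) count as large effects.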
Frailty assessment is recommended before elective transcatheter aortic valve implantation (TAVI) to determine post-interventional prognosis. Several studies have investigated frailty in TAVI-patients using numerous assessments; however, it remains unclear which is the most appropriate tool for clinical practice. Therefore, we evaluate which frailty assessment is mainly used and meaningful for ≤30-day and ≥1-year prognosis in TAVI patients. Randomized controlled or observational studies (prospective/retrospective) investigating all-cause mortality in older (≥70 years) TAVI patients were identified (PubMed; May 2020). In total, 79 studies investigating frailty with 49 different assessments were included. As single markers of frailty, mostly gait speed (23 studies) and serum albumin (16 studies) were used. Higher risk of 1-year mortality was predicted by slower gait speed (highest Hazard Ratios (HR): 14.71; 95% confidence interval (CI) 6.50–33.30) and lower serum albumin level (highest HR: 3.12; 95% CI 1.80–5.42). Composite indices (five items; seven studies) were associated with 30-day (highest Odds Ratio (OR): 15.30; 95% CI 2.71–86.10) and 1-year mortality (highest OR: 2.75; 95% CI 1.55–4.87). In conclusion, single markers of frailty, in particular gait speed, were widely used to predict 1-year mortality. Composite indices were appropriate, as well as a comprehensive assessment of frailty.
Conceptual knowledge about objects, people and events in the world is central to human cognition, underlying core cognitive abilities such as object recognition and use, and word comprehension. Previous research indicates that concepts consist of perceptual and motor features represented in modality-specific perceptual-motor brain regions. In addition, cross-modal convergence zones integrate modality-specific features into more abstract conceptual representations.
However, several questions remain open: First, to what extent does the retrieval of perceptual-motor features depend on the concurrent task? Second, how do modality-specific and cross-modal regions interact during conceptual knowledge retrieval? Third, which brain regions are causally relevant for conceptually-guided behavior? This thesis addresses these three key issues using functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in the healthy human brain.
Study 1 - an fMRI activation study - tested to what extent the retrieval of sound and action features of concepts, and the resulting engagement of auditory and somatomotor brain regions, depend on the concurrent task. Forty healthy human participants performed three different tasks - lexical decision, sound judgment, and action judgment - on words with a high or low association to sounds and actions. We found that modality-specific regions selectively respond to task-relevant features: Auditory regions selectively responded to sound features during sound judgments, and somatomotor regions selectively responded to action features during action judgments. Unexpectedly, several regions (e.g. the left posterior parietal cortex; PPC) exhibited a task-dependent response to both sound and action features. We propose these regions to be "multimodal", and not "amodal", convergence zones which retain modality-specific information.
Study 2 - an fMRI connectivity study - investigated the functional interaction between modality-specific and multimodal areas during conceptual knowledge retrieval. Using the above fMRI data, we asked (1) whether modality-specific and multimodal regions are functionally coupled during sound and action feature retrieval, (2) whether their coupling depends on the task, (3) whether information flows bottom-up, top-down, or bidirectionally, and (4) whether their coupling is behaviorally relevant. We found that functional coupling between multimodal and modality-specific areas is task-dependent, bidirectional, and relevant for conceptually-guided behavior. Left PPC acted as a connectivity "switchboard" that flexibly adapted its coupling to task-relevant modality-specific nodes.
Hence, neuroimaging studies 1 and 2 suggested a key role of left PPC as a multimodal convergence zone for conceptual knowledge. However, as neuroimaging is correlational, it remained unknown whether left PPC plays a causal role as a multimodal conceptual hub. Therefore, study 3 - a TMS study - tested the causal relevance of left PPC for sound and action feature retrieval. We found that TMS over left PPC selectively impaired action judgments on low sound-low action words, as compared to sham stimulation. Computational simulations of the TMS-induced electrical field revealed that stronger stimulation of left PPC was associated with worse performance on action, but not sound, judgments. These results indicate that left PPC causally supports conceptual processing when action knowledge is task-relevant and cannot be compensated by sound knowledge. Our findings suggest that left PPC is specialized for action knowledge, challenging the view of left PPC as a multimodal conceptual hub.
Overall, our studies support "hybrid theories" which posit that conceptual processing involves both modality-specific perceptual-motor regions and cross-modal convergence zones. In our new model of the conceptual system, we propose conceptual processing to rely on a representational hierarchy from modality-specific to multimodal up to amodal brain regions. Crucially, this hierarchical system is flexible, with different regions and connections being engaged in a task-dependent fashion. Our model not only reconciles the seemingly opposing grounded cognition and amodal theories, it also incorporates task dependency of conceptually-related brain activity and connectivity, thereby resolving several current issues on the neural basis of conceptual knowledge retrieval.
The MOOC-CEDIA Observatory
(2021)
In the last few years, a large number of Massive Open Online Courses (MOOCs) has been made available to the worldwide community, mainly by European and North American universities. Since their emergence, the adoption of these educational resources has been widely studied by several research groups and universities with the aim of understanding their evolution and impact on educational models over time. For Latin America, data from the MOOC-UC Observatory (updated until 2018) show that the adoption of these courses by universities in the region has been slow and heterogeneous. In the specific case of Ecuador, although some data are available, there is a lack of information regarding the construction, publication and/or adoption of such courses by universities in the country. Moreover, there are no up-to-date studies designed to identify and analyze the barriers and factors affecting the adoption of MOOCs in the country. The aim of this work is to present the MOOC-CEDIA Observatory, a web platform that offers interactive visualizations of the adoption of MOOCs in Ecuador. The main results of the study show that: (1) up to 2020 there have been 99 MOOCs in Ecuador; (2) the domains of these MOOCs are mostly related to the applied, social and natural sciences, with the humanities being the least covered; (3) Open edX and Moodle are the most widely used platforms to deploy such courses. The conclusions drawn from this analysis are expected to inform recommendations aimed at promoting the creation and use of quality MOOCs in Ecuador and to help institutions chart the route for their adoption, both for internal use by their communities and by society in general.
Starting in 2009, the German state of Saxony distributed sports club membership vouchers among all 33,000 third graders in the state. The policy’s objective was to encourage them to develop a long-term habit of exercising. In 2018, we carried out a large register-based survey among several cohorts in Saxony and two neighboring states. Our difference-in-differences estimations show that, even after a decade, awareness of the voucher program was significantly higher in the treatment group. We also find that youth received and redeemed the vouchers. However, we do not find significant short- or long-term effects on sports club membership, physical activity, overweightness, or motor skills.
The experimental literature on antitrust enforcement provides robust evidence that communication plays an important role for the formation and stability of cartels. We extend these studies through a design that distinguishes between innocuous communication and communication about a cartel, sanctioning only the latter. To this aim, we introduce a participant in the role of the competition authority, who is properly incentivized to judge communication content and price setting behavior of the firms. Using this novel design, we revisit the question whether a leniency rule successfully destabilizes cartels. In contrast to existing experimental studies, we find that a leniency rule does not affect cartelization. We discuss potential explanations for this contrasting result.
The particle noch (‘still’) can have an additive reading similar to auch (‘also’). We argue that both particles indicate that a previously partially answered QUD is re-opened to add a further answer. The particles differ in that the QUD, in the case of auch, can be re-opened with respect to the same topic situation, whereas noch indicates that the QUD is re-opened with respect to a new topic situation. This account predicts a difference in the accommodation behavior of the two particles. We present an experiment whose results are in line with this prediction.
The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes located in the Khomas Highland of Namibia. H.E.S.S. operates in a wide energy range from several tens of GeV to several tens of TeV, reaching the best sensitivity around 1 TeV or at lower energies. However, there are many important topics – such as the search for Galactic PeVatrons, the study of gamma-ray production scenarios for sources (hadronic vs. leptonic), EBL absorption studies – which require good sensitivity at energies above 10 TeV. This work aims at improving the sensitivity of H.E.S.S. and increasing the gamma-ray statistics at high energies. The study investigates an enlargement of the H.E.S.S. effective field of view using events with larger offset angles in the analysis. The greatest challenges in the analysis of large-offset events are a degradation of the reconstruction accuracy and a rise of the background rate as the offset angle increases. The more sophisticated direction reconstruction method (DISP) and improvements to the standard background rejection technique, which by themselves are effective ways to increase the gamma-ray statistics and improve the sensitivity of the analysis, are implemented to overcome the above-mentioned issues. As a result, the angular resolution at the preselection level is improved by 5 - 10% for events at 0.5◦ offset angle and by 20 - 30% for events at 2◦ offset angle. The background rate at large offset angles is decreased nearly to a level typical for offset angles below 2.5◦. Thereby, sensitivity improvements of 10 - 20% are achieved for the proposed analysis compared to the standard analysis at small offset angles. Developed analysis also allows for the usage of events at large offset angles up to approximately 4◦, which was not possible before. This analysis method is applied to the analysis of the Galactic plane data above 10 TeV. As a result, 40 sources out of the 78 presented in the H.E.S.S. 
Galactic plane survey (HGPS) are detected above 10 TeV. Among them are representatives of all source classes present in the HGPS catalogue, namely binary systems, supernova remnants, pulsar wind nebulae and composite objects. The potential of the improved analysis method is demonstrated by investigating the emission above 10 TeV from two objects: the region associated with the shell-type SNR HESS J1731−347 and the PWN candidate associated with PSR J0855−4644, which is coincident with Vela Junior (HESS J0852−463).
Background: Chronic ankle instability, which develops from ankle sprain, is one of the most common sports injuries. Beyond being an ankle issue, chronic ankle instability can also cause additional injuries. Investigating the epidemiology of chronic ankle instability is an essential step towards developing an adequate injury prevention strategy. However, the epidemiology of chronic ankle instability remains unknown. Therefore, the purpose of this study was to investigate the epidemiology of chronic ankle instability through valid and reliable self-reported tools in active populations.
Methods: An electronic search was performed on PubMed and Web of Science in July 2020. Articles were included if they were peer-reviewed, published between 2006 and 2020, used one of the valid and reliable tools to evaluate ankle instability, determined chronic ankle instability based on the criteria of the International Ankle Consortium, and reported the epidemiology of chronic ankle instability. The risk of bias of the included studies was evaluated with an adapted tool for the sports injury review method.
Results: After removing duplicated studies, 593 articles were screened for eligibility. Twenty full-texts were screened and finally nine studies were included, assessing 3804 participants in total. The participants were between 15 and 32 years old and represented soldiers, students, athletes and active individuals with a history of ankle sprain. The prevalence of chronic ankle instability was 25%, ranging between 7 and 53%. The prevalence of chronic ankle instability within participants with a history of ankle sprains was 46%, ranging between 9 and 76%. Five included studies identified chronic ankle instability based on the standard criteria, and four studies applied adapted exclusion criteria to conduct the study. Five out of nine included studies showed a low risk of bias.
Conclusions: The prevalence of chronic ankle instability shows a wide range. This could be due to the different exclusion criteria, ages, sports disciplines, or other factors among the included studies. Future studies require standardized criteria to investigate the epidemiology of chronic ankle instability, and such studies should be prospective. Factors affecting the prevalence of chronic ankle instability should be investigated and clearly described.
Implementing innovation laboratories to leverage intrapreneurship is an increasingly popular organizational practice. A typical feature of these creative environments is semi-autonomous teams in which multiple members collectively exert leadership influence, thereby challenging traditional command-and-control conceptions of leadership. An extensive body of research on the team-centric concept of shared leadership has recognized the potential of pluralized leadership structures for enhancing team effectiveness; however, little empirical work has been conducted in organizational contexts in which creativity is key. This study set out to explore antecedents of shared leadership and its influence on team creativity in an innovation lab. Building on extant shared leadership and innovation research, we propose antecedents customary to creative teamwork, that is, experimental culture, task reflexivity, and voice. Multisource data were collected from 104 team members and 49 evaluations of 29 coaches nested in 21 teams working in a prototypical innovation lab. We identify factors specific to creative teamwork that facilitate the emergence of shared leadership by providing room for experimentation, encouraging team members to speak up in the creative process, and cultivating a reflective application of entrepreneurial thinking. We provide specific exemplary activities for innovation lab teams to increase levels of shared leadership.
Background: The standard method to treat physically active patients with anterior cruciate ligament (ACL) rupture is ligament reconstruction surgery. The rehabilitation training program is very important to improve functional performance in recreational athletes following ACL reconstruction.
Objectives: The aims of this study were to compare the effects of three different training programs, eccentric training (ECC), plyometric training (PLYO), or combined eccentric and plyometric training (COMB), on dynamic balance (Y-BAL), the Lysholm Knee Scale (LKS), the return to sport index (RSI), and the leg symmetry index (LSI) for the single leg hop test for distance in elite female athletes after ACL surgery.
Materials and Methods: Fourteen weeks after rehabilitation from surgery, 40 elite female athletes (20.3 ± 3.2 years), who had undergone an ACL reconstruction, participated in a short-term (6 weeks; two times a week) training study. All participants received the same rehabilitation protocol prior to the training study. Athletes were randomly assigned to three experimental groups, ECC (n = 10), PLYO (n = 10), and COMB (n = 10), and to a control group (CON: n = 10). Testing was conducted before and after the 6-week training programs and included the Y-BAL, LKS, and RSI. LSI was assessed after the 6-week training programs only.
Results: Adherence rate was 100% across all groups and no training or test-related injuries were reported. No significant between-group baseline differences (pre-6-week training) were observed for any of the parameters. Significant group-by-time interactions were found for Y-BAL (p < 0.001, ES = 1.73), LKS (p < 0.001, ES = 0.76), and RSI (p < 0.001, ES = 1.39). Contrast analysis demonstrated that COMB yielded significantly greater improvements in Y-BAL, LKS, and RSI (all p < 0.001), in addition to significantly better performances in LSI (all p < 0.001), than CON, PLYO, and ECC, respectively.
Conclusion: Combined (eccentric/plyometric) training seems to represent the most effective training method, as it exerts positive effects on both stability and functional performance in the post-ACL-surgical rehabilitation period of elite female athletes.
The aim of the doctoral project was to answer the question of whether structural word-initial noun capitalization, which apart from German is otherwise found only in Luxembourgish, has a function that is advantageous for the reader. The overriding hypothesis was that an advantage is achieved because the parafoveal perception of the capital letters activates a syntactic category, namely the head of a noun phrase. This perception from the corner of the eye should make it possible to preprocess the following noun. As a result, sentence processing should be facilitated, which should ultimately be reflected in overall faster reading times and fixation durations.
The structure of the project includes three studies, some of which included different participant groups:
Study 1:
Study design: Semantic priming using garden-path sentences, intended to reveal the functionality of noun capitalization for the reader
Participant groups: German natives reading German
Study 2:
Study design: same design as study 1, but in English
Participant groups:
English natives without any knowledge of German reading English
English natives who regularly read German reading English
German natives with high proficiency in English reading English
Study 3:
Study design:
Influence of the noun frequency on a potential preprocessing using the boundary paradigm; Study languages: German and English
Participant groups:
German natives reading German
English natives without any knowledge of German reading English
German natives with high proficiency in English reading English
Brief summary: Noun capitalization clearly has an impact on sentence processing in both German and English. However, it cannot be confirmed that this impact provides a substantial, decisive advantage.
We investigate how inviting students to set task-based goals affects usage of an online learning platform and course performance. We design and implement a randomized field experiment in a large mandatory economics course with blended learning elements. The low-cost treatment induces students to use the online learning system more often, more intensively, and to begin earlier with exam preparation. Treated students perform better in the course than the control group: they are 18.8% (0.20 SD) more likely to pass the exam and earn 6.7% (0.19 SD) more points on the exam. There is no evidence that treated students spend significantly more time, rather they tend to shift to more productive learning methods. The heterogeneity analysis suggests that higher treatment effects are associated with higher levels of behavioral bias but also with poor early course behavior.
Promoting the decarbonization of economic activity through climate policies raises many questions. From a macroeconomic perspective, it is important to understand how these policies perform under uncertainty, how they affect short-run dynamics and to what extent they have distributional effects. In addition, uncertainties directly associated with climate policies, such as uncertainty about the carbon budget or emission intensities, become relevant aspects. We study the implications of emission reduction schemes within a Two-Agent New-Keynesian (TANK) model. This quantitative exercise, based on data for the German economy, provides various insights. In the light of frictions and fluctuations, compared to other instruments, a carbon price (i.e. tax) is associated with lower volatility in output and consumption. In terms of aggregate welfare, price instruments are found to be preferable. Conditional on the distribution of revenues from climate policies, quantity instruments can exert regressive effects, posing a larger economic loss on wealth-poor households, whereas price instruments are moderately progressive. Finally, we find that unexpected changes in climate policies can induce substantial aggregate adjustments. With uncertainty about the carbon budget, the costs of adjustment are larger under quantity instruments.
Membrane contact sites are of particular interest in the fields of synthetic biology and biophysics. They are involved in a great variety of cellular functions: they form between two cellular organelles, or between an organelle and the plasma membrane, in order to establish a communication path for molecule transport or signal transmission.
The goal of this research study was to develop an artificial membrane system that mimics membrane contact sites using bottom-up synthetic biology. For this, a multi-compartmentalised giant unilamellar vesicle (GUV) system was created, with the membrane of the outer vesicle mimicking the plasma membrane and the inner GUVs posing as cellular organelles.
In the following steps, three different strategies were used to achieve internal membrane-membrane adhesion.
The majority of baryons in the Universe is believed to reside in the intergalactic medium (IGM). This makes the IGM an important component in understanding cosmological structure formation. It is expected to trace the same dark matter distribution as galaxies, forming structures like filaments and clusters. However, whereas galaxies can be observed to be arranged along these large-scale structures, the spatial distribution of the diffuse IGM is not as easily unveiled. Absorption line studies of quasar (QSO) spectra can help with mapping the IGM, as well as the boundary layer between IGM and galaxies: the circumgalactic medium (CGM). By studying gas in the Local Group, as well as in the IGM, this study aims to get a better understanding of how the gas is linked to the large-scale structure of the local Universe and the galaxies residing in that structure.
Chapter 1 gives an introduction to the CGM and IGM, while the methods used in this study are explained in Chapter 2. Chapter 3 starts on a relatively small cosmological scale, namely that of our Local Group (LG), which includes, among others, the Milky Way (MW) and M31. Within the CGM of the MW, there exist denser clouds, some of which are infalling while others are moving away from the Galactic disc. To study these high-velocity clouds (HVCs), 29 QSO spectra obtained with the Cosmic Origins Spectrograph (COS) aboard the Hubble Space Telescope (HST) were analysed. Abundances of Si II, Si III, Si IV, C II, and C IV were measured for 69 HVCs belonging to two samples: one in the direction of the LG’s barycentre and the other in the anti-barycentre direction. Their velocities lie in the range -400 km/s ≤ vLSR ≤ -100 km/s for the barycentre sample and +100 km/s ≤ vLSR ≤ +300 km/s for the anti-barycentre sample. By using Cloudy models, these data could then be used to derive gas volume densities for the HVCs. Because of the relationship between the density and the pressure of the ambient medium, which is in turn determined by the Galactic radiation field, the distances of the HVCs could be estimated. From this, a subsample of absorbers located in the direction of M31 was found to lie outside the MW’s virial radius, their low densities (log nH ≤ -3.54) making it likely that they are part of the gas in between the MW and M31. No such low-density absorbers were found in the anti-barycentre sample. Our results thus hint at gas following the dark matter potential, which would be deeper between the MW and M31, as they are by far the most massive members of the LG.
From this bridge of gas in the LG, this study zooms out to the large-scale structure of the local Universe (z ~ 0) in Chapter 4. Galaxy data from the V8k catalogue and QSO spectra from COS were used to study the relation between the galaxies tracing large-scale filaments and the gas existing outside of those galaxies. This study used the filaments defined in Courtois et al. (2013). A total of 587 Lyman α (Lyα) absorbers were found in the 302 QSO spectra in the velocity range 1070 - 6700 km/s. After selecting sightlines passing through or close to these filaments, model spectra were made for 91 sightlines and 215 (227) Lyα absorbers (components) were measured in this sample. The velocity gradient along each filament was calculated and 74 absorbers were found within 1000 km/s of the nearest filament segment.
In order to determine whether the absorbers are more closely tied to galaxies or to the large-scale structure, equivalent widths of the Lyα absorbers were plotted against both galaxy and filament impact parameters. While stronger absorbers do tend to lie closer to either galaxies or filaments, there is a large scatter in this relation. Despite this scatter, this study found that the absorbers do not follow a random distribution either. They cluster less strongly around filaments than around galaxies, but more strongly than random distributions, as confirmed by a Kolmogorov-Smirnov test.
Furthermore, the column density distribution function found in this study has a slope of −β, with β = 1.63 ± 0.12 for the total sample and β = 1.47 ± 0.24 for the absorbers within 1000 km/s of a filament. The shallower slope for the latter subsample could indicate an excess of denser absorbers within the filaments, although the two values are consistent within errors. These values agree with those found in, e.g., Lehner et al. (2007) and Danforth et al. (2016).
The picture that emerges from this study regarding the relation between the IGM and the large-scale structure in the local Universe fits with what is found in other studies: while at least part of the gas traces the same filamentary structure as galaxies, the relation is complex. This study has shown that by taking a large sample of sightlines and comparing the data gathered from those with galaxy data, it is possible to study the gaseous large-scale structure. This approach can be used in the future together with simulations to get a better understanding of structure formation and evolution in the Universe.
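The power-law slope of a column density distribution function like the one quoted above can be estimated from a sample of column densities with a simple maximum-likelihood fit. The sketch below is purely illustrative – synthetic data and a hypothetical lower cutoff N_min, not the fitting procedure used in the thesis:

```python
import math
import random

def sample_power_law(beta, n_min, size, rng):
    # Inverse-transform sampling from f(N) ∝ N^(-beta) for N >= n_min
    return [n_min * (1.0 - rng.random()) ** (-1.0 / (beta - 1.0)) for _ in range(size)]

def mle_slope(column_densities, n_min):
    # Maximum-likelihood estimator for the power-law index:
    # beta_hat = 1 + n / sum(ln(N_i / N_min))
    n = len(column_densities)
    return 1.0 + n / sum(math.log(c / n_min) for c in column_densities)

# Synthetic check: draw 50,000 column densities with beta = 1.63
# and recover the slope from the sample
rng = random.Random(42)
synthetic = sample_power_law(1.63, 1e13, 50_000, rng)
beta_hat = mle_slope(synthetic, 1e13)
```

With 50,000 draws the estimator recovers the input index to within a few thousandths; real samples of ~200 absorbers carry correspondingly larger uncertainties, which is why errors like ±0.12 arise.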
Mental health problems are highly prevalent worldwide. Fortunately, psychotherapy has proven highly effective in the treatment of a number of mental health issues, such as depression and anxiety disorders. In contrast, psychotherapy training as currently practised cannot be considered evidence-based. Thus, there is much room for improvement. The integration of simulated patients (SPs) into psychotherapy training and research is on the rise. SPs originate from medical education and have, in a number of studies, been demonstrated to contribute to effective learning environments. Nevertheless, criticism has been voiced regarding the authenticity of SP portrayals, but few studies have examined this to date.
Based on these considerations, this dissertation explores SPs’ authenticity while portraying a mental disorder, depression. Altogether, the present cumulative dissertation consists of three empirical papers. At the time of printing, Paper I and Paper III have been accepted for publication, and Paper II is under review after a minor revision.
First, Paper I develops and validates an observer-based rating scale to assess SP authenticity in psychotherapeutic contexts. Based on the preliminary findings, it can be concluded that the Authenticity of Patient Demonstrations scale is a reliable and valid tool that can be used for recruiting, training, and evaluating the authenticity of SPs.
Second, Paper II tests whether student SPs are perceived as more authentic after receiving an in-depth role-script compared with SPs who only receive basic information on the patient case. To test this assumption, a randomised controlled study design was implemented, and the hypothesis was confirmed. As a consequence, when engaging SPs, an in-depth role-script with details, e.g. on the nonverbal behaviour and feelings of the patient, should be provided.
Third, Paper III demonstrates that psychotherapy trainees cannot distinguish between trained SPs and real patients and therefore suggests that, with proper training, SPs are a promising training method for psychotherapy.
Altogether, the dissertation shows that SPs can be trained to portray a depressive patient authentically and thus delivers promising evidence for the further dissemination of SPs.
The Arctic is greatly impacted by climate change. The increase in air temperature drives the thawing of permafrost and an increase in coastal erosion and river discharge. This leads to a greater input of sediment and organic matter into coastal waters, which substantially impacts the ecosystems by reducing light transmission through the water column and altering the biogeochemistry; it also affects the subsistence economy of local people and, through the transformation of organic matter into greenhouse gases, the climate. Yet, the quantification of suspended sediment in Arctic coastal and nearshore waters remains unsatisfactory due to the absence of dedicated algorithms that resolve the high loads occurring in the close vicinity of the shoreline. In this study we present the Arctic Nearshore Turbidity Algorithm (ANTA), the first reflectance-turbidity relationship specifically targeted at Arctic nearshore waters, tuned with in-situ measurements from the nearshore waters of Herschel Island Qikiqtaruk in the western Canadian Arctic. A semi-empirical model was calibrated for several sensors relevant to ocean color remote sensing, including MODIS, Sentinel 3 (OLCI), Landsat 8 (OLI), and Sentinel 2 (MSI), as well as the older Landsat sensors TM and ETM+. The ANTA performed better with Landsat 8 than with Sentinel 2 and Sentinel 3. The application of the ANTA to Sentinel 2 imagery matching in-situ turbidity samples taken in Adventfjorden, Svalbard, demonstrates its transferability to nearshore areas beyond Herschel Island Qikiqtaruk.
Holocene temperature proxy records are commonly used in quantitative syntheses and model-data comparisons. However, comparing correlations between time series from records collected in proximity to one another with the correlations expected from climate model simulations indicates either regional or noisy climate signals in Holocene temperature proxy records. In this study, we evaluate the consistency of spatial correlations present in Holocene proxy records with those found in data from the Last Glacial Maximum (LGM). Specifically, we predict the correlations expected in LGM proxy records if the only differences from the Holocene correlations were greater time uncertainty and greater climate variability in the LGM. We compare this simple prediction to the actual correlation structure in the LGM proxy records. We found that time series of ice-core stable isotope records and planktonic foraminifera Mg/Ca ratios were consistent between the Holocene and LGM periods, while time series of Uk'37 proxy records were not, as we found no correlation between nearby LGM records. Our results support the finding of highly regional or noisy marine proxy records in the compilation analysed here and suggest the need for further studies on the role of climate proxies and the processes of climate signal recording and preservation.
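The pairwise correlations between nearby proxy time series that underpin this kind of consistency test come down to the Pearson correlation coefficient. A minimal, self-contained sketch with illustrative data only (not the study's records or compilation):

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length time series
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Two hypothetical nearby records sharing a common signal should give r near 1;
# records dominated by local noise drift toward r near 0.
record_a = [0.1, 0.4, 0.3, 0.7, 0.6, 0.9]
record_b = [0.2, 0.5, 0.3, 0.8, 0.5, 1.0]
r = pearson_r(record_a, record_b)
```

In practice, time uncertainty in proxy archives damps these correlations toward zero even for records that share a signal, which is exactly why the prediction above has to account for it.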
Since the beginning of the recent global refugee crisis, researchers have been tackling many of its associated aspects, investigating how we can help to alleviate this crisis, in particular by using ICT capabilities. In our research, we investigated the use of ICT solutions by refugees to foster the process of social inclusion in the host community. To tackle this topic, we conducted thirteen interviews with Syrian refugees in Germany. Our findings reveal different ICT usages by refugees and how these contribute to feeling empowered. Moreover, we show the sources of empowerment for refugees that are gained by ICT use. Finally, we identified two types of social inclusion benefits derived from these empowerment sources. Our results provide practical implications for different stakeholders and decision-makers on how ICT usage can empower refugees, how this can foster the social inclusion of refugees, and what should be considered to support them in their integration efforts.
Background: To handle competition demands, sparring drills are used for specific technical–tactical training as well as physical–physiological conditioning in combat sports. While the effects of different area sizes and numbers of within-round sparring partners on physiological and perceptive responses in combat sports were examined in previous studies, technical and tactical aspects were not investigated. This study investigated the effect of varying the number of within-round sparring partners (i.e., at a time: 1 vs. 1, 1 vs. 2, and 1 vs. 4) and the area size (2 m × 2 m, 4 m × 4 m, and 6 m × 6 m) on the technical–tactical aspects of small combat games in kickboxing.
Method: Twenty male kickboxers (mean ± standard deviation, age: 20.3 ± 0.9 years), regularly competing in regional and national events, randomly performed nine different kickboxing combats, each lasting 2 min. All combats were video recorded and analyzed using the software Dartfish.
Results: The total number of punches was significantly higher in 1 versus 4 compared with 1 versus 1 (p = 0.011, d = 0.83). Further, the total number of kicks was significantly higher in 1 versus 4 compared with 1 versus 1 and 1 versus 2 (p < 0.001; d = 0.99 and d = 0.83, respectively). Moreover, the total number of kick combinations was significantly higher in 1 versus 4 compared with 1 versus 1 and 1 versus 2 (p < 0.001; d = 1.05 and d = 0.95, respectively). The same outcome was significantly lower in the 2 m × 2 m area compared with the 4 m × 4 m and 6 m × 6 m areas (p = 0.010 and d = −0.45; p < 0.001 and d = −0.6, respectively). The number of block-and-parry actions was significantly higher in 1 versus 4 compared with 1 versus 1 (p < 0.001, d = 1.45) and 1 versus 2 (p = 0.046, d = 0.61), and in the 2 m × 2 m area compared with the 4 m × 4 m and 6 m × 6 m areas (p < 0.001; d = 0.47 and d = 0.66, respectively). Backwards lean actions occurred more often in 2 m × 2 m compared with 4 m × 4 m (p = 0.009, d = 0.53) and 6 m × 6 m (p = 0.003, d = 0.60). However, the number of foot defenses was significantly lower in 2 m × 2 m compared with 6 m × 6 m (p < 0.001, d = 1.04) and 4 m × 4 m (p = 0.004, d = 0.63). Additionally, the number of clinches was significantly higher in 1 versus 1 compared with 1 versus 2 (p = 0.002, d = 0.7) and 1 versus 4 (p = 0.034, d = 0.45).
Conclusions: This study provides practical insights into how to manipulate within-round sparring partners’ number and/or area size to train specific kickboxing technical–tactical fundamentals.
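Effect sizes such as the Cohen's d values reported above are computed from the two group means and a pooled standard deviation. A minimal sketch, assuming two independent groups of raw scores (the data here are illustrative, not the study's):

```python
import math

def cohens_d(group_a, group_b):
    # Cohen's d: difference of means divided by the pooled standard deviation
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    # Sample variances (n - 1 denominator)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical counts of an action in two conditions
d = cohens_d([12, 14, 11, 15, 13], [9, 10, 8, 11, 9])
```

By the usual rule of thumb, |d| around 0.2 is a small effect, 0.5 medium, and 0.8 large, which is how values like d = 1.45 above are read as large differences.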
The central element of this work is the synthesis and characterization of practically usable ionogels. The polymer ionogels are based on the model polymer poly(methyl methacrylate). Ionic liquids derived from the widely used imidazolium cation serve as additives. The properties of the embedded ionic liquids give the ionogels their function. The functionality of the respective gels, and thus the transfer of the properties of the ionic liquids to the ionogels, was examined and confirmed in the present work by means of numerous characterization techniques. In this work, macroscopic ionogel objects in the form of films and nonwoven mats were produced by ionogel formation. Film casting and electrospinning were used as methods to produce these films and mats, each resulting in a model system. The present work is thereby organized into the two topic areas of "electrically semiconducting ionogel films" and "antimicrobially active ionogel mats". The use of triiodide-containing ionic liquids and a polymer matrix in a discontinuous casting process results in electrically semiconducting ionogel films. The flexible and transparent films could become the focus of numerous new fields of application in flexible electronics. Electrospinning poly(methyl methacrylate) with an ionic liquid led to a homogeneous ionogel mat, which serves as a model for transferring the antimicrobially active properties of ionic liquids to porous structures for filtration. It is also the first example of a copper-chloride-containing ionogel. Ionogels are attractive materials with numerous possible applications. The present work extends the spectrum of ionogels by an electrically semiconducting and an antimicrobially active ionogel.
At the same time, this work adds to the class of ionic liquids three examples of electrically semiconducting ionic liquids as well as numerous copper(II)-chloride-based ionic liquids.
Development of chronic pain after a low back pain episode is associated with increased pain sensitivity, altered pain processing mechanisms and the influence of psychosocial factors. Although there is some evidence that multimodal therapy (such as behavioral or motor control therapy) may be an important therapeutic strategy, its long-term effect on pain reduction and psychosocial load is still unclear. Prospective longitudinal designs providing information about the extent of such possible long-term effects are missing. This study aims to investigate the long-term effects of a home-based uni- and multidisciplinary motor control exercise program on low back pain intensity, disability and psychosocial variables. 14 months after completion of a multicenter study comparing uni- and multidisciplinary exercise interventions, a sample from one study center (n = 154) was assessed once more. Participants filled in questionnaires regarding their low back pain symptoms (characteristic pain intensity and related disability), stress and vital exhaustion (short version of the Maastricht Vital Exhaustion Questionnaire), anxiety and depression experiences (the Hospital Anxiety and Depression Scale), and pain-related cognitions (the Fear Avoidance Beliefs Questionnaire). Repeated measures mixed ANCOVAs were calculated to determine the long-term effects of the interventions on characteristic pain intensity and disability as well as on the psychosocial variables. Fifty-four percent of the sub-sample responded to the questionnaires (n = 84). Longitudinal analyses revealed a significant long-term effect of the exercise intervention on pain disability. The multidisciplinary group missed statistical significance yet showed a medium-sized long-term effect. The groups did not differ in their changes in the psychosocial variables of interest. There was evidence of long-term effects of the interventions on pain-related disability, but there was no effect on the other variables of interest.
This may be partially explained by participants' low comorbidity at baseline. The results are important with regard to cost-free home-based alternatives for back pain patients and prevention tasks. Furthermore, this study closes the gap of missing long-term effect analyses in this field.
New cryogels for selective dye removal from aqueous solution were prepared by free radical polymerization from the highly water-soluble crosslinker N,N,N’,N’-tetramethyl-N,N’-bis(2-ethylmethacrylate)-propyl-1,3-diammonium dibromide and the sulfobetaine monomer 2-(N-3-sulfopropyl-N,N-dimethyl ammonium)ethyl methacrylate. The resulting white and opaque cryogels have micrometer-sized pores with a smaller substructure. They adsorb methyl orange (MO) but not methylene blue (MB) from aqueous solution. Mixtures of MO and MB can be separated through selective adsorption of the MO to the cryogels while the MB remains in solution. The resulting cryogels are thus candidates for the removal of hazardous organic substances, as exemplified by MO and MB, from water. The cryogels may also be of interest for the removal of other compounds, such as pharmaceuticals or pesticides, but this must be investigated further.
N-of-1 trials are the gold standard study design to evaluate individual treatment effects and derive personalized treatment strategies. Digital tools have the potential to initiate a new era of N-of-1 trials in terms of scale and scope, but fully functional platforms are not yet available. Here, we present the open source StudyU platform, which includes the StudyU Designer and StudyU app. With the StudyU Designer, scientists are given a collaborative web application to digitally specify, publish, and conduct N-of-1 trials. The StudyU app is a smartphone app with innovative user-centric elements for participants to partake in trials published through the StudyU Designer to assess the effects of different interventions on their health. Thereby, the StudyU platform allows clinicians and researchers worldwide to easily design and conduct digital N-of-1 trials in a safe manner. We envision that StudyU can change the landscape of personalized treatments both for patients and healthy individuals, democratize and personalize evidence generation for self-optimization and medicine, and can be integrated in clinical practice.
In response to the impending spread of COVID-19, universities worldwide abruptly stopped face-to-face teaching and switched to technology-mediated teaching. As a result, the use of technology in the learning processes of students of different disciplines became essential and the only way to teach, communicate and collaborate for months. In this crisis context, we conducted a longitudinal study in four German universities, in which we collected a total of 875 responses from students of information systems and music and arts at four points in time during the spring–summer 2020 semester. Our study focused on (1) the students’ acceptance of technology-mediated learning, (2) any change in this acceptance during the semester and (3) the differences in acceptance between the two disciplines. We applied the Technology Acceptance Model and were able to validate it for the extreme situation of the COVID-19 pandemic. We extended the model with three new variables (time flexibility, learning flexibility and social isolation) that influenced the construct of perceived usefulness. Furthermore, we detected differences between the disciplines and over time. In this paper, we present and discuss our study’s results and derive short- and long-term implications for science and practice.
Strong as a Hippo’s Heart: Biomechanical Hippo Signaling During Zebrafish Cardiac Development
(2021)
The heart comprises multiple tissues that contribute to its physiological functions. During development, the growth of myocardium and endocardium is coupled and morphogenetic processes within these separate tissue layers are integrated. Here, we discuss the roles of mechanosensitive Hippo signaling in growth and morphogenesis of the zebrafish heart. Hippo signaling is involved in defining numbers of cardiac progenitor cells derived from the secondary heart field, in restricting the growth of the epicardium, and in guiding trabeculation and outflow tract formation. Recent work also shows that myocardial chamber dimensions serve as a blueprint for Hippo signaling-dependent growth of the endocardium. Evidently, Hippo pathway components act at the crossroads of various signaling pathways involved in embryonic zebrafish heart development. Elucidating how biomechanical Hippo signaling guides heart morphogenesis has direct implications for our understanding of cardiac physiology and pathophysiology.
This manuscript serves as preparation for the examination for certification in radiation protection for teachers. It covers essential foundations of nuclear physics, in particular the properties of alpha, beta, gamma, neutron and X-ray radiation. A brief description of the effects of radiation on living matter follows. Important sections of the German Radiation Protection Ordinance (Strahlenschutzverordnung) are described. A collection of exercises serves for illustration and practice.
Supernova remnants (SNRs) are discussed as the most promising sources of galactic cosmic rays (CR). The diffusive shock acceleration (DSA) theory predicts particle spectra in rough agreement with observations. Upon closer inspection, however, the photon spectra of observed SNRs indicate that the particle spectra produced at SNR shocks deviate from the standard expectation. This work suggests a viable explanation for a softening of the particle spectra in SNRs. The basic idea is the re-acceleration of particles in the turbulent region immediately downstream of the shock. This thesis shows that the re-acceleration of particles by fast-mode waves in the downstream region can be efficient enough to impact particle spectra over several decades in energy. To demonstrate this, a generic SNR model is presented, in which the evolution of particles is described by the reduced transport equation for CR. It is shown that the resulting particle and the corresponding synchrotron spectra are significantly softer compared to the standard case. Next, this work outlines RATPaC, a code developed to model particle acceleration and corresponding photon emissions in SNRs. RATPaC solves the particle transport equation in test-particle mode using hydrodynamic simulations of the SNR plasma flow. The background magnetic field can be either computed from the induction equation or follows analytic profiles. This work presents an extended version of RATPaC that accounts for stochastic re-acceleration by fast-mode waves, which provide diffusion of particles in momentum space. This version is then applied to model the young historical SNR Tycho. According to radio observations, Tycho’s SNR features a radio spectral index of approximately −0.65. In previous modeling approaches, this fact has been attributed to a pronounced Alfvénic drift assumed to operate in the vicinity of the shock. In this work, the problems and inconsistencies of this scenario are discussed.
Instead, stochastic re-acceleration of electrons in the immediate downstream region of Tycho’s SNR is suggested as the cause of the soft radio spectrum. Furthermore, this work investigates two different scenarios for magnetic-field distributions inside Tycho’s SNR. It is concluded that magnetic-field damping is needed to account for the observed filaments in the radio range. Two models are presented for Tycho’s SNR, both of which feature a strong hadronic contribution. Thus, a purely leptonic model is considered very unlikely. In addition to the detailed modeling of Tycho’s SNR, this dissertation presents a relatively simple one-zone model for the young SNR Cassiopeia A and an interpretation of the recently analyzed VERITAS and Fermi-LAT data. It is shown that the γ-ray emission of Cassiopeia A cannot be explained without a hadronic contribution and that the remnant accelerates protons up to TeV energies. Cassiopeia A is therefore unlikely to be a PeVatron.
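The CR transport equation with momentum diffusion referred to in this abstract can be sketched in its standard (Parker) textbook form; note that this is the generic equation, and the reduced form actually solved in the thesis may differ in detail:

```latex
\frac{\partial f}{\partial t}
= \nabla \cdot \left( D_x \nabla f \right)
- \mathbf{u} \cdot \nabla f
+ \frac{\nabla \cdot \mathbf{u}}{3}\, p \frac{\partial f}{\partial p}
+ \frac{1}{p^2} \frac{\partial}{\partial p}
  \left[ p^2 D_p \frac{\partial f}{\partial p} \right]
+ Q
```

Here f(x, p, t) is the phase-space density of cosmic rays, D_x the spatial and D_p the momentum diffusion coefficient, u the plasma velocity, and Q an injection term. The stochastic re-acceleration by fast-mode waves discussed above enters through the momentum-diffusion term D_p.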
Geomagnetic field modeling using spherical harmonics requires the inversion for hundreds to thousands of parameters. This large-scale problem can be formulated as an optimization problem, in which a global minimum of a certain cost function has to be found. A variety of approaches is known for solving this inverse problem, e.g. derivative-based methods or least-squares methods and their variants. Each of these methods has its own advantages and disadvantages, which affect, for example, the applicability to non-differentiable functions or the runtime of the corresponding algorithm.
In this work, we pursue the goal of finding an algorithm that is faster than the established methods and that is applicable to non-linear problems. Such non-linear problems occur, for example, when estimating Euler angles or when the more robust L_1 norm is applied. Therefore, we investigate the usability of stochastic optimization methods from the CMA-ES family for modeling the geomagnetic field of Earth's core. On the one hand, the basics of core field modeling and its parameterization are discussed using some examples from the literature. On the other hand, the theoretical background of the stochastic methods is provided. A specific CMA-ES algorithm was successfully applied to invert data from the Swarm satellite mission and to derive the core field model EvoMag. The EvoMag model agrees well with established models and observatory data from Niemegk. Finally, we present some observed difficulties and discuss the results of our model.
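The inversion strategy described above can be illustrated with a minimal sketch: a strongly simplified (mu, lambda) evolution strategy standing in for the CMA-ES family (isotropic sampling only, no covariance-matrix adaptation). The toy quadratic "field model", its coefficients, and all parameter choices below are illustrative assumptions, not the EvoMag setup:

```python
import random

random.seed(42)

# Hypothetical model parameters and synthetic observations for the toy
# inversion (NOT real geomagnetic data).
TRUE = [1.5, -0.7, 0.3]
xs = [i / 10.0 for i in range(20)]
data = [TRUE[0] + TRUE[1] * x + TRUE[2] * x * x for x in xs]

def misfit(m):
    """Sum-of-squares cost between model prediction and synthetic data."""
    return sum((m[0] + m[1] * x + m[2] * x * x - d) ** 2
               for x, d in zip(xs, data))

def evolve(n_gen=200, lam=20, mu=5, sigma=0.5):
    """Simplified (mu, lambda) evolution strategy (CMA-ES stand-in)."""
    mean = [0.0, 0.0, 0.0]                 # initial guess
    for _ in range(n_gen):
        # sample lambda offspring isotropically around the current mean
        pop = [[mi + sigma * random.gauss(0.0, 1.0) for mi in mean]
               for _ in range(lam)]
        pop.sort(key=misfit)               # rank offspring by misfit
        elite = pop[:mu]                   # keep the mu best
        mean = [sum(ind[k] for ind in elite) / mu for k in range(3)]
        sigma *= 0.97                      # simple geometric step-size decay
    return mean

model = evolve()
```

A full CMA-ES additionally adapts a covariance matrix and the step size from the selected offspring, which is what makes it effective on the ill-conditioned, high-dimensional misfit landscapes of spherical-harmonic inversions; the derivative-free selection step shown here is also what allows non-differentiable costs such as the L_1 norm.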