Digital inclusion
(2021)
In this thesis, we tackle two social disruptions: recent refugee waves in Germany and the COVID-19 pandemic. We focus on the use of information and communication technology (ICT) as a key means of alleviating these disruptions and promoting social inclusion. As social disruptions typically lead to frustration and fragmentation, it is essential to ensure the social inclusion of individuals and societies during such times.
In the context of the social inclusion of refugees, we focus on the Syrian refugees who arrived in Germany from 2015 onward, as they form a large and coherent refugee community. In particular, we address the role of ICTs in refugees’ social inclusion and investigate how different ICTs (especially smartphones and social networks) can foster refugees’ integration and social inclusion. In the context of the COVID-19 pandemic, we focus on the widespread unconventional working model of work from home (WFH). Our research here centers on the main constructs of WFH and the key differences in WFH experiences based on personal characteristics such as gender and parental status.
We reveal novel insights through four well-established research approaches: literature reviews, mixed methods, qualitative methods, and quantitative methods. The results of our research have been published in the form of eight articles in major information systems venues and journals. Key results from the refugee research stream include the following: Smartphones represent a central component of refugee ICT use; refugees view ICT as a source of information and power; the social connectedness of refugees is strongly correlated with their Internet use; refugees do not rely solely on traditional methods to learn the German language or pursue further education; the ability to use smartphones anytime and anywhere gives refugees an empowering feeling of global connectedness; and ICTs empower refugees on three levels (community participation, sense of control, and self-efficacy).
Key insights from the COVID-19 WFH stream include: Gender and the presence of children under the age of 18 affect workers’ control over their time, technology usefulness, and WFH conflicts, while not affecting their WFH attitudes; and both personal and technology-related factors affect an individual’s attitude toward WFH and their productivity. Further insights are being gathered at the time of submitting this thesis.
This thesis contributes to the discussion within the information systems community regarding how to use different ICT solutions to promote the social inclusion of refugees in their new communities and foster an inclusive society. It also adds to the growing body of research on COVID-19, in particular on the sudden workplace transformation to WFH. The insights gathered in this thesis reveal theoretical implications and future opportunities for research in the field of information systems, practical implications for relevant stakeholders, and social implications related to the refugee crisis and the COVID-19 pandemic that must be addressed.
Since the beginning of the recent global refugee crisis, researchers have been tackling many of its associated aspects, investigating how the crisis can be alleviated, in particular using the capabilities of ICTs. In our research, we investigated refugees' use of ICT solutions to foster the process of social inclusion in the host community. To tackle this topic, we conducted thirteen interviews with Syrian refugees in Germany. Our findings reveal the different ways refugees use ICT and how these uses contribute to their feeling of empowerment. Moreover, we show the sources of empowerment that refugees gain through ICT use. Finally, we identify two types of social inclusion benefits derived from these empowerment sources. Our results provide practical implications for different stakeholders and decision-makers regarding how ICT usage can empower refugees, how this can foster refugees' social inclusion, and what should be considered to support them in their integration efforts.
Vienna
(2021)
This book explores and debates the urban transformations that have taken place in Vienna over the past 30 years and their consequences in policy fields such as labour and housing, political and social participation, and the environment. Historically, European cities have been characterised by a strong association between social cohesion, quality of life, economic ambition and a robust State, and Vienna is an excellent example of this. In more recent years, however, cities have been pressured to change policy principles and mechanisms in the context of demographic shifts, post-industrial transformations and welfare recalibration, which have led to worsened social conditions in many cities. Each chapter in this volume discusses Vienna's responses to these pressures in key policy arenas, looking at outcomes of the context-specific local arrangements. Against a theoretical framework debating the European city as a model of inclusion and social justice, the authors explore the local capacity to innovate urban policies and to address new social risks, while paying attention to potential trade-offs.
The book questions and assesses the city's resilience using time series and an institutional analysis of four key dimensions that characterise the European city model within the context of post-industrial transition: redistribution, recognition, representation and sustainability. It offers a multiscalar perspective of urban governance through labour, housing, participatory and environmental policies, bringing together different levels and public policy types.
This thesis examines whether, and if so to what extent, the Spanish que can function as a discourse marker (DM) alongside its classical uses as a pronoun and conjunction, that is, whether que loses its propositional content in certain contexts and takes on purely discursive functions.
A total of 128 examples of sentence-initial que were examined that could not initially be classified unambiguously as grammatical elements. The examples come from a corpus based on a transcript of the second season of the Netflix series "Élite". The material was analyzed using five criteria derived from the research literature and, depending on whether each criterion was met, assigned to the categories "not pragmaticalized" (NP), "partially pragmaticalized" (TP) and "pragmaticalized" (P). Within each of these categories, the corresponding grammatical or pragmatic function(s) were specified and the results compiled in a grid. For the functional classification within category (P), the DM classification of Martín Zorraquino and Portolés (1999) was used and in some cases further refined.
The analysis classified 89 examples as P, 34 as TP, and five as NP. Of the 89 que classified as P, the majority (84) were described as "comentador", a DM that introduces a comment. In total, 72 que were classified as DMs that introduce an explanatory comment.
This yields an objective classification of que as a DM, which at the same time provides initial insights into the specific functions of que as a DM. The use of concrete criteria for analyzing potential DMs ensures objectivity and contributes to systematizing DM research, a field partly marked by disagreement and divergent interpretations.
The controlled dosage of substances from a device to its environment, such as a tissue or an organ in medical applications, or a reactor, room, machine or ecosystem in technical ones, should ideally match the requirements of the application, e.g. in terms of the time point at which the cargo is released. On-demand dosage systems may enable such a desired release pattern if the device contains suitable features that can translate external signals into a release function. This study is motivated by the opportunities arising from microsystems capable of on-demand release and the contribution that geometrical design may make to realizing such features. The goals of this work included the design, fabrication, characterization and experimental proof-of-concept of geometry-assisted triggerable dosing, (a) with sequential release and (b) in a self-sufficient dosage system. Structure-function relationships were addressed on the molecular and morphological levels and, with particular attention, on the device design level, which is on the micrometer scale. Models and/or computational tools were used to screen the parameter space and provide guidance for experiments.
We investigate how inviting students to set task-based goals affects usage of an online learning platform and course performance. We design and implement a randomized field experiment in a large mandatory economics course with blended learning elements. The low-cost treatment induces students to use the online learning system more often, more intensively, and to begin exam preparation earlier. Treated students perform better in the course than the control group: they are 18.8% (0.20 SD) more likely to pass the exam and earn 6.7% (0.19 SD) more points on the exam. There is no evidence that treated students spend significantly more time; rather, they tend to shift to more productive learning methods. The heterogeneity analysis suggests that higher treatment effects are associated with higher levels of behavioral bias, but also with poor early course behavior.
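The two effect formats reported above (a percentage of the control mean and a standardized difference in SD units) are related by a simple calculation. The sketch below uses invented group statistics, chosen only so the outputs land near the quoted magnitudes; the abstract does not report the underlying means and standard deviations:

```python
# Hypothetical illustration of relative vs. standardized effect sizes.
# All group statistics are invented; only the two output formats
# (percent of control mean, Cohen's d in pooled SDs) mirror the abstract.
import math

def effect_sizes(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    diff = mean_t - mean_c
    # Pooled standard deviation across treatment and control groups
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return diff / mean_c, diff / pooled_sd  # relative effect, Cohen's d

rel, d = effect_sizes(mean_t=64.0, mean_c=60.0, sd_t=21.0, sd_c=21.0,
                      n_t=400, n_c=400)
print(f"relative: {rel:.1%}, Cohen's d: {d:.2f}")  # relative: 6.7%, Cohen's d: 0.19
```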
Mildred Harnack, née Fish, was originally from Milwaukee, Wisconsin. Together with her husband Arvid Harnack she moved to Germany and lived in Berlin from 1930. There the literary scholar taught at the Friedrich-Wilhelms-Universität (today Humboldt-Universität) and at the Berliner Abendgymnasium (today the Peter A. Silbermann-Schule). Soon after Adolf Hitler's seizure of power, a circle of friends had formed around the Harnacks that opposed the rule of the National Socialists. Among them were Karl Behrens and Bodo Schlösinger, both students of Mildred Harnack at the Berliner Abendgymnasium. Through her contacts with the American embassy, Mildred Harnack was able to obtain information for her students that was otherwise inaccessible in National Socialist Germany.
Because of the circle's radio contacts with the Soviet Union, the National Socialists called the group the Rote Kapelle (Red Orchestra): "red" referred to its left-wing orientation, while "Kapelle" (orchestra) alluded to radio operators, who were associated with pianists playing in an orchestra. Until it was crushed by the National Socialists, the Berlin opposition circle comprised up to 150 people from the most varied professions, party-political orientations and denominations. The group produced opposition leaflets and supplied information to the American embassy as well as to the Soviet Union. Like many of her fellow resisters, Mildred Harnack was sentenced to death by the Reichskriegsgericht after her arrest and guillotined in Plötzensee on 16 February 1943.
In this volume, students of the Universität Potsdam together with students of the Peter A. Silbermann-Schule (Berlin) vividly present, after a brief overview of resistance against National Socialism in Germany, the network of the Rote Kapelle and the biographies of Mildred Harnack and her students Karl Behrens and Bodo Schlösinger of the Berliner Abendgymnasium.
With the downscaling of CMOS technologies, the radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become an imperative to address the SET effects in the early phases of the radiation-hard IC design. In general, the soft error mitigation solutions should accommodate both static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should address synergistically three main aspects: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, accuracy of predictive SET models, and efficiency of SET mitigation measures are still critical issues. Therefore, this work addresses the following topics: (i) Characterization and modeling of SET effects in standard combinational cells, (ii) Static mitigation of SET effects in standard combinational cells, and (iii) Online particle detection, as a support for dynamic soft error mitigation.
Since standard digital cell libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for SPICE-based standard cell characterization with a reduced number of simulations, improved SET models and an optimized SET sensitivity database. It has been shown that the inherent similarities in the SET response of logic cells for different input levels can be utilized to reduce the number of required simulations. Based on the characterization results, fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they express explicitly the dependence of the SET sensitivity of individual combinational cells on design, operating and irradiation parameters. In contrast to state-of-the-art characterization methodologies, which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes the use of LUTs for storing the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way, the amount of characterization data in the SET sensitivity database is reduced significantly.
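The proposed shift from raw-result LUTs to stored fitting coefficients can be illustrated with a minimal sketch. The linear model, cell name and data below are invented for illustration and are not the thesis's actual model forms:

```python
# Sketch: instead of storing every simulated (charge, pulse-width) pair
# in a look-up table, store only the coefficients of a fitted model and
# evaluate it on demand. Model form, cell name and data are invented.

def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical simulated SET pulse widths (ps) vs. deposited charge (fC)
charge = [10.0, 20.0, 30.0, 40.0, 50.0]
width = [55.0, 105.0, 155.0, 205.0, 255.0]

a, b = fit_linear(charge, width)
# The database entry for this cell is now two coefficients, not a LUT:
cell_db = {"INV_X1": (a, b)}

def predict_width(cell, q):
    a, b = cell_db[cell]
    return a * q + b

print(predict_width("INV_X1", 25.0))  # evaluates the fit at an unsimulated point
```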
The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques. As a result, significant improvement of the overall SER can be achieved with minimal area, delay and power overheads. For SET mitigation in standard cells, it is essential to employ techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By inserting two decoupling cells at the output of a target cell, the critical charge of the cell’s output node is increased and the attenuation of short SETs is enhanced. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as there is no single gate-level mitigation technique with optimal performance, a combination of multiple techniques is required. This work introduces a comprehensive characterization of gate-level mitigation techniques aimed at quantifying their impact on SET robustness improvement, as well as the area, delay and power overheads they introduce per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the effort required in subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can be utilized as a guideline for selecting the most appropriate mitigation solution for a given design.
As a support for dynamic soft error mitigation techniques, it is important to enable online detection of the energetic particles causing the soft errors. This allows the power-greedy fault-tolerant configurations based on N-modular redundancy to be activated only at high radiation levels. To enable such functionality, it is necessary to monitor both the particle flux and the variation of particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse stretching inverters is proposed. Employing pulse stretching inverters connected in parallel makes it possible to measure the particle flux in terms of the number of detected SETs, while particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires purely digital processing logic, in contrast to standard detectors, which require complex mixed-signal processing. Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are low detection latency, low power consumption, and immunity to error accumulation.
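A minimal sketch of the readout logic described above, with invented thresholds and units: the flux follows from the SET count per time window, and a shift of the pulse-width distribution serves as a proxy for LET variation, together gating the power-hungry redundant mode:

```python
# Sketch of the monitoring idea: count detected SETs per window for a
# flux estimate, track mean pulse width as an LET proxy, and enable
# N-modular redundancy only when either exceeds a threshold.
# All thresholds, units and values are invented for illustration.
from statistics import mean

def analyze(pulse_widths_ps, window_s, high_flux=100.0, high_let_ps=300.0):
    flux = len(pulse_widths_ps) / window_s  # detected SETs per second
    mean_width = mean(pulse_widths_ps) if pulse_widths_ps else 0.0
    return {
        "flux": flux,
        "mean_width_ps": mean_width,
        # Activate the power-greedy fault-tolerant mode only when needed:
        "enable_nmr": flux > high_flux or mean_width > high_let_ps,
    }

result = analyze([120.0, 150.0, 450.0, 500.0], window_s=0.01)
print(result)
```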
The results achieved in this thesis can serve as a basis for the establishment of an overall soft-error-aware database for a given digital library, and for a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. The next step will be to validate the achieved results with irradiation experiments.
The experimental literature on antitrust enforcement provides robust evidence that communication plays an important role in the formation and stability of cartels. We extend these studies through a design that distinguishes between innocuous communication and communication about a cartel, sanctioning only the latter. To this end, we introduce a participant in the role of the competition authority, who is properly incentivized to judge the communication content and price-setting behavior of the firms. Using this novel design, we revisit the question of whether a leniency rule successfully destabilizes cartels. In contrast to existing experimental studies, we find that a leniency rule does not affect cartelization. We discuss potential explanations for this contrasting result.
Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14th, 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information about how black holes and neutron stars are formed, what neutron stars are composed of, and how the Universe expands, and they allow testing general relativity in the highly dynamical strong-field regime. It is the goal of GW astrophysics to extract such information as accurately as possible. Yet this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key aspect of GW searches are waveform models, which encapsulate our best predictions for the gravitational radiation under a certain set of parameters and need to be cross-correlated with data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might be the key to answering the fundamental questions of GW astrophysics. The continuous improvements of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, and the development of the Laser Interferometer Space Antenna (LISA) all demand accurate waveform models. While available models suffice to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, those for sources from both current and next-generation ground-based and spaceborne detectors must be accurate enough to detect binaries with large spins and asymmetry in the masses. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signals’ parameters due to the presence of a foreground of many sources that overlap in the frequency band.
This is recognized as one of the biggest challenges for the analysis of future-detectors’ data, since biases might hinder the extraction of important astrophysical and cosmological information from future detectors’ data. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and asymmetry in the masses. In the second, we present the first generic metrics that have been proposed to predict biases in the presence of a foreground of many overlapping signals in GW data.
For the first task, we will focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN, weak-field, small velocities) approximation that is most natural for the bound orbits that are routinely detected in GW searches. However, two other approximations have risen to prominence: the post-Minkowskian (PM, weak-field only) approximation, natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation, typical of binaries in which the mass of one body is much bigger than the other's. These are most appropriate to binaries with high asymmetry in the masses that challenge current waveform models. Moreover, they allow one to “cover” regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. The analytical approximations to the relativistic two-body problem can synergically be included within the effective-one-body (EOB) formalism, in which the two-body information from each approximation can be recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resultant models can cover both the low-spin comparable-mass binaries that are routinely detected, and the ones that challenge current models. The first part of this thesis is dedicated to a study of how to best incorporate information from the PN, PM, SMR and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are, as compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We will show that PM information has the potential to improve currently-employed models for LIGO and Virgo, especially if recast within the EOB formalism.
This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of PM and SMR approximations can be employed to access previously-unknown PN orders, deriving the third subleading PN dynamics for spin-orbit and (aligned) spin1-spin2 couplings. Such new results can then be included in the EOB models currently used in GW searches and parameter estimation studies, thereby improving them when the binaries have high spins. Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than the PN one as usually done). We show how this is done in detail without incurring the divergences that had affected previous attempts, and compare the resultant model against NR simulations. We find that the SMR approximation is an excellent approximation for all (quasi-circular nonspinning) binaries, including both the equal-mass binaries that are routinely detected in GW searches and the ones with highly asymmetric masses. In particular, the SMR-based models compare much better than the PN models, suggesting that SMR-informed EOB models might be the key to model binaries in the future. In the second task of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out. We illustrate the formalism with simple (yet realistic) LISA sources, and demonstrate its validity against Monte-Carlo simulations. The metrics we describe pave the way for more realistic studies to quantify the biases with future ground-based and spaceborne detectors.
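For orientation, the EOB construction referred to above can be summarized by its standard energy map, a textbook result quoted here rather than taken from this thesis: the real two-body Hamiltonian is obtained from the effective Hamiltonian of a test mass in a deformed Schwarzschild (or Kerr) background via

```latex
% Standard EOB energy map (Buonanno-Damour form), quoted for orientation:
\[
  H_{\mathrm{EOB}} \;=\; M \sqrt{\,1 + 2\nu \left( \frac{H_{\mathrm{eff}}}{\mu} - 1 \right)}\,,
  \qquad
  M = m_1 + m_2, \quad \mu = \frac{m_1 m_2}{M}, \quad \nu = \frac{\mu}{M}.
\]
```

Information from the PN, PM and SMR approximations enters through the potentials that deform the effective background and through $H_{\mathrm{eff}}$ itself.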
The framework concept of the Universitätsschule Potsdam describes the foundational values and the pedagogical-didactic and scientific basis of a Universitätsschule Potsdam yet to be founded. Like other university schools, this school is to be characterized by a close, institutionalized relationship between school and university that supports a continuous transfer of knowledge between school practice, research, teacher education and school administration. The framework concept lays the foundations for an inclusive school whose students represent a cross-section of society and which offers all school-leaving qualifications of the state of Brandenburg through educational programs sensitive to inequality. The Universitätsschule is intended to counteract the strong segregation processes in Potsdam.
The mission statement presents the core values (sustainability, inclusion and educational equity, human rights and democracy, community, holism) and the educational goals (transfer ability, critical-reflexive thinking and lifelong learning, diversity awareness and transculturality, self-competence and relational competence, cultural techniques and digital competence) of the Universitätsschule. The pedagogical concept illustrates how values and educational goals can be implemented in the areas of school type, school culture, learning culture, and learning places and environments. Finally, the Universitätsschule is described as a learning and teaching institution that serves as a site of transfer for educational innovations. To this end, a transfer workshop is to be anchored in the school to support and shape the exchange of knowledge among school-relevant actors.
In recent years, the didactics of civic education has increasingly engaged with the use of narratives in civics lessons, since alongside non-fiction texts, fiction also offers an opportunity to engage with political topics. The literature of Ferdinand von Schirach in particular has found growing resonance in society in recent years. Von Schirach's texts take up socially critical topics, illuminate them from different perspectives, and challenge readers to form their own opinions. For this reason, von Schirach's narratives hold great potential for civic education. Civic education also includes legal education. Der Fall Collini by Ferdinand von Schirach engages with both legal and political topics in the sense of legal education. This master's thesis investigates to what extent the novel Der Fall Collini by Ferdinand von Schirach, as a narrative, represents an opportunity for political-legal learning in civics lessons. To answer the research question, the learning opportunities and limits of the novel are worked out with respect to its subject matter and genre as well as the competencies it fosters, and the interdisciplinary connections it enables are set out. By engaging with von Schirach's work, students deal with political-legal topics such as the tension between law and justice, the course of criminal proceedings, and the theoretical claims of the rule of law and its real-world weaknesses. Engaging with the novel Der Fall Collini also fosters the four subject-specific competencies of civic education, as well as multiperspectivity and exemplary learning.
Furthermore, the novel interweaves historical, political-legal and moral-ethical aspects, allowing interdisciplinary connections to the subjects history, German and L-E-R. Moreover, as a narrative, the courtroom novel also appeals to its readers emotionally and thus promotes a holistic and lasting transfer of knowledge in the sense of legal education. It emerges that Der Fall Collini by Ferdinand von Schirach is particularly well suited for classroom engagement within civic education.
This work presents a multidisciplinary investigation combining methods of tectonic geomorphology with geophysical and structural studies, focused mainly on the neotectonic characterization of both flanks of the Sierra de La Candelaria and the southern end of the Metán basin. The study area is located in the border region between the provinces of Salta and Tucumán and belongs to the Santa Bárbara System geological province.
The main objective was to contextualize the evidence of Quaternary tectonic activity in the region by proposing a novel structural model, with the aim of expanding the available information on neotectonic structures and their seismogenic potential. To this end, various techniques were applied and integrated, such as the interpretation of seismic reflection lines, the construction of balanced structural cross-sections, and shallow geophysical methods, in order to verify the behavior at depth of both the geological structures identified at the surface and the possible crustal blind faults involved.
First, a regional survey of the study area was carried out using LANDSAT and SENTINEL 2 multispectral satellite images, which made it possible to recognize different levels of Quaternary alluvial fans and fluvial terraces. By determining various morphometric indicators on digital elevation models (DEMs), together with field observations, it was possible to identify evidence of deformation on these Quaternary levels that has been genetically related to four neotectonic faults. Three of them (the Arias, El Quemado and Copo Quile faults) were selected for more detailed study through shallow geophysical methods (electrical resistivity tomography (ERT) and seismic refraction tomography (SRT)), which made it possible to confirm their existence at depth, draw geometric and kinematic inferences, and estimate the magnitude of recent deformation. The Arias and El Quemado faults were interpreted as reverse faults related to interstratal flexural slip, while the Copo Quile fault was interpreted as a low-angle blind reverse fault. A joint interpretation of seismic reflection lines and exploratory wells from hydrocarbon areas of the Choromoro and Metán basins was also carried out in order to place the main recognized structures in the regional stratigraphic and tectonic framework. All the information was integrated into a balanced structural cross-section using kinematic modeling techniques. This model suggests that the recognized Quaternary deformation is related to displacement of the basement along a blind thrust responsible for the uplift of the Sierra de La Candelaria and Cerro Cantero. The kinematic model also allows the approximate location of the main detachment levels controlling the deformation style to be interpreted.
The shallowest detachment level, which controls the deformation of the sedimentary cover, lies at a depth of 4 km; at 21 km, the presence of another subhorizontal shear zone within the basement is inferred.
Finally, by integrating all the results obtained, the seismogenic potential of the faults in the study area was evaluated. The first-order faults that control the deformation in the zone are responsible for large earthquakes, whereas the Quaternary flexural-slip and reverse faults affect only the sedimentary cover and would be second-order structures that accommodate deformation and were activated during the Quaternary with aseismic and/or very low magnitude seismic movements.
These results suggest that the La Candelaria thrust constitutes a significant potential seismogenic source for the region, where numerous settlements and major civil works are located. Furthermore, the balanced structural cross-section implies the presence of other blind faults of different orders of magnitude that could be additional deep seismogenic sources, underscoring the need to continue this type of study in this tectonically active region.
Polymeric semiconductors are strong contenders for replacing traditional inorganic semiconductors in electronic applications requiring low power, low cost and flexibility, such as biosensors, flexible solar cells and electronic displays. Molecular doping has the potential to enable this revolution by improving the conductivity and charge transport properties of this class of materials. Despite decades of research in this field, gaps in our understanding of the nature of dopant–polymer interactions have resulted in limited commercialization of this technology. This work aims at providing a deeper insight into the underlying mechanisms of molecular p-doping of semiconducting polymers in solution and in the solid state, thereby bringing the scientific community closer to realizing the dream of making organic semiconductors commonplace in the electronics industry. The roles of (1) dopant size and shape, (2) polymer chain aggregation and (3) charge delocalization in the doping mechanism and efficiency are addressed using optical (UV-Vis-NIR) and electron paramagnetic resonance (EPR) spectroscopies. By conducting a comprehensive study of the nature and concentration of the doping-induced species in solutions of the polymer poly(3-hexylthiophene) (P3HT) with three different dopants, we identify the unique optical signatures of the delocalized polaron, localized polaron and charge-transfer complex, and report their extinction coefficient values. Furthermore, with X-ray diffraction, atomic force microscopy and electrical conductivity measurements, we study the impact of processing technique and doping mechanism on the morphology and, thereby, on charge transport through the doped films.
This work demonstrates that the doping mechanism and type of doping-induced species formed are strongly influenced by the polymer backbone arrangement rather than dopant shape/size. The ability of the polymer chain to aggregate is found to be crucial for efficient charge transfer (ionization) and polaron delocalization. At the same time, our results suggest that the high ionization efficiency of a dopant–polymer system in solution may subsequently hinder efficient charge transport in the solid-state due to the reduction in the fraction of tie chains, which enable charges to move efficiently between aggregated domains in the films. This study demonstrates the complex multifaceted nature of polymer doping while providing important hints for the future design of dopant-host systems and film fabrication techniques.
Mental health problems are highly prevalent worldwide. Fortunately, psychotherapy has proven highly effective in the treatment of a number of mental health issues, such as depression and anxiety disorders. In contrast, psychotherapy training as currently practised cannot be considered evidence-based. Thus, there is much room for improvement. The integration of simulated patients (SPs) into psychotherapy training and research is on the rise. SPs originate from medical education and have been demonstrated, in a number of studies, to contribute to effective learning environments. Nevertheless, criticism has been voiced regarding the authenticity of SP portrayals, yet few studies have examined this to date.
Based on these considerations, this dissertation explores SPs’ authenticity while portraying a mental disorder, depression. Altogether, the present cumulative dissertation consists of three empirical papers. At the time of printing, Paper I and Paper III have been accepted for publication, and Paper II is under review after a minor revision.
First, Paper I develops and validates an observer-based rating scale to assess SP authenticity in psychotherapeutic contexts. Based on the preliminary findings, it can be concluded that the Authenticity of Patient Demonstrations scale is a reliable and valid tool that can be used for recruiting, training, and evaluating the authenticity of SPs.
Second, Paper II tests whether student SPs are perceived as more authentic after receiving an in-depth role-script compared to SPs who only receive basic information on the patient case. To test this assumption, a randomised controlled study design was implemented, and the hypothesis was confirmed. Consequently, when engaging SPs, an in-depth role-script with details, e.g. on the patient's nonverbal behaviour and feelings, should be provided.
Third, Paper III demonstrates that psychotherapy trainees cannot distinguish between trained SPs and real patients and therefore suggests that, with proper training, SPs are a promising training method for psychotherapy.
Altogether, the dissertation shows that SPs can be trained to portray a depressive patient authentically and thus delivers promising evidence for the further dissemination of SPs.
Precipitation forecasting has an important place in everyday life – over the course of a day we may have many small conversations about the likelihood that it will rain this evening or at the weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? The answer will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
There are two major components comprising a system for precipitation nowcasting: radar-based precipitation estimates, and models to extrapolate that precipitation into the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for nowcast error diagnosis and isolation that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
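The tracking-and-advection idea behind such benchmark models can be illustrated with a minimal sketch. This is a toy constant-vector, backward (semi-Lagrangian) extrapolation, not the actual rainymotion implementation; the grid, motion vector and function name are illustrative:

```python
def advect(field, u, v, steps=1):
    """Extrapolate a 2D precipitation field by shifting every cell
    along a constant motion vector (u, v), in grid cells per step.
    Cells advected in from outside the domain are filled with 0."""
    rows, cols = len(field), len(field[0])
    for _ in range(steps):
        shifted = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                si, sj = i - v, j - u  # backward lookup along the motion vector
                if 0 <= si < rows and 0 <= sj < cols:
                    shifted[i][j] = field[si][sj]
        field = shifted
    return field

# A single rain cell at row 1, column 1, moving one cell east per step:
rain = [[0.0, 0.0, 0.0],
        [0.0, 2.5, 0.0],
        [0.0, 0.0, 0.0]]
nowcast = advect(rain, u=1, v=0, steps=1)
# the cell now sits at row 1, column 2
```

In the real models, the motion field comes from an optical flow algorithm and varies per pixel; the sketch only shows the extrapolation step that turns a motion estimate into a nowcast.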
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The high benefit of using "big data" for training is among the main reasons for this. Hence, the emerging interest in deep learning in the atmospheric sciences coincides with, and is driven by, the increasing availability of data – both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
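The quoted figure is easy to verify from the numbers given later in the thesis summary (a 900 km × 900 km domain at 1 km resolution, one composite every 5 minutes):

```python
grid_points = 900 * 900               # 1 km grid over a 900 km x 900 km domain
composites_per_year = 365 * 24 * 12   # one composite every 5 minutes
total = grid_points * composites_per_year
print(total)  # 85147200000, i.e. roughly 85 billion data points per year
```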
To explore this potential, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
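The two routine scores can be stated compactly. Below is a generic sketch of MAE and of CSI at a fixed intensity threshold (CSI = hits / (hits + misses + false alarms)), not code from the thesis; the sample values are made up for illustration:

```python
def mae(obs, pred):
    """Mean absolute error over paired observed/predicted intensities."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def csi(obs, pred, thr):
    """Critical success index for exceedance of an intensity threshold (mm/h)."""
    hits = misses = false_alarms = 0
    for o, p in zip(obs, pred):
        if o >= thr and p >= thr:
            hits += 1
        elif o >= thr:
            misses += 1
        elif p >= thr:
            false_alarms += 1
    return hits / (hits + misses + false_alarms)

obs  = [0.0, 0.2, 1.5, 6.0]
pred = [0.0, 1.2, 1.1, 4.0]
# at thr = 1 mm/h: 2 hits, 0 misses, 1 false alarm
print(csi(obs, pred, thr=1.0))  # 2/3, roughly 0.667
```

Raising the threshold (e.g. to 10 or 15 mm/h) leaves only the rare intense pixels in the score, which is why a model that smooths the field, like the recursively applied RainNet, loses CSI there first.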
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
The model development, together with the verification experiments for both conventional and deep learning model predictions, also revealed the need to better understand the sources of forecast errors. Understanding the dominant sources of error in specific situations should help to guide further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
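Under this framework, the location error at each lead time reduces to a single distance computation. A minimal sketch follows; the tracks and coordinates are made up for illustration, and `location_error` is an illustrative name, not the framework's API:

```python
from math import dist  # Euclidean distance, Python >= 3.8

def location_error(observed_track, predicted_track):
    """Euclidean distance between observed and predicted feature
    positions at each lead time; positions are (x, y) in km."""
    return [dist(o, p) for o, p in zip(observed_track, predicted_track)]

# A feature observed to move along x at 4 km per step; the model
# predicts slightly slower motion, so the error grows with lead time:
observed  = [(0.0, 0.0), (4.0, 0.0), (8.0, 0.0)]
predicted = [(0.0, 0.0), (3.0, 0.0), (6.0, 0.0)]
print(location_error(observed, predicted))  # [0.0, 1.0, 2.0]
```

The observed track plays the role of ground truth, so the same computation can score any extrapolation model that outputs predicted feature positions.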
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites from the DWD. We evaluated the performance of four extrapolation models: two are based on the linear extrapolation of corner motion, while the remaining two are based on the Dense Inverse Search (DIS) method, in which motion vectors obtained from DIS are used to predict feature locations by linear and semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5% of the forecasts have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: their order of magnitude is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – as a result of location errors alone – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development targeted towards its minimization. To that end, we also consider the high potential of deep learning architectures designed for the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
Manganese (Mn) and zinc (Zn) are not only essential trace elements, but also potential exogenous risk factors for various diseases. Since the disturbed homeostasis of single metals can result in detrimental health effects, concerns have emerged regarding the consequences of excessive exposures to multiple metals, either via nutritional supplementation or parenteral nutrition. This study focuses on Mn-Zn-interactions in the nematode Caenorhabditis elegans (C. elegans) model, taking into account aspects related to aging and age-dependent neurodegeneration.
The central element of this work is the synthesis and characterization of practically usable ionogels. The polymer ionogels are based on the model polymer poly(methyl methacrylate). Ionic liquids derived from the widely used imidazolium cation serve as additives. The properties of the embedded ionic liquids provide the functionality of the ionogels. The functionality of the respective gels, and thus the transfer of the properties of the ionic liquids to the ionogels, was verified and confirmed in this work using numerous characterization techniques. Through ionogel formation, macroscopic ionogel objects in the form of films and nonwoven mats were produced. Film casting and electrospinning were employed to produce these films and mats, each yielding a model system. The present work is therefore organized into the two thematic areas "electrically semiconducting ionogel films" and "antimicrobially active ionogel nonwovens". The use of triiodide-containing ionic liquids and a polymer matrix in a discontinuous casting process results in electrically semiconducting ionogel films. These flexible and transparent films could become the focus of numerous new fields of application in flexible electronics. Electrospinning poly(methyl methacrylate) with an ionic liquid yielded a homogeneous ionogel nonwoven, which serves as a model for transferring the antimicrobially active properties of ionic liquids to porous structures for filtration. At the same time, it is the first example of a copper chloride-containing ionogel. Ionogels are attractive materials with numerous potential applications. The present work extends the spectrum of ionogels by an electrically semiconducting and an antimicrobially active ionogel.
In addition, this work contributed to the class of ionic liquids three examples of electrically semiconducting ionic liquids as well as numerous copper(II) chloride-based ionic liquids.
“Embodied Practices – Looking From Small Places” is an edited transcript of a conversation between theatre and performance scholar Sruti Bala (University of Amsterdam) and sociologist, criminologist and anthropologist Dylan Kerrigan (University of Leicester) that took place as an online event in November 2020. Throughout their talk, Bala and Kerrigan engage with the legacy of Haitian anthropologist Michel-Rolph Trouillot. Specifically, they focus on his approach of looking from small units, such as small villages in Dominica, outwards to larger political structures such as global capitalism, social inequalities and the distribution of power. They also share insights from their own research on embodied practices in the Caribbean, Europe and India and answer questions such as: What can research on and through embodied practices tell us about systems of power and domination that move between the local and the global? How can performance practices which are informed by multiple locations and cultures be read and appreciated adequately? Sharing insights from his research into Guyanese prisons, Kerrigan outlines how he aims to connect everyday experiences and struggles of Caribbean people to trans-historical and transnational processes such as racial capitalism and post/coloniality. Furthermore, he elaborates on how he uses performance practices such as spoken word poetry and data verbalisation to connect with systematically excluded groups. Bala challenges naïve notions about the inherent transformative potential of performance in her research on performance and translation. She points to the way in which performance and its reception is always already inscribed in what she calls global or planetary asymmetries. At the conclusion of this conversation, they broach the question: are small places truly as small as they seem?
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, who want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they are representing as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would benefit greatly from interactions for exploring the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
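The core idea of the report, one dot per individual with a bidirectional mapping between dots and records, can be sketched in a few lines. The class and method names below are illustrative, not from the report's Lively4 code, and the records are invented:

```python
class DotPlot:
    """Each data record owns exactly one dot. Selection works in both
    directions: from a clicked dot back to its record, and from a
    filtered set of records to the dot indices to highlight."""

    def __init__(self, records):
        self.records = records
        # The bidirectional mapping here is simply the shared index:
        # dot i renders record i, and record i is drawn as dot i.
        self.dot_to_record = {i: r for i, r in enumerate(records)}

    def record_at(self, dot_index):
        """Dot -> data: what a click handler on a dot would resolve."""
        return self.dot_to_record[dot_index]

    def dots_where(self, predicate):
        """Data -> dots: indices of dots to highlight for a filter."""
        return [i for i, r in enumerate(self.records) if predicate(r)]

plot = DotPlot([{"age": 23}, {"age": 41}, {"age": 35}])
clicked = plot.record_at(1)
highlighted = plot.dots_where(lambda r: r["age"] > 30)
```

An index-based mapping is the cheapest of the strategies such a system can use; the trade-off, as the report discusses, is that the mapping must be kept consistent when the underlying data is grouped or filtered.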
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
Cyanobacteria are an abundant bacterial group found in a variety of ecological niches around the globe. When they form blooms, they can pose a real threat to fish and mammals and restrict the use of lakes and rivers for recreational purposes or as a source of drinking water. One of the most abundant bloom-forming cyanobacteria is Microcystis aeruginosa.
In the first part of the study, the role and possible dynamics of RubisCO in M. aeruginosa during high-light irradiation were examined. Its response was analyzed on the protein and peptide level via immunoblotting, immunofluorescence microscopy and high-performance liquid chromatography (HPLC). Under the applied high-light stress, large amounts of RubisCO were located outside of carboxysomes, aggregating mainly beneath the cytoplasmic membrane, where RubisCO appears to form a putative Calvin-Benson-Bassham (CBB) supercomplex together with other photosynthetic enzymes. This complex could be part of an alternative carbon-concentrating mechanism (CCM) in M. aeruginosa that enables a faster, more energy-efficient adaptation of the whole bloom to high-light stress.
Furthermore, the re-localization of RubisCO was delayed in the microcystin-deficient mutant ΔmcyB, and RubisCO was more evenly distributed over the cell compared to the wild type. Since the growth of ΔmcyB is not impaired, other cyanopeptides produced by the strain, such as aeruginosin or cyanopeptolin, may also play a role in stabilizing RubisCO and the putative CBB complex, especially in the microcystin-free mutant.
In the second part of this work, the possible role of microcystin as an extracellular signaling peptide during the diurnal cycle was studied. HPLC analysis showed a strong increase of extracellular microcystin in the wild type when the population entered nighttime, which continued into the next day. Together with the increase of extracellular microcystin, a strong decrease of protein-bound intracellular microcystin was observed via immunoblot analysis. Interestingly, the signal of the large subunit of RubisCO (RbcL) also diminished when high amounts of microcystin were present in the surrounding medium. Microcystin addition experiments with M. aeruginosa WT and ΔmcyB cultures support this observation, since the immunoblot signals of both RubisCO subunits and of CcmK, a shell protein of carboxysomes, diminished after the addition of microcystin. In addition, the fluctuation of cyanopeptolin during the diurnal cycle indicates a more prominent role of cyanopeptides other than microcystin as signaling peptides, both intracellularly and extracellularly.
Frailty assessment is recommended before elective transcatheter aortic valve implantation (TAVI) to determine post-interventional prognosis. Several studies have investigated frailty in TAVI patients using numerous assessments; however, it remains unclear which is the most appropriate tool for clinical practice. Therefore, we evaluated which frailty assessments are most commonly used and most meaningful for ≤30-day and ≥1-year prognosis in TAVI patients. Randomized controlled or observational studies (prospective/retrospective) investigating all-cause mortality in older (≥70 years) TAVI patients were identified (PubMed; May 2020). In total, 79 studies investigating frailty with 49 different assessments were included. As single markers of frailty, mostly gait speed (23 studies) and serum albumin (16 studies) were used. Higher risk of 1-year mortality was predicted by slower gait speed (highest Hazard Ratio (HR): 14.71; 95% confidence interval (CI) 6.50–33.30) and lower serum albumin level (highest HR: 3.12; 95% CI 1.80–5.42). Composite indices (five items; seven studies) were associated with 30-day (highest Odds Ratio (OR): 15.30; 95% CI 2.71–86.10) and 1-year mortality (highest OR: 2.75; 95% CI 1.55–4.87). In conclusion, single markers of frailty, in particular gait speed, were widely used to predict 1-year mortality. Composite indices were appropriate, as was a comprehensive assessment of frailty.
“Chunking” spoken language
(2021)
In this introductory paper to the special issue on “Weak cesuras in talk-in-interaction”, we aim to guide the reader into current work on the “chunking” of naturally occurring talk. It is conducted in the methodological frameworks of Conversation Analysis and Interactional Linguistics – two approaches that consider the interactional aspect of humans talking with each other to be a crucial starting point for its analysis. In doing so, we will (1) lay out the background of this special issue (what is problematic about “chunking” talk-in-interaction, the characteristics of the methodological approach chosen by the contributors, the cesura model), (2) highlight what can be gained from such a revised understanding of “chunking” in talk-in-interaction by referring to previous work with this model as well as the findings of the contributions to this special issue, and (3) indicate further directions such work could take starting from papers in this special issue. We hope to induce a fruitful exchange on the phenomena discussed, across methodological divides.
The life cycle of higher plants is based on recurring phases of growth and development built on repetitive sequences of cell division, cell expansion and cell differentiation. This dissertation deals with two projects, each investigating a different topic related to cell expansion. The first project examines an Arabidopsis thaliana mutant exhibiting overall cell enlargement, and the second project analyses two naturally occurring floral morphs of Amsinckia spectabilis (Boraginaceae) differing (amongst others) in style length and anther height due to differences in longitudinal cell elongation. The EMS-mutant eop1 was shown to exhibit a petal size increase of 26% caused by cell enlargement. Further phenotypes were detected, such as cotyledon size increase (based on larger cells) as well as increased carpel, sepal, leaf and pollen sizes. Plant height was increased, and more highly branched trichomes explained the hairy eop1 phenotype. Fine mapping revealed the causal SNP to be a C-to-T transition at the last nucleotide of intron 7 of the INCURVATA11 (ICU11) gene, a 2-oxoglutarate/Fe(II)-dependent dioxygenase, causing missplicing of the mRNA. Two T-DNA insertion lines (icu11-2 and icu11-4) confirmed ICU11 as the causal gene by exhibiting increased petal size. A comparison of three icu11 alleles, which carried different mutation-related changes, either overexpressing ICU11 or producing modified mRNAs, formed the basis for investigating the molecular mechanism underlying the observed phenotype. Different approaches revealed contradictory results regarding ICU11 protein functionality in the icu11 mutants. A complementation assay showed the three mutants to be interchangeable, and ICU11 overexpression in the wild type led to an icu11-like phenotype, arguing for all three icu11 mutants being GOF mutants. Contradicting this conclusion, the icu11-4 line could be rescued by a genomic ICU11 transgene.
A model was proposed based on the assumption that overexpression of ICU11 inhibits the function of the protein, thus causing the same effect as a LOF protein. Furthermore, icu11-3 (eop1) mutants showed increased resistance towards paclobutrazol, a gibberellin (GA) inhibitor, and an upregulation of AtGA20ox2, a main GA biosynthesis gene. Additionally, ICU11 was found to localize to the cytoplasm, supporting the assumption that ICU11 affects GA biosynthesis and overall GA levels, possibly explaining the observed (GA-overdose) phenotype.
The second project aimed to identify the genetic basis of the S-locus in Amsinckia spectabilis, as the genus Amsinckia exhibits characteristics atypical for a heterostylous species, such as no obvious self-incompatibility (SI) and repeated transitions towards homostylous, fully selfing variants. The work was based on three forms of Amsinckia spectabilis: a heterostylous form consisting of two floral morphs with reciprocal positioning of the sexual organs (S-morph: high anthers and a short style; L-morph: low anthers and a long style), and two homostylous forms, one large-flowered and partially selfing, the other small-flowered and fully selfing. The maintenance of the two floral morphs is genetically based on the S-locus region, which contains the genes encoding the morph-specific traits and is marked by tight linkage due to suppressed recombination. Natural populations show a 1:1 S:L morph ratio, which can be explained by predominantly disassortative mating of the two morphs, so that the dominant S-allele occurs only in the heterozygous state (heterozygous (Ss) for the S-morph and homozygous recessive (ss) for the L-morph). Investigation of morph-specific phenotypes revealed 56% longer L-morph styles and 58% higher positioned S-morph anthers. Approximately 50% of the observed size differences were explained by increased cell elongation. Moreover, additional phenotypes were found, such as 21% larger S-morph pollen and no obvious SI, the latter confirmed by seed counts after hand pollination, in vivo pollen tube growth and the development of homozygous dominant SS individuals via selfing. The S-locus of Amsinckia spectabilis was assumed to consist of at least the G- (style length), A- (anther height) and P- (pollen size) loci.
Comparative transcriptomics of the two morphs revealed 22 differentially expressed markers located within two contigs of a PacBio genome assembly of an SS individual, allowing the S-locus to be delimited to a region of approximately 23 Mb. In contrast to other S-loci characterized within the plant kingdom, no strong evidence was found that a hemizygous region causes the suppressed recombination at the S-locus, so an inversion was assumed to be the causal mechanism.
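The genetic logic behind the stable 1:1 morph ratio can be made concrete with a short sketch. The following Python snippet (an illustration, not part of the thesis) enumerates the offspring of the only cross that strict disassortative mating permits:

```python
from collections import Counter

def offspring_ratio(cross=("Ss", "ss")):
    """All matings are disassortative: S-morph (Ss) x L-morph (ss).
    Enumerate the offspring genotypes of such a cross."""
    parent1, parent2 = cross
    counts = Counter(
        "".join(sorted(a + b))  # normalize e.g. 'sS' -> 'Ss'
        for a in parent1
        for b in parent2
    )
    total = sum(counts.values())
    return {genotype: n / total for genotype, n in counts.items()}

ratio = offspring_ratio()
# An Ss x ss cross yields 1/2 Ss (S-morph) and 1/2 ss (L-morph), so
# strict disassortative mating regenerates the 1:1 morph ratio every
# generation and keeps the dominant S allele heterozygous.
```

This also makes clear why selfing is required to obtain the homozygous dominant SS individuals mentioned above: disassortative crosses alone can never produce them.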
In recent years, computer vision algorithms based on machine learning have seen rapid development. In the past, research mostly focused on solving computer vision problems such as image classification or object detection on images displaying natural scenes. Nowadays, other fields, such as cultural heritage, where an abundance of data is available, are also coming into the focus of research. In line with current research endeavours, we collaborated with the Getty Research Institute, which provided us with a challenging dataset containing images of paintings and drawings. In this technical report, we present the results of the seminar "Deep Learning for Computer Vision", in which students of the Hasso Plattner Institute evaluated state-of-the-art approaches for image classification, object detection and image recognition on the dataset of the Getty Research Institute. The main challenge when applying modern computer vision methods to this data is the availability of annotated training data, as the dataset provided by the Getty Research Institute does not contain a sufficient number of annotated samples for training deep neural networks. However, throughout the report we show that it is possible to achieve satisfying to very good results by using further publicly available datasets, such as the WikiArt dataset, for the training of machine learning models.
Stereoselective [4+2] Cycloaddition of Singlet Oxygen to Naphthalenes Controlled by Carbohydrates
(2021)
Stereoselective reactions of singlet oxygen are of current interest. Since enantioselective photooxygenations have not been realized efficiently, auxiliary control is an attractive alternative. However, the obtained peroxides are often too labile for isolation or further transformations into enantiomerically pure products. Herein, we describe the oxidation of naphthalenes by singlet oxygen, where the face selectivity is controlled by carbohydrates for the first time. The synthesis of the precursors is easily achieved starting from naphthoquinone and a protected glucose derivative in only two steps. Photooxygenations proceed smoothly at low temperature, and we detected the corresponding endoperoxides as sole products by NMR. They are labile and can thermally react back to the parent naphthalenes and singlet oxygen. However, we could isolate and characterize two enantiomerically pure peroxides, which are sufficiently stable at room temperature. An interesting influence of substituents on the stereoselectivities of the photooxygenations has been found, with diastereomeric ratios (dr) ranging from 51:49 up to 91:9. We explain this by a hindered rotation of the carbohydrate substituents, substantiated by a combination of NOESY measurements and theoretical calculations. Finally, we could transfer the chiral information from a pure endoperoxide to an epoxide, which was isolated in enantiomerically pure form after cleavage of the sugar chiral auxiliary.
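The reported diastereomeric ratios can be translated into free-energy differences between the competing diastereomeric transition states via dr = exp(-ΔΔG‡/RT). The sketch below is a back-of-envelope illustration; the reaction temperature of -78 °C is an assumption for illustration, not a value taken from the paper:

```python
import math

R = 8.314  # J / (mol K), gas constant

def ddg_from_dr(major, minor, temperature_k):
    """Free-energy difference (J/mol) between competing diastereomeric
    transition states implied by a diastereomeric ratio major:minor,
    assuming kinetic control: dr = exp(-ddG/RT)."""
    return R * temperature_k * math.log(major / minor)

# -78 C (195 K) is assumed here purely for illustration.
T = 195.0  # K
low = ddg_from_dr(51, 49, T) / 1000   # kJ/mol for dr 51:49
high = ddg_from_dr(91, 9, T) / 1000   # kJ/mol for dr 91:9
```

The calculation shows how modest the energetic discrimination is: even the 91:9 selectivity corresponds to only a few kJ/mol, which is consistent with a subtle conformational effect such as hindered rotation of the carbohydrate substituents.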
One third of the world's population lives in areas where earthquakes causing at least slight damage are frequently expected. Thus, the development and testing of global seismicity models is essential to improving seismic hazard estimates and earthquake-preparedness protocols for effective disaster-risk mitigation. Currently, the availability and quality of geodetic data along plate-boundary regions provides the opportunity to construct global models of plate motion and strain rate, which can be translated into global maps of forecasted seismicity. Moreover, the broad coverage of existing earthquake catalogs facilitates the present-day calibration and testing of global seismicity models. As a result, modern global seismicity models can integrate two independent factors necessary for physics-based, long-term earthquake forecasting, namely interseismic crustal strain accumulation and sudden lithospheric stress release.
In this dissertation, I present the construction of and testing results for two global ensemble seismicity models, aimed at providing mean rates of shallow (0-70 km) earthquake activity for seismic hazard assessment. These models depend on the Subduction Megathrust Earthquake Rate Forecast (SMERF2), a stationary seismicity approach for subduction zones, based on the conservation-of-moment principle and the use of regional "geodesy-to-seismicity" parameters, such as corner magnitudes, seismogenic thicknesses and subduction dip angles. Specifically, this interface-earthquake model combines geodetic strain rates with instrumentally recorded seismicity to compute long-term rates of seismic and geodetic moment. Based on this, I derive analytical solutions for seismic coupling and earthquake activity, which enable this earthquake model to properly forecast interface seismicity. Then, I integrate SMERF2 interface-seismicity estimates with earthquake computations in non-subduction zones provided by the Seismic Hazard Inferred From Tectonics approach, based on the second iteration of the Global Strain Rate Map, to construct the global Tectonic Earthquake Activity Model (TEAM). TEAM is designed to reduce the inconsistencies in earthquake numbers, and potentially in spatial distribution, shown by its predecessor tectonic earthquake model during the 2015-2017 period. Also, I combine this new geodesy-based earthquake approach with a global smoothed-seismicity model to create the World Hybrid Earthquake Estimates based on Likelihood scores (WHEEL) model. This updated hybrid model serves as an alternative earthquake-rate approach to the Global Earthquake Activity Rate model for forecasting long-term rates of shallow seismicity everywhere on Earth.
Global seismicity models provide scientific hypotheses about when and where earthquakes may occur, and how big they might be. Nonetheless, the veracity of these hypotheses can only be either confirmed or rejected after prospective forecast evaluation. Therefore, I finally test the consistency and relative performance of these global seismicity models with independent observations recorded during the 2014-2019 pseudo-prospective evaluation period. As a result, hybrid earthquake models based on both geodesy and seismicity are the most informative seismicity models during the testing time frame, as they obtain higher information scores than their constituent model components. These results support the combination of interseismic strain measurements with earthquake-catalog data for improved seismicity modeling. However, further prospective evaluations are required to more accurately describe the capacities of these global ensemble seismicity models to forecast longer-term earthquake activity.
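The moment-balance idea underlying such "geodesy-to-seismicity" models can be illustrated with a toy calculation. All parameter values below are assumptions chosen for illustration; this is not the actual parameterization of SMERF2 or TEAM:

```python
# Back-of-envelope conversion of geodetic strain to earthquake rates.
MU = 3.0e10          # Pa, crustal shear modulus (assumed)
H = 14e3             # m, seismogenic thickness (assumed)
AREA = 1.0e11        # m^2, grid-cell area (assumed)
STRAIN_RATE = 1e-15  # 1/s, maximum principal strain rate (assumed)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def tectonic_moment_rate():
    """Kostrov-style geodetic moment rate (N m / s)."""
    return 2.0 * MU * H * AREA * STRAIN_RATE

def moment(mw):
    """Scalar seismic moment (N m) from moment magnitude."""
    return 10 ** (1.5 * mw + 9.05)

def annual_rate_above(mw, mw_corner=8.0, b=1.0):
    """Annual rate of events >= mw for a power-law (Gutenberg-Richter)
    size distribution truncated at mw_corner, scaled so that the
    expected moment release balances the geodetic moment rate."""
    beta = 2.0 * b / 3.0
    m_corner = moment(mw_corner)
    mdot_per_year = tectonic_moment_rate() * SECONDS_PER_YEAR
    # Integrating M dN over the truncated power law gives
    # Mdot = N_c * m_corner * beta / (1 - beta), hence:
    rate_at_corner = (1.0 - beta) * mdot_per_year / (beta * m_corner)
    return rate_at_corner * (moment(mw) / m_corner) ** (-beta)

rate_m6 = annual_rate_above(6.0)  # events/yr >= Mw 6 in this toy cell
```

With these assumed inputs the cell accumulates roughly 2.7e18 N m of moment per year, which the truncated distribution converts into on the order of one Mw ≥ 6 event per decade; the actual models additionally calibrate coupling and corner magnitudes per region.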
Confidence Counts
(2021)
The increasing reliance on online learning in higher education has been further expedited by the ongoing Covid-19 pandemic. Students need to be supported as they adapt to this new learning environment. Research has established that learners with positive online learning self-efficacy beliefs are more likely to persevere and achieve their higher education goals when learning online. In this paper, we explore how MOOC design can contribute to the four sources of self-efficacy beliefs posited by Bandura [4]. Specifically, drawing on learner reflections, we explore whether design elements of the MOOC, The Digital Edge: Essentials for the Online Learner, provided participants with the necessary mastery experiences, vicarious experiences, verbal persuasion, and affective regulation opportunities to evaluate and develop their online learning self-efficacy beliefs. Findings from a content analysis of discussion forum posts show that learners referenced three of the four information sources when reflecting on their experience of the MOOC. This paper illustrates the potential of MOOCs as a pedagogical tool for enhancing online learning self-efficacy among students.
Viper
(2021)
Key-value stores (KVSs) have found wide application in modern software systems. For persistence, their data resides in slow secondary storage, which requires KVSs to employ various techniques to increase their read and write performance from and to the underlying medium. Emerging persistent memory (PMem) technologies offer data persistence at close-to-DRAM speed, making them a promising alternative to classical disk-based storage. However, simply replacing existing storage with PMem as a drop-in does not yield good results, as block-based access behaves differently in PMem than on disk and ignores PMem's byte addressability, layout, and unique performance characteristics. In this paper, we propose three PMem-specific access patterns and implement them in a hybrid PMem-DRAM KVS called Viper. We employ a DRAM-based hash index and a PMem-aware storage layout to utilize the random-write speed of DRAM and the efficient sequential-write performance of PMem. Our evaluation shows that Viper significantly outperforms existing KVSs for core KVS operations while providing full data persistence. Moreover, Viper outperforms existing PMem-only, hybrid, and disk-based KVSs by 4-18x for write workloads, while matching or surpassing their get performance.
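The core design can be sketched in a few lines: a volatile hash index maps keys to offsets in a persistent, append-only value log. In the sketch below, real PMem access (e.g. via DAX-mapped files) is stood in for by a bytearray, and the record layout is illustrative rather than Viper's actual storage format:

```python
import struct

class HybridKV:
    """Toy hybrid KVS: DRAM-style index over a PMem-style value log."""

    def __init__(self):
        self.index = {}          # "DRAM": key -> offset into the log
        self.log = bytearray()   # stand-in for the PMem value log

    def put(self, key: bytes, value: bytes) -> None:
        # Sequential append exploits PMem's efficient sequential writes;
        # a real implementation would persist the record before indexing.
        offset = len(self.log)
        self.log += struct.pack("<I", len(value)) + value
        self.index[key] = offset  # random-access update stays in DRAM

    def get(self, key: bytes) -> bytes:
        offset = self.index[key]
        (length,) = struct.unpack_from("<I", self.log, offset)
        return bytes(self.log[offset + 4 : offset + 4 + length])

kv = HybridKV()
kv.put(b"k1", b"hello")
kv.put(b"k1", b"world")  # an update appends; the index moves to the new copy
```

Because the index lives in volatile memory, a real system must be able to rebuild it from the log after a crash, which is one reason the persistent layout matters so much.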
Trait means or variance
(2021)
One of the few laws in ecology is that communities consist of few common and many rare taxa. Functional traits may help to identify the underlying mechanisms of this community pattern, since they correlate with different niche dimensions. However, comprehensive studies investigating the effects of species mean traits (niche position) and intraspecific trait variability (ITV, niche width) on species abundance are missing. In this study, we investigated fragmented dry grasslands to reveal trait-occurrence relationships in plants at local and regional scales. We predicted that (a) at the local scale, species occurrence is highest for species with intermediate traits, (b) at the regional scale, habitat specialists have a lower species occurrence than generalists, and thus, traits associated with stress tolerance have a negative effect on species occurrence, and (c) ITV increases species occurrence irrespective of the scale. We measured three plant functional traits (SLA = specific leaf area, LDMC = leaf dry matter content, plant height) in 21 local dry grassland communities (10 m × 10 m) and analyzed the effect of these traits and their variation on species occurrence. At the local scale, mean LDMC had a positive effect on species occurrence, indicating that stress-tolerant species, rather than species with intermediate traits, are the most abundant (contrary to hypothesis 1). We found limited support for lower specialist occurrence at the regional scale (hypothesis 2). Further, ITV of LDMC and plant height had a positive effect on local occurrence, supporting hypothesis 3. In contrast, at the regional scale, plants with a higher ITV of plant height were less frequent. We found no evidence that the consideration of phylogenetic relationships in our analyses influenced our findings. In conclusion, both species mean traits (in particular LDMC) and ITV were related to species occurrence differently with respect to spatial scale.
Therefore, our study underlines the strong scale-dependency of trait-abundance relationships.
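ITV is commonly quantified as the coefficient of variation of a trait across individuals of a species. A minimal sketch, with invented LDMC values purely for illustration:

```python
import statistics

def cv(values):
    """Coefficient of variation (%), a common measure of
    intraspecific trait variability (ITV)."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical LDMC measurements (mg/g) for two species; the numbers
# are invented for illustration only.
ldmc_specialist = [310, 318, 305, 322, 315]
ldmc_generalist = [240, 305, 210, 290, 255]

itv_specialist = cv(ldmc_specialist)  # narrow niche width
itv_generalist = cv(ldmc_generalist)  # broader niche width
```

Because CV normalizes the spread by the mean, it lets niche width be compared across traits measured on very different scales (e.g. LDMC versus plant height).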
Starkregen in Berlin
(2021)
In the summers of 2017 and 2019, heavy rainfall events led to flooding at several locations in Berlin. In both years, this caused considerable disruption to the daily lives of Berlin's residents as well as substantial property damage. An interdisciplinary task force of the DFG research training group NatRiskChange investigated (1) the meteorological characteristics of two particularly striking storms, and (2) the vulnerability of Berlin's population to heavy rainfall.
A comparative meteorological reconstruction of the 2017 and 2019 heavy rainfall events revealed clear differences in the genesis and the exceedance probabilities of the two storms. The 2017 event, with its relatively large spatial extent and long duration, was an atypical heavy rainfall event, whereas the 2019 storm was a typical short-duration heavy rainfall event with pronounced spatial heterogeneity. A subsequent statistical analysis showed that, for longer rainfall durations (>=24 h), the 2017 event must be classified as a large-scale extreme event with exceedance probabilities below 1% (i.e., return periods >=100 years). For 2019, in contrast, similar exceedance probabilities were calculated only locally and for shorter durations (1-2 h).
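The exceedance probabilities and return periods quoted here are two views of the same quantity: the annual exceedance probability p of an event with return period T years is p = 1/T, and over a planning horizon of n years the chance of at least one exceedance is 1 - (1 - p)^n. A short illustrative sketch:

```python
def annual_exceedance_prob(return_period_years: float) -> float:
    """Annual exceedance probability p = 1/T."""
    return 1.0 / return_period_years

def prob_within(n_years: int, return_period_years: float) -> float:
    """Probability of at least one exceedance within n_years,
    assuming independent years: 1 - (1 - p)^n."""
    p = annual_exceedance_prob(return_period_years)
    return 1.0 - (1.0 - p) ** n_years

p100 = annual_exceedance_prob(100)  # 0.01, i.e. the "< 1 %" threshold
risk30 = prob_within(30, 100)       # chance of one such event in 30 years
```

The second function illustrates why "100-year events" are not rare over a building's lifetime: within 30 years, the chance of at least one exceedance is already about 26%.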
The vulnerability analysis is based on an online survey conducted in Berlin from April to June 2020. It addressed people who had already been affected by past heavy rainfall events and covered the damage event itself, the resulting disruptions and damage, risk perception, and emergency and precautionary measures. The survey data collected (n=102) mainly refer to the 2017 and 2019 events and show that Berlin's population was affected by the storms both in everyday life (e.g., when buying groceries) and in their own households (e.g., through flood damage). The respondents' answers also pointed to ways of further reducing society's vulnerability to heavy rainfall - for example, by supporting particularly affected groups (e.g., caregivers), by targeted information campaigns on protection against heavy rain, or by increasing the reach of severe weather warnings. A statistical analysis of the effectiveness of private emergency and precautionary measures, based on the survey data, confirmed previous findings.
There were indications that heavy rainfall damage can be reduced by implementing precautionary measures such as installing backwater flaps, barrier systems, or pumps.
The results of this report underline the need for integrated heavy rainfall risk management that considers the risk components of hazard, vulnerability, and exposure holistically and at multiple levels (e.g., state, municipal, private).
TransPipe
(2021)
Online learning environments, such as Massive Open Online Courses (MOOCs), often rely on videos as a major component to convey knowledge. However, these videos exclude potential participants who do not understand the lecturer’s language, regardless of whether that is due to language unfamiliarity or aural handicaps. Subtitles and/or interactive transcripts solve this issue, ease navigation based on the content, and enable indexing and retrieval by search engines. Although there are several automated speech-to-text converters and translation tools, their quality varies and the process of integrating them can be quite tedious. Thus, in practice, many videos on MOOC platforms only receive subtitles after the course is already finished (if at all) due to a lack of resources. This work describes an approach to tackle this issue by providing a dedicated tool that closes this gap between MOOC platforms and transcription and translation tools, offering a simple workflow that can easily be handled by users with a less technical background. The proposed method is designed and evaluated through qualitative interviews with three major MOOC providers.
Digitale Logopädie
(2021)
Learning During COVID-19
(2021)
During the COVID-19 pandemic, learning in higher education and beyond shifted en masse to online formats, with the short- and long-term consequences for Massive Open Online Course (MOOC) platforms, learners, and creators still under evaluation. In this paper, we sought to determine whether the COVID-19 pandemic and this shift to online learning led to increased learner engagement and attainment in a single introductory biology MOOC through evaluating enrollment, proportional and individual engagement, and verification and performance data. As this MOOC regularly operates each year, we compared these data collected from two course runs during the pandemic to three pre-pandemic runs. During the first pandemic run, the number and rate of learners enrolling in the course doubled when compared to prior runs, while the second pandemic run indicated a gradual return to pre-pandemic enrollment. Due to higher enrollment, more learners viewed videos, attempted problems, and posted to the discussion forums during the pandemic. Participants engaged with forums in higher proportions in both pandemic runs, but the proportion of participants who viewed videos decreased in the second pandemic run relative to the prior runs. A higher percentage of learners chose to pursue a certificate via the verified track in each pandemic run, though a smaller proportion earned certification in the second pandemic run. During the pandemic, higher enrollment did not necessarily translate into greater engagement by all metrics. While verified-track learner performance varied widely during each run, the effects of the pandemic were not uniform for learners, much like in other aspects of life. As such, individual engagement trends in the first pandemic run largely resemble pre-pandemic metrics but with more learners overall, while engagement trends in the second pandemic run are less like pre-pandemic metrics, hinting at learner “fatigue”.
This study serves to highlight that the lifelong learning opportunity MOOCs offer is even more critical when traditional modes of education are disrupted and more people are at home or unemployed. This work indicates that this boom in MOOC participation may not remain at a high level in the longer term in any one course, but overall, the number of MOOCs, programs, and learners continues to grow.
Carbon Adsorbents from Spent Coffee for Removal of Methylene Blue and Methyl Orange from Water
(2021)
Activated carbons (ACs) were prepared from dried spent coffee (SCD), a biological waste product, to produce adsorbents for the removal of methylene blue (MB) and methyl orange (MO) from aqueous solution. Pre-pyrolysis activation of SCD was achieved via treatment of the SCD with aqueous sodium hydroxide solutions at 90 °C. Pyrolysis of the pretreated SCD at 500 °C for 1 h produced powders with the typical characteristics of ACs that are suitable and effective for dye adsorption. As an alternative to the rather harsh base treatment, calcium carbonate powder, a very common and abundant resource, was also studied as an activator. Mixtures of SCD and CaCO3 (1:1 w/w) yielded, upon pyrolysis, effective ACs for MO and MB removal, requiring only small amounts of AC to clear the solutions. A selectivity of the adsorption process toward anionic (MO) or cationic (MB) dyes was not observed.
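The quantities typically used to report such batch-adsorption results are the removal efficiency and the equilibrium capacity q_e = (C0 - Ce)·V/m. A minimal sketch with hypothetical numbers (not values from the study):

```python
def adsorption_capacity(c0, ce, volume_l, mass_g):
    """Equilibrium capacity q_e (mg dye per g adsorbent):
    q_e = (C0 - Ce) * V / m, with concentrations in mg/L."""
    return (c0 - ce) * volume_l / mass_g

def removal_percent(c0, ce):
    """Fraction of the dye removed from solution, in percent."""
    return 100.0 * (c0 - ce) / c0

# Hypothetical batch experiment: 100 mL of 50 mg/L dye solution
# treated with 50 mg of AC, residual concentration 2 mg/L.
qe = adsorption_capacity(c0=50.0, ce=2.0, volume_l=0.1, mass_g=0.05)
removal = removal_percent(50.0, 2.0)
```

With these invented inputs, 96% of the dye is removed at a capacity of 96 mg/g, illustrating why only small amounts of an effective AC suffice to clear a solution.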
Emotions are a complex concept, and they are present in our everyday life. Persons on the autism spectrum are reported to have difficulties in social interactions, showing deficits in emotion recognition in comparison to neurotypically developed persons. But social-emotional skills are believed to be positively augmented by training. A new adaptive social cognition training tool, “E.V.A.”, is introduced, which teaches emotion recognition from face, voice and body language. One cross-sectional and one longitudinal study with adult neurotypical and autistic participants were conducted. The aim of the cross-sectional study was to characterize the two groups and see if differences in their social-emotional skills exist. The longitudinal study, on the other hand, aimed to detect possible training effects following training with the new tool. In addition, in both studies usability assessments were conducted to investigate the perceived usability of the new tool for neurotypical as well as autistic participants. Differences were found between autistic and neurotypical participants in their social-emotional and emotion recognition abilities. Training effects for neurotypical participants in an emotion recognition task were found after two weeks of home training. Similar perceived usability was found for the neurotypical and autistic participants. The current findings suggest that persons with autism spectrum conditions (ASC) do not have a general deficit in emotion recognition, but need more time to correctly recognize emotions. In addition, the findings suggest that training emotion recognition abilities is possible. Further studies are needed to verify whether the training effects found for neurotypical participants also manifest in a larger ASC sample.
We use a quantitative spatial equilibrium model to evaluate the distributional and welfare impacts of a recent temporary rent control policy in Berlin, Germany. We calibrate the model to key features of Berlin’s housing market, in particular the recent gentrification of inner city locations. As expected, gentrification benefits rich homeowners, while poor renter households lose. Our counterfactual analysis mimics the rent control policy. We find that this policy reduces welfare for rich and poor households and, in fact, the percentage change in welfare is largest for the poorest households. We also study alternative affordable housing policies such as subsidies and re-zoning policies, which are better suited to address the adverse consequences of gentrification.
In C3 plants, CO2 diffuses into the leaf and is assimilated by the Calvin-Benson cycle in the mesophyll cells. This leaves Rubisco open to its side reaction with O2, resulting in a wasteful cycle known as photorespiration. A sharp fall in atmospheric CO2 levels about 30 million years ago has further increased the side reaction with O2. The pressure to reduce photorespiration led, in over 60 plant genera, to the evolution of a CO2-concentrating mechanism called C4 photosynthesis; in this mode, CO2 is initially incorporated into 4-carbon organic acids, which diffuse to the bundle sheath and are decarboxylated to provide CO2 to Rubisco. Some genera, like Flaveria, contain several species that represent different steps in this complex evolutionary process. However, the majority of terrestrial plant species did not evolve a CO2-concentrating mechanism and perform C3 photosynthesis.
This thesis compares photosynthetic metabolism in several species with C3, C4 and intermediate modes of photosynthesis. Metabolite profiling and stable isotope labelling were performed to detect inter-specific differences in metabolite profiles and, hence, in how a pathway operates. The results obtained were subjected to integrative data analyses like hierarchical clustering and principal component analysis, and were deepened by correlation analyses to uncover specific metabolic features and reaction steps that were conserved or differed between species.
The main findings are that Calvin-Benson cycle metabolite profiles differ between C3 and C4 species and between different C3 species, including a very different response to rising irradiance in Arabidopsis and rice. These findings confirm that Calvin-Benson cycle operation diverged between C3 and C4 species and, most unexpectedly, even between different C3 species. Moreover, primary metabolic profiles supported the current C4 evolutionary model in the genus Flaveria and also provided new insights and opened up new questions. Metabolite profiles also point toward a progressive adjustment of the Calvin-Benson cycle during the evolution of C4 photosynthesis. Overall, this thesis points out the importance of a metabolite-centric approach for uncovering underlying differences between species apparently sharing the same photosynthetic routes, and as a valid method to investigate the evolutionary transition between C3 and C4 photosynthesis.
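As a toy illustration of the kind of integrative analysis mentioned above, the sketch below applies single-linkage hierarchical clustering to invented metabolite profiles; the species names and values are hypothetical:

```python
import math

# Hypothetical normalized metabolite levels for four species.
profiles = {
    "C3_a": [1.0, 0.9, 0.2],
    "C3_b": [1.1, 0.8, 0.3],
    "C4_a": [0.2, 0.3, 1.0],
    "C4_b": [0.3, 0.2, 1.1],
}

def dist(p, q):
    """Euclidean distance between two metabolite profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def single_linkage(data):
    """Merge the closest clusters until one remains; return the
    sequence of merged cluster memberships."""
    clusters = [{name} for name in data]
    merges = []
    while len(clusters) > 1:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: min(
                dist(data[a], data[b])
                for a in clusters[ij[0]] for b in clusters[ij[1]]
            ),
        )
        merged = clusters[i] | clusters[j]
        merges.append(merged)
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges

merges = single_linkage(profiles)
# The first merges group the C3 species together and the C4 species
# together, mirroring divergent Calvin-Benson cycle profiles.
```

Real analyses operate on dozens of metabolites and replicates, but the principle is the same: species whose profiles co-cluster are inferred to share a mode of pathway operation.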
Strong as a Hippo’s Heart: Biomechanical Hippo Signaling During Zebrafish Cardiac Development
(2021)
The heart comprises multiple tissues that contribute to its physiological functions. During development, the growth of myocardium and endocardium is coupled and morphogenetic processes within these separate tissue layers are integrated. Here, we discuss the roles of mechanosensitive Hippo signaling in growth and morphogenesis of the zebrafish heart. Hippo signaling is involved in defining numbers of cardiac progenitor cells derived from the secondary heart field, in restricting the growth of the epicardium, and in guiding trabeculation and outflow tract formation. Recent work also shows that myocardial chamber dimensions serve as a blueprint for Hippo signaling-dependent growth of the endocardium. Evidently, Hippo pathway components act at the crossroads of various signaling pathways involved in embryonic zebrafish heart development. Elucidating how biomechanical Hippo signaling guides heart morphogenesis has direct implications for our understanding of cardiac physiology and pathophysiology.
Portal = 30
(2021)
How does one write an editorial marking the 30th anniversary of the University of Potsdam when one has only belonged to it for three years? Perhaps the easiest way would be to quote the many people who told us their fascinating stories for this issue. Those who helped develop the university into what distinguishes it today, for example in teacher education. Those who answered our questions as "veterans of the UP" and reported authentically on the difficulties of the early years. Or the alumni who reflected on four very different decades of student life. Of course, one could also list the prominent guests who have visited the university over the years. Or one could offer an outlook on future projects, such as the transformation of Potsdam as a university location.
Instead, I have decided to take you, dear readers, somewhat further back - to the years before the fall of the Wall. When I sat in a West Berlin classroom in the 1980s, the present was shaped by childhood, naively pleasant - frontal teaching, plenty of sugar and little organic food, yet paired with the irretrievable charm of the pre-digital age. I grew up directly bordering Potsdam, in the southwestern district of Zehlendorf, and yet Potsdam was the great unknown behind the Dreilinden checkpoint, beyond the Havel and the Teltow Canal, unreachable and barred with barriers and anti-tank obstacles on the Glienicke Bridge.
As a West Berliner, I inevitably had the Freie Universität Berlin before my eyes; its name said it all. We were free; those over there were not. While it was part of the tranquil normality of 1980s Zehlendorf that Western Allied tanks rolled down Clayallee, demonstrating power and freedom, and Germany's first McDonald's drive-in opened, the GDR on the other side, directly behind the border strip at Griebnitzsee, was training its legal and administrative elites. In Golm, the Stasi moulded its lawyers; at the Pädagogische Hochschule, teachers studied for the whole country. A more ambivalent picture could hardly be painted; the German division surpassed any novel.
In view of the collision of two worlds brought about by reunification, the challenges of the 1990s that educational institutions in East Germany, such as the University of Potsdam in its founding phase, had to resolve become more understandable: different expectations, different perspectives, and experiences rooted in different life-worlds now had to be cast into a single system. The transformations can also be seen in a different light against the backdrop of the once so contrary starting positions of West and East Germany: by international comparison, 30 years may be little for a university and may let it count as young. On the other hand, in view of the enormous upheaval of (East German) life-worlds, they represent a tremendous feat, with so many developments, with dreams fulfilled and dreams shattered, that one can only marvel at what was accomplished and achieved in this time.
I am therefore delighted by the articles in this issue, by all the first-hand memories, insights, and stories. It would have been a pity not to have written them down, for it is precisely these that have made the University of Potsdam, since 1991, into what it is in 2021.
The majority of baryons in the Universe is believed to reside in the intergalactic medium (IGM). This makes the IGM an important component in understanding cosmological structure formation. It is expected to trace the same dark matter distribution as galaxies, forming structures like filaments and clusters. However, whereas galaxies can be observed to be arranged along these large-scale structures, the spatial distribution of the diffuse IGM is not as easily unveiled. Absorption line studies of quasar (QSO) spectra can help with mapping the IGM, as well as the boundary layer between IGM and galaxies: the circumgalactic medium (CGM). By studying gas in the Local Group (LG), as well as in the IGM, this study aims to get a better understanding of how the gas is linked to the large-scale structure of the local Universe and the galaxies residing in that structure.
Chapter 1 gives an introduction to the CGM and IGM, while the methods used in this study are explained in Chapter 2. Chapter 3 starts on a relatively small cosmological scale, namely that of our Local Group, which includes, among others, the Milky Way (MW) and M31. Within the CGM of the MW, there exist denser clouds, some of which are infalling while others are moving away from the Galactic disc. To study these clouds, 29 QSO spectra obtained with the Cosmic Origins Spectrograph (COS) aboard the Hubble Space Telescope (HST) were analysed. Abundances of Si II, Si III, Si IV, C II, and C IV were measured for 69 high-velocity clouds (HVCs) belonging to two samples: one in the direction of the LG's barycentre and the other in the anti-barycentre direction. Their velocities lie in the range -100 ≥ vLSR ≥ -400 km/s for the barycentre sample and +100 ≤ vLSR ≤ +300 km/s for the anti-barycentre sample. Using Cloudy models, these data could then be used to derive gas volume densities for the HVCs. Because the density is related to the pressure of the ambient medium, which is in turn determined by the Galactic radiation field, the distances of the HVCs could be estimated. From this, a subsample of absorbers located in the direction of M31 was found to lie outside of the MW's virial radius, their low densities (log nH ≤ -3.54) making it likely that they are part of the gas between the MW and M31. No such low-density absorbers were found in the anti-barycentre sample. Our results thus hint at gas following the dark matter potential, which would be deeper between the MW and M31, as they are by far the most massive members of the LG.
From this bridge of gas in the LG, this study zooms out to the large-scale structure of the local Universe (z ~ 0) in Chapter 4. Galaxy data from the V8k catalogue and QSO spectra from COS were used to study the relation between the galaxies tracing large-scale filaments and the gas existing outside of those galaxies. This study used the filaments defined in Courtois et al. (2013). A total of 587 Lyman α (Lyα) absorbers were found in the 302 QSO spectra in the velocity range 1070 - 6700 km/s. After selecting sightlines passing through or close to these filaments, model spectra were made for 91 sightlines and 215 (227) Lyα absorbers (components) were measured in this sample. The velocity gradient along each filament was calculated and 74 absorbers were found within 1000 km/s of the nearest filament segment.
To determine whether the absorbers are more closely tied to galaxies or to the large-scale structure, the equivalent widths of the Lyα absorbers were plotted against both galaxy and filament impact parameters. While stronger absorbers do tend to lie closer to either galaxies or filaments, the relation shows large scatter. Despite this scatter, this study found that the absorbers do not follow a random distribution either: they cluster less strongly around filaments than around galaxies, but more strongly than a random distribution would, as confirmed by a Kolmogorov-Smirnov test.
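The clustering comparison described above can be illustrated with a two-sample Kolmogorov-Smirnov test; the distance distributions below are purely synthetic and only show the mechanics of the test, not the actual measurements of the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented impact parameters (in Mpc): a "clustered" sample drawn
# close to the filament axis, and a sample placed uniformly at random.
clustered = np.abs(rng.normal(loc=0.0, scale=1.0, size=200))
random_uniform = rng.uniform(0.0, 5.0, size=200)

# Two-sample KS test: are the two distance distributions consistent
# with being drawn from the same parent distribution?
stat, p_value = stats.ks_2samp(clustered, random_uniform)

print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
# A small p-value rejects the hypothesis that the absorber distances
# follow the random distribution, i.e. the sample is clustered.
```

The same test applied to absorber-galaxy versus absorber-filament distances would quantify the "less strongly than galaxies, more strongly than random" ordering reported above.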
Furthermore, the column density distribution function found in this study has a slope of −β = −1.63 ± 0.12 for the total sample and −β = −1.47 ± 0.24 for the absorbers within 1000 km/s of a filament. The shallower slope for the latter subsample could indicate an excess of denser absorbers within the filaments, but the two values are consistent within the errors. These values agree with those found by, e.g., Lehner et al. (2007) and Danforth et al. (2016).
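As an illustration of how such a power-law slope can be estimated, the sketch below draws synthetic column densities from a known power law f(N) ∝ N^(−β) and recovers β by a straight-line fit in log-log space. All numbers are synthetic; this is not the fitting procedure used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw synthetic column densities N (cm^-2) from f(N) ∝ N^(-beta)
# over 10^13 - 10^15 cm^-2 via inverse-transform sampling.
beta_true = 1.63
n_min, n_max = 1e13, 1e15
u = rng.uniform(size=100_000)
e = 1.0 - beta_true                      # exponent from integrating f(N)
N = (n_min**e + u * (n_max**e - n_min**e)) ** (1.0 / e)

# Counts per equal-width log10 N bin scale as N * f(N) ∝ N^(1 - beta),
# so a linear fit of log10(counts) vs log10(N) has slope (1 - beta).
log_n = np.log10(N)
counts, edges = np.histogram(log_n, bins=20)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0
slope, intercept = np.polyfit(centers[mask], np.log10(counts[mask]), 1)

beta_est = 1.0 - slope
print(f"recovered beta = {beta_est:.2f}")   # ≈ beta_true
```

Note the Jacobian: because the histogram is taken per logarithmic bin, the fitted slope is 1 − β rather than −β.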
The picture that emerges from this study regarding the relation between the IGM and the large-scale structure in the local Universe fits with what is found in other studies: while at least part of the gas traces the same filamentary structure as galaxies, the relation is complex. This study has shown that, by taking a large sample of sightlines and comparing the absorption data with galaxy data, it is possible to study the gaseous large-scale structure. This approach can be used in the future together with simulations to gain a better understanding of structure formation and evolution in the Universe.
While the last few decades have seen impressive improvements in several areas of Natural Language Processing, asking a computer to make sense of the discourse of utterances in a text remains challenging. Several different theories aim to describe and analyse the coherent structure that a well-written text exhibits. These theories have varying degrees of applicability and feasibility for practical use. Arguably the most data-driven of these theories is the paradigm that comes with the Penn Discourse TreeBank, a corpus annotated for discourse relations containing over 1 million words. Any language other than English, however, can be considered a low-resource language when it comes to discourse processing.
This dissertation is about shallow discourse parsing (discourse parsing following the paradigm of the Penn Discourse TreeBank) for German. The limited availability of annotated data for German means the potential of modern, deep-learning based methods relying on such data is also limited. This dissertation explores to what extent machine-learning and more recent deep-learning based methods can be combined with traditional, linguistic feature engineering to improve performance for the discourse parsing task. A pivotal role is played by connective lexicons that exhaustively list the discourse connectives of a particular language along with some of their core properties.
To facilitate training and evaluation of the methods proposed in this dissertation, an existing corpus (the Potsdam Commentary Corpus) has been extended and additional data has been annotated from scratch. The approach to end-to-end shallow discourse parsing for German adopts a pipeline architecture and either presents the first results or improves over the state of the art for German for the individual sub-tasks of the discourse parsing task, which are, in processing order, connective identification, argument extraction and sense classification. The end-to-end shallow discourse parser for German that has been developed for the purpose of this dissertation is open-source and available online.
In the course of writing this dissertation, work has been carried out on several connective lexicons in different languages. Due to their central role and demonstrated usefulness for the methods proposed in this dissertation, strategies are discussed for creating or further developing such lexicons for a particular language, as well as suggestions on how to further increase their usefulness for shallow discourse parsing.
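The lexicon-driven first stage of such a pipeline (connective identification) can be sketched as follows. The mini-lexicon, sense labels, and function names here are invented for illustration and do not reproduce the parser described above; a real system would use a full connective lexicon (such as DiMLex for German) plus a classifier to filter out non-connective uses of ambiguous entries:

```python
# Toy connective lexicon: surface form -> possible discourse senses.
# Entries and sense labels are illustrative, not taken from DiMLex.
LEXICON = {
    "aber": ["Contrast"],
    "weil": ["Cause"],
    "während": ["Temporal", "Contrast"],  # ambiguous entry
}

def find_candidates(tokens):
    """Return (index, token, possible senses) for each lexicon hit."""
    return [(i, t, LEXICON[t.lower()])
            for i, t in enumerate(tokens)
            if t.lower() in LEXICON]

sentence = "Er blieb zu Hause , weil es regnete".split()
print(find_candidates(sentence))
# -> [(5, 'weil', ['Cause'])]
```

In the pipeline described above, the spans flagged here would then be passed on to argument extraction and sense classification.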
Social media are an integral part of pupils' everyday lives and are at the same time increasingly important in business, politics, and science. Using Twitter as an example, this article shows that social media can also be used in the classroom to answer geographical questions. Twitter data are particularly well suited to this because of their georeferencing and other interesting content. The article gives an overview of the use of Twitter for research questions in the social sciences and human geography and reflects on the use of Twitter in the classroom. For classroom practice, examples on the topics of lignite mining, flood events, and spatial perceptions are presented, together with guidance on the evaluation, application, and reflection of Twitter analyses.
Economists are worried that the lack of property rights to natural capital goods jeopardizes the sustainability of the economic growth miracle that has existed since industrialization. This article questions their position. A vertical innovation model with a portfolio of technologies for abatement, adaptation, and general (Harrod-neutral) technology reveals that environmental damage spillovers have an effect on research profits comparable to that of technology spillovers, so that the social costs of depleting public natural capital are internalized. As long as there is free access to information and technology, growth is sustainable and the allocation of research efforts among alternative technologies is socially optimal. While there still is a need to address externalities from monopolistic research markets, no environmental policy is necessary. These results suggest that environmental externalities may originate in restricted access to information and technology, demonstrating that (i) information has an effect similar to that of an environmental tax and (ii) knowledge and technology transfers have an impact comparable to that of subsidies for research in green technology.
Background: We assessed the effects of gender, in association with a four-week small-sided games (SSGs) training program during Ramadan intermittent fasting (RIF), on changes in psychometric and physiological markers in professional male and female basketball players.
Methods: Twenty-four professional basketball players from the first Tunisian division participated in this study. The players were dichotomized by sex (males [GM = 12]; females [GF = 12]). Both groups completed a four-week SSG training program with three sessions per week. Psychometric (e.g., quality of sleep, fatigue, stress, and delayed onset muscle soreness [DOMS]) and physiological parameters (e.g., heart rate, blood lactate) were measured during the first week (baseline) and at the end of RIF (post-test).
Results: Post hoc tests showed a significant increase in stress levels in both groups (GM [−81.11%; p < 0.001, d = 0.33, small]; GF [−36.53%; p = 0.001, d = 0.25, small]). Concerning the physiological parameters, ANCOVA revealed significantly lower heart rates in favor of GM at post-test (1.70%, d = 0.38, small, p = 0.002).
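For reference, the Cohen's d values quoted above are standardized mean differences. A minimal sketch with invented scores (not the study's data) shows how such an effect size is computed from two samples using a pooled standard deviation:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent samples, pooled-SD variant."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) \
                 / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Invented baseline and post-test measurements for illustration only.
baseline = [62, 65, 63, 66, 64, 61]
post_test = [66, 68, 65, 70, 67, 64]
print(f"d = {cohens_d(post_test, baseline):.2f}")
```

By the usual conventions, |d| ≈ 0.2 is small, 0.5 medium, and 0.8 large, which is why the reported d = 0.33 and d = 0.25 are labeled "small".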
Conclusions: Our results showed that SSGs training at the end of the RIF negatively impacted psychometric parameters of male and female basketball players. It can be concluded that there are sex-mediated effects of training during RIF in basketball players, and this should be considered by researchers and practitioners when programming training during RIF.
The evolution of life on Earth has been driven by disturbances of different types and magnitudes over the 4.6 billion years of Earth's history (Raup, 1994; Alroy, 2008). Mass extinctions are one example of such disturbances: they are characterized by an exceptional increase in the extinction rate affecting a great number of taxa in a short interval of geologic time (Sepkoski, 1986). During the 541 million years of the Phanerozoic, life on Earth suffered five exceptionally severe mass extinctions, named the "Big Five Extinctions". Many mass extinctions are linked to changes in climate
(Feulner, 2009). Hence, the study of past mass extinctions is not only intriguing, but can also provide insights into the complex nature of the Earth system. This thesis aims at deepening our understanding of the triggers of mass extinctions and how they affected life. To accomplish this, I investigate changes in climate during two of the Big Five extinctions using a coupled climate model.
During the Devonian (419.2–358.9 million years ago) the first vascular plants and vertebrates evolved on land while extinction events occurred in the ocean (Algeo et al., 1995). The causes of these formative changes, their interactions and their links to changes in climate are still poorly understood. Therefore, we explore the sensitivity of the Devonian climate to various boundary conditions using an intermediate-complexity climate model (Brugger et al., 2019). In contrast to Le Hir et al. (2011), we find only a minor biogeophysical effect of changes in vegetation cover due to unrealistically high soil albedo values used in the earlier study. In addition, our results cannot support the strong influence of orbital parameters on the Devonian climate, as simulated with a climate model with a strongly simplified ocean model (De Vleeschouwer et al., 2013, 2014, 2017). We can only reproduce the changes in Devonian climate suggested by proxy data by decreasing atmospheric CO2. Still, finding agreement between the evolution of sea surface temperatures reconstructed from proxy data (Joachimski et al., 2009) and our simulations remains challenging and suggests a lower δ18O ratio of Devonian seawater. Furthermore, our study of the sensitivity of the Devonian climate reveals a prevailing mode of climate variability on a timescale of decades to centuries. The quasi-periodic ocean temperature fluctuations are linked to a physical mechanism of changing sea-ice cover, ocean convection and overturning in high northern latitudes.
In the second study of this thesis (Dahl et al., under review) a new reconstruction of atmospheric CO2 for the Devonian, which is based on CO2-sensitive carbon isotope fractionation in the earliest vascular plant fossils, suggests a much earlier drop of atmospheric CO2 concentration than previously reconstructed, followed by nearly constant CO2 concentrations during the Middle and Late Devonian. Our simulations for the Early Devonian with identical boundary conditions as in our Devonian sensitivity study (Brugger et al., 2019), but with a low atmospheric CO2 concentration of 500 ppm, show no direct conflict with available proxy and paleobotanical data and confirm that under the simulated climatic conditions carbon isotope fractionation represents a robust proxy for atmospheric CO2. To explain the earlier CO2 drop we suggest that early forms of vascular land plants had already strongly influenced weathering. This new perspective on the Devonian questions previous ideas about the climatic conditions and earlier explanations for the Devonian mass extinctions.
The second mass extinction investigated in this thesis is the end-Cretaceous mass extinction (66 million years ago) which differs from the Devonian mass extinctions in terms of the processes involved and the timescale on which the extinctions occurred. In the two studies presented here (Brugger et al., 2017, 2021), we model the climatic effects of the Chicxulub impact, one of the proposed causes of the end-Cretaceous extinction, for the first millennium after the impact. The light-dimming effect of stratospheric sulfate aerosols causes severe cooling, with a decrease of global annual mean surface air temperature of at least 26 °C and a recovery to pre-impact temperatures after more than 30 years. The sudden surface cooling of the ocean induces deep convection which brings nutrients from the deep ocean via upwelling to the surface ocean. Using an ocean biogeochemistry model we explore the combined effect of ocean mixing and iron-rich dust originating from the impactor on the marine biosphere. As soon as light levels have recovered, we find a short, but prominent peak in marine net primary productivity. This newly discovered mechanism could result in toxic effects for marine near-surface ecosystems. Comparison of our model results to proxy data (Vellekoop et al., 2014, 2016, Hull et al., 2020) suggests that carbon release from the terrestrial biosphere is required in addition to the carbon dioxide which can be attributed to the target material. Surface ocean acidification caused by the addition of carbon dioxide and sulfur is only moderate. Taken together, the results indicate a significant contribution of the Chicxulub impact to the end-Cretaceous mass extinction by triggering multiple stressors for the Earth system.
Although the sixth extinction we face today is characterized by human intervention in nature, this thesis shows that we can gain many insights into future extinctions from studying past mass extinctions, such as the importance of the rate of change (Rothman, 2017), the interplay of multiple stressors (Gunderson et al., 2016), and changes in the carbon cycle (Rothman, 2017, Tierney et al., 2020).
MOOCs have been produced using a variety of instructional design approaches and frameworks. This paper presents experiences from the instructional approach based on the ADDIE model applied to designing and producing MOOCs in the Erasmus+ strategic partnership on Open Badge Ecosystem for Research Data Management (OBERRED). Specifically, this paper describes the case study of the production of the MOOC "Open Badges for Open Science", delivered on the European MOOC platform EMMA. The key goal of this MOOC is to help learners develop the capacity to use Open Badges in the field of Research Data Management (RDM). To produce the MOOC, the ADDIE model was applied as a generic instructional design model and a systematic approach to design and development following the five design phases: Analysis, Design, Development, Implementation, Evaluation. This paper outlines the MOOC production, including the methods, templates and tools used in this process, such as the interactive micro-content created with H5P in the form of Open Educational Resources and the digital credentials created with Open Badges and issued to MOOC participants upon successful completion of MOOC levels. The paper also outlines the results from a qualitative evaluation, which applied the cognitive walkthrough methodology to elicit user requirements. The paper ends with conclusions about the pros and cons of using the ADDIE model in MOOC production and formulates recommendations for further work in this area.
Role of dietary sulfonates in the stimulation of gut bacteria promoting intestinal inflammation
(2021)
The interplay between intestinal microbiota and host has increasingly been recognized as a major factor impacting health. Studies indicate that diet is the most influential determinant affecting the gut microbiota. A diet rich in saturated fat was shown to stimulate the growth of the colitogenic bacterium Bilophila wadsworthia by enhancing the secretion of the bile acid taurocholate (TC). The sulfonated taurine moiety of TC is utilized as a substrate by B. wadsworthia. The resulting overgrowth of B. wadsworthia was accompanied by an increased incidence and severity of colitis in interleukin (IL)-10-deficient mice, which are genetically prone to develop inflammation.
Based on these findings, the question arose whether the intake of dietary sulfonates also stimulates the growth of B. wadsworthia and thereby promotes intestinal inflammation in genetically susceptible mice. Dietary sources of sulfonates include green vegetables and cyanobacteria, which contain the sulfolipids sulfoquinovosyl diacylglycerols (SQDG) in considerable amounts. Based on literature reports, the gut commensal Escherichia coli is able to release sulfoquinovose (SQ) from SQDG and, in further steps, convert SQ to 2,3-dihydroxypropane-1-sulfonate (DHPS) and dihydroxyacetone phosphate. DHPS may then be utilized as a growth substrate by B. wadsworthia, resulting in the formation of sulfide. Both sulfide formation and a high abundance of B. wadsworthia have been associated with intestinal inflammation.
In the present study, conventional IL-10-deficient mice were fed either a diet supplemented with the SQDG-rich cyanobacterium Spirulina (20%, SD) or a control diet. In addition, SQ, TC, or water was administered orally to conventional or gnotobiotic IL-10-deficient mice. The gnotobiotic mice harbored a simplified human intestinal microbiota (SIHUMI), either with or without B. wadsworthia. During the intervention period, the body weight of the mice was monitored, the colon permeability was assessed and fecal samples were collected. After the three-week intervention, the animals were examined with regard to inflammatory parameters, microbiota composition and sulfonate concentrations at different intestinal sites.
None of the mice treated with the above-mentioned sulfonates showed weight loss or intestinal inflammation. Only mice fed SD or gavaged with TC displayed a slight immune response. These mice also displayed an altered microbiota composition, which was not observed in mice gavaged with SQ. The abundance of B. wadsworthia was strongly reduced in mice fed SD, while that of mice treated with SQ or TC was in part slightly increased. The intestinal SQ concentration was elevated in mice orally treated with SD or SQ, whereas neither TC nor taurine concentrations were consistently elevated in mice gavaged with TC. Additional colonization of SIHUMI mice with B. wadsworthia resulted in a mild inflammatory response, but only in mice treated with TC. In general, TC-mediated effects on the immune system and on the abundance of B. wadsworthia were not as strong as described in the literature.
In summary, neither the tested dietary sulfonates nor TC led to bacteria-induced intestinal inflammation in the IL-10-deficient mouse model, which was consistently observed in both conventional and gnotobiotic mice. For humans, this means that foods containing SQDG, such as spinach or Spirulina, do not increase the risk of intestinal inflammation.
Background
Rupture of the anterior cruciate ligament (ACL) can lead to impaired knee function. Reconstruction decreases the mechanical instability but might not have an impact on sensorimotor alterations.
Objective
Evaluation of sensorimotor function, measured with the active joint position sense (JPS) test, in ACL-reconstructed patients compared to the contralateral side and a healthy control group.
Methods
The databases MEDLINE, CINAHL, EMBASE, PEDro, Cochrane Library and SPORTDiscus were systematically searched from inception until April 2020. Studies published in English, German, French, Spanish or Italian were included. Evaluation of sensorimotor performance was restricted to the active joint position sense test in ACL-reconstructed participants and healthy controls. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. Study quality was evaluated using the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies. Data were descriptively synthesized.
Results
Ten studies were included after application of the selection criteria. Higher angular deviation, reaching a significant difference (p < 0.001) in one study, was shown up to three months after surgery in the affected limb. Six months post-operatively, significantly less error (p < 0.01) was found in the reconstructed leg compared to the contralateral side and healthy controls. One or more years after ACL reconstruction, significant differences were inconsistent across the studies.
Conclusions
Altered sensorimotor function was present after ACL reconstruction. Due to inconsistencies and small magnitudes, the clinical relevance might be questionable. JPS testing can be performed in acutely injured persons, and prospective studies could enhance knowledge of sensorimotor function throughout the rehabilitative process.
The present publication of the dissertation "Nutzungsfokussierte Evaluation in der Lehrkräftefortbildung Belcantare Brandenburg für musikunterrichtende Grundschullehrer*innen im ländlichen Raum" (utilization-focused evaluation of the teacher training programme Belcantare Brandenburg for primary school teachers who teach music in rural areas) is an actor-oriented, exploratory evaluation. Since 2011, the Landesmusikrat Brandenburg e.V., in cooperation with several institutions, has been running this two-year training programme across the regions of the state of Brandenburg for both trained music teachers and teachers teaching music out of field, in the competence areas of singing and song didactics.
The underlying evaluation approach places the interests of the cooperating partners, who intend to draw practical consequences from the evaluation results, at the centre of the research; it is thus commissioned research. The evaluation has the functions of assuring and optimizing the content quality of the teacher training, expanding knowledge about the design of subject-didactic coaching, making the research results visible for purposes of legitimation and participation, and providing analytical decision support for the continuation of Belcantare Brandenburg beyond 2022.
The research concerns brought to the author by the actors were condensed into four questions:
1. How satisfied are the participants with the series of events?
2. Which subject-related, didactic, and personal developments do the participating teachers perceive in themselves over the course of the training period?
3. How do the coaching participants assess the opportunities and limits of music-didactic coaching as a form of training?
4. What conclusions regarding professional teacher training can be drawn from contrasting the empirical findings with those of theory?
These research questions were answered in two research phases:
1. The empirical data corpus was compiled between 2011 and 2015. During this time, research questions 1, 2, and 3 were particularly relevant for project-accompanying quality assurance and for the continuation of the pilot and follow-up rounds of Belcantare Brandenburg. The evaluation study is exploratory in design: the variables for research questions 1 and 2 were derived successively from document analyses and from interviews with the project management and participating teachers. Likewise, the semi-closed questionnaires serving as the central survey instruments for research questions 1 and 2 match this exploratory character and ensured that the participants (N=40) were given the opportunity to contribute their own perspectives. With an overall grade of "very good" (1.39) from the surveyed teachers, the design of the event series counts as a best-practice example: for the teachers, the essential criteria for making use of such a professionalization measure are the action-oriented development of teaching content and learning objects that suit their pupils and topics and can be used immediately or practised repeatedly, together with matching classroom materials. The teacher development in the two rounds studied shows that the out-of-field teachers perceived greater developmental gains after the end of the project than the subject specialists did. At the same time, the self-assessed subject competence of the out-of-field teachers at the end of the training remained below that of the specialists.
Research question 3 is based on an exclusively qualitative design (N=16). As a result, the open form of subject-didactic coaching could be defined, its parameters described, and essential properties of coach constellations for internally differentiated coaching in teacher training identified.
2. In May 2019, in view of the worsening shortage of qualified teachers in Brandenburg, the cooperation partners resolved to continue the teacher training beyond 2022 as a quality assurance measure. This situation led in 2019 to the addition of research question 4, which implied a comprehensive and updated analysis of the theoretical and education-policy background of the intervention, with the aim of deepening the evaluation's findings for a renewed recommendation. Addressing and designing self-directed learning processes in professionalizing teacher training emerged here as a central feature of an innovative learning culture.
The publication is divided into four parts: Part I presents the state of research on professionalizing teacher training from the perspectives of educational science and music pedagogy. Part II establishes the complex rationale for the object of evaluation. Part III contains the evaluation study itself. Its inductively derived findings are contrasted in Part IV with the state of research on professionalizing teacher training.
As a long-standing and active member of the Gesellschaft Naturforschender Freunde zu Berlin (GNF), Ehrenberg decisively shaped the society's activities and reputation. His contributions to its meetings and publications, as well as his role as a resident of the society's own house, fostered both progress and continuity within the GNF. The goal of eighteenth-century natural history, namely the discovery, description, and preservation of as many species of organisms and rocks as possible, was carried on in the GNF into the second half of the nineteenth century. Ehrenberg's scientific achievements, in particular his numerous discoveries with the aid of the microscope, stand firmly in this tradition.
Northern range margin populations of the European fire-bellied toad (Bombina bombina) have rapidly declined during recent decades. Extensive agricultural land use has fragmented the landscape, leading to habitat disruption and loss, as well as eutrophication of ponds. In Northern Germany (Schleswig-Holstein) and Southern Sweden (Skåne), this population decline resulted in decreased gene flow from surrounding populations, low genetic diversity, and a putative reduction in adaptive potential, leaving populations vulnerable to future environmental and climatic changes. Previous studies using mitochondrial control region and nuclear transcriptome-wide SNP data detected introgressive hybridization in multiple northern B. bombina populations after unreported release of toads from Austria. Here, we determine the impact of this introgression by comparing the body condition (a proxy for fitness) of introgressed and nonintrogressed populations and the genetic consequences in two candidate genes for putative local adaptation (the MHC II gene, part of the adaptive immune system, and the 70 kDa heat shock protein gene HSP70, involved in the stress response). We detected regional differences in body condition and observed significantly elevated within-individual MHC allele counts in introgressed Swedish populations, associated with a tendency toward higher body weight, relative to regional nonintrogressed populations. These differences were not observed among introgressed and nonintrogressed German populations. Genetic diversity in both MHC and HSP was generally lower in northern than in Austrian populations. Our study sheds light on the potential benefits of translocations of more distantly related conspecifics as a means to increase adaptive genetic variability and fitness of genetically depauperate range margin populations without distortion of local adaptation.
Due to their isolated and often fragmented nature, range margin populations are especially vulnerable to rapid environmental change. To maintain genetic diversity and adaptive potential, gene flow from disjunct populations might therefore be crucial to their survival. Translocations are often proposed as a mitigation strategy to increase genetic diversity in threatened populations. However, this also includes the risk of losing locally adapted alleles through genetic swamping. Human-mediated translocations of southern lineage specimens into northern German populations of the endangered European fire-bellied toad (Bombina bombina) provide an unexpected experimental set-up to test the genetic consequences of an intraspecific introgression from central population individuals into populations at the species range margin. Here, we utilize complete mitochondrial genomes and transcriptome nuclear data to reveal the full genetic extent of this translocation and the consequences it may have for these populations. We uncover signs of introgression in four out of the five northern populations investigated, including a number of introgressed alleles ubiquitous in all recipient populations, suggesting a possible adaptive advantage. Introgressed alleles dominate at the MTCH2 locus, associated with obesity/fat tissue in humans, and the DSP locus, essential for the proper development of epidermal skin in amphibians. Furthermore, we found loci where local alleles were retained in the introgressed populations, suggesting their relevance for local adaptation. Finally, comparisons of genetic diversity between introgressed and nonintrogressed northern German populations revealed an increase in genetic diversity in all German individuals belonging to introgressed populations, supporting the idea of a beneficial transfer of genetic variation from Austria into North Germany.
While a growing body of literature finds positive impacts of Start-Up Subsidies (SUS) on labor market outcomes of participants, little is known about how the design of these programs shapes their effectiveness and hence how to improve policy. As experimental variation in program design is unavailable, we exploit the 2011 reform of the current German SUS program for the unemployed, which strengthened caseworkers' discretionary power, increased entry requirements and reduced monetary support. We estimate the impact of the reform on the program's effectiveness using samples of participants and non-participants from before and after the reform. To control for time-constant unobserved heterogeneity as well as differential selection patterns based on observable characteristics over time, we combine Difference-in-Differences with inverse probability weighting using covariate balancing propensity scores. Holding participants' observed characteristics as well as macroeconomic conditions constant, the results suggest that the reform was successful in raising employment effects on average. As these findings may be contaminated by changes in selection patterns based on unobserved characteristics, we assess our results using simulation-based sensitivity analyses and find that our estimates are highly robust to changes in unobserved characteristics. Hence, the reform most likely had a positive impact on the effectiveness of the program, suggesting that increasing entry requirements and reducing support increased the program's impacts while reducing the cost per participant.
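The weighted difference-in-differences idea can be sketched on synthetic data as follows. The data-generating process and all numbers are invented; the unit weights merely mark where inverse-probability weights (e.g. from covariate balancing propensity scores) would enter in an applied analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy evaluation data: outcome y for treated/control units observed
# before and after a reform; the true effect on the treated is 2.0.
n = 5000
treated = rng.integers(0, 2, n)            # participation indicator
post = rng.integers(0, 2, n)               # pre/post-reform indicator
true_effect = 2.0
y = (1.0 + 0.5 * treated + 0.8 * post
     + true_effect * treated * post
     + rng.normal(0.0, 1.0, n))

# In practice w would be inverse-probability weights from a propensity
# model; here they are 1 because this toy data has no selection.
w = np.ones(n)

def wmean(mask):
    return np.average(y[mask], weights=w[mask])

# Difference-in-differences: the before/after change for the treated
# minus the before/after change for the controls.
did = ((wmean((treated == 1) & (post == 1)) - wmean((treated == 1) & (post == 0)))
       - (wmean((treated == 0) & (post == 1)) - wmean((treated == 0) & (post == 0))))
print(f"DiD estimate: {did:.2f}")          # close to the true effect 2.0
```

The subtraction of the control group's change removes common time trends, and the weights reweight the comparison groups so that their observable characteristics are balanced.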
In many countries, women are over-represented among low-wage employees, which is why a wage floor could benefit them particularly. Following this notion, we analyse the impact of the German minimum wage introduction in 2015 on the gender wage gap. Germany poses an interesting case study in this context, since it has a rather high gender wage gap and set the minimum wage at a relatively high level, affecting more than four million employees. Based on individual data from the Structure of Earnings Survey, containing information on over one million employees working in 60,000 firms, we use a difference-in-differences framework that exploits regional differences in the bite of the minimum wage. We find a significant negative effect of the minimum wage on the regional gender wage gap. Between 2014 and 2018, the gap at the 10th percentile of the wage distribution was reduced by 4.6 percentage points (or 32%) in regions that were strongly affected by the minimum wage compared to less affected regions. For the gap at the 25th percentile, the effect still amounted to -18%, while for the mean it was smaller (-11%) and not particularly robust. We thus find that the minimum wage can indeed reduce gender wage disparities. While the effect is largest for the low-paid, it also reaches up into higher parts of the wage distribution.
In a world fighting dramatic global warming caused by human activities, research towards the development of renewable energies plays a crucial role. Solar energy is one of the most important clean energy sources, and its role in satisfying the global energy demand is set to increase. In this context, a particular class of materials has captured the attention of the scientific community for its attractive properties: halide perovskites. Devices with perovskite as the light absorber have developed impressively within the last decade, nowadays reaching efficiencies comparable to mature photovoltaic technologies like silicon solar cells. Yet, there are still several roadblocks to overcome before widespread commercialization of such devices is possible. One of the critical points lies at the interfaces: perovskite solar cells (PSCs) are made of several layers with different chemical and physical features. For the device to function properly, these properties have to be well matched.
This dissertation deals with some of the challenges related to interfaces in PSCs, with a focus on the interface between the perovskite material itself and the subsequent charge transport layer. In particular, molecular assemblies with specific properties are deposited on the perovskite surface to functionalize it. The functionalization adjusts the energy level alignment, reduces interfacial losses, and improves stability.
First, a strategy to tune the perovskite's energy levels is introduced: self-assembled monolayers of dipolar molecules are used to functionalize the surface, simultaneously obtaining a shift in the vacuum level position and a saturation of the dangling bonds at the surface. A shift in the vacuum level corresponds to an equal change in work function, ionization energy, and electron affinity. The direction of the shift depends on the direction of the collective interfacial dipole. The magnitude of the shift can be tailored by controlling the deposition parameters, such as the concentration of the solution used for the deposition. The shift for different molecules is characterized by several non-invasive techniques, in particular Kelvin probe measurements. Overall, it is shown that the perovskite energy levels can be shifted in both directions by several hundred meV. Moreover, interesting insights into the deposition dynamics of the molecules are revealed.
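The link between the collective interfacial dipole and the vacuum-level shift is commonly estimated with the Helmholtz equation; this is standard textbook background rather than a formula quoted from the thesis:

```latex
\Delta \Phi = \frac{e \, N \, \mu_{\perp}}{\varepsilon_0 \, \varepsilon_r}
```

where \(N\) is the areal density of adsorbed molecules, \(\mu_{\perp}\) the component of the molecular dipole moment normal to the surface, and \(\varepsilon_r\) an effective dielectric constant of the monolayer. This relation makes clear why both the sign of the shift (via the dipole orientation) and its magnitude (via deposition parameters that control \(N\), such as solution concentration) can be tuned.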
Secondly, the application of this strategy in perovskite solar cells is explored. Devices with different perovskite compositions ("triple cation perovskite" and MAPbBr3) are prepared. The two resulting model systems present different energetic offsets at the perovskite/hole-transport layer interface. Upon tailored perovskite surface functionalization, devices with MAPbBr3 show a stabilized open-circuit voltage (Voc) enhancement of approximately 60 mV on average, while the impact on triple-cation solar cells is limited. This suggests that the proposed energy level tuning method is valid, but that its effectiveness depends on factors such as the significance of the energetic offset compared to the other losses in the devices.
Finally, the method presented above is further developed by incorporating the ability to interact with the perovskite surface directly into a novel hole-transport material (HTM), named PFI. The HTM can anchor to the perovskite halide ions via halogen bonding (XB). Its behaviour is compared to that of another HTM (PF) with the same chemical structure and properties except for the ability to form XB. The interaction of perovskite with PFI and PF is characterized through UV-Vis spectroscopy, atomic force microscopy, and Kelvin probe measurements combined with simulations. Compared to PF, PFI exhibits enhanced resilience against solvent exposure and improved energy level alignment with the perovskite layer. As a consequence, devices comprising PFI show enhanced Voc and operational stability during maximum-power-point tracking, in addition to reduced hysteresis. XB promotes the formation of a high-quality interface by anchoring to the halide ions and forming a stable, ordered interfacial layer, making it a particularly interesting candidate for the development of tailored charge transport materials in PSCs.
Overall, the results presented in this dissertation introduce and discuss a versatile tool to functionalize the perovskite surface and tune its energy levels. The application of this method in devices is explored, and insights into its challenges and advantages are given. Within this frame, the results shed light on XB as an ideal interaction for enhancing stability and efficiency in perovskite-based devices.
Building on the theoretical foundations of the development of the human-animal relationship and the emergence of Human-Animal Studies (HAS), this Master's thesis addresses the question of how people's awareness of a moral and sensitive treatment of animals can be raised. Specifically, the thesis examines how the human-animal relationship and animal rights can become part of civics teaching (Politikunterricht). To narrow down the vast repertoire of possibilities that this consideration opens up, the human-animal relationship was examined primarily with regard to (conventional) livestock farming.
The results show that the topic of the human-animal relationship in general, and the teaching concept developed here in particular, are suitable for civic education. Furthermore, the thesis finds that the topic offers a wide range of connecting points, both for civics teaching and for other school subjects.
The COVID-19 pandemic emergency has forced a profound reshaping of our lives. Our ways of working and studying have been disrupted, accelerating the shift to the digital world. To adapt properly to this change, we urgently need to outline and implement new strategies and approaches that put learning at the center, supporting workers and students in further developing "future-proof" skills. Recently, universities and educational institutions have demonstrated that they can play an important role in this context, also leveraging the potential of Massive Open Online Courses (MOOCs), which have proved to be an important vehicle of flexibility and adaptation in a general context characterised by many constraints. Since March 2020, we have witnessed exponential growth in MOOC enrollment numbers, with "traditional" students interested in topics not necessarily tied to their curricular studies. To support students and faculty during the pandemic, Politecnico di Milano focused on one main dimension: faculty development for better integration of digital tools and content into the e-learning experience. The current discussion focuses on how to improve the integration of MOOCs into in-presence activities to create meaningful learning and teaching experiences, leveraging blended learning approaches to engage both students and external stakeholders and to equip them with skills relevant to future jobs.
The Earth's electron radiation belts exhibit a two-zone structure, with the outer belt being highly dynamic due to the constant competition between a number of physical processes, including acceleration, loss, and transport. The flux of electrons in the outer belt can vary over several orders of magnitude, reaching levels that may disrupt satellite operations. Therefore, understanding the mechanisms that drive these variations is of high interest to the scientific community.
In particular, the important role played by loss mechanisms in controlling relativistic electron dynamics has become increasingly clear in recent years. It is now widely accepted that radiation belt electrons can be lost either by precipitation into the atmosphere or by transport across the magnetopause, called magnetopause shadowing. Precipitation of electrons occurs due to pitch-angle scattering by resonant interaction with various types of waves, including whistler mode chorus, plasmaspheric hiss, and electromagnetic ion cyclotron waves. In addition, the compression of the magnetopause due to increases in solar wind dynamic pressure can substantially deplete electrons at high L shells where they find themselves in open drift paths, whereas electrons at low L shells can be lost through outward radial diffusion. Nevertheless, the role played by each physical process during electron flux dropouts still remains a fundamental puzzle.
Differentiation between these processes and quantification of their relative contributions to the evolution of radiation belt electrons requires high-resolution profiles of phase space density (PSD). However, such profiles are difficult to obtain, because spacecraft observations are restricted to a single measurement in space and time and are further compounded by instrument inaccuracy. Data assimilation techniques aim to blend incomplete and inaccurate spaceborne data with physics-based models in an optimal way. In the Earth's radiation belts, data assimilation is used to reconstruct the entire radial profile of electron PSD, and it has become an increasingly important tool for validating our current understanding of radiation belt dynamics, identifying new physical processes, and predicting the near-Earth hazardous radiation environment.
In this study, sparse measurements from Van Allen Probes A and B and Geostationary Operational Environmental Satellites (GOES) 13 and 15 are assimilated into the three-dimensional Versatile Electron Radiation Belt (VERB-3D) diffusion model, by means of a split-operator Kalman filter over a four-year period from 01 October 2012 to 01 October 2016. In comparison to previous works, the 3D model accounts for more physical processes, namely mixed pitch angle-energy diffusion, scattering by EMIC waves, and magnetopause shadowing. It is shown how data assimilation, by means of the innovation vector (the residual between observations and model forecast), can be used to account for missing physics in the model. This method is used to identify the radial distances from the Earth and the geomagnetic conditions where the model is inconsistent with the measured PSD for different values of the adiabatic invariants mu and K. As a result, the Kalman filter adjusts the predictions in order to match the observations, and this is interpreted as evidence of where and when additional source or loss processes are active.
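The role of the innovation vector can be illustrated with a single, generic Kalman filter analysis step. This is a minimal sketch, not the VERB-3D split-operator implementation; the state, operators, and numbers are illustrative.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman filter analysis step.

    x, P: forecast state and covariance; z: observation;
    H: observation operator; R: observation-error covariance.
    Returns the updated state and covariance, plus the innovation
    (the residual between the observation and the model forecast).
    """
    innovation = z - H @ x
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_a = x + K @ innovation
    P_a = (np.eye(len(x)) - K @ H) @ P
    return x_a, P_a, innovation

# Forecast underestimates the observed (PSD-like) quantity by 2 units:
x = np.array([1.0, 2.0])
P = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
x_a, P_a, d = kalman_update(x, P, np.array([3.0]), H, R)
```

A persistently one-signed innovation at particular radial distances and geomagnetic conditions is exactly the kind of signal the study interprets as evidence of source or loss processes missing from the model.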
Furthermore, two distinct loss mechanisms responsible for the rapid dropouts of radiation belt electrons are investigated: EMIC wave-induced scattering and magnetopause shadowing. The innovation vector is inspected for values of the invariant mu ranging from 300 to 3000 MeV/G, and a statistical analysis is performed to quantitatively assess the effect of both processes as a function of various geomagnetic indices, solar wind parameters, and radial distance from the Earth. The results of this work are in agreement with previous studies that demonstrated the energy dependence of these two mechanisms. EMIC wave scattering dominates loss at lower L shells, where it may amount to between 10%/hr and 30%/hr of the maximum value of PSD over all L shells for fixed first and second adiabatic invariants. On the other hand, magnetopause shadowing is found to deplete electrons across all energies, mostly at higher L shells, resulting in loss from 50%/hr to 70%/hr of the maximum PSD. Nevertheless, during times of enhanced geomagnetic activity, both processes can operate beyond these locations and encompass the entire outer radiation belt.
The results of this study are two-fold. Firstly, it demonstrates that the 3D data assimilative code provides a comprehensive picture of the radiation belts and is an important step toward performing reanalysis using observations from current and future missions. Secondly, it provides a better understanding of, and critical clues about, the dominant loss mechanisms responsible for the rapid dropouts of electrons at different locations over the outer radiation belt.
There is a general consensus that diverse ecological communities are better equipped to adapt to changes in their environment, but our understanding of the mechanisms by which they do so remains incomplete. Accurately predicting how the global biodiversity crisis affects the functioning of ecosystems, and the services they provide, requires extensive knowledge about these mechanisms.
Mathematical models of food webs have been successful in uncovering many aspects of the link between diversity and ecosystem functioning in small food web modules, containing at most two adaptive trophic levels. Meaningful extrapolation of this understanding to the functioning of natural food webs remains difficult, due to the presence of complex interactions that are not always accurately captured by bitrophic descriptions of food webs. In this dissertation, we expand this approach to tritrophic food web models by including the third trophic level. Using a functional trait approach, coexistence of all species is ensured using fitness-balancing trade-offs. For example, the defense-growth trade-off implies that species may be defended against predation, but this defense comes at the cost of a lower maximal growth rate. In these food webs, the functional diversity on a given trophic level can be varied by modifying the trait differences between the species on that level.
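A minimal sketch of such a fitness-balancing trade-off is shown below. The functional forms and parameter values are illustrative, not those used in the dissertation: a defense trait d in [0, 1] reduces grazing losses at the cost of a lower maximal growth rate.

```python
def max_growth(d, r_max=1.0, cost=0.5):
    """Maximal growth rate declines with investment in defense d."""
    return r_max * (1.0 - cost * d)

def grazing_loss(d, g_max=0.8, predators=1.0):
    """Grazing losses decline with defense d."""
    return g_max * (1.0 - d) * predators

def net_growth(d, predators=1.0):
    return max_growth(d) - grazing_loss(d, predators=predators)

# Under heavy predation the defended phenotype has higher net growth;
# under light predation the fast-growing, undefended phenotype wins:
heavy = (net_growth(0.0, predators=2.0), net_growth(1.0, predators=2.0))
light = (net_growth(0.0, predators=0.1), net_growth(1.0, predators=0.1))
```

Because neither phenotype dominates under all conditions, such a trade-off can balance fitness and maintain coexistence, which is how functional diversity on a trophic level is sustained and varied in these models.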
In the first project, we find that functional diversity promotes high biomass on the top level, which, in turn, leads to a reduction in the temporal variability due to compensatory dynamical patterns governed by the top level. Next, these results are generalized by investigating the average behavior of tritrophic food webs, for wide intervals of all parameters describing species interactions in the food web. We find that the diversity on the top level is most important for determining the biomass and temporal variability of all other trophic levels, and show how biomass is only transferred efficiently to the top level when diversity is high everywhere in the food web. In the third project, we compare the response of a simple food chain to a nutrient pulse perturbation with that of a food web with diversity on every trophic level. By jointly considering the resistance, resilience, and elasticity, we uncover that the response is efficiently buffered when biomass on the top level is high, which is facilitated by functional diversity on every trophic level in the food web. Finally, in the fourth project, we show that even in a simple consumer-resource model without any diversity, top-down control on the intermediate level frequently causes the phase difference between the intermediate and basal level to deviate from the quarter-cycle lag rule. By adding a top predator, we show that these deviations become even more likely, and anti-phase cycles are often observed.
The combined results of these projects show how the properties of the top trophic level, including its functional diversity, have a decisive influence on the functioning of tritrophic food webs from a mechanistic perspective. Because top species are often among the most vulnerable to extinction, our results emphasize the importance of their conservation in ecosystem management and restoration strategies.
Eukaryotic cells can be regarded as complex microreactors capable of performing, in parallel, the various biochemical reactions necessary to sustain life. An essential prerequisite for these complex metabolic reactions to occur is the evolution of lipid membrane-bound organelles enabling compartmentalization of reactions and biomolecules. This allows for spatiotemporal control over the metabolic reactions within the cellular system. Intracellular organization arising from compartmentalization is a key feature of all living cells and has inspired synthetic biologists to engineer such systems with bottom-up approaches.
Artificial cells provide an ideal platform to isolate and study specific reactions without the interference from the complex network of biomolecules present in biological cells. To mimic the hierarchical architecture of eukaryotic cells, multi-compartment assemblies with nested liposomal structures, also referred to as multi-vesicular vesicles (MVVs), have been widely adopted. Most of the previously reported multi-compartment systems adopt bulk methodologies, which suffer from low yield and poor control over size. Microfluidic strategies help circumvent these issues and facilitate a high-throughput and robust technique to assemble MVVs of uniform size distribution.
In this thesis, firstly, the bulk methodologies are explored to build MVVs and implement a synthetic signalling cascade. Next, a polydimethylsiloxane (PDMS)-based microfluidic platform is introduced to build MVVs, and the significance of PEGylated lipids for the successful encapsulation of inner compartments to generate stable multi-compartment systems is highlighted.
Next, a novel two-inlet channel PDMS-based microfluidic device to create MVVs encompassing a three-step enzymatic reaction cascade is presented. A directed reaction pathway comprising the enzymes α-glucosidase (α-Glc), glucose oxidase (GOx), and horseradish peroxidase (HRP), spanning three compartments via reconstitution of size-selective membrane proteins, is described. Furthermore, owing to the monodispersity of our MVVs achieved through microfluidic strategies, this platform is employed to study the effect of compartmentalization on reaction kinetics.
Further integration of a cell-free expression module into the MVVs would allow for gene-mediated signal transduction within artificial eukaryotic cells. Therefore, the chemically inducible cell-free expression of the membrane protein alpha-hemolysin and its subsequent reconstitution into liposomes is carried out.
In conclusion, this thesis aims to build artificial eukaryotic cells that achieve size-selective chemical communication and that also show potential for applications as microreactors and as vehicles for drug delivery.
Transient permeability in porous and fractured sandstones mediated by fluid-rock interactions
(2021)
Understanding the fluid transport properties of subsurface rocks is essential for a large number of geotechnical applications, such as hydrocarbon (oil/gas) exploitation, geological storage (CO2/fluids), and geothermal reservoir utilization. To date, the hydromechanically-dependent fluid flow patterns in porous media and single macroscopic rock fractures have received numerous investigations and are relatively well understood. In contrast, fluid-rock interactions, which may permanently affect rock permeability by reshaping the structure and changing connectivity of pore throats or fracture apertures, need to be further elaborated. This is of significant importance for improving the knowledge of the long-term evolution of rock transport properties and evaluating a reservoir's sustainability. The thesis focuses on geothermal energy utilization, e.g., seasonal heat storage in aquifers and enhanced geothermal systems, where single fluid flow in porous rocks and rock fracture networks under various pressure and temperature conditions dominates.
In this experimental study, outcrop samples (i.e., Flechtinger sandstone, an illite-bearing Lower Permian rock, and Fontainebleau sandstone, consisting of pure quartz) were used for flow-through experiments under simulated hydrothermal conditions. The themes of the thesis are (1) the investigation of clay particle migration in intact Flechtinger sandstone and the coincident permeability damage upon cyclic temperature and fluid salinity variations; (2) the determination of hydro-mechanical properties of self-propping fractures in Flechtinger and Fontainebleau sandstones with different fracture features and contrasting mechanical properties; and (3) the investigation of the time-dependent fracture aperture evolution of Fontainebleau sandstone induced by fluid-rock interactions (i.e., predominantly pressure solution). Overall, the thesis aims to unravel the mechanisms of the instantaneous reduction (i.e., direct responses to thermo-hydro-mechanical-chemical (THMC) conditions) and progressively-cumulative changes (i.e., time-dependence) of rock transport properties.
Permeability of intact Flechtinger sandstone samples was measured under each constant condition, where temperature (room temperature up to 145 °C) and fluid salinity (NaCl: 0 ~ 2 mol/l) were stepwise changed. Mercury intrusion porosimetry (MIP), electron microprobe analysis (EMPA), and scanning electron microscopy (SEM) were performed to investigate the changes of local porosity, microstructures, and clay element contents before and after the experiments. The results indicate that the permeability of illite-bearing Flechtinger sandstone is impaired by heating and exposure to low-salinity pore fluids. The chemically induced permeability variations prove to be path-dependent concerning the applied succession of fluid salinity changes. The permeability decay induced by a temperature increase and a fluid salinity reduction operates by relatively independent mechanisms, i.e., thermo-mechanical and thermo-chemical effects.
Further, the hydro-mechanical investigations of single macroscopic fractures (aligned, mismatched tensile fractures, and smooth saw-cut fractures) illustrate that a relative fracture wall offset could significantly increase fracture aperture and permeability, but the degree of increase depends on fracture surface roughness. X-ray computed tomography (CT) demonstrates that the contact area ratio after the pressure cycles is inversely correlated to the fracture offset. Moreover, rock mechanical properties, determining the strength of contact asperities, are crucial so that relatively harder rock (i.e., Fontainebleau sandstone) would have a higher self-propping potential for sustainable permeability during pressurization. This implies that self-propping rough fractures with a sufficient displacement are efficient pathways for fluid flow if the rock matrix is mechanically strong.
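For context, the standard first-order link between fracture aperture and flow in such experiments is the parallel-plate "cubic law". This is textbook background rather than the thesis's own model, and the variable names and values below are illustrative.

```python
def cubic_law_flow(aperture, width, viscosity, pressure_gradient):
    """Volumetric flow rate (m^3/s) through a smooth parallel-plate
    fracture of hydraulic aperture b: Q = (w * b**3 / (12 * mu)) * dP/dx."""
    return width * aperture**3 / (12.0 * viscosity) * pressure_gradient

# Example: 100-micron aperture, 10 cm wide fracture, water-like viscosity,
# 10 kPa/m pressure gradient
q = cubic_law_flow(aperture=1e-4, width=0.1, viscosity=1e-3,
                   pressure_gradient=1e4)
```

Because the aperture enters cubed, even modest fracture closure (for instance by asperity crushing or pressure solution at contact points) reduces the flow rate disproportionately, which is why the self-propping potential of rough, offset fractures matters so much for sustained permeability.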
Finally, two long-term flow-through experiments with Fontainebleau sandstone samples containing single fractures were conducted with an intermittent flow (~140 days) and continuous flow (~120 days), respectively. Permeability and fluid element concentrations were measured throughout the experiments. Permeability reduction occurred at the beginning stage when the stress was applied, while it converged at later stages, even under stressed conditions. Fluid chemistry and microstructure observations demonstrate that pressure solution governs the long-term fracture aperture deformation, with remarkable effects of the pore fluid (Si) concentration and the structure of contact grain boundaries. The retardation and the cessation of rock fracture deformation are mainly induced by the contact stress decrease due to contact area enlargement and a dissolved mass accumulation within the contact boundaries. This work implies that fracture closure under constant (pressure/stress and temperature) conditions is likely a spontaneous process, especially at the beginning stage after pressurization when the contact area is relatively small. In contrast, a contact area growth yields changes of fracture closure behavior due to the evolution of contact boundaries and concurrent changes in their diffusive properties. Fracture aperture and thus permeability will likely be sustainable in the long term if no other processes (e.g., mineral precipitations in the open void space) occur.
Organisms often employ ecophysiological strategies to exploit environmental conditions and ensure bio-energetic success. However, the many complexities involved in the differential expression and flexibility of these strategies are rarely fully understood. Therefore, for the first time, using a three-part cross-disciplinary laboratory experimental analysis, we investigated the diversity and plasticity of photoresponsive traits employed by one family of environmentally contrasting, ecologically important phytoflagellates. The results demonstrated an extensive inter-species phenotypic diversity of behavioural, physiological, and compositional photoresponse across the Chlamydomonadaceae, and a multifaceted intra-species phenotypic plasticity, involving a broad range of beneficial photoacclimation strategies, often attributable to environmental predisposition and phylogenetic differentiation. Deceptively diverse and sophisticated strong (population and individual cell) behavioural photoresponses were observed, with divergence from a general preference for low light (and flexibility) dictated by intra-familial differences in typical habitat (salinity and trophy) and phylogeny. Notably, contrasting lower, narrow, and flexible compared with higher, broad, and stable preferences were observed in freshwater vs. brackish and marine species. Complex diversity and plasticity in physiological and compositional photoresponses were also discovered. Metabolic characteristics (such as growth rates, respiratory costs and photosynthetic capacity, efficiency, compensation and saturation points) varied elaborately with species, typical habitat (often varying more in eutrophic species, such as Chlamydomonas reinhardtii), and culture irradiance (adjusting to optimise energy acquisition and suggesting some propensity for low light). Considerable variations in intracellular pigment and biochemical composition were also recorded. 
Photosynthetic and accessory pigments (such as chlorophyll a, xanthophyll-cycle components, chlorophyll a:b and chlorophyll a:carotenoid ratios, fatty acid content and saturation ratios) varied with phylogeny and typical habitat (to attune photosystem ratios in different trophic conditions and to optimise shade adaptation, photoprotection, and thylakoid architecture, particularly in freshwater environments), and changed with irradiance (as reaction and harvesting centres adjusted to modulate absorption and quantum yield). The complex, concomitant nature of the results also advocated an integrative approach in future investigations. Overall, these nuanced, diverse, and flexible photoresponsive traits will greatly contribute to the functional ecology of these organisms, addressing environmental heterogeneity and potentially shaping individual fitness, spatial and temporal distribution, prevalence, and ecosystem dynamics.
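The photosynthetic parameters discussed above (capacity, efficiency, compensation and saturation points) are conventionally read off a photosynthesis-irradiance curve. A common hyperbolic-tangent form (Jassby-Platt) is sketched below as general background, with illustrative parameter values rather than the study's measurements.

```python
import numpy as np

def pi_curve(I, p_max, alpha, resp):
    """Net photosynthesis vs irradiance I (Jassby-Platt form).

    p_max: photosynthetic capacity; alpha: initial slope (efficiency);
    resp: dark respiration. The compensation point is where net P = 0;
    light saturation sets in around I_k = p_max / alpha.
    """
    return p_max * np.tanh(alpha * I / p_max) - resp

I = np.linspace(0.0, 500.0, 6)
net = pi_curve(I, p_max=10.0, alpha=0.1, resp=1.0)
i_k = 10.0 / 0.1  # saturation onset at I = 100 in these units
```

Inter-species differences in p_max, alpha, and respiration, and their plasticity under different culture irradiances, are precisely the kind of metabolic variation the study characterizes.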
"If you can't measure it, you can't manage it." This slogan, attributed variously to Peter Drucker, Henry Deming, or Robert Kaplan and David Norton, expresses a deep conviction in the necessity and usefulness of performance management, an approach that has also taken hold of and shaped public administration. At the same time, it implies a decisive role for performance information. This dissertation places that critical element, performance information, and more precisely the use of performance indicators, at the center of its research interest.
The starting point is the scholarly observation that performance indicators are not always and not automatically used in the way that is required and predicted from a theoretical standpoint. Poor implementation of the management approach, or flaws in its theoretical foundation, are possible explanations. The analysis of the state of research makes apparent that explanations are sought primarily in the organizational setting and in factors related to performance management, indicating a rather technocratic, implementation-focused perspective on the problem of use. The intrapersonal level, which is important from a neuroscientific point of view, plays a subordinate role.
Against this background, an empirical study based on neuroscientific findings examined the effect of experience-related variables on usage behavior. It analyzed how experiences arise at the organizational level and how, in detail, they affect usage behavior. Police executives served as the research subjects. The data were collected online in late 2016 and early 2017.
The data analysis and discussion of the findings yield the following key insights:
(1) Experience influences the use of performance information. The type of experience with performance indicators acts as a mediator variable. Organizational factors in particular, such as the maturity of the performance management system, affect usage behavior via the experience factor.
(2) It is also worth noting that engaging with performance indicators positively influences both the stock of experience and the use of indicators. Overall, the neuroscience-inspired variables have proved to be promising explanatory factors.
(3) Furthermore, the study corroborates existing findings, above all the effect of the aforementioned maturity level. However, differences also emerged: for example, the transformational leadership style, in combination with the type of experience, loses its positive effect on indicator use.
(4) The results of the laboratory and quasi-experiments are also noteworthy. For the first time, non-purposeful types of use were observed experimentally. In addition, explanatory approaches from neuroeconomics and behavioral economics were identified and discussed, enriching the research discourse. They offer a new perspective on usage behavior and provide impulses for further research.
These findings weigh heavily for New Public Management (NPM), in whose toolbox this management approach plays a key role. Without functioning performance management, the important reform goal of outcome orientation cannot be achieved. NPM thus risks developing dysfunctions of its own.
Overall, the study of management systems should place a stronger focus on intrapersonal factors. Behavioral anomalies in the context of management, and their implications, should also be examined more closely. It is further evident that a purely technocratic view of performance management is not expedient. Consequently, performance management needs to be developed further, both theoretically and conceptually.
This research thus provides important new insights into the use of performance information and the understanding of performance management. Above all, it broadens the research discourse by demonstrating the explanatory power of intrapersonal factors and by opening new perspectives on the problem of use, methodologically through a mixed-methods approach (a multimethod study) and theoretically through neuroeconomics and behavioral economics.
American occupying forces made the promotion of Jewish-Christian dialogue part of their plans for postwar German reconstruction. They sought to export American models of Jewish-Christian cooperation to Germany, while simultaneously validating and valorizing claims about the connection between democracy and tri-faith religious pluralism in the United States. The small size of the Jewish population in Germany meant that Jews did not set the terms of these discussions, and evidence shows that both German and American Jews expressed skepticism about participating in dialogue in the years immediately following the Holocaust. But opting out would have meant that discussions in Germany about the Judeo-Christian tradition that the American government advanced as the centerpiece of postwar democratic reconstruction would take place without a Jewish contribution. American Jewish leaders, present in Germany and in the US, therefore decided to opt in, not because they supported the project, but because it seemed far riskier to be left out.
This study aimed to compare the training load of a professional under-19 soccer team (U-19) with that of an elite adult team (EAT) from the same club during the in-season period. Thirty-nine healthy soccer players (EAT [n = 20]; U-19 [n = 19]) were involved in the study, which spanned four weeks. Training load (TL) was monitored as external TL, using a global positioning system (GPS), and internal TL, using a rating of perceived exertion (RPE). TL data were recorded after each training session. During soccer matches, players' RPEs were recorded. Internal TL was quantified daily by means of the session rating of perceived exertion (session-RPE) using Borg's 0–10 scale. For GPS data, the selected running-speed intensities (over 0.5 s time intervals) were 12–15.9 km/h; 16–19.9 km/h; 20–24.9 km/h; and >25 km/h (sprint). Distances covered between 16 and 19.9 km/h, >20 km/h and >25 km/h were significantly higher in U-19 than in EAT over the course of the study (p = 0.023, d = 0.243, small; p = 0.016, d = 0.298, small; and p = 0.001, d = 0.564, small, respectively). EAT players performed significantly fewer sprints per week than U-19 players (p = 0.002, d = 0.526, small). RPE was significantly higher in U-19 than in EAT (p = 0.001, d = 0.188, trivial). Both the external and internal measures of TL were significantly higher in the U-19 group than in the EAT soccer players. In conclusion, the results show that the training load is greater in U-19 than in EAT players.
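The session-RPE method referenced above is conventionally computed as the Borg CR-10 rating multiplied by the session duration in minutes (Foster's method). The sketch below is a minimal illustration of that computation; the values are made up for demonstration and are not data from the study:

```python
def session_rpe_load(rpe: float, duration_min: float) -> float:
    """Session-RPE training load (arbitrary units): Borg CR-10 rating
    multiplied by session duration in minutes."""
    if not 0 <= rpe <= 10:
        raise ValueError("RPE must be on Borg's 0-10 scale")
    return rpe * duration_min

# Illustrative training week: daily (RPE, minutes) pairs.
week = [(6, 90), (7, 75), (5, 60), (8, 90)]
weekly_load = sum(session_rpe_load(r, d) for r, d in week)
print(weekly_load)  # 2085
```

Summing daily loads in this way yields the weekly internal TL that such monitoring studies typically compare between groups.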
In the context of the Fostering Women to STEM MOOCs (FOSTWOM) project, we present the general ideas of a gender-balance toolkit: a collection of recommendations and resources for instructional designers, visual designers, and teaching staff to apply when designing and preparing storyboards for MOOCs and their visual components, so that future STEM online courses have a greater chance of being inclusive and gender-balanced. Overall, the FOSTWOM project intends to use the inclusive potential of Massive Open Online Courses to present STEM subjects free of stereotyped assumptions about gender abilities. The consortium is also interested in attracting girls and young women to science and technology careers through accessible online content, which can include interviews with role models, relevant real-world situations, and strong conceptual frameworks.
In the last five years, gravitational-wave astronomy has gone from a purely theoretical field to a thriving experimental science. Several gravitational-wave signals, emitted by stellar-mass binary black holes and binary neutron stars, have been detected, and many more are expected in the future as a consequence of the planned upgrades to the gravitational-wave detectors. The observation of the gravitational-wave signals from these systems, and the characterization of their sources, relies heavily on precise models of the emitted gravitational waveforms. To take full advantage of the increased detector sensitivity, it is therefore necessary to also improve the accuracy of the gravitational-waveform models.
In this work, I present an updated version of the waveform models for spinning binary black holes within the effective-one-body formalism. This formalism is based on the notion that the solution to the relativistic two-body problem varies smoothly with the mass ratio of the binary system, from the equal-mass regime to the test-particle limit. For this reason, it provides an elegant method to combine, under a unique framework, the solution to the relativistic two-body problem in different regimes. The two main regimes combined under the effective-one-body formalism are the slow-motion, weak-field limit (accessible through post-Newtonian theory) and the extreme mass-ratio regime (described using black-hole perturbation theory). The formalism is nevertheless flexible enough to incorporate information about the solution to the relativistic two-body problem obtained with other techniques, such as numerical relativity.
The novelty of the waveform models presented in this work is the inclusion of beyond-quadrupolar terms in the waveforms emitted by spinning binary black holes. While the time variation of the source quadrupole moment is the leading contribution to the waveforms emitted by binary black holes observable by the LIGO and Virgo detectors, beyond-quadrupolar terms can be important for binary systems with asymmetric masses or large total mass, or for systems observed at a large inclination angle with respect to the orbital angular momentum of the binary. For this purpose, I combine the approximate analytic expressions of these beyond-quadrupolar terms with their calculation from numerical relativity to develop an accurate waveform model, including inspiral, merger, and ringdown, for spinning binary black holes. I first construct this model in the simplified case of black holes with spins aligned with the orbital angular momentum of the binary, and then extend it to the case of generic spin orientations. Finally, I test the accuracy of both models against a large number of waveforms obtained from numerical relativity. The waveform models I present in this work are the state of the art for spinning binary black holes, without restrictions on the allowed values of the masses and spins of the system.
The measurement of the source properties of a binary system emitting gravitational waves requires computing O(10⁷–10⁹) different waveforms. Since the waveform models mentioned above can take O(1–10) s to generate a single waveform, they can be difficult to use in data-analysis studies, given the increasing number of sources observed by the LIGO and Virgo detectors. To overcome this obstacle, I use the reduced-order-modeling technique to develop a faster version of the waveform model for black holes with spins aligned with the orbital angular momentum of the binary. This version of the model is as accurate as the original and reduces the time for evaluating a waveform by two orders of magnitude.
The waveform models developed in this thesis have been used by the LIGO and Virgo collaborations in the inference of the source parameters of the gravitational-wave signals detected during the second observing run (O2) and the first half of the third observing run (O3a) of the LIGO and Virgo detectors. Here, I present a study of the source properties of the signals GW170729 and GW190412, in whose analysis I was directly involved. In addition, these models have been used by the LIGO and Virgo collaborations to perform tests of General Relativity using the gravitational-wave signals detected during O3a, and to analyze the population of the observed binary black holes.
This dissertation addresses three thematic focal points. The results section centers on the chemical synthesis of so-called (1,7)-naphthalenophanes, which belong to the substance class of cyclophanes. While numerous synthetic methods pursue strategies for constructing ring systems (such as naphthalenophanes) that are part of a pre-existing aromatic structure in the starting compound, only a few approaches use reactions that establish the ring closure to the desired product only in the course of the synthesis. One benzannulation that has received particular attention in our group is the dehydro-Diels-Alder reaction (DDA reaction). This work demonstrated that twelve selected (1,7)-naphthalenophanes, some of them ring-strained and macrocyclic, can be made accessible via a photochemical variant of the DDA reaction (PDDA reaction). Attempts to prepare (1,7)-naphthalenophanes thermally (TDDA reaction) failed. The exceptional reactivity of the photoreactants could be explained, with the aid of quantum-chemical calculations, by a folded ground-state geometry. In addition, ring strains and structural strain indicators of the relevant photoproducts were determined, and trends as a function of linker length were identified and discussed in the NMR spectra of the target compounds. Furthermore, varying the chromophore (acyl, carboxylic acid, and carboxylic ester) of the photoreactants showed comparable photokinetics and photoreactivity upon irradiation in dichloromethane. The second section of this dissertation is devoted to the design and development of two photoreactors for continuous-flow UV applications, since photochemical transformations are known to be limited in their scalability.
In the first prototype, product quantities of up to n = 188 mmol were achieved for a selected case study by means of efficient parallel operation of up to three UV lamps (λ = 254, 310, and 355 nm). In the structurally much simpler second photoreactor, all quartz-containing elements were replaced with less expensive PLEXIGLAS®, resulting in identical space-time yields for the previously chosen synthesis example. Continuous-flow UV photochemistry thus offers advantages over traditional irradiation in an immersion-well reactor: in terms of reaction time, product yields, and solvent consumption, it is synthetically far superior. In the final section of this work, these findings were used to prepare biomedically and pharmacologically promising 1-arylnaphthalene lignans via an intramolecular PDDA reaction (IMPDDA reaction) as the key step. To this end, three concepts were developed and realized in the total synthesis of three selected target structures based on the 1-arylnaphthalene scaffold.
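The space-time yield used above to compare the two photoreactors is conventionally defined as the amount of product formed per reactor volume per unit time. A minimal sketch of that metric; the numbers are illustrative assumptions, not values from the thesis:

```python
def space_time_yield(product_mol: float, reactor_volume_l: float, time_h: float) -> float:
    """Space-time yield in mol L^-1 h^-1: product amount divided by
    (reactor volume * residence/operation time)."""
    if reactor_volume_l <= 0 or time_h <= 0:
        raise ValueError("volume and time must be positive")
    return product_mol / (reactor_volume_l * time_h)

# Illustrative run: 0.188 mol of product in a 0.5 L reactor over 4 h.
print(round(space_time_yield(0.188, 0.5, 4.0), 3))  # 0.094
```

Identical space-time yields, as reported for the two prototypes, mean that the cheaper reactor matched the throughput of the quartz-based one per unit volume and time.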
In Search of Belonging
(2021)
More than 200,000 Jews left the Habsburg province of Galicia between 1881 and 1910. No longer living in the places of their childhood, they settled in urban centers, such as in New York’s Lower East Side. In this neighborhood, Galician Jews began to search for new relationships that linked the places they left and the ones where they arrived and settled. By looking at Galicia through the lens of autobiographical writings by former Jewish immigrants who became established residents of New York, this article emphasizes the role of regionalism in the context of transnational conceptions of a new American Jewish self-understanding. It argues that the key to analyzing the evolution of “eastern Europe” as a common place of origin for American Jewry is the constant dialogue between the places of origin and arrival. Specifically, philanthropic efforts during and after the First World War and the proliferation of tourism both enabled these settled immigrants to gradually replace regional notions, such as the idea of Galicia, with a mythical image of eastern Europe to create a sense of community as American Jews.
Games and game-typical elements such as collecting loyalty points are an everyday fixture, and they are increasingly used in companies and in learning environments. However, gamification as a method has so far been little classified for the educational context and has hardly been made accessible to teachers.
This bachelor's thesis therefore aims to present a systematic structuring and treatment of gamification, along with innovative approaches for using game-typical elements in teaching, specifically in mathematics instruction. This can provide a foundation for other subjects as well as for other forms of teaching, thereby demonstrating the feasibility of gamification in one's own courses.
The thesis explains why, and by means of which elements, gamification can increase learners' motivation and willingness to perform in the long term, foster social and personal competencies, and encourage learners to be more active. In addition, gamification is explicitly related to fundamental principles of mathematics didactics, thus highlighting its relevance for mathematics instruction.
Subsequently, the individual elements of gamification, such as points, levels, badges, characters, and narrative frame, are described schematically along a classification developed specifically for the educational context, "FUN" (Feedback – User-specific elements – Neutral elements); their functions and effects are presented, and possible uses in the classroom are shown. This includes ideas on feedback conducive to learning, options for differentiation, and the design of the lesson frame, all of which can be implemented in courses of any kind. The thesis also contains a specific example: a lesson plan for a gamified mathematics lesson, including the accompanying working materials, which illustrates the use of gamification.
Gamification often offers advantages over traditional instruction but, like any method, must be adapted to the content and the target group. Further research could address specific motivational structures, person-specific differences, and mathematical content such as problem solving or switching between different representations with regard to gamified forms of teaching.
Influenza A virus (IAV) is a pathogen responsible for severe seasonal epidemics threatening human and animal populations every year. During the viral assembly process in infected cells, the plasma membrane (PM) has to bend in localized regions into a vesicle towards the extracellular side. Studies in cellular models have proposed that different viral proteins (including M1) might be responsible for inducing membrane curvature in this context, but a clear consensus has not been reached. M1 is the most abundant protein in IAV particles. It plays an important role in virus assembly and budding at the PM. M1 is recruited to the host cell membrane, where it associates with lipids and other viral proteins. However, the details of M1 interactions with the cellular PM, as well as M1-mediated membrane bending at the budozone, have not been clarified.
In this work, we used several experimental approaches to analyze M1-lipids and M1-M1 interactions. By performing SPR analysis, we quantified membrane association for full-length M1 and different genetically engineered M1 constructs (i.e., N- and C-terminally truncated constructs and a mutant of the polybasic region). This allowed us to obtain novel information on the protein regions mediating M1 binding to membranes. By using fluorescence microscopy, cryogenic transmission electron microscopy (cryo-TEM), and three-dimensional (3D) tomography (cryo-ET), we showed that M1 is indeed able to cause membrane deformation on vesicles containing negatively-charged lipids, in the absence of other viral components. Further, sFCS analysis proved that simple protein binding is not sufficient to induce membrane restructuring. Rather, it appears that stable M1-M1 interactions and multimer formation are required to alter the bilayer three-dimensional structure through the formation of a protein scaffold.
Finally, to mimic the budding mechanism in cells, which arises from the lateral organization of the viral membrane components on lipid raft domains, we created vesicles with lipid domains. Our results showed that local binding of M1 to spatially confined acidic lipids within membrane domains of the vesicles led to local inward curvature.
This thesis deals with ventures founded by academics with a migration background. It examines in particular the relationship of these ventures to the environment in which they take place, the entrepreneurial ecosystem, as well as their mutual interactions. The object of research is the intersection of entrepreneurship, migration, and high qualification. The focus on the very specific target group of ventures founded by academics with a migration background fills an important gap in previous research.
Methodologically, this thesis employs a theoretical frame of reference consisting of neo-institutionalist organization theory (Meyer & Rowan 1977), the resource-dependence approach (Pfeffer & Salancik 1978), and the six-dimensional model of the entrepreneurial ecosystem (Isenberg 2011). Ventures founded by academics with a migration background must adapt their internal design to the demands of the institutional environment in order to secure the necessary legitimacy. As a result, isomorphic organizational structures can emerge across different ventures. Moreover, through interorganizational activities, academic founders with a migration background can gain or ease access to non-substitutable resources for founding and developing their businesses. The combination of the two theories and the explanatory model is therefore an effective and fitting analytical tool for this research, and it provides readers with a complete picture at both the micro and the macro level.
This thesis includes not only data from secondary sources and existing quantitative studies in its descriptive part, but also first-hand information from the author's own qualitative investigation in the empirical part, for which a total of 23 semi-structured expert interviews were conducted. Using content analysis following Mayring (2014), several categories were extracted, including, for example, environmental factors influencing legitimacy and non-substitutable resources for ventures founded by academics. In addition, the empirical work yielded several hypotheses for future quantitative research and concrete recommendations for practice.
From the mid-1820s onward, microscope technology developed rapidly. As the optical aberrations were gradually corrected, resolution improved by a factor of 10 by the end of the century, from 3 μm to 0.3 μm. Around 1820, Christian Gottfried Ehrenberg began his microscopic investigations. He initially used a simple Nuremberg microscope. In 1832 he acquired a microscope from the Berlin workshop of Pistor & Schiek, which he then used for the rest of his life. A comparison of the instrument's capability with the resolution required for his investigations shows that it was entirely sufficient for his purposes. For his specimens he developed preparation techniques and storage methods for permanent mounts. Since he also produced the microscopic illustrations himself, up to and including the templates for the copper plates, he always kept the entire process in his own hands, from preparation to the printing of the results.
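The 0.3 μm figure cited above is close to the diffraction limit of light microscopy, conventionally estimated with the Abbe formula d = λ / (2 NA). A minimal sketch; the wavelength and numerical aperture below are illustrative assumptions, not values from the article:

```python
def abbe_limit_um(wavelength_nm: float, numerical_aperture: float) -> float:
    """Abbe diffraction limit d = lambda / (2 * NA), returned in micrometres."""
    if numerical_aperture <= 0:
        raise ValueError("numerical aperture must be positive")
    return (wavelength_nm / (2.0 * numerical_aperture)) / 1000.0

# Illustrative values: green light (550 nm) and a late-19th-century
# high-NA objective (NA ~ 0.9) give roughly the 0.3 um limit cited above.
print(round(abbe_limit_um(550, 0.9), 2))  # 0.31
```

This shows why resolution plateaued near 0.3 μm at the end of the century: further gains required shorter wavelengths or higher numerical apertures, not better aberration correction.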
The outstanding mechanical properties of natural inorganic-organic composite materials such as bone or mussel shells arise from their hierarchical structure, which extends from the nanoscale up to the macroscopic level, and from controlled bonding along the interfaces of the inorganic and organic components.
Starting from these key principles of biological materials design, this work investigated two concepts for the bioinspired structure formation of composites, based on gluing nano- or mesocrystals together with functionalized poly(2-oxazoline) block copolymers, and examined their potential for producing bioinspired, self-assembled, hierarchical inorganic-organic composite structures without external forces. The concepts differed in the inorganic particles used and in the mode of structure formation.
Via a modular approach combining polymer synthesis and polymer-analogous thiol-ene functionalization, a library of poly(2-oxazoline)s with different functionalities was successfully created. The block copolymers consist of a short, particle-affine "adhesive block" of thiol-ene-functionalized poly(2-(3-butenyl)-2-oxazoline) and a long, water-soluble, structure-forming block of thermoresponsive and crystallizable poly(2-isopropyl-2-oxazoline), which forms hierarchical morphologies. Various analytical methods such as turbidimetry, DLS, DSC, SEM, and XRD revealed the thermoresponsive and crystallization behavior of the block copolymers as a function of the introduced adhesive block. These polymers were found to exhibit complex temperature- and pH-dependent clouding behavior. With regard to crystallization, the adhesive block did not change the nanoscopic crystal structure; it did, however, influence the crystallization time, the degree of crystallization, and the hierarchical morphology. This result was attributed to the different aggregation behavior of the polymers in water.
For the preparation of composites, concept 1 used micrometer-sized copper oxalate mesocrystals, which possess an internal nanostructure. Structure formation via the inorganic part was pursued by gluing and arranging these particles. Concept 1 yielded homogeneous, free-standing, stable composite films with a high inorganic content. However, the particle-polymer combination united unfavorable properties: the length scales of the components were too dissimilar, which prevented self-assembly of the particles. Owing to the low aspect ratio of copper oxalate, mutual alignment by external forces was also unsuccessful. As a result, the copper oxalate-poly(2-oxazoline) model system is not suitable for producing hierarchical composite structures.
In contrast, concept 2 uses disc-shaped Laponite® nanoparticles and crystallizable block copolymers for structure formation via the organic component through polymer-mediated self-assembly. Complementary analytical methods (zeta potential, DLS, SEM, XRD, DSC, TEM) showed both a controlled interaction between the components in an aqueous environment and controlled structure formation, resulting in self-assembled nanocomposites whose structure spans several length scales. It was shown that the negatively charged adhesive blocks bind specifically and selectively to the positively charged rims of the Laponite® particles, producing polymer-Laponite® nanohybrid particles that serve as basic building blocks for composite formation. At room temperature the hybrid particles are electrosterically stabilized: sterically by their long poly(2-isopropyl-2-oxazoline) blocks interacting with water, and electrostatically via the negatively charged Laponite® faces. As a result, concept 2, and thus structure formation via the organic component, was implemented successfully. The Laponite®-poly(2-oxazoline) model system opened the way to self-assembled, layered, quasi-hierarchical nanocomposite structures with a high inorganic content. Depending on the freely available polymer concentration during composite formation, two different composite types emerged. In addition, this work proposed an explanatory model for the polymer-mediated formation process of the composite structures.
Overall, this work reveals structure-process-property relationships for forming self-assembled bioinspired composite structures and provides new insights into a suitable combination of components and preparation conditions that permit controlled, self-assembled structure formation with the aid of functionalized poly(2-oxazoline) block copolymers.
Fluids in the Earth's crust can move by creating and flowing through fractures, in a process called 'hydraulic fracturing'. The tip-line of such fluid-filled fractures grows at locations where stress is larger than the strength of the rock. Where the tip stress vanishes, the fracture closes and the fluid-front retreats. If stress gradients exist on the fracture's walls, induced by fluid/rock density contrasts or topographic stresses, this results in an asymmetric shape and growth of the fracture, allowing for the contained batch of fluid to propagate through the crust.
The state-of-the-art analytical and numerical methods to simulate fluid-filled fracture propagation are two-dimensional (2D). In this work I extend these to three dimensions (3D). In my analytical method, I approximate the propagating 3D fracture as a penny-shaped crack that is influenced by both an internal pressure and stress gradients. In addition, I develop a numerical method to model propagation where curved fractures can be simulated as a mesh of triangular dislocations, with the displacement of faces computed using the displacement discontinuity method. I devise a rapid technique to approximate stress intensity and use this to calculate the advance of the tip-line. My 3D models can be applied to arbitrary stresses, topographic and crack shapes, whilst retaining short computation times.
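For the penny-shaped crack idealization mentioned above, a classical closed-form result (Sneddon's solution) gives the mode-I stress-intensity factor of a circular crack of radius a under uniform internal pressure p as K_I = 2p√(a/π). A minimal sketch of that textbook formula, with illustrative values rather than the thesis's own code or data:

```python
import math

def penny_crack_k1(pressure_pa: float, radius_m: float) -> float:
    """Mode-I stress-intensity factor (Pa*sqrt(m)) for a penny-shaped crack
    of radius a under uniform internal pressure p: K_I = 2 * p * sqrt(a / pi)."""
    if radius_m <= 0:
        raise ValueError("crack radius must be positive")
    return 2.0 * pressure_pa * math.sqrt(radius_m / math.pi)

# Illustrative values: 1 MPa fluid overpressure in a 100 m radius crack.
k1 = penny_crack_k1(1e6, 100.0)
print(f"{k1:.3e} Pa*sqrt(m)")  # prints "1.128e+07 Pa*sqrt(m)"
```

Comparing such a K_I against the rock's fracture toughness is the basic propagation criterion that tip-line growth schemes of this kind evaluate, here for a uniform pressure rather than the gradient-loaded case treated in the thesis.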
I cross-validate my analytical and numerical methods and apply them to various natural and man-made settings, to gain additional insights into the movements of hydraulic fractures such as magmatic dikes and fluid injections in rock. In particular, I calculate the 'volumetric tipping point', which, once exceeded, allows a fluid-filled fracture to propagate in a 'self-sustaining' manner. I discuss the implications this has for hydro-fracturing in industrial operations. I also present two studies combining physical models that define fluid-filled fracture trajectories with Bayesian statistical techniques. In these studies I show that the stress history of the volcanic edifice defines the location of eruptive vents at volcanoes. Retrieval of the ratio of topographic to remote stresses allows for forecasting of probable future vent locations. Finally, I address the mechanics of 3D propagating dykes and sills in volcanic regions. I focus on Sierra Negra volcano in the Galápagos islands, where in 2018 a large sill propagated with an extremely curved trajectory. Using a 3D analysis, I find that shallow horizontal intrusions are highly sensitive to topographic and buoyancy stress gradients, as well as to the effects of the free surface.
The present work deals with the variation in the linearisation of German infinitival complements from a diachronic perspective. Based on the observation that in present-day German the position of infinitival complements is restricted by properties of the matrix verb (Haider, 2010, Wurmbrand, 2001), whereas it appears much more liberal in older stages of German (Demske, 2008, Maché and Abraham, 2011, Demske, 2015), this dissertation investigates the emergence of those restrictions and the factors that have led to a reduced, yet still existing, variability. The study contrasts infinitival complements of two types of matrix verbs, namely raising and control verbs. In present-day German, these show different syntactic behaviour and opposite preferences as far as the position of the infinitive is concerned: while infinitival complements of raising verbs build a single clausal domain with the matrix verb and occur obligatorily intraposed, infinitival complements of control verbs can form clausal constituents and occur predominantly extraposed. This correlation is not attested in older stages of German, at least not until Early New High German.
Drawing on diachronic corpus data, the present work provides a description of the changes in the linearisation of infinitival complements from Early New High German to present-day German, which aims at finding out when the correlation between infinitive type and word order emerged, and further examines its possible causes. The study shows that word-order change in German infinitival complements is not a case of syntactic change in the narrow sense; rather, the diachronic variation results from the interaction of different language-internal and language-external factors, and it reflects, on the one hand, the influence of language modality on the emerging standard language and, on the other, a process of specialisation.
High-salt (HS) diets have recently been linked to oxidative stress in the brain, which may be a precursor to behavioral changes such as anxiety-like behavior. However, to the best of our knowledge, no study has evaluated the amygdala redox status after consumption of a HS diet in the pre- or postweaning period. This study aimed to evaluate the amygdala redox status and anxiety-like behaviors in adulthood after introduction of a HS diet in two periods: preconception, gestation, and lactation (preweaning); and only after weaning (postweaning). Initially, 18 female and 9 male Wistar rats received a standard diet (n = 9 females and 4 males) or a HS diet (n = 9 females and 5 males) for 120 days. After mating, the females continued to receive the aforementioned diets during gestation and lactation. Weaning took place at 21 days of age, and the male offspring were subdivided into groups: control-control (C-C), offspring of standard-diet-fed dams who received a standard diet after weaning (n = 9–11); control-HS (C-HS), offspring of standard-diet-fed dams who received a HS diet after weaning (n = 9–11); HS-C, offspring of HS-diet-fed dams who received a standard diet after weaning (n = 9–11); and HS-HS, offspring of HS-diet-fed dams who received a HS diet after weaning (n = 9–11). In adulthood, the male offspring performed the elevated plus maze and open field tests. At 152 days of age, the offspring were euthanized and the amygdala was removed for redox-state analysis. The HS-HS group showed higher locomotion and rearing frequency in the open field test. These results indicate that this group developed hyperactivity. The C-HS group had a higher ratio of entries into, and time spent in, the open arms of the elevated plus maze, in addition to a higher head-dipping frequency. These results suggest fewer anxiety-like behaviors.
In the analysis of the redox state, lower activity of antioxidant enzymes and higher levels of thiobarbituric acid reactive substances (TBARS) were found in the amygdala of animals that received a high-salt diet, regardless of the period (pre- or postweaning). In conclusion, the high-salt diet promoted hyperactivity when administered in both the pre- and postweaning periods. In animals that received it only in the postweaning period, the added salt reduced anxiety-like behaviors. Also, regardless of the period, salt caused oxidative stress in the amygdala, which may be linked to the observed behaviors.
Adaptive Force (AF) reflects the capability of the neuromuscular system to adapt adequately to external forces with the intention of maintaining a position or motion. One specific approach to assessing AF is to measure force and limb position during a pneumatically applied, increasing external force. Through this method, the highest (AFmax), the maximal isometric (AFisomax) and the maximal eccentric Adaptive Force (AFeccmax) can be determined. The main question of the study was whether AFisomax is a specific and independent parameter of muscle function compared to other maximal forces. In 13 healthy subjects (9 male, 4 female), the maximal voluntary isometric contraction (pre- and post-MVIC), the three AF parameters and the MVIC with a prior concentric contraction (MVICpri-con) of the elbow extensors were measured 4 times on two days. Arithmetic mean (M) and maximal (Max) torques of all force types were analyzed. Regarding the reliability of the AF parameters between days, the mean changes were 0.31–1.98 Nm (0.61%–5.47%, p = 0.175–0.552), the standard errors of measurement (SEM) were 1.29–5.68 Nm (2.53%–15.70%), and the ICCs(3,1) were 0.896–0.996. M and Max of AFisomax, AFmax and pre-MVIC correlated highly (r = 0.85–0.98). The M and Max of AFisomax were significantly lower (by 6.12–14.93 Nm; p ≤ 0.001–0.009) and more variable between trials (coefficient of variation (CV) ≥ 21.95%) than those of pre-MVIC and AFmax (CV ≤ 5.4%). The results suggest that the novel measuring procedure is suitable for reliably quantifying AF, although the reported measurement errors should be taken into consideration. AFisomax seems to reflect a strength capacity of its own and should be recorded separately. It is suggested that its normalization to the MVIC or AFmax could serve as an indicator of neuromuscular function.
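The standard error of measurement reported above is conventionally derived from the between-subject standard deviation and the reliability coefficient via SEM = SD × √(1 − ICC). A minimal sketch of that relation; the SD and ICC values are illustrative, not the study's data:

```python
import math

def standard_error_of_measurement(sd: float, icc: float) -> float:
    """Conventional SEM estimate from reliability data: SEM = SD * sqrt(1 - ICC)."""
    if not 0 <= icc <= 1:
        raise ValueError("ICC must lie in [0, 1]")
    return sd * math.sqrt(1.0 - icc)

# Illustrative values: between-subject SD of 20 Nm and ICC(3,1) = 0.96.
print(round(standard_error_of_measurement(20.0, 0.96), 2))  # 4.0
```

Higher ICCs shrink the SEM toward zero, which is why the very high ICCs(3,1) reported above go together with comparatively small absolute measurement errors.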
The regulation of oxygen and blood supply during isometric muscle actions is still unclear. Recently, two behavioral types of oxygen saturation (SvO2) and relative hemoglobin amount (rHb) in venous microvessels were described during a fatiguing holding isometric muscle action (HIMA) (type I: nearly parallel behavior of SvO2 and rHb; type II: partly inverse behavior). The study aimed to find an explanation for these two regulative behaviors. Twelve subjects performed one fatiguing HIMA trial with each arm by holding a weight at 60% of the maximal voluntary isometric contraction (MVIC) in a 90° elbow flexion. Six subjects additionally executed one fatiguing PIMA trial on each side, in the same position, by pulling on an immovable resistance with 60% of the MVIC. Both regulative types were found during HIMA (I: n = 7, II: n = 17) and PIMA (I: n = 3, II: n = 9). During the fatiguing measurements, rHb decreased initially and, in type II, started to increase at an average SvO2 level of 58.75 ± 2.14%. In type I, SvO2 never reached that specific value during loading. This might indicate the existence of a threshold around 59% which seems to trigger the increase in rHb and could explain the two behavioral types. An approach is discussed to resolve the apparent incompatibility of an increased capillary blood filling (rHb) despite the high intramuscular pressures found by other research groups during isometric muscle actions.
Research-based learning and the digital transformation are two of the most important influences on the development of higher education didactics in the German-speaking world. While research-based learning, as a normative theory, describes what ought to be done, digital tools, old and new, determine in many areas what can be done.
In this thesis, a process model is proposed that attempts to systematize research-based learning with respect to interactive, group-based processes. Based on the developed model, a software prototype was implemented that can accompany the entire research process. Group formation, feedback and reflection processes, and peer assessment are supported with educational technologies. The developments were deployed in a qualitative experiment to gain systematic knowledge about the possibilities and limits of digital support for research-based learning.
Universitat Politècnica de València’s Experience with EDX MOOC Initiatives During the Covid Lockdown
(2021)
In March 2020, when massive lockdowns started to be enforced around the world to contain the spread of the COVID-19 pandemic, edX launched two initiatives to help students around the world by providing free certificates for its courses: RAP, for member institutions, and OCE, for any accredited academic institution. In this paper we analyze how Universitat Politècnica de València (UPV) contributed its courses to both initiatives, providing almost 14,000 free certificate codes in total, and how UPV used the RAP initiative as a customer, describing the mechanism used to distribute more than 22,000 codes for free certificates to more than 7,000 UPV community members, which led to the achievement of more than 5,000 free certificates. We also discuss the results of a post-initiative survey answered by 1,612 UPV members about 3,241 edX courses, in which they reported a satisfaction of 4.69 out of 5 with the initiative.
Ground-based astronomy is set to employ next-generation telescopes with apertures larger than 25 m in diameter before this decade is out. Such giant telescopes observe their targets through a larger patch of turbulent atmosphere, demanding that most of the instruments behind them must also grow larger to make full use of the collected stellar flux. This linear scaling in size greatly complicates the design of astronomical instrumentation, inflating their cost quadratically. Adaptive optics (AO) is one approach to circumvent this scaling law, but it can only be done to an extent before the cost of the corrective system itself overwhelms that of the instrument or even that of the telescope. One promising technique for miniaturizing the instruments and thus driving down their cost is to replace some, or all, of the free space bulk optics in the optical train with integrated photonic components.
Photonic devices, however, do their work primarily in single-mode waveguides, and the atmospherically-distorted starlight must first be efficiently coupled into them if they are to outperform their bulk optic counterparts. This can be achieved in two ways: AO systems can again help control the angular size and motion of seeing disks to the point where they will couple efficiently into astrophotonic components, but this is only feasible for the brightest of objects and over limited fields of view. Alternatively, tapered fiber devices known as photonic lanterns — with their ability to convert multimode into single-mode optical fields — can be used to feed speckle patterns into single-mode integrated optics. They, nonetheless, must conserve the degrees of freedom, and the number of output waveguides will quickly grow out of control for uncorrected large telescopes. An AO-assisted photonic lantern fed by a partially corrected wavefront presents a compromise that can have a manageable size if the trade-off between the two methods is chosen carefully. This requires end-to-end simulations that take into account all the subsystems upstream of the astrophotonic instrument, i.e., the atmospheric layers, the telescope, the AO system, and the photonic lantern, before a decision can be made on sizing the multiplexed integrated instrument.
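The scaling argument above — that the number of single-mode outputs a photonic lantern must provide grows rapidly for uncorrected large telescopes — can be illustrated with a common order-of-magnitude estimate for the number of spatial modes in a seeing-limited focal spot, N ≈ (πDθ/4λ)². This is a rough textbook-style estimate, not the end-to-end simulation of the thesis, and the function name is ours:

```python
import math

def seeing_limited_mode_count(diameter_m, seeing_arcsec, wavelength_m):
    """Rough count of spatial modes needed to couple seeing-limited
    starlight, i.e. the approximate number of single-mode outputs a
    photonic lantern would need without AO correction."""
    theta = seeing_arcsec * math.pi / (180 * 3600)  # seeing disk in radians
    return (math.pi * diameter_m * theta / (4 * wavelength_m)) ** 2
```

For a 39 m aperture under 0.7 arcsec seeing at 1.55 µm this estimate yields thousands of modes, while a 10 cm aperture under the same conditions is already close to single-mode — which is exactly why partial AO correction ahead of the lantern is attractive.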
The numerical models that simulate atmospheric turbulence and AO correction are presented in this work. The physics and models for optical fibers, arrays of waveguides, and photonic lanterns are also provided. The models are on their own useful in understanding the behavior of the individual subsystems involved and are also used together to compute the optimum sizing of photonic lanterns for feeding astrophotonic instruments. Additionally, since photonic lanterns are a relatively new concept, two novel applications are discussed for them later in this thesis: the use of mode-selective photonic lanterns (MSPLs) to reduce the multiplicity of multiplexed integrated instruments and the combination of photonic lanterns with discrete beam combiners (DBCs) to retrieve the modal content in an optical waveguide.
Filaments are omnipresent features in the solar chromosphere, one of the atmospheric layers of the Sun, which is located above the photosphere, the visible surface of the Sun. They are clouds of plasma reaching from the photosphere to the chromosphere, and even to the outer-most atmospheric layer, the corona. They are stabilized by the magnetic field. If the magnetic field is disturbed, filaments can erupt as coronal mass ejections (CME), releasing plasma into space, which can also hit the Earth. A special type of filaments are polar crown filaments, which form at the interface of the unipolar field of the poles and flux of opposite magnetic polarity, which was transported towards the poles. This flux transport is related to the global dynamo of the Sun and can therefore be analyzed indirectly with polar crown filaments. The main objective of this thesis is to better understand the physical properties and environment of high-latitude and polar crown filaments, which can be approached from two perspectives: (1) analyzing the large-scale properties of high-latitude and polar crown filaments with full-disk Hα observations from the Chromospheric Telescope (ChroTel) and (2) determining the relation of polar crown and high-latitude filaments from the chromosphere to the lower-lying photosphere with high-spatial resolution observations of the Vacuum Tower Telescope (VTT), which reveal the smallest details.
The Chromospheric Telescope (ChroTel) is a small 10-cm robotic telescope at Observatorio del Teide on Tenerife (Spain), which observes the entire Sun in Hα, Ca IIK, and He I 10830 Å. We present a new calibration method that includes limb-darkening correction, removal of non-uniform filter transmission, and determination of He I Doppler velocities. Chromospheric full-disk filtergrams are often obtained with Lyot filters, which may display non-uniform transmission causing large-scale intensity variations across the solar disk. Removal of a 2D symmetric limb-darkening function from full-disk images results in a flat background. However, transmission artifacts remain and are even more distinct in these contrast-enhanced images. Zernike polynomials are uniquely appropriate to fit these large-scale intensity variations of the background. The Zernike coefficients show a distinct temporal evolution for ChroTel data, which is likely related to the telescope’s alt-azimuth mount that introduces image rotation. In addition, applying this calibration to sets of seven filtergrams that cover the He I triplet facilitates determining chromospheric Doppler velocities. To validate the method, we use three datasets with varying levels of solar activity. The Doppler velocities are benchmarked with respect to co-temporal high-resolution spectroscopic data of the GREGOR Infrared Spectrograph (GRIS). Furthermore, this technique can be applied to ChroTel Hα and Ca IIK data. The calibration method for ChroTel filtergrams can be easily adapted to other full-disk data exhibiting unwanted large-scale variations. The spectral region of the He I triplet is a primary choice for high-resolution near-infrared spectropolarimetry. Here, the improved calibration of ChroTel data will provide valuable context data.
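The background-fitting step described above — capturing large-scale transmission variations of the Lyot filter with Zernike polynomials over the solar disk — can be sketched as a least-squares fit of a few low-order modes. This is a minimal illustration, not the actual ChroTel pipeline; the mode ordering and function names are ours:

```python
import numpy as np

def fit_zernike_background(image, mask, n_modes=6):
    """Least-squares fit of low-order Zernike polynomials to the
    large-scale intensity variations inside the disk (mask=True)."""
    ny, nx = image.shape
    y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
    r = np.hypot(x, y)
    t = np.arctan2(y, x)
    # First six Zernike modes on the unit disk (Noll-like ordering):
    modes = [np.ones_like(r),                    # piston
             2 * r * np.cos(t),                  # tip
             2 * r * np.sin(t),                  # tilt
             np.sqrt(3) * (2 * r**2 - 1),        # defocus
             np.sqrt(6) * r**2 * np.sin(2 * t),  # oblique astigmatism
             np.sqrt(6) * r**2 * np.cos(2 * t)]  # vertical astigmatism
    A = np.column_stack([m[mask] for m in modes[:n_modes]])
    coeffs, *_ = np.linalg.lstsq(A, image[mask], rcond=None)
    background = np.zeros_like(image)
    background[mask] = A @ coeffs
    return coeffs, background
```

Dividing (or subtracting) the fitted background from the contrast-enhanced filtergram then removes the non-uniform filter transmission while leaving small-scale chromospheric structure intact; tracking the coefficients over time reveals systematic trends such as those introduced by image rotation on an alt-azimuth mount.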
Polar crown filaments form above the polarity inversion line between the old magnetic flux of the previous cycle and the new magnetic flux of the current cycle. Studying their appearance and their properties can lead to a better understanding of the solar cycle. We use full-disk data of the ChroTel at Observatorio del Teide, Tenerife, Spain, which were taken in three different chromospheric absorption lines (Hα 6563 Å, Ca IIK 3933 Å, and He I 10830 Å), and we create synoptic maps. In addition, the spectroscopic He I data allow us to compute Doppler velocities and to create synoptic Doppler maps. ChroTel data cover the rising and decaying phase of Solar Cycle 24 on about 1000 days between 2012 and 2018. Based on these data, we automatically extract polar crown filaments with image-processing tools and study their properties. We compare contrast maps of polar crown filaments with those of quiet-Sun filaments. Furthermore, we present a super-synoptic map summarizing the entire ChroTel database. In summary, we provide statistical properties, i.e. number and location of filaments, area, and tilt angle for both the maximum and declining phase of Solar Cycle 24. This demonstrates that ChroTel provides a promising dataset to study the solar cycle.
The cyclic behavior of polar crown filaments can be monitored by regular full-disk Hα observations. ChroTel provides such regular observations of the Sun in three chromospheric wavelengths. To analyze the cyclic behavior and the statistical properties of polar crown filaments, we have to extract the filaments from the images. Manual extraction is tedious, and extraction with morphological image processing tools produces a large number of false positive detections, whose manual removal takes too much time. Automatic object detection and extraction in a reliable manner allows us to process more data in a shorter time. We present an overview of the ChroTel database and a proof of concept of a machine learning application, which enables a unified extraction of, for example, filaments from ChroTel data.
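The morphological extraction step described above — thresholding dark absorption features and discarding small false positives — can be sketched as follows. The threshold factor and minimum size are illustrative placeholders, not the values used for ChroTel:

```python
import numpy as np
from scipy import ndimage

def extract_filament_candidates(halpha, disk_mask, k=1.5, min_pixels=50):
    """Toy morphological extraction of dark features (filament candidates)
    from a full-disk H-alpha filtergram restricted to the solar disk."""
    mean = halpha[disk_mask].mean()
    std = halpha[disk_mask].std()
    dark = disk_mask & (halpha < mean - k * std)       # dark absorption features
    dark = ndimage.binary_opening(dark, iterations=1)  # remove speckle noise
    labels, n = ndimage.label(dark)                    # connected components
    candidates = []
    for i in range(1, n + 1):
        blob = labels == i
        if blob.sum() >= min_pixels:                   # drop small detections
            candidates.append(blob)
    return candidates
```

Even with the opening and size filter, elongated bright/dark artifacts can survive such a pipeline, which is precisely the motivation for replacing the hand-tuned steps with a learned classifier.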
The chromospheric Hα spectral line dominates the spectrum of the Sun and other stars. In the stellar regime, this spectral line is already used as a powerful tracer of magnetic activity. For the Sun, other tracers are typically used to monitor solar activity. Nonetheless, the Sun is observed constantly in Hα with globally distributed ground-based full-disk imagers. The aim of this study is to introduce Hα as a tracer of solar activity and compare it to other established indicators. We discuss the newly created imaging Hα excess in the perspective of possible application for modelling of stellar atmospheres. In particular, we try to determine how constant the mean intensity of the Hα excess and the number density of low-activity regions remain between solar maximum and minimum. Furthermore, we investigate whether the active region coverage fraction or the changing emission strength in the active regions dominates time variability in solar Hα observations. We use ChroTel observations of full-disk Hα filtergrams and morphological image processing techniques to extract the positive and negative imaging Hα excess, for bright features (plage regions) and dark absorption features (filaments and sunspots), respectively. We describe the evolution of the Hα excess during Solar Cycle 24 and compare it to other well established tracers: the relative sunspot number, the F10.7 cm radio flux, and the Mg II index. Moreover, we discuss possible applications of the Hα excess for stellar activity diagnostics and the contamination of exoplanet transmission spectra. The positive and negative Hα excess follow the behavior of the solar activity over the course of the cycle. Thereby, the positive Hα excess is closely correlated to the chromospheric Mg II index. On the other hand, the negative Hα excess, created from dark features like filaments and sunspots, is introduced as a tracer of solar activity for the first time.
We investigated the mean intensity distribution for active regions for solar minimum and maximum and found that the shape of both distributions is very similar but with different amplitudes. This might be related to the relatively stable coronal temperature component during the solar cycle. Furthermore, we found that the coverage fraction of Hα excess and the Hα excess of bright features are strongly correlated, which will influence modelling of stellar and exoplanet atmospheres.
High-resolution observations of polar crown and high-latitude filaments are scarce. We present a unique sample of such filaments observed in high-resolution Hα narrow-band filtergrams and broad-band images, which were obtained with a new fast camera system at the VTT. ChroTel provided full-disk context observations in Hα, Ca IIK, and He I 10830 Å. The Helioseismic and Magnetic Imager (HMI) and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) provided line-of-sight magnetograms and ultraviolet (UV) 1700 Å filtergrams, respectively. We study filigree in the vicinity of polar crown and high-latitude filaments and relate their locations to magnetic concentrations at the filaments’ footpoints. Bright points are a well studied phenomenon in the photosphere at low latitudes, but they have not yet been studied in the quiet network close to the poles. We examine size, area, and eccentricity of bright points and find that their morphology is very similar to their counterparts at lower latitudes, but their sizes and areas are larger. Bright points at the footpoints of polar crown filaments are preferentially located at stronger magnetic flux concentrations, which are related to bright regions at the border of supergranules as observed in UV filtergrams. Examining the evolution of bright points on three consecutive days reveals that their amount increases while the filament decays, which indicates that they impact the equilibrium of the cool plasma contained in filaments.
Large rock slope failures play a pivotal role in long-term landscape evolution and are a major concern in land use planning and hazard aspects. While the failure phase and the time immediately prior to failure are increasingly well studied, the nature of the preparation phase remains enigmatic. This knowledge gap is due, to a large degree, to difficulties associated with instrumenting high mountain terrain and the local nature of classic monitoring methods, which does not allow integral observation of large rock volumes. Here, we analyse data from a small network of up to seven seismic sensors installed during July–October 2018 (with 43 days of data loss) at the summit of the Hochvogel, a 2592 m high Alpine peak. We develop proxy time series indicative of cyclic and progressive changes of the summit. Modal analysis, horizontal-to-vertical spectral ratio data and end-member modelling analysis reveal diurnal cycles of increasing and decreasing coupling stiffness of a 260,000 m³ large, unstable rock volume, due to thermal forcing. Relative seismic wave velocity changes also indicate diurnal accumulation and release of stress within the rock mass. At longer time scales, there is a systematic superimposed pattern of stress increase over multiple days and episodic stress release within a few days, expressed in an increased emission of short seismic pulses indicative of rock cracking. Our data provide essential first order information on the development of large-scale slope instabilities towards catastrophic failure. © 2020 The Authors. Earth Surface Processes and Landforms published by John Wiley & Sons Ltd.
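The horizontal-to-vertical spectral ratio (HVSR) mentioned above is, in essence, the ratio of horizontal to vertical ground-motion amplitude spectra at a station. A minimal sketch using Welch spectral estimates follows; this is a generic textbook formulation, not the authors' processing chain:

```python
import numpy as np
from scipy import signal

def hvsr(north, east, vertical, fs, nperseg=1024):
    """Horizontal-to-vertical spectral ratio from three-component
    ground motion, using Welch power spectral densities."""
    f, pn = signal.welch(north, fs=fs, nperseg=nperseg)
    _, pe = signal.welch(east, fs=fs, nperseg=nperseg)
    _, pv = signal.welch(vertical, fs=fs, nperseg=nperseg)
    h = np.sqrt((pn + pe) / 2.0)  # combined horizontal amplitude spectrum
    v = np.sqrt(pv)               # vertical amplitude spectrum
    return f, h / v
```

Peaks in the resulting curve track resonance frequencies of the rock mass; shifts of those peaks over time are one way changes in coupling stiffness become visible in the seismic data.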
A rise in overall temperatures due to climate change, and the associated increase in heat waves, led the Landesamt für Umwelt und Verbraucherschutz Nordrhein-Westfalen (LANUV) to publish a guideline for protecting the positive climate function of urban soils. Building on this, the cooling capacity of urban soils was quantified at the regional level for the city of Düsseldorf in order to identify areas particularly worthy of protection. Within the ExTrass project, the cooling capacity of urban soils in Remscheid was now to be quantified, but on the basis of freely available data. Such a data basis rules out modeling the soil water balance, which was the foundation of the quantification in Düsseldorf, for Remscheid. However, the approach presented here makes it possible to carry out such an analysis in other municipalities within Germany with relatively little effort.
The cooling capacity of the soils was estimated via the usable field capacity (nFK), which indicates the water storage volume of the uppermost rooted soil zone. This is the soil water reservoir that supplies water for evapotranspiration and thus largely defines the cooling capacity of a soil, i.e., through direct evaporation of soil water and through transpiration of water by plants. The map was created from: (a) the soil map of North Rhine-Westphalia (BK50), to determine the usable field capacity (nFK) per area; (b) the land use dataset UrbanAtlas 2012, combined with a literature review, to derive the influence of land use on nFK values, particularly with regard to sealing and compaction; and (c) OpenStreetMap (OSM), to determine the proportion of sealed surfaces more precisely than would have been possible with the UrbanAtlas alone.
This approach proved suitable for investigating the spatial distribution of the potential soil cooling function within a city. Note that the influence of groundwater could not be taken into account in Remscheid: due to the geological and topographic situation there, groundwater conditions are expected to vary on small spatial scales, so that no continuous, mapped aquifer exists.
Allotment gardens, parks, and cemeteries in the inner city, and the land use classes forest and grassland in general, were identified as areas with particularly high potential soil cooling capacity. Such areas are especially worthy of protection. The analysis of the storage levels of the upper soil zone, based on the created map of the potential soil cooling function and the climatic water balance, showed that inner-city areas with a small soil water reservoir in particular lose their cooling function early in the summer of a dry year and thus have a reduced positive climate function during heat waves. This finding is supported by an evaluation of the normalized difference vegetation index (NDVI), which was used to examine the change in plant vitality before and after a heat period in June/July 2018.
Measurements with Meteobikes, a setup that continuously measures temperature during a bicycle ride, support the finding that inner-city green spaces such as parks have a positive effect on the urban microclimate. These measurements also show that the topography within the study area presumably co-determines the heating of individual areas and the temperature distribution. The map of the potential cooling function for Remscheid presented here should be incorporated into the climate function map for Remscheid as a supplement, replacing the existing layer „flächenhafte Klimafunktion“ (areal climate function), which only considers land use.
Foreign Entanglements
(2021)
The field of American Jewish studies has recently trained its focus on the transnational dimensions of its subject, reflecting on the theories and methods of this approach in more sustained ways than before. Yet much of the insight to be gained from seeing American Jewry as constitutively entangled in many ways with other Jewries has not yet been realized. Transnational American Jewish studies are still in their infancy.
This issue of PaRDeS presents current research on the multiple entanglements of American with Central European, especially German-speaking Jewries in the 19th and 20th centuries. The articles reflect the wide range of topics that can benefit from a transnational understanding of the American Jewish experience as shaped by its foreign entanglements.