We develop a model of optimal carbon taxation and redistribution that takes horizontal equity concerns into account by considering heterogeneous energy efficiencies. By deriving first- and second-best rules for policy instruments, including carbon taxes, transfers and energy subsidies, we investigate analytically how horizontal equity is reflected in the social-welfare-maximising tax structure. We calibrate the model to German household data and a 30 percent emission reduction goal. Our results show that energy-intensive households should receive more redistributive resources than energy-efficient households if and only if social inequality aversion is sufficiently high. We further find that redistributing carbon tax revenue via household-specific transfers is the first-best policy. Equal per-capita transfers do not suffer from informational problems, but increase mitigation costs by around 15 percent compared to the first-best for unity inequality aversion. Adding renewable energy subsidies or non-linear energy subsidies reduces mitigation costs further without relying on the observability of households' energy efficiency.
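The transfer comparison can be illustrated with a hypothetical two-household toy calculation (this is not the paper's calibrated model; all numbers and the function name are made up): a carbon tax is levied on energy use, and the revenue is rebated either as an equal per-capita transfer or as a household-specific transfer.

```python
# Hypothetical toy illustration, not the authors' model: two households
# with equal incomes but different energy efficiency pay a carbon tax on
# their energy use; the revenue is rebated under two schemes.

def net_burden(energy_use, tax_rate, scheme):
    """Return each household's tax payment net of its rebate."""
    payments = [e * tax_rate for e in energy_use]
    revenue = sum(payments)
    if scheme == "per_capita":            # equal lump-sum rebate
        rebates = [revenue / len(energy_use)] * len(energy_use)
    elif scheme == "household_specific":  # rebate tailored to own payment
        rebates = payments[:]
    else:
        raise ValueError(scheme)
    return [p - r for p, r in zip(payments, rebates)]

# Energy-intensive household uses 3 units, efficient household uses 1.
print(net_burden([3.0, 1.0], 10.0, "per_capita"))          # [10.0, -10.0]
print(net_burden([3.0, 1.0], 10.0, "household_specific"))  # [0.0, 0.0]
```

Under per-capita rebates the energy-intensive household bears a positive net burden, which is the horizontal-equity concern the abstract describes; fully household-specific transfers can neutralise it, but require observing each household's energy efficiency.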
We investigate how the economic consequences of the pandemic, and of the government-mandated measures to contain its spread, affect the self-employed in Germany, particularly women. For our analysis, we use representative real-time survey data in which respondents were asked about their situation during the COVID-19 pandemic. Our findings indicate that among the self-employed, who generally face a higher likelihood of income losses due to COVID-19 than employees, women are 35% more likely to experience income losses than their male counterparts. Conversely, we find no comparable gender gap among employees. Our results further suggest that the gender gap among the self-employed is largely explained by the fact that women disproportionately work in industries that are more severely affected by the COVID-19 pandemic. Our analysis of potential mechanisms reveals that women are significantly more likely to be affected by government-imposed restrictions, such as the regulation of opening hours. We conclude that future policy measures intended to mitigate the consequences of such shocks should account for this considerable variation in economic hardship.
In this paper, we study the effect of exogenous global crop price changes on migration from agricultural and non-agricultural households in Sub-Saharan Africa. We show that, similar to the effect of positive local weather shocks, the effect of a locally relevant global crop price increase on household out-migration depends on initial household wealth. Higher international producer prices relax the budget constraint of poor agricultural households and facilitate migration. The magnitude of a standardized price effect is approximately one-third of the standardized effect of a local weather shock. Unlike positive weather shocks, which mostly facilitate internal rural-urban migration, positive income shocks through rising producer prices only increase migration to neighboring African countries, likely due to the simultaneous decrease in real income in nearby urban areas. Finally, we show that while higher producer prices induce conflict, conflict plays no role in the household's decision to send a member as a labor migrant.
The paper investigates the sustainability of capacity-building initiatives by reporting on the multiplication training conducted within the framework of the DIES NMT Programme on quality assurance in Uganda, and on how the training could draw on the social capital of the existing quality assurance network to sustain itself and address challenges during its implementation. The purpose of the article is to explore the nature of the networking (social and institutional) established by the Ugandan Universities Quality Assurance Forum (UUQAF) and to share the strategies used in this training experience for future sustainable capacity-building initiatives in emerging economies. The paper employed a qualitative research method to describe and analyse the training framework based on primary and secondary documents.
Higher education institutions in Guinea face many challenges, including reporting responsibilities, globalisation, and massification. Institutional evaluations of higher education and research institutions in 2013 failed to initiate change processes within the institutions. Recently, however, various initiatives have been launched to change this situation, with the aim of raising awareness of, and building capabilities for, quality assurance structures in Guinean HEIs. So far, the emphasis has been placed on quality enhancement in higher education, especially on teaching evaluation and curriculum development, as well as on establishing quality assurance structures. This article gives an overview of the state of play, takes stock of the activities that have been initiated to set up quality assurance mechanisms in higher education and research institutions, and presents perspectives for the further development of the quality approach in Guinea. The project 'Quality Assurance Multiplication 2017-2018' serves as an example to describe approaches and activities in setting up stable quality assurance structures and in strengthening and raising awareness of a 'quality culture'.
Whilst providing a framework for learning and scientific emancipation, proposal-writing training is confronted with various organisational and didactic challenges that influence the achievement of the set training objectives. Based on observations made during the proposal-writing workshops organised in Kinshasa, Democratic Republic of Congo, as part of the NMT Programme, the article raises two main questions: (a) How could these challenges be overcome and successfully addressed in the training? (b) What level of learning outcomes do the participants reach at the end of the training? The article shows that the success of the training lies in the relevance of the training approaches employed. The use of a participatory approach encouraged constructive exchanges between participants, trainers, and experts, and enabled all participants to finalise coherent projects with which to apply for national and international funding.
Mise en oeuvre de l’atelier d’écriture des projets de recherche en République démocratique du Congo
(2020)
While offering a framework for learning and scientific emancipation, the project-writing workshop faces various challenges of an organisational and didactic nature, and the approaches taken to them influence its deployment and the achievement of its assigned objectives. How can these challenges be circumvented and the implementation of the proposed training be accomplished successfully? What is the impact on the quality of participants' learning at the end of the training? This article answers these questions on the basis of observations made during the training sessions organised in Kinshasa (Democratic Republic of Congo) within the framework of the National Multiplication Training (NMT) programme. It emerges that the success of a workshop lies in the relevance of the approaches employed. The use of a participatory approach fostered constructive exchanges between participants, trainers and experts, and enabled all participants to finalise coherent projects with which to apply for national and international funding.
We investigate how inviting students to set task-based goals affects usage of an online learning platform and course performance. We design and implement a randomized field experiment in a large mandatory economics course with blended learning elements. The low-cost treatment induces students to use the online learning system more often and more intensively, and to begin exam preparation earlier. Treated students perform better in the course than the control group: they are 18.8% (0.20 SD) more likely to pass the exam and earn 6.7% (0.19 SD) more points on the exam. There is no evidence that treated students spend significantly more time studying; rather, they tend to shift to more productive learning methods. The heterogeneity analysis suggests that larger treatment effects are associated with higher levels of behavioral bias, but also with poor early course behavior.
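The standardized (SD) effect sizes quoted above express a raw treatment-control difference in units of a pooled standard deviation; a minimal sketch with invented example scores (not the study's data):

```python
# Illustrative sketch with made-up data, not the experiment's dataset:
# express a raw treatment-control difference in standard deviation units.
import statistics

def standardized_effect(treated, control):
    """Mean difference divided by the pooled standard deviation."""
    n1, n2 = len(treated), len(control)
    s1 = statistics.variance(treated)   # sample variances
    s2 = statistics.variance(control)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

# Hypothetical exam points for four students per group.
d = standardized_effect([78, 82, 85, 90], [70, 75, 80, 79])
print(round(d, 2))  # prints 1.61
```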
This article presents the results of a qualitative study of representatives of Colombian higher education institutions who took part in the training 'Internationalisation for Peacebuilding 2018'. The selected higher education institutions and representatives were all located in regions acutely affected by the Colombian armed conflict, now experiencing multifaceted challenges and opportunities in a post-conflict scenario. Interviews with participants of the training were conducted to analyse the skills acquired and to identify possible improvements brought about by the training at the institutions. The article further identifies specific needs of the institutions to be taken into account for future courses on internationalisation for higher education institutions.
During the National Multiplication Training in Kenya in 2018, participants raised concerns about attrition, completion rates and quality of PhD programmes in Kenya’s public universities. This led the authors of this article to further examine the question of PhD completion rates. Available data underlined that PhD students across various disciplines in Kenya’s public universities take unnecessarily long to complete their studies due to a myriad of factors that are related to their supervisors, university guidelines for post-graduate studies, or the students themselves. This article examines inertia areas along the PhD training pathway at three public universities in Kenya and provides suggestions on structural and operational changes universities must make to shorten completion periods.
Deans at Institutions of Higher Education are seldom recipients of effective or specific professional management training, institutional mentorship, and coaching despite an increasing demand on them to play a more dynamic leadership role in the face of ever-changing local and global challenges. To address this deficiency, the inaugural Malaysian Chapter of the International Deans’ Course (MyIDC) was held in three parts over 2019 and 2020. In this paper, findings related to feedback on the programme are presented and discussed. Responses from the participants from two sets of surveys, and written feedback provided by two IDC international trainers involved in MyIDC were analysed. These reveal potential areas of improvement for the forthcoming MyIDC programme, such as in terms of planning and organisation, duration, content, and delivery. The article explores the lessons learnt from the MyIDC 2019/2020 training programme and discusses the improvements that can be made arising from the feedback received.
The higher education structure in Malaysia has experienced significant changes since the implementation of the Private Higher Educational Institutions Act of 1996. The unprecedented expansion of the higher education sector and the increasing autonomy conferred on universities have created a huge demand for competent university leadership that supports the development of higher education in Malaysia. This article discusses the very first national multiplication training in Malaysia in 2014 and analyses such outcomes as the identification of good practices for future initiatives and applications in university leadership training.
Alles auf (Studien-)Anfang? Faktoren für den Studienerfolg in der Eingangsphase und zur Studienmitte
(2020)
The high dropout rates, particularly during the introductory phase of study, have prompted universities in Germany to implement a wide range of measures, though little is known so far about their effects. This contribution presents findings from a research project focused on the introductory study phase and, complementarily, on the middle phase of study, whose aim was to identify the conditions for a successful start to university studies and to derive recommendations for optimising the study entry phase. In addition to qualitative studies, the research design comprised above all a quantitative longitudinal survey at five universities (Potsdam, Mainz, Magdeburg, Kiel and Greifswald). The analyses confirmed the guiding research hypothesis that measures targeting the study entry phase contribute to increasing student success above all when they foster academic and social integration into the university. Factors of particular importance for student success accordingly include identification with the subject of study, self-efficacy, career- or achievement-oriented learning motivation, and academic integration. In addition, a positive influence of the social climate and of the research and practice orientation on student satisfaction was demonstrated. Further analyses of the middle phase of study also make clear that the same factors play a role in both study phases (entry and middle), albeit partly with different weights. Social integration, for instance, is a key predictor in both phases: in the entry phase, integration into the student body; later in the course of study (middle phase), integration into the academic community (in the form of teaching staff). The opening question must therefore be answered as follows: yes, everything hinges on the beginning, but then with efforts to fully ensure the social and academic integration of all students.
Moreover, the findings draw attention to the apparently underestimated role of utility-oriented motives.
Wolfenstein: The New Order
(2020)
Silent Hill 2
(2020)
Papers, Please
(2020)
Mass Effect
(2020)
Grand Theft Auto V
(2020)
Brothers: A Tale of Two Sons
(2020)
Most of the matter in the universe consists of hydrogen. The hydrogen in the intergalactic medium (IGM), the matter between the galaxies, underwent a change of its ionisation state at the epoch of reionisation, at redshifts roughly between z = 6 and z = 10, or ~10^8 years after the Big Bang. At this time, the mostly neutral hydrogen in the IGM was ionised, but the source of the responsible hydrogen-ionising emission remains unclear. In this thesis I discuss the most likely candidates for the emission of this ionising radiation, a type of galaxy called Lyman alpha emitters (LAEs). As implied by their name, they emit Lyman alpha radiation, produced after a hydrogen atom has been ionised and recombines with a free electron. The ionising radiation itself (also called Lyman continuum emission), which is needed for this process inside the LAEs, could also be responsible for ionising the IGM around those galaxies at the epoch of reionisation, given that enough Lyman continuum escapes. Through this mechanism, Lyman alpha and Lyman continuum radiation are closely linked, and both are studied to better understand the properties of high-redshift galaxies and the reionisation state of the universe.
Before I can analyse their Lyman alpha emission lines and the escape of Lyman continuum emission from them, the first step is the detection and correct classification of LAEs in integral field spectroscopic data, specifically taken with the Multi-Unit Spectroscopic Explorer (MUSE). After detecting emission line objects in the MUSE data, the task of classifying them and determining their redshift is performed with the graphical user interface QtClassify, which I developed during the work on this thesis. It uses the strength of the combination of spectroscopic and photometric information that integral field spectroscopy offers to enable the user to quickly identify the nature of the detected emission lines. The reliable classification of LAEs and determination of their redshifts is a crucial first step towards an analysis of their properties.
Through radiative transfer processes, the properties of the neutral hydrogen clouds in and around LAEs are imprinted on the shape of the Lyman alpha line. Thus, after identifying the LAEs in the MUSE data, I analyse the properties of the Lyman alpha emission line, such as the equivalent width (EW) distribution, the asymmetry and width of the line, and the double-peak fraction. I challenge the common method of displaying EW distributions as histograms without taking the limits of the survey into account, and construct an EW distribution function that is less dependent on survey limits and better reflects the properties of the underlying population of galaxies. I illustrate this by comparing the fraction of high-EW objects between the two surveys MUSE-Wide and MUSE-Deep, both consisting of MUSE pointings (each the size of one square arcminute) of different depths. In the 60 MUSE-Wide fields of one hour exposure time I find a fraction of objects with extreme EWs above EW_0 > 240 Å of ~20%, while in the MUSE-Deep fields (9 fields with an exposure time of 10 hours and one with an exposure time of 31 hours) I find a fraction of only ~1%, which is due to the differences in the limiting line flux of the surveys. The highest EW I measure is EW_0 = 600.63 ± 110 Å, which hints at an unusual underlying stellar population, possibly with a very low metallicity.
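The survey-limit effect can be sketched schematically (this is an illustrative toy selection, not the thesis' actual estimator): an object only enters a sample if its line flux exceeds the survey's limiting flux, so the apparent high-EW fraction depends on that limit.

```python
# Toy illustration of a line-flux selection effect, not the thesis' method:
# a deeper survey also detects faint low-EW objects, diluting the
# apparent fraction of extreme-EW emitters.

def high_ew_fraction(objects, flux_limit, ew_cut=240.0):
    """objects: (rest-frame EW in Angstrom, line flux) pairs."""
    detected = [ew for ew, flux in objects if flux >= flux_limit]
    return sum(ew > ew_cut for ew in detected) / len(detected)

# Invented sample: one strong-lined extreme-EW object and three
# moderate-EW objects, two of them with faint lines.
sample = [(300.0, 8e-18), (50.0, 2e-18), (60.0, 3e-18), (70.0, 9e-18)]
print(high_ew_fraction(sample, flux_limit=5e-18))  # prints 0.5
print(high_ew_fraction(sample, flux_limit=1e-18))  # prints 0.25
```

This mirrors the qualitative behaviour described above: the shallow selection yields a higher extreme-EW fraction than the deep one, even though the underlying population is identical.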
With the knowledge of the redshifts and positions of the LAEs detected in the MUSE-Wide survey, I also look for Lyman continuum emission coming from these galaxies and analyse the connection between Lyman continuum emission and Lyman alpha emission. I use ancillary Hubble Space Telescope (HST) broadband photometry in the bands that contain the Lyman continuum and find six Lyman continuum leaker candidates. To test whether the Lyman continuum emission of LAEs is coming only from those individual objects or the whole population, I select LAEs that are most promising for the detection of Lyman continuum emission, based on their rest-frame UV continuum and Lyman alpha line shape properties. After this selection, I stack the broadband data of the resulting sample and detect a signal in Lyman continuum with a significance of S/N = 5.5, pointing towards a Lyman continuum escape fraction of ~80%. If the signal is reliable, it strongly favours LAEs as the providers of the hydrogen ionising emission at the epoch of reionisation and beyond.
This contribution addresses the possibility of examining computer games, on the basis of their pictorial stylistic devices, in parallel with developments in art history. To this end, it draws on the stylistic analysis of the Swiss art historian Heinrich Wölfflin, who located the transformation of realistic painting from the Renaissance to the Baroque in the transition from 'flat' to 'deep' representations. Over a considerably shorter time span, the same change can be observed in the transition of early realistic computer games from the early 1990s to the 2000s. This demonstrates the relevance of an art-historical engagement with computer games and also opens a new perspective on the question of digital games 'as art'.
This contribution deals with the potential of computer games for civics teaching. At its centre is the question of whether the political simulation DEMOCRACY 3 is suitable for use at upper secondary level. It is shown that players of DEMOCRACY 3 engage with the effects of political decisions on the economic, social and cultural situation of a Western country, while the complexity of political negotiation and decision-making processes in democratic systems of government cannot be experienced. The incompleteness of the political simulation identified here, however, is seen not only as a problem for pedagogical practice but above all as a didactic opportunity. It is precisely the gaps and weaknesses of the game's simulation that offer particular points of departure for teaching, since they invite a critical analysis of the simulation, a comparison with the reality of political processes, and thus a deep engagement with politics in general. Finally, three concrete ideas for classroom practice are presented.
Die Geschichte lebt!
(2020)
In "Die Geschichte lebt!" ("History is alive!") the author outlines five important premises for the successful development of didactic game formats. Drawing on his own work as a game designer, he exemplifies these with his own digital and analogue games and vividly explains his approach to developing serious games and educational games.
Following an introduction to the history of strategy games, and of 4X games in particular, the phenomenon of 'hands-off games' is explained. A proposal is then made for how 4X history games can be used in the classroom: a 4X strategy game on a historical topic is to be designed. The modelling proceeds in three steps: choosing a topic, finding a model, and setting the parameters. Many considerations that touch on central questions of social studies feed into the design of the model.
At the BTU, voluntary preparatory courses take place within the BMBF-funded project „Blended Learning Anfangshürden erkennen zur Unterstützung der fachspezifischen Studienvorbereitung und des Lernerfolges im ersten Studienjahr" (blended learning to recognise initial hurdles, in support of subject-specific study preparation and learning success in the first year of study). For many prospective students, the gap between the mathematical skills required and those actually present is large, and the time available is often too short. This is where the concept comes in: alongside face-to-face sessions, the course includes supported self-study phases, which are facilitated by a virtual course room and an application for mobile devices (app).
The stated goal of our application is not only to make learning materials permanently available, but also to change the entire communication between lecturers and students, as well as among students themselves. A mixture of e-learning, blended learning and mobile learning is intended to enable all participants to act independently of location. New functions are intended to enable students to work together better, to cooperate, and to make new acquaintances.
This contribution deals with the construction of a teaching/learning scenario for polyvalent introductory lectures in the natural sciences. The scenario combines classical lectures with virtual elements such as online courses, online forums and audience response systems, as well as small-group work based on approaches of problem-oriented learning. The aim is to align the students' foundational knowledge, to promote working in groups, and to practise problem-oriented learning.
Bewegunglesen.com
(2014)
bewegunglesen.com (awarded silver at the Best of Swiss Web Awards 2013) is an e-learning tool that offers physical education teachers and students a web-based, interactive practice environment for learning movement analysis and the criteria-guided improvement of skills. Movement sequences and their core movements are conveyed in a practice-oriented way appropriate to each school level. In addition, teaching videos can be uploaded, edited, enriched with graphics and facts, and shared within the community. From the clips, exercises and tests with assessment criteria for the movement sequence can be assembled and evaluated automatically.
The application presented here is the prototype of a web-based tool for analysing the usage data of users of e-learning offerings. It was explicitly designed for different target groups such as e-learning providers, teachers and researchers. The application can evaluate data from various learning platforms using methods of educational data mining. It supports both classical platforms with personalised access and open platforms whose offerings are accessible without registration.
In everyday university life, lecturers and students encounter each other as two groups with opposing roles, e.g. as teachers and learners, or as researchers and student assistants. The points of contact between the two groups open up new opportunities for targeted cooperation across the student life cycle, from which students can profit in decision-making as their studies progress, and lecturers in recruiting Master's students and doctoral candidates. Not e-learning in the classical sense, but a practical application of recommendation techniques on the basis of a campus management system will be presented in a poster contribution and live demo, with which various directions for the recommendation of elective modules and a cohort-based preparation of information for curricula support are currently being investigated experimentally.
LatteMATHEiato
(2013)
A central difficulty of teaching at a university lies in the heterogeneity of students' prior knowledge. Given that prior knowledge is considered one of the strongest predictors of learning success (Hasselhorn, M., & Gold, A. (2009)), the question arises of how these differences in first-year students' prior knowledge can be levelled out. This report shows how Furtwangen University (HFU) attempts to address students' differing prior knowledge in introductory mathematics courses through media-based self-study material.
The project „Support for E-Learning" (SupEr) focused on permanently securing a support service, anchored at the university of applied sciences, for teachers and students in the areas of online-supported courses and lecture recordings (e-learning). Since permanent staff resources at the university are limited, the project trialled an e-learning support structure run with the help of students. The support concept emphasised the advising of professors by student staff and thereby differed from the concepts of other German higher education institutions.
This article on the eTeam project of the eLearning staff unit is a pure practice report. It first describes the idea that led to the project, in which student e-learning experts support teachers in implementing online elements in their courses. With three teams in different faculties, each consisting of two student staff members, the project set out in the 2010 summer semester to be put into practice at Ruhr-Universität Bochum (RUB). The project preparations, experiences from the pilot round, the faculties' view of the concept, and the current state of the project are examined in this article.
This article deals with adventure-based learning, a digital game-based learning method. The aim of such applications is to stimulate learners' motivation and enthusiasm and thereby support learning processes. The question of the extent to which adventure-based learning actually yields added value for learning is examined in a case study in the context of in-company training. To this end, an experiment with 40 participants was conducted in which an adventure-based learning application was compared with an interactive PowerPoint presentation. The results of the study suggest that many learners want learning programmes that are less text-heavy, and that adventure-based learning achieves a comparable level of retention.
Einsatz mobiler Endgeräte zur Verbesserung der Lehrqualität in universitären Großveranstaltungen
(2013)
This contribution shows how, with the help of students' mobile devices and a suitable technical infrastructure, learner-activating, cooperative learning processes can be initiated even in very large courses with several hundred students. The article presents the already established methodological-didactic concept of 'peer instruction' (PI), reports findings on its effectiveness for learning, and points out possible uses in computer science and business studies courses. The architecture and functionality of the web-based clients for students and lecturers are discussed. First evaluation results from the practical use of this concept with the technical infrastructure developed at Paderborn University are described, and perspectives for the further development of the system are presented.
The editors of the „Potsdamer Beiträge zur Sorabistik/Podstupimske pśinoski k Sorbistice" are pleased to publish a new volume after a longer pause. The cultural scientist Tobias Preßler, who makes his debut here, and the distinguished retired monument conservator Alfred Roggan jointly present four articles on Lower Sorbian cultural history. The authors devote themselves to the Sorbian language in the north of Lower Lusatia, its former distribution, and the circumstances of its disappearance. All contributions approach this topic from different perspectives, with emphases on different periods and regions. One article turns to Paul Thol, a restorer and artist whose work falls within the eventful first half of the 20th century. This period at the same time forms the conclusion of a cross-epochal account, sketched in a further article, of policy towards the Sorbs and their language. The two centrepieces of the volume carry the reader off to the early modern period. A hitherto little-noticed printed work from 1694 is presented, which at the time was deliberately published in twelve languages. A true gem of Sorbian language history, this work, a poem, has come down to us written in a now extinct branch of dialect. Alongside the poem itself, its previous literary adaptations and the background to the printing are described in more detail. The fourth contribution is devoted to a region in which presumably the same dialect as that of the poem was spoken. Until the language fell silent there in the 18th century, it was as alive as it still is today in its core area.
The foreland of the Andes in South America is characterised by distinct along-strike changes in surface deformation styles. These styles are classified into two end-members, the thin-skinned and the thick-skinned style. The surface expression of thin-skinned deformation is a succession of narrowly spaced hills and valleys that form laterally continuous ranges on the foreland-facing side of the orogen. Each of the hills is defined by a reverse fault that roots in a basal décollement within the sedimentary cover and acts as a thrust ramp along which the sedimentary pile is stacked. Thick-skinned deformation is morphologically characterised by spatially disparate, basement-cored mountain ranges. These ranges are uplifted along reactivated high-angle, crustal-scale discontinuities, such as suture zones between different tectonic terranes.
Proposed causes for the observed variation include changes in the dip angle of the Nazca plate, variations in sediment thickness, lithospheric thickening, volcanism, and compositional differences. The proposed mechanisms are predominantly based on geological observations or numerical thermomechanical modelling, but there has been no attempt to understand them from the perspective of data-integrative 3D modelling. The aim of this dissertation is therefore to understand how lithospheric structure controls the deformational behaviour. Integrating independent data into a consistent model of the lithosphere yields additional evidence that helps to explain the causes of the different deformation styles. Northern Argentina encompasses the transition from the thin-skinned fold-and-thrust belt in Bolivia to the thick-skinned Sierras Pampeanas province, which makes this area well suited for such a study. The general workflow first involves data-constrained structural and density modelling to obtain a model of the study area. This model was then used to predict the steady-state thermal field, which in turn served to assess the present-day rheological state of northern Argentina.
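The steady-state thermal field mentioned above follows, in its simplest 1D form, from the conductive heat equation with uniform radiogenic heat production. A minimal sketch, with illustrative values for heat flow, conductivity and heat production rather than the calibrated parameters of the thesis:

```python
import numpy as np

def steady_geotherm(z, T0=10.0, q0=65e-3, k=2.5, A=1.0e-6):
    """1D conductive steady-state geotherm with uniform heat production.

    Solves k * d2T/dz2 = -A with T(0) = T0 and surface heat flow q0:
        T(z) = T0 + (q0/k) * z - A * z**2 / (2k)

    z  : depth in m
    T0 : surface temperature in degC
    q0 : surface heat flow in W/m^2
    k  : thermal conductivity in W/(m K)
    A  : radiogenic heat production in W/m^3
    """
    return T0 + (q0 / k) * z - A * z**2 / (2.0 * k)

depths = np.linspace(0, 40e3, 5)          # 0-40 km
temps = steady_geotherm(depths)
for z, T in zip(depths, temps):
    print(f"{z/1e3:5.1f} km : {T:7.1f} degC")
```

In a full 3D model the conductivity and heat production vary per structural unit, but each vertical column obeys the same balance sketched here.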
The structural configuration of the lithosphere in northern Argentina was determined by means of data-integrative 3D density modelling validated against the Bouguer gravity field. The model delineates the first-order density contrasts in the uppermost 200 km of the lithosphere and distinguishes bodies for the sediments, the crystalline crust, the lithospheric mantle and the subducting Nazca plate. To obtain the intra-crustal density structure, an automated inversion approach was developed and applied to a starting structural model that assumed a homogeneously dense crust. The resulting final structural model indicates that the crustal structure can be represented by an upper crust with a density of 2800 kg/m³ and a lower crust of 3100 kg/m³. The Transbrazilian Lineament, which separates the Pampia terrane from the Río de la Plata craton, is expressed as a zone of low average crustal densities.
In a methodological excursion, another study demonstrates that the gravity inversion method developed to obtain intra-crustal density structures is also applicable to density variations in the uppermost lithospheric mantle. Densities at such sub-crustal depths are difficult to constrain from seismic tomographic models because of the smearing of crustal velocities. With the application to the uppermost lithospheric mantle in the North Atlantic, we demonstrate in Tan et al. (2018) that lateral density trends of at least 125 km width are robustly recovered by the inversion method, thereby providing an important tool for the delineation of sub-crustal density trends.
Because of the genetic link between subduction, orogenesis and retroarc foreland basins, the question arises whether the steady-state assumption is valid in such a dynamic setting. To answer this question, I analysed (i) the impact of subduction on the conductive thermal field of the overriding continental plate and (ii) the differences between the transient and steady-state thermal fields of a coupled geodynamic model. Both studies indicate that the assumption of a thermal steady state is applicable in most parts of the study area. Within the orogenic wedge, where the assumption does not hold, I estimated the transient thermal field based on the results of these analyses.
Accordingly, the structural model obtained in the first step could be used to derive a 3D conductive steady-state thermal field. The rheological assessment based on this thermal field indicates that the lithosphere of the thin-skinned Subandean ranges is characterised by a relatively strong crust and a weak mantle. In contrast, the adjacent foreland basin consists of a fully coupled, very strong lithosphere. Shortening in northern Argentina can therefore only be accommodated within the weak lithosphere of the orogen and the Subandean ranges. The analysis suggests that the décollements of the fold-and-thrust belt are the shallow continuation of shear zones that reside in the ductile sections of the orogenic crust. Furthermore, the localisation of the faults that transfer strain between the deeper ductile crust and the shallower décollement is strongly influenced by crustal weak zones such as foliation. In contrast to the northern foreland, the lithosphere of the thick-skinned Sierras Pampeanas is fully coupled and characterised by a strong crust and mantle. This high overall strength prevents the generation of crustal-scale faults by tectonic stresses. Even inherited crustal-scale discontinuities, such as sutures, cannot reduce the strength of the lithosphere sufficiently to be reactivated. Therefore, magmatism, which has been identified as a precursor of basement uplift in the Sierras Pampeanas, is the key factor leading to the broken foreland of this province. Through thermal weakening, and potentially lubrication of the inherited discontinuities, the lithosphere is locally weakened to the point that tectonic stresses can uplift the basement blocks. This hypothesis explains both the spatially disparate character of the broken foreland and the observed temporal delay between volcanism and basement block uplift.
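The notion of crust-mantle "coupling" and of strong versus weak layers can be made concrete with a yield-strength envelope: at each depth the lithosphere yields by the weaker of frictional (brittle) sliding and power-law (ductile) creep. A minimal sketch, assuming a simplified Byerlee-type frictional law, wet-quartzite-like creep parameters and a linear geotherm chosen purely for illustration, not the thesis' calibrated values:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def brittle_strength(z, rho=2800.0, g=9.81, f=0.6):
    # simplified frictional (Byerlee-type) strength under dry conditions
    return f * rho * g * z

def ductile_strength(T_kelvin, A=1.1e-28, n=4.0, Q=223e3, strain_rate=1e-15):
    # power-law dislocation creep: sigma = (edot/A)**(1/n) * exp(Q/(n R T))
    # A, n, Q roughly follow published wet-quartzite flow laws (Pa^-n s^-1)
    return (strain_rate / A) ** (1.0 / n) * np.exp(Q / (n * R * T_kelvin))

# hypothetical linear geotherm of 20 K/km for illustration
z = np.linspace(1e3, 40e3, 40)
T = 283.0 + 0.020 * z
strength = np.minimum(brittle_strength(z), ductile_strength(T))

# first depth at which creep becomes weaker than friction
bdt = z[np.argmax(ductile_strength(T) < brittle_strength(z))]
print(f"brittle-ductile transition near {bdt/1e3:.0f} km depth")
```

A hotter geotherm moves this transition upward and weakens the deep crust, which is the sense in which thermal weakening by magmatism can enable fault reactivation.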
This dissertation provides for the first time a data-driven 3D model that is consistent with geophysical data and geological observations, and that is able to causally link the thermo-rheological structure of the lithosphere to the observed variation of surface deformation styles in the retroarc foreland of northern Argentina.
From Monomer to Glycopolymer
(2019)
Glycopolymers are synthetic and naturally occurring polymers that carry a glycan unit in the side chain of the polymer. Through glycan-protein interactions, glycans are responsible for many biological processes. The involvement of glycans in these biological processes makes it possible to imitate and analyze the interactions with suitable model compounds, e.g. glycopolymers. This system of glycan-protein interaction is to be examined and studied using glycopolymers in order to demonstrate the specific and selective binding of proteins to the glycopolymers. Proteins that are able to bind carbohydrate structures selectively are called lectins.
In this dissertation, various glycopolymers were synthesized, with attention paid to an efficient and cost-effective synthetic route.
Various glycopolymers were prepared from monomers functionalized with different sugars, e.g. mannose, lactose, galactose or N-acetylglucosamine, as the functional group. From these functionalized glycomonomers, glycopolymers were synthesized via ATRP and RAFT polymerization.
The glycopolymers obtained were used as the hydrophilic block in diblock copolymers, and their self-assembly in aqueous solution was investigated. In aqueous solution the polymers formed micelles in which the sugar block sits at the micelle surface. The micelles were loaded with a hydrophobic fluorescent dye, which allowed the critical micelle concentration (CMC) of micelle formation to be determined.
In addition, the glycopolymers were bound to various surfaces as coatings, either by "grafting from" via SI-ATRP or by "grafting to". The glycopolymer-coated surfaces made it possible to investigate the glycan-protein interaction with spectroscopic methods such as SPR and microring resonators. The specific and selective binding of the lectins to the glycopolymers was thereby demonstrated and the binding strength examined.
By exchanging the glycan unit, the synthesized glycopolymers could be made addressable for other lectins and thus open up a wide range of other proteins. The biocompatible glycopolymers would be alternatives for use in biological processes as carriers of drugs or dyes into the body. Moreover, the functionalized surfaces could be used in diagnostics to detect lectins. Glycans that do not bind selectively and specifically to proteins could be used as anti-adsorptive surface coatings, e.g. in cell biology.
Topological Data Analysis
(2019)
When analyzing higher-dimensional data, the shape of the data can provide important information about the data set. Given a point cloud sampled from an unknown topological space, topological data analysis (TDA) attempts to reconstruct the original space. This contribution gives an introduction to topological data analysis and focuses on two important aspects: persistent homology and the Mapper. First, the necessary theoretical foundations are presented; then the methodology is applied to the visualization of data.
Persistent homology is one of the standard tools of TDA. It is applied, for example, in shape recognition and shape description. The Mapper, as the second important concept of TDA, converts large, higher-dimensional data sets into simplicial complexes and can thereby determine geometric and topological properties of the data. Furthermore, the Mapper method is a useful tool for visualizing multidimensional data where statistical methods fail.
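The 0-dimensional part of persistent homology can be computed exactly with a union-find structure: every point is born as its own connected component at filtration value 0, and a component dies at the distance at which it merges into an older one. A self-contained sketch (equivalent to single-linkage clustering; higher-dimensional homology would require a full boundary-matrix reduction):

```python
import itertools
import math

def h0_persistence(points):
    """0-dimensional persistence pairs of a point cloud.

    Returns (birth, death) pairs; every component is born at 0, and the
    component that survives forever gets death = math.inf.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    # all pairwise distances = edge filtration of the Vietoris-Rips complex
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    pairs = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj                 # merge: one component dies at d
            pairs.append((0.0, d))
    pairs.append((0.0, math.inf))           # the surviving component
    return pairs

# two well-separated pairs of points: two components die at the
# intra-cluster distance 1, one dies at the cluster gap distance 10
pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
pairs = h0_persistence(pts)
print(pairs)
```

The long-lived pair with death value 10 "persists" across many scales and signals the two-cluster structure; the short-lived pairs are topological noise.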
Expanding public or publicly subsidized childcare has been a top social policy priority in many industrialized countries. It is supposed to increase fertility, promote children’s development and enhance mothers’ labor market attachment. In this paper, we analyze the causal effect of one of the largest expansions of subsidized childcare for children up to three years among industrialized countries on the employment of mothers in Germany. Identification is based on spatial and temporal variation in the expansion of publicly subsidized childcare triggered by two comprehensive childcare policy reforms. The empirical analysis is based on the German Microcensus, which is matched to county-level data on childcare availability. Based on our preferred specification, which includes time and county fixed effects, we find that an increase in childcare slots by one percentage point increases mothers’ labor market participation rate by 0.2 percentage points. The overall increase in employment is explained by the rise in part-time employment with relatively long hours (20-35 hours per week). We do not find a change in full-time employment or shorter part-time employment that is causally related to the childcare expansion. The effect is almost entirely driven by mothers with medium-level qualifications. Mothers with low education levels do not profit from this reform, which calls for a stronger policy focus on particularly disadvantaged groups in the coming years.
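The identification strategy described, time and county fixed effects, can be illustrated with the within (double-demeaning) estimator on a balanced simulated panel. All numbers below are invented for illustration; only the structure mirrors the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)
n_counties, n_years = 50, 6

# hypothetical panel: childcare coverage (pct points) and participation rate
alpha = rng.normal(0.0, 2.0, n_counties)[:, None]    # county fixed effects
gamma = rng.normal(0.0, 1.0, n_years)[None, :]       # year fixed effects
childcare = 10 + alpha + gamma + rng.normal(0, 5, (n_counties, n_years))
beta_true = 0.2
# participation is confounded by the same county/year effects ...
particip = beta_true * childcare + 3 * alpha + 2 * gamma \
           + rng.normal(0, 0.05, (n_counties, n_years))

def within(x):
    # two-way within transformation for a balanced panel:
    # subtract county and year means, add back the grand mean
    return x - x.mean(1, keepdims=True) - x.mean(0, keepdims=True) + x.mean()

# ... yet the within estimator removes them exactly and recovers beta
x, y = within(childcare), within(particip)
beta_hat = (x * y).sum() / (x * x).sum()
print(f"estimated effect: {beta_hat:.3f} (true {beta_true})")
```

Naive pooled OLS on these data would be biased by the county effects; demeaning within counties and years removes exactly the confounding the paper's specification controls for.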
Natural extreme events are an integral part of nature on planet Earth. Usually these events are considered hazardous to humans only where humans are exposed to them. In that case, however, natural hazards can have devastating impacts on human societies. Hydro-meteorological hazards in particular have a high damage potential, in the form of, e.g., riverine and pluvial floods, winter storms, hurricanes and tornadoes, which can occur all over the globe. Along with an increasingly warm climate, an increase in the extreme weather that can trigger natural hazards is to be expected. Yet not only changing natural systems but also changing societal systems contribute to an increasing risk associated with these hazards, including increasing exposure and possibly also increasing vulnerability to the impacts of natural events. Thus, appropriate risk management is required to adapt all parts of society to existing and upcoming risks at various spatial scales. One essential part of risk management is risk assessment, including the estimation of economic impacts. However, reliable methods for estimating the economic impacts of hydro-meteorological hazards are still missing. This thesis therefore deals with the question of how the reliability of hazard damage estimates can be improved, represented and propagated across spatial scales. The question is investigated using the specific example of the economic impacts of riverine floods on companies in Germany.
Flood damage models aim to describe the damage processes during a given flood event; in other words, they describe the vulnerability of a specific object to a flood. Such models can be based on empirical data sets collected after flood events. In this thesis, tree-based models trained with survey data are used to estimate direct economic flood impacts at the object level. These machine learning models, in conjunction with growing data sets used to derive them, are found to outperform state-of-the-art damage models. However, despite the performance improvements from using multiple variables and more data points, large prediction errors remain at the object level. The occurrence of these high errors was explained by a further investigation using distributions derived from the tree-based models, which showed that direct economic impacts on individual objects cannot be modeled by a normal distribution. Yet most state-of-the-art approaches assume a normal distribution and take mean values as point estimators; the predictions are then unlikely values within the true distributions, resulting in high errors. At larger spatial scales, more objects are considered in the damage estimation, which leads to a better fit of the damage estimates to a normal distribution. Consequently, the performance of the point estimators also improves, although large errors can still occur due to the variance of the normal distribution. It is therefore recommended to use distributions instead of point estimates in order to represent the reliability of damage estimates.
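The claim that object-level damages are far from normal while aggregated damages fit a normal distribution better is an instance of the central limit theorem, and is easy to reproduce with a toy heavy-tailed damage distribution (the lognormal parameters here are arbitrary, not fitted to any survey data):

```python
import numpy as np

rng = np.random.default_rng(42)

def skewness(x):
    # sample skewness: third central moment over cubed standard deviation
    x = np.asarray(x, dtype=float)
    m = x.mean()
    return ((x - m) ** 3).mean() / x.std() ** 3

# hypothetical object-level flood damages: heavy-tailed, far from normal
object_damage = rng.lognormal(mean=10.0, sigma=1.5, size=(10000, 100))

# regional damage = sum over 100 objects per simulated event
regional_damage = object_damage.sum(axis=1)

print(f"object-level skewness: {skewness(object_damage.ravel()):.2f}")
print(f"regional-sum skewness: {skewness(regional_damage):.2f}")

# the mean is an unlikely value at the object level:
share_below_mean = (object_damage < object_damage.mean()).mean()
print(f"share of objects below the mean damage: {share_below_mean:.2f}")
```

Most individual objects lie well below the mean, so a mean point estimate is a poor prediction for any single object, while sums over many objects are much closer to symmetric.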
In addition, current approaches mostly ignore the uncertainty associated with the characteristics of the hazard and the exposed objects. For a given flood event, e.g. the estimation of the water level at a certain building is prone to uncertainties. Current approaches define exposed objects mostly through land use data sets, which often show inconsistencies and thereby introduce additional uncertainties. Furthermore, state-of-the-art approaches suffer from a lack of consistency when predicting damage at different spatial scales, because different types of exposure data sets are used for model derivation and application. To address these issues, a novel object-based method was developed in this thesis. The method enables a seamless estimation of hydro-meteorological hazard damage across spatial scales, including uncertainty quantification. The application and validation of the method resulted in plausible estimates at all spatial scales without overestimating the uncertainty.
The application of the method is made possible mainly by newly available data sets containing individual buildings, which allow flood-affected objects to be identified by overlaying the data sets with water masks. However, identifying affected objects with two different water masks revealed large differences in the number of objects identified. More effort is therefore needed for their identification, since the number of affected objects largely determines the order of magnitude of the economic flood impacts.
In general, the method represents the uncertainties associated with the three components of risk, namely hazard, exposure and vulnerability, in the form of probability distributions. The object-based approach enables a consistent propagation of these uncertainties in space. Aside from propagating damage estimates and their uncertainties across spatial scales, a propagation between models estimating direct and indirect economic impacts was demonstrated. This allows the uncertainties associated with the direct economic impacts to be included in the estimation of the indirect economic impacts. Consequently, the modeling procedure facilitates the representation of the reliability of estimated total economic impacts. Representing the estimates' reliability prevents reasoning based on a false certainty that might be attributed to point estimates. The developed approach therefore facilitates meaningful flood risk management and adaptation planning.
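Propagating the probability distributions of hazard, exposure and vulnerability into a damage distribution can be sketched as a simple Monte Carlo simulation. All distributions and the square-root depth-damage curve below are hypothetical placeholders, not the models of the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim = 100_000

# hazard: uncertain water level at the building (m)
water_level = rng.normal(1.2, 0.3, n_sim)
# exposure: uncertain building value (EUR)
value = rng.uniform(200_000, 300_000, n_sim)
# vulnerability: square-root depth-damage curve with an uncertain coefficient
coeff = rng.normal(0.25, 0.05, n_sim)
rel_damage = np.clip(coeff * np.sqrt(np.clip(water_level, 0, None)), 0, 1)

# the result is a full damage distribution, not a single point estimate
damage = rel_damage * value
lo, med, hi = np.percentile(damage, [5, 50, 95])
print(f"median damage {med:,.0f} EUR, 90% interval [{lo:,.0f}, {hi:,.0f}]")
```

Summing the sampled damages over many objects per Monte Carlo draw carries the same uncertainty consistently up to larger spatial scales, which is the propagation property the method exploits.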
The successful post-event application and the representation of the uncertainties also qualify the method for use in future risk assessments. The developed method thus makes it possible to represent the assumptions made in future risk assessments, which is crucial information for future risk management. This is an important step forward, since the representation of the reliability associated with all components of risk is currently lacking in state-of-the-art methods assessing future risk.
In conclusion, the use of object-based methods that give results in the form of distributions instead of point estimates is recommended. Improving model performance by means of multi-variable models and additional data points is possible, but the gains are small. Uncertainties associated with all components of damage estimation should be included and represented in the results. Furthermore, the findings of the thesis suggest that, at larger scales, the influence of the uncertainty associated with the vulnerability is smaller than that associated with the hazard and the exposure. For more reliable flood damage estimations and risk assessments, the improvement and active inclusion of hazard and exposure, including their uncertainties, is therefore needed in addition to improvements of the models describing the vulnerability of the objects.
Arctic warming has implications for the functioning of terrestrial Arctic ecosystems, global climate and socioeconomic systems of northern communities. A research gap exists in high spatial resolution monitoring and understanding of the seasonality of permafrost degradation, spring snowmelt and vegetation phenology. This thesis explores the diversity and utility of dense TerraSAR-X (TSX) X-Band time series for monitoring ice-rich riverbank erosion, snowmelt, and phenology of Arctic vegetation at long-term study sites in the central Lena Delta, Russia and on Qikiqtaruk (Herschel Island), Canada. In the thesis the following three research questions are addressed:
• Are TSX time series capable of monitoring the dynamics of rapid degradation of ice-rich permafrost on an intra-seasonal scale, and can these data sets, in combination with climate data, identify the climatic drivers of permafrost degradation?
• Can multi-pass and multi-polarized TSX time series adequately monitor seasonal snow cover and snowmelt in small Arctic catchments and how does it perform compared to optical satellite data and field-based measurements?
• Do TSX time series reflect the phenology of Arctic vegetation and how does the recorded signal compare to in-situ greenness data from RGB time-lapse camera data and vegetation height from field surveys?
To answer the research questions, three years of TSX backscatter data, from 2013 to 2015 for the Lena Delta study site and from 2015 to 2017 for the Qikiqtaruk study site, were used in quantitative and qualitative analyses, complemented by optical satellite data and in-situ time-lapse imagery.
The dynamics of intra-seasonal ice-rich riverbank erosion in the central Lena Delta, Russia were quantified using TSX backscatter data at 2.4 m spatial resolution in HH polarization and validated with 0.5 m spatial resolution optical satellite data and field-based time-lapse camera data. Cliff top lines were automatically extracted from TSX intensity images using threshold-based segmentation and vectorization, and combined in a geoinformation system with manually digitized cliff top lines from the optical satellite data and rates of erosion extracted from the time-lapse cameras. The results suggest that the cliff top eroded at a constant rate throughout the entire erosional season. Linear mixed models confirmed that erosion was coupled with air temperature and precipitation at an annual scale, while seasonal fluctuations did not influence 22-day erosion rates. The results highlight the potential of HH-polarized X-band backscatter data for high temporal resolution monitoring of rapid permafrost degradation.
The distinct signature of wet snow in backscatter intensity images of TSX data was exploited to generate wet snow cover extent (SCE) maps on Qikiqtaruk at high temporal resolution. TSX SCE showed high similarity to Landsat 8-derived SCE when using cross-polarized VH data. Fractional snow cover (FSC) time series were extracted from TSX and optical SCE and compared to FSC estimations from in-situ time-lapse imagery. The TSX products showed strong agreement with the in-situ data and significantly improved the temporal resolution compared to the Landsat 8 time series. The final combined FSC time series revealed two topography-dependent snowmelt patterns that corresponded to in-situ measurements. Additionally, TSX was able to detect snow patches later in the season than Landsat 8, underlining its advantage for the detection of old snow. The TSX-derived snow information provided valuable insights into snowmelt dynamics on Qikiqtaruk that were previously not available.
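Wet-snow mapping from SAR typically exploits the strong absorption of the signal by liquid water: pixels whose backscatter drops below a dry or snow-free reference image by more than a threshold, commonly around -2 to -3 dB, are classified as wet snow. A minimal sketch of this ratio-threshold approach (the thesis' exact processing chain may differ):

```python
import numpy as np

def wet_snow_mask(sigma0_db, sigma0_ref_db, threshold_db=-3.0):
    """Classify wet snow from a SAR backscatter ratio (in dB: a difference).

    sigma0_db     : backscatter of the acquisition to classify (dB)
    sigma0_ref_db : reference image under dry/snow-free conditions (dB)
    Wet snow strongly absorbs the signal, so the ratio drops below the
    threshold where the surface is wet.
    """
    return (sigma0_db - sigma0_ref_db) < threshold_db

# toy 1x5 "image": the last two pixels lost >3 dB relative to the reference
ref = np.array([-12.0, -11.0, -13.0, -12.0, -11.0])
now = np.array([-12.5, -11.2, -13.5, -17.0, -16.5])
mask = wet_snow_mask(now, ref)
fsc = mask.mean()   # fractional snow cover of the scene
print(mask, f"FSC = {fsc:.1f}")
```

Averaging the mask over a catchment yields exactly the FSC time series compared against the optical and time-lapse estimates above.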
The sensitivity of TSX to vegetation structure associated with phenological changes was explored on Qikiqtaruk. Backscatter and coherence time series were compared to greenness data extracted from in-situ digital time-lapse cameras and to detailed vegetation parameters on 30 areas of interest. Supporting previous results, vegetation height corresponded to backscatter intensity in the co-polarized HH/VV channels at an incidence angle of 31°. The dry, tall-shrub-dominated ecological class showed increasing backscatter with increasing greenness when using the cross-polarized VH/HH channel at 32° incidence angle, likely driven by volume scattering from emerging and expanding leaves. Ecological classes with more prostrate vegetation and higher bare-ground contributions showed decreasing backscatter trends over the growing season in the co-polarized VV/HH channels, likely a result of surface drying rather than a vegetation-structure signal. The results from shrub-dominated areas are promising and provide a complementary data source for high temporal resolution monitoring of vegetation phenology.
Overall, this thesis demonstrates that dense TSX time series, optical remote sensing and in-situ time-lapse data are complementary and can be used to monitor rapid and seasonal processes in Arctic landscapes at high spatial and temporal resolution.
Hyperspectral remote sensing of the spatial and temporal heterogeneity of low Arctic vegetation
(2019)
Arctic tundra ecosystems are warming at twice the global average rate, and Arctic vegetation is responding in complex and heterogeneous ways. Shifting productivity, growth, species composition, and phenology at local and regional scales have implications for ecosystem functioning as well as the global carbon and energy balance. Optical remote sensing is an effective tool for monitoring ecosystem functioning in this remote biome. However, limited field-based spectral characterization of the spatial and temporal heterogeneity limits the accuracy of quantitative optical remote sensing at landscape scales. To address this research gap and support current and future satellite missions, three central research questions were posed:
• Does canopy-level spectral variability differ between dominant low Arctic vegetation communities and does this variability change between major phenological phases?
• How do canopy-level vegetation colour images recorded with high and low spectral resolution devices relate to phenological changes in leaf-level photosynthetic pigment concentrations?
• How does spatial aggregation of high spectral resolution data from the ground to the satellite scale influence low Arctic tundra vegetation signatures, and what is the resulting potential of upcoming hyperspectral spaceborne systems for low Arctic vegetation characterization?
To answer these questions a unique and detailed database was assembled. Field-based canopy-level spectral reflectance measurements, nadir digital photographs, and photosynthetic pigment concentrations of dominant low Arctic vegetation communities were acquired at three major phenological phases representing early, peak and late season. Data were collected in 2015 and 2016 in the Toolik Lake Research Natural Area located in north central Alaska on the North Slope of the Brooks Range. In addition to field data an aerial AISA hyperspectral image was acquired in the late season of 2016. Simulations of broadband Sentinel-2 and hyperspectral Environmental and Mapping Analysis Program (EnMAP) satellite reflectance spectra from ground-based reflectance spectra as well as simulations of EnMAP imagery from aerial hyperspectral imagery were also obtained.
Results showed that canopy-level spectral variability within and between vegetation communities differed by phenological phase. The late season was identified as the most discriminative for identifying many dominant vegetation communities using both ground-based and simulated hyperspectral reflectance spectra. This was due to an overall reduction in spectral variability and comparable or greater differences in spectral reflectance between vegetation communities in the visible near infrared spectrum.
Red, green, and blue (RGB) indices extracted from nadir digital photographs and pigment-driven vegetation indices extracted from ground-based spectral measurements showed strong significant relationships. RGB indices also showed moderate relationships with chlorophyll and carotenoid pigment concentrations. The observed relationships with the broadband RGB channels of the digital camera indicate that vegetation colour strongly influences the response of pigment-driven spectral indices and digital cameras can track the seasonal development and degradation of photosynthetic pigments.
Spatial aggregation of hyperspectral data from the ground to the airborne and then to the simulated satellite scale was influenced by non-photosynthetic components, as demonstrated by a distinct shift of the red edge to shorter wavelengths. Correspondence between spectral reflectance at the three scales was highest in the red spectrum and lowest in the near infrared. By artificially mixing litter spectra at different proportions into the ground-based spectra, correspondence with aerial and satellite spectra increased; greater proportions of litter were required to achieve correspondence at the satellite scale.
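The litter-mixing experiment can be illustrated with a linear (areal) mixing model: as the litter fraction grows, the flat litter spectrum dampens the canopy's sharp near-infrared rise, and the steepest slope, used here as a simple proxy for the red-edge position, moves to shorter wavelengths. The spectra below are invented toy values, not measured reflectances:

```python
import numpy as np

# hypothetical reflectance spectra at a few wavelengths (red to NIR)
wavelengths = np.array([660, 680, 700, 720, 740, 760])          # nm
vegetation = np.array([0.04, 0.04, 0.10, 0.23, 0.38, 0.43])    # green canopy
litter = np.array([0.18, 0.23, 0.26, 0.28, 0.29, 0.295])       # no red edge

def mix(f_litter):
    # linear (areal) mixing of litter into the vegetation spectrum
    return f_litter * litter + (1 - f_litter) * vegetation

def red_edge_position(spectrum):
    # wavelength of the steepest reflectance increase (discrete derivative)
    slopes = np.diff(spectrum) / np.diff(wavelengths)
    i = np.argmax(slopes)
    return 0.5 * (wavelengths[i] + wavelengths[i + 1])

for f in (0.0, 0.5, 0.8):
    pos = red_edge_position(mix(f))
    print(f"litter fraction {f:.1f}: steepest slope near {pos:.0f} nm")
```

In this toy setup the proxy red edge sits near 730 nm for pure vegetation and moves to shorter wavelengths once litter dominates the mixture, mirroring the shift observed during aggregation.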
Overall, this thesis found that integrating multiple temporal, spectral, and spatial data sources is necessary to monitor the complexity and heterogeneity of Arctic tundra ecosystems. The identification of spectrally similar vegetation communities can be optimized using non-peak-season hyperspectral data, leading to more detailed identification of vegetation communities. The results also highlight the power of vegetation colour to link ground-based and satellite data. Finally, a detailed characterization of non-photosynthetic ecosystem components is crucial for the accurate interpretation of vegetation signals at landscape scales.