The present paper addresses a current view in the psycholinguistic literature that case exhibits processing properties distinct from those of other morphological features such as number (cf. Fodor & Inoue, 2000; Meng & Bader, 2000a/b). In a speeded-acceptability judgement experiment, we show that the low performance previously found for case in contrast to number violations is limited to nominative case, whereas violations involving accusative and dative are judged more accurately. The data thus do not support the proposal that case per se is associated with special properties (in contrast to other features such as number) in reanalysis processes. Rather, there are significant judgement differences between the object cases accusative and dative on the one hand and the subject nominative case on the other. This may be explained by the fact that nominative has a specific status in German (and many other languages) as a default case.
Glacial lakes in the Hindu Kush–Karakoram–Himalayas–Nyainqentanglha (HKKHN) region have grown rapidly in number and area in past decades, and some dozens have drained in catastrophic glacial lake outburst floods (GLOFs). Estimating the regional susceptibility of glacial lakes has largely relied on qualitative assessments by experts, thus motivating a more systematic and quantitative appraisal. Against the backdrop of current climate-change projections and the potential for elevation-dependent warming, an objective and regionally consistent assessment is urgently needed. We use an inventory of 3390 moraine-dammed lakes and their documented outburst history in the past four decades to test whether elevation, lake area and its rate of change, glacier-mass balance, and monsoonality are useful inputs to a probabilistic classification model. We implement these candidate predictors in four Bayesian multi-level logistic regression models to estimate the posterior susceptibility to GLOFs. We find that mostly larger lakes have been more prone to GLOFs in the past four decades, regardless of the elevation band in which they occurred. We also find that including the regional average glacier-mass balance improves the model classification. In contrast, changes in lake area and monsoonality play ambiguous roles. Our study provides the first quantitative evidence that GLOF susceptibility in the HKKHN scales with lake area, though less so with its dynamics. Our probabilistic prognoses offer an improvement over a random classification based on average GLOF frequency. Yet they also reveal some major uncertainties that have remained largely unquantified previously and that challenge the applicability of single models. Ensembles of multiple models could be a viable alternative for more accurately classifying the susceptibility of moraine-dammed lakes to GLOFs.
Pokhara (ca. 850 m a.s.l.), Nepal's second-largest city, lies at the foot of the Higher Himalayas and has more than tripled its population in the past 3 decades. Construction materials are in high demand in rapidly expanding built-up areas, and several informal settlements cater to unregulated sand and gravel mining in the Pokhara Valley's main river, the Seti Khola. This river is fed by the Sabche glacier below Annapurna III (7555 m a.s.l.), some 35 km upstream of the city, and traverses one of the steepest topographic gradients in the Himalayas. In May 2012 a sudden flood caused >70 fatalities and intense damage along this river and rekindled concerns about flood risk management. We estimate the flow dynamics and inundation depths of flood scenarios using the hydrodynamic model HEC-RAS (Hydrologic Engineering Center’s River Analysis System). We simulate the potential impacts of peak discharges from 1000 to 10 000 m3 s−1 on land cover based on high-resolution Maxar satellite imagery and OpenStreetMap data (buildings and road network). We also trace the dynamics of two informal settlements near Kaseri and Yamdi with high potential flood impact from RapidEye, PlanetScope, and Google Earth imagery of the past 2 decades. Our hydrodynamic simulations highlight several sites of potential hydraulic ponding that would largely affect these informal settlements and sites of sand and gravel mining. These built-up areas grew between 3- and 20-fold, thus likely raising local flood exposure well beyond changes in flood hazard. Besides these drastic local changes, about 1 % of Pokhara's built-up urban area and essential rural road network is in the highest-hazard zones highlighted by our flood simulations. Our results stress the need to adapt early-warning strategies for locally differing hydrological and geomorphic conditions in this rapidly growing urban watershed.
High-mountain regions provide valuable ecosystem services, including food, water, and energy production, to more than 900 million people worldwide. Projections hold that this population will increase rapidly in the coming decades, accompanied by continued urbanisation of cities located in mountain valleys. One manifestation of this ongoing socio-economic change in mountain societies is a rise in settlement areas and transportation infrastructure, while growing power needs fuel the construction of hydropower plants along rivers in the high-mountain regions of the world. However, the physical processes governing the cryosphere of these regions are highly sensitive to changes in climate, and global warming will likely alter the conditions in the headwaters of high-mountain rivers. One potential implication of this change is an increase in the frequency and magnitude of outburst floods – highly dynamic flows capable of carrying large amounts of water and sediment. Sudden outbursts from lakes formed behind natural dams are complex geomorphological processes and are often part of a hazard cascade. In contrast to other types of natural hazards in high-alpine areas, for example landslides or avalanches, outburst floods are highly infrequent. Therefore, observations and data describing, for example, the mode of outburst or the hydraulic properties of the downstream-propagating flow are very limited, which is a major challenge in contemporary (glacial) lake outburst flood research. Although glacial lake outburst floods (GLOFs) and landslide-dammed lake outburst floods (LLOFs) are rare, a number of documented events caused high fatality counts and damage. The highest documented losses due to outburst floods since the start of the 20th century were caused by only a few high-discharge events. Thus, outburst floods can be a significant hazard to downvalley communities and infrastructure in high-mountain regions worldwide.
This thesis focuses on the Greater Himalayan region, a vast mountain belt stretching across 0.89 million km2. Although potentially hundreds of outburst floods have occurred there since the beginning of the 20th century, data on these events remain scarce. Projected cryospheric changes, including glacier-mass wastage and permafrost degradation, will likely result in an overall increase of the water volume stored in meltwater lakes as well as the destabilisation of mountain slopes in the Greater Himalayan region. Thus, the potential for outburst floods to affect the increasingly densely populated valleys of this mountain belt is also likely to grow in the future. A prime example of one of these valleys is the Pokhara valley in Nepal, which is drained by the Seti Khola, a river crossing one of the steepest topographic gradients in the Himalayas. This valley is also home to Nepal’s second-largest, rapidly growing city, Pokhara, which currently has a population of more than half a million people – some of whom live in informal settlements within the floodplain of the Seti Khola. Although there is ample evidence for past outburst floods along this river in recent and historic times, these events have hardly been quantified.
The main motivation of my thesis is to address the scarcity of data on past and potential future outburst floods in the Greater Himalayan region, both at a regional and at a local scale. For the former, I compiled an inventory of >3,000 moraine-dammed lakes, of which about 1% had a documented sudden failure in the past four decades. I used these data to test whether a number of predictors that have been widely applied in previous GLOF assessments are statistically relevant when estimating past GLOF susceptibility. For this, I set up four Bayesian multi-level logistic regression models, in which I explored the credibility of the predictors lake area, lake-area dynamics, lake elevation, parent-glacier mass balance, and monsoonality. By using a hierarchical approach consisting of two levels, this probabilistic framework also allowed for spatial variability in GLOF susceptibility across the vast study area, which had not been considered in studies of this scale until now. The model results suggest that in the Nyainqentanglha and Eastern Himalayas – regions with strongly negative glacier-mass balances – lakes have been more prone to releasing GLOFs than in regions with less negative or even stable glacier-mass balances. Similarly, larger lakes in larger catchments had, on average, a higher probability of having had a GLOF in the past four decades. Yet the roles of monsoonality, lake elevation, and lake-area dynamics were more ambiguous. This challenges the credibility of a lake’s rapid growth in surface area as an indicator of a pending outburst – a metric that has been applied to regional GLOF assessments worldwide.
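As an illustration of the kind of classifier described above, the following sketch evaluates a single-level logistic regression. The variable names and coefficient values are made-up illustrative assumptions, not the fitted posteriors of the four multi-level models; only the sign conventions follow the reported findings.

```python
import math

def glof_susceptibility(log_lake_area, glacier_mass_balance,
                        intercept=-4.6, b_area=0.8, b_mb=-0.5):
    """Point estimate of GLOF susceptibility from a logistic regression.

    All coefficients are hypothetical placeholders. Their signs mirror
    the findings above: larger lakes (positive b_area) and more negative
    regional glacier-mass balances (negative b_mb, so a negative balance
    raises the logit) increase the estimated susceptibility.
    """
    logit = intercept + b_area * log_lake_area + b_mb * glacier_mass_balance
    return 1.0 / (1.0 + math.exp(-logit))  # inverse-logit link
```

A Bayesian multi-level treatment would additionally place priors on these coefficients and let the intercept (and possibly the slopes) vary by region, yielding a posterior distribution over such curves rather than a single point estimate.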
At a local scale, my thesis aims to overcome data scarcity concerning the flow characteristics of the catastrophic May 2012 flood along the Seti Khola, which caused 72 fatalities, as well as of potentially much larger predecessors, which deposited >1 km³ of sediment in the Pokhara valley between the 12th and 14th century CE. To reconstruct peak discharges, flow depths, and flow velocities of the 2012 flood, I mapped the extents of flood sediments from RapidEye satellite imagery and used these as a proxy for inundation limits. To constrain the latter for the Mediaeval events, I utilised outcrops of slackwater deposits in the fills of tributary valleys. Using steady-state hydrodynamic modelling for a wide range of plausible scenarios, from meteorological floods (1,000 m³ s⁻¹) to cataclysmic outburst floods (600,000 m³ s⁻¹), I assessed the likely initial discharges of the recent and the Mediaeval floods based on the lowest mismatch between sedimentary evidence and simulated flood limits. One-dimensional HEC-RAS simulations suggest that the 2012 flood most likely had a peak discharge of 3,700 m³ s⁻¹ in the upper Seti Khola, which attenuated to 500 m³ s⁻¹ by the time the flood arrived in Pokhara’s suburbs some 15 km downstream.
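The scenario-selection step described above amounts to choosing, from a set of simulated discharges, the one whose predicted flood limits disagree least with the mapped sediment evidence. A minimal sketch follows; the discharge-extent pairs are invented placeholders, not the thesis's actual simulation results.

```python
def best_fit_discharge(scenarios, observed_extent):
    """Return the peak discharge (m³/s) whose simulated inundation
    extent has the lowest absolute mismatch with the mapped
    flood-sediment extent used as a proxy for inundation limits."""
    return min(scenarios, key=lambda q: abs(scenarios[q] - observed_extent))

# Hypothetical simulated extents (km²) for three candidate discharges:
scenarios = {1_000: 2.1, 3_700: 5.0, 10_000: 9.3}
best = best_fit_discharge(scenarios, observed_extent=5.2)  # → 3700
```

In practice the mismatch metric would compare mapped and simulated inundation polygons (e.g. their non-overlapping area) rather than a single scalar extent.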
Two-dimensional ANUGA simulations with peak discharges orders of magnitude higher show extensive backwater effects in the main tributary valleys. These backwater effects match the locations of slackwater deposits and hence attest to the flood character of the Mediaeval sediment pulses. This thesis provides the first quantitative support for the hypothesis that the latter were linked to earthquake-triggered outbursts of large former lakes in the headwaters of the Seti Khola, producing floods with peak discharges of >50,000 m³ s⁻¹.
Building on this improved understanding of past floods along the Seti Khola, my thesis continues with an analysis of the impacts of potential future outburst floods on land cover, including built-up areas and infrastructure mapped from high-resolution satellite and OpenStreetMap data. HEC-RAS simulations of ten flood scenarios, with peak discharges ranging from 1,000 to 10,000 m³ s⁻¹, show that the relative inundation hazard is highest in Pokhara’s north-western suburbs. There, hydraulic ponding upstream of narrow gorges might locally sustain higher flow depths. Moreover, along this reach, informal settlements and gravel-mining activities are close to the active channel. By tracing the construction dynamics in two of these potentially affected informal settlements on multi-temporal RapidEye, PlanetScope, and Google Earth imagery, I found that exposure increased locally threefold to twentyfold in just over a decade (2008 to 2021).
In conclusion, this thesis provides new quantitative insights into the past controls on the susceptibility of glacial lakes to sudden outbursts at a regional scale, and into the flow dynamics of flood waves released by past events at a local scale, which can aid future hazard assessments on transient scales in the Greater Himalayan region. My subsequent exploration of the impacts of potential future outburst floods on exposed infrastructure and (informal) settlements may provide valuable inputs to anticipatory assessments of multiple risks in the Pokhara valley.
Research on problem solving offers insights into how humans process task-related information and which strategies they use (Newell and Simon, 1972; Öllinger et al., 2014). Problem solving can be defined as the search for possible changes in one's mind (Kahneman, 2003). In a recent study, Adams et al. (2021) assessed whether the predominant problem-solving strategy when making changes involves adding or subtracting elements. To do this, they used several examples of simple problems, such as editing text or making visual patterns symmetrical, either in naturalistic settings or online. The essence of the authors' findings is a strong preference to add rather than subtract elements across a diverse range of problems, including stabilizing artifacts, creating symmetrical patterns, and editing texts. More specifically, they demonstrated that “participants were less likely to identify advantageous subtractive changes when the task did not (vs. did) cue them to consider subtraction, when they had only one opportunity (vs. several) to recognize the shortcomings of an additive search strategy or when they were under a higher (vs. lower) cognitive load” (Adams et al., 2021, p. 258).
Addition and subtraction are generally defined as de-contextualized mathematical operations using abstract symbols (Russell, 1903/1938). Nevertheless, understanding of both symbols and operations is informed by everyday activities, such as making or breaking objects (Lakoff and Núñez, 2000; Fischer and Shaki, 2018). The universal attribution of “addition bias” or “subtraction neglect” to problem solving activities is perhaps a convenient shorthand but it overlooks influential framing effects beyond those already acknowledged in the report and the accompanying commentary (Meyvis and Yoon, 2021).
Most importantly, while Adams et al.'s study addresses an important issue, their method of verbally instructing participants, together with a lack of control over several known biases, might render their findings less than conclusive. Below, we discuss our concerns about the identified biases, namely those regarding the instructions and the experimental materials. Moreover, we refer to research from mathematical cognition that provides new insights into Adams et al.'s findings.
Commentary
(2015)
The international community has still not succeeded in presenting a solution to the Afghan crisis. Yet the current situation makes an end to the state of war and the opening of constructive negotiations indispensable. The Geneva negotiations of the 1980s on the withdrawal of the Soviet army from Afghanistan could serve as a model here.
This thesis investigates nonlinear coupling mechanisms of acoustic oscillators that can lead to synchronization. Building on the questions raised in previous work, theoretical and experimental studies as well as numerical simulations are used to identify the elements of sound generation in the organ pipe and the mechanisms of mutual interaction between organ pipes. From this, a nonlinearly coupled model of self-excited oscillators, based for the first time entirely on aeroacoustic and fluid-dynamical first principles, is developed to describe the behaviour of two interacting organ pipes. The model calculations are compared with the experimental findings. It turns out that sound generation and the coupling mechanisms of organ pipes are largely described correctly by the developed oscillator model. In particular, the model clarifies the cause of the nonlinear relationship between coupling strength and synchronization of the coupled two-pipe system, which manifests itself in a nonlinear shape of the Arnold tongue. With these insights, the influence of the room on sound generation in organ pipes is considered. To this end, numerical simulations of the interaction of an organ pipe with various room geometries, such as plane, convex, concave, and serrated geometries, are examined as examples. The influence of swell boxes on the sound generation and timbre of the organ pipe is also studied. In further, novel synchronization experiments with identically tuned organ pipes, as well as with mixtures, synchronization is investigated for various horizontal and vertical pipe spacings in the plane of sound radiation.
The spatially isotropic discontinuities observed here for the first time in the oscillation behaviour of the coupled pipe systems point to distance-dependent switching between anti-phase and in-phase synchronization regimes. Finally, the possibility is documented of realistically reproducing the phenomenon of synchronization of two organ pipes by numerical simulation, i.e. by treating the compressible Navier-Stokes equations with appropriate boundary and initial conditions. This, too, is a novelty.
This thesis investigates the effects of synchronization of nonlinear acoustic oscillators using the example of two organ pipes. From existing experimental measurement data, the typical features of synchronization are extracted and presented. A detailed analysis follows of the transition regions into the synchronization plateau, of the phenomena during synchronization, and of the exit of the two organ pipes from the synchronization region at various coupling strengths. The experimental findings raise questions about the coupling function. To address these, sound generation in an organ pipe is examined. With the help of numerical simulations of sound generation, the question is pursued of which fluid-dynamical and aeroacoustic causes underlie sound generation in the organ pipe and to what extent these mechanisms can be mapped onto the model of a self-excited acoustic oscillator. Using the coarse-graining method, a model ansatz is formulated.
The echo chamber model describes the development of groups in heterogeneous social networks. By heterogeneous social network we mean a set of individuals, each of whom represents exactly one opinion. The existing relationships between individuals can then be represented by a graph. The echo chamber model is a time-discrete model which, like a board game, is played in rounds. In each round, an existing relationship is randomly and uniformly selected from the network and the two connected individuals interact. If the opinions of the individuals involved are sufficiently similar, they continue to move closer together in their opinions, whereas in the case of opinions that are too far apart, they break off their relationship and one of the individuals seeks a new relationship. In this paper we examine the building blocks of this model. We start from the observation that changes in the structure of relationships in the network can be described by a system of interacting particles in a more abstract space.
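The round-based update rule described above can be sketched as follows. The threshold, step size, and rewiring choice (which of the two endpoints seeks the new relationship) are simplifying assumptions for illustration, not the model's exact specification.

```python
import random

def echo_chamber_step(opinions, edges, threshold=0.5, step=0.5, rng=random):
    """One round of a simplified echo chamber model.

    opinions: dict node -> opinion (a real number)
    edges: set of frozenset({u, v}) relationships

    A relationship is drawn uniformly at random. If the endpoints'
    opinions are sufficiently similar, both move toward their midpoint;
    otherwise the relationship is broken and one endpoint (here,
    arbitrarily, the smaller label) rewires to a random new partner.
    """
    edge = rng.choice(sorted(edges, key=sorted))  # uniform random edge
    u, v = sorted(edge)
    if abs(opinions[u] - opinions[v]) <= threshold:
        # Similar enough: opinions move closer together.
        mid = (opinions[u] + opinions[v]) / 2
        opinions[u] += step * (mid - opinions[u])
        opinions[v] += step * (mid - opinions[v])
    else:
        # Too far apart: break the tie and seek a new relationship.
        edges.discard(edge)
        candidates = [w for w in opinions
                      if w != u and frozenset({u, w}) not in edges]
        if candidates:
            edges.add(frozenset({u, rng.choice(candidates)}))
    return opinions, edges
```

Iterating this step drives clusters of similar opinions together while severing ties across large opinion gaps, which is the group-formation behaviour the model is meant to capture.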
These reflections lead to the definition of a new abstract graph that encompasses all possible relational configurations of the social network. This provides us with the geometric understanding necessary to analyse the dynamic components of the echo chamber model in Part III. As a first step, in Part 7, we leave aside the opinions of the individuals and assume that the position of the edges changes with each move as described above, in order to obtain a basic understanding of the underlying dynamics. Using Markov chain theory, we find upper bounds on the speed of convergence of an associated Markov chain to its unique stationary distribution and show that there are distinguishable networks that cannot be told apart by the dynamics under analysis, in the sense that the stationary distribution of the associated Markov chain gives equal weight to these networks.
In the reversible cases, we focus in particular on the explicit form of the stationary distribution as well as on the lower bounds of the Cheeger constant to describe the convergence speed.
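For a toy finite chain, the stationary distribution referred to above can be approximated by power iteration; the 2-state transition matrix below is an invented example, not the network chain analysed in the thesis.

```python
def stationary_distribution(P, iters=1_000):
    """Approximate the unique stationary distribution of an ergodic
    finite Markov chain by repeatedly applying the row-stochastic
    transition matrix P (a list of rows) to the uniform start vector."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Invented 2-state example; its stationary distribution is (5/6, 1/6).
pi = stationary_distribution([[0.9, 0.1], [0.5, 0.5]])
```

The convergence speed of exactly this iteration is what the spectral and Cheeger-constant bounds mentioned above control: the second-largest eigenvalue modulus of P governs how fast pi approaches the stationary distribution.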
The final result of Section 8, based on absorbing Markov chains, shows that in a reduced version of the echo chamber model, a hierarchical structure of the number of conflicting relations can be identified.
We can use this structure to determine an upper bound on the expected absorption time, using a quasi-stationary distribution. This hierarchical structure also provides a bridge to classical theories of pure death processes. We conclude by showing how future research can exploit this link and by discussing the importance of the results as building blocks for a full theoretical understanding of the echo chamber model. Finally, Part IV presents a published paper on the birth-death process with partial catastrophe. The paper is based on the explicit calculation of the first moment of a catastrophe. The first part relies entirely on an analytical approach to second-degree recurrences with linear coefficients; the convergence to 0 of the resulting sequence, as well as the speed of convergence, is proved. The second part determines upper bounds on the expected value of the population size and on its variance, as well as on the difference between the determined upper bound and the actual expected value. For these results we use almost exclusively the theory of ordinary nonlinear differential equations.
Within the project "PSI-Potsdam", the Center for Teacher Education and Education Research (ZeLB) is responsible for supporting qualification in media education/digitalisation in teacher education at the University of Potsdam. The focus of this work initially lay on media-didactic qualification and, from the second funding phase of the programme in 2018 onwards, continued with an emphasis on the challenges and opportunities of digitalisation for teaching and learning in teacher education. This contribution presents the developments and contributions relating to digital media education and digitalisation for teacher education at the University of Potsdam. Furthermore, perspectives are outlined on how digital media education can be anchored in university teacher education in the future.
This paper deals with the teaching of grammar in the English as a foreign language (EFL) classroom. In this context, a course book (English G 21 A2) is examined with regard to whether it is compatible with current theories of second language acquisition (SLA).
At the beginning of this paper, past and present views on grammar teaching are summarized, followed by an analysis of the current curriculum concerning its guidelines for grammar teaching in the foreign language classroom. This analysis concludes that the curriculum of Brandenburg hardly gives any recommendations regarding which grammatical phenomena should be taught. This explains, at least partly, the important position course books hold in the foreign language classroom. Teachers use them as a source of material as well as a guideline for which topics can be taught and in which order.
The following part gives an overview of cognitive models of SLA and foreign language teaching, among others Krashen’s Monitor Hypothesis, R. Ellis’ Weak Interface Model, and Pienemann’s Processability Theory. On the basis of these models, criteria are developed for the ideal design of a course book that would support grammar teaching in line with current findings. Among those criteria are offering ample input in the target language, providing practice and consciousness-raising activities, taking the sequence of acquisition into consideration, and providing a diagnostic tool that enables students to find out in which areas of the target language they need to improve. Furthermore, the inclusion of opportunities for (individual) revision is regarded as essential. All of these criteria are, of course, subject to the reservation that the influence of course books on classroom practice is limited, as the final decisions are made by the teacher in the teaching situation.
The analysis focuses on one communicative intention that is usually covered in English lessons between the third and sixth year of learning: talking about the future. First, the possibilities for expressing futurity in the English language are analysed and narrowed down for use in teaching. The chosen course book is then described and analysed, and the way the book deals with talking about the future is compared to the criteria specified earlier in the paper. This comparison showed that the book is compatible with SLA theories in many ways (e.g. concerning the explanations of grammatical structures) but that there is still room for improvement (e.g. concerning the amount of input and the number of consciousness-raising activities).
Hardy inequalities on graphs
(2024)
The dissertation deals with a central inequality of non-linear potential theory, the Hardy inequality. It states that the non-linear energy functional can be estimated from below by the p-th power of a weighted p-norm, p > 1. The energy functional consists of a divergence part and an arbitrary potential part. Locally summable infinite graphs were chosen as the underlying space. Previous publications on Hardy inequalities on graphs have mainly considered the special case p = 2, or locally finite graphs without a potential part.
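In generic notation (the thesis's precise conventions may differ), a Hardy inequality of the kind described states that, for all finitely supported functions u on the vertex set X,

```latex
\underbrace{\frac{1}{2}\sum_{x,y\in X} b(x,y)\,\lvert u(x)-u(y)\rvert^{p}
  \;+\; \sum_{x\in X} c(x)\,\lvert u(x)\rvert^{p}}_{\text{divergence part}\;+\;\text{potential part}}
\;\geq\; \sum_{x\in X} w(x)\,\lvert u(x)\rvert^{p},
\qquad p > 1,
```

where b encodes the edge weights, c the potential, and w the Hardy weight; an optimal weight is, loosely speaking, one that cannot be increased pointwise while preserving the inequality.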
Two fundamental questions now arise quite naturally: For which graphs is there a Hardy inequality at all? And, if it exists, is there a way to obtain an optimal weight? Answers to these questions are given in Theorem 10.1 and Theorem 12.1. Theorem 10.1 gives a number of characterizations; among others, there is a Hardy inequality on a graph if and only if there is a Green's function. Theorem 12.1 gives an explicit formula to compute optimal Hardy weights for locally finite graphs under some additional technical assumptions. Examples show that Green's functions are good candidates to be used in the formula.
Emphasis is also placed on illustrating the theory with examples. The focus is on natural numbers, Euclidean lattices, trees and star graphs. Finally, a non-linear version of the Heisenberg uncertainty principle and a Rellich inequality are derived from the Hardy inequality.
Molecules are often naturally embedded in a complex environment. As a consequence, characteristic properties of a molecular subsystem can be substantially altered or new properties emerge due to interactions between molecular and environmental degrees of freedom. The present thesis is concerned with the numerical study of quantum dynamical and stationary properties of molecular vibrational systems embedded in selected complex environments.
In the first part, we discuss "strong-coupling" model scenarios for molecular vibrations interacting with few quantized electromagnetic field modes of an optical Fabry-Pérot cavity. We thoroughly elaborate on properties of emerging "vibrational polariton" light-matter hybrid states and examine the relevance of the dipole self-energy. Further, we identify cavity-induced quantum effects and an emergent dynamical resonance in a cavity-altered thermal isomerization model, which lead to significant suppression of thermal reaction rates. Moreover, for a single rovibrating diatomic molecule in an optical cavity, we observe non-adiabatic signatures in dynamics due to "vibro-polaritonic conical intersections" and discuss spectroscopically accessible "rovibro-polaritonic" light-matter hybrid states.
In the second part, we study a weakly coupled but numerically challenging quantum mechanical adsorbate-surface model system comprising a few thousand surface modes. We introduce an efficient construction scheme for a "hierarchical effective mode" approach to reduce the number of surface modes in a controlled manner. In combination with the multilayer multiconfigurational time-dependent Hartree (ML-MCTDH) method, we examine the vibrational adsorbate relaxation dynamics from different excited adsorbate states by solving the full non-Markovian system-bath dynamics for the characteristic relaxation time scale. We examine half-lifetime scaling laws from vibrational populations and identify prominent non-Markovian signatures as deviations from Markovian reduced system density matrix theory in vibrational coherences, system-bath entanglement and energy transfer dynamics.
In the final part of this thesis, we approach the dynamics and spectroscopy of vibronic model systems at finite temperature by formulating the ML-MCTDH method in the non-stochastic framework of thermofield dynamics. We apply our method to thermally altered ultrafast internal conversion in the well-known vibronic coupling model of pyrazine. Numerically beneficial representations of multilayer wave functions ("ML-trees") are identified for different temperature regimes, which allow us to access thermal effects on both electronic and vibrational dynamics as well as spectroscopic properties for several pyrazine models.
This article examines public service resilience during the COVID-19 pandemic and studies the switch to telework due to social distancing measures. We argue that the pandemic and related policies led to increasing demands on public organisations and their employees. Following the job demands-resources model, we argue that resilience can only arise in the presence of resources for buffering these demands. Survey data were collected from 1,189 German public employees, of whom 380 participants were included in the analysis. The results suggest that the public service was resilient against the crisis and that the shift to telework was not as demanding as expected.
This special issue of the publication series of the Chair of Public and Nonprofit Management presents results of a student consulting project from the winter semester 2018/19, in which a vision for a digitalised public administration was developed. Using scenario methods, future scenarios were developed and tested that deal either with citizens and companies as customers of the administration, with public employees, or with the structural and procedural organisation of the administration.
Neue Talente braucht das Amt
(2018)
Staff cuts and demographic change are leading to recruitment difficulties and a shortage of talented junior staff in public administration. At the same time, public authorities face new requirements that call for different qualifications among their employees. Public services with closed career systems in particular therefore need greater flexibility in recruiting and training junior staff. Trainee programmes can provide a remedy.
Trainee programmes can serve to recruit and train junior staff in the public service. In contrast to the private sector, where such programmes have been used for decades, they have only been run in the German public service for a few years. A first empirical survey now shows that trainee programmes in the public sector are well suited to training junior staff and socializing them within the organization. A clear programme structure, effective off-the-job training, cross-departmental project work, the extent of trainee mentoring, and the commitment of the respective agency's leadership to the programme were identified as decisive factors. It also became clear that, when a trainee programme is introduced, the trainees' mentors must be prepared accordingly and acceptance of this new form of recruiting junior staff must be created within the authorities (change management). The results also show, however, that such programmes are not suitable for developing staff who are already employed.
More than a century ago the phenomenon of non-Mendelian inheritance (NMI), defined as any type of inheritance pattern in which traits do not segregate in accordance with Mendel's laws, was first reported. In the plant kingdom three genomic compartments, the nucleus, chloroplast, and mitochondrion, can participate in such phenomena. High-throughput sequencing (HTS) has proved to be a key technology for investigating NMI phenomena by assembling and/or resequencing entire genomes. However, the generation, analysis and interpretation of such datasets remain challenging owing to the multi-layered biological complexity. To advance our knowledge in the field of NMI, I conducted three studies involving different HTS technologies and implemented two new algorithms to analyze the resulting data.
In the first study I implemented a novel post-assembly pipeline, called Semi-Automated Graph-Based Assembly Curator (SAGBAC), which visualizes non-graph-based assemblies as graphs, identifies recombinogenic repeat pairs (RRPs), and reconstructs plant mitochondrial genomes (PMGs) in a semi-automated workflow. We applied this pipeline to assemblies of three Oenothera species, resulting in a spatially folded and circularized genome model. This model was confirmed by PCR and Southern blot analyses and was used to predict a defined set of 70 PMG isoforms. Using Illumina Mate Pair and PacBio RSII data, the stoichiometry of the RRPs was determined quantitatively and found to differ by up to three-fold.
In the second study I developed a post-multiple-sequence-alignment algorithm, called correlation mapping (CM), which correlates segment-wise numbers of nucleotide changes with a numerically ascertainable phenotype. We applied this algorithm to 14 wild-type and 18 mutagenized plastome assemblies within the genus Oenothera and identified two genes, accD and ycf2, that may cause the competitive behavior of plastid genotypes, as plastids can be biparentally inherited in Oenothera. Moreover, the lipid composition of the plastid envelope membrane is affected by polymorphisms within these two genes.
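The idea behind correlation mapping can be illustrated with a minimal sketch (not the published CM implementation): for each window of alignment columns, count the substitutions of every sequence relative to a reference and correlate those counts with a numeric phenotype. Sequences and phenotype values below are invented toy data:

```python
# Minimal correlation-mapping sketch: per-window substitution counts
# versus a numeric phenotype, scored by Pearson's r.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def correlation_map(alignment, reference, phenotype, window=3):
    """For each window start, count differences to the reference per
    sequence and correlate the counts with the phenotype values."""
    scores = []
    for start in range(len(reference) - window + 1):
        counts = [sum(1 for i in range(start, start + window)
                      if seq[i] != reference[i]) for seq in alignment]
        if len(set(counts)) > 1:        # r is undefined for constant counts
            scores.append((start, pearson(counts, phenotype)))
    return scores

# Toy example: the window starting at column 2 tracks the phenotype exactly.
ref = "AAAAAA"
seqs = ["AAAAAA", "AATAAA", "AATTAA", "AATTTA"]
pheno = [0.0, 1.0, 2.0, 3.0]
best = max(correlation_map(seqs, ref, pheno, window=3), key=lambda s: s[1])
print(best[0])  # -> 2
```

In the actual study, segments with high correlation scores point to candidate genes such as accD and ycf2.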
For the third study, I programmed a pipeline to investigate an NMI phenomenon known as paramutation in tomato by analyzing DNA and bisulfite sequencing data as well as microarray data. We identified the responsible gene (Solyc02g0005200) and were able to fully repress the phenotype it causes by heterologous complementation with a paramutation-insensitive transgene of the Arabidopsis thaliana orthologue. Additionally, a suppressor mutant shows a globally altered DNA methylation pattern and carries a large deletion leading to a gene fusion involving a histone deacetylase.
In conclusion, the algorithms and data-analysis pipelines I developed and implemented are suitable for investigating NMI and led to novel insights into such phenomena: by reconstructing PMGs (SAGBAC) as a prerequisite for studying mitochondria-associated phenotypes, by identifying genes causing interplastidial competition (CM), and by applying a DNA/bisulfite-seq analysis pipeline to shed light on a transgenerational epigenetic inheritance phenomenon.
Horomedon und Laokoon
(2011)
Nanostructured inorganic materials are routinely synthesized by the use of templates. Depending on the synthesis conditions of the product material, either “soft” or “hard” templates can be applied. For sol-gel processes, usually “soft” templating techniques are employed, while “hard” templates are used for high-temperature synthesis pathways. In classical templating approaches, the template has the sole role of structure-directing agent, in the sense that it does not participate in the chemical formation of the resulting material. This work investigates a new templating pathway to nanostructured materials in which the template is also a reagent in the formation of the final material. This concept is described as “reactive templating” and opens a synthetic path toward materials which cannot be synthesized on a nanometre scale by classical templating approaches. Metal nitrides are such materials. They are usually produced by the conversion of metals or metal oxides in ammonia flow at high temperature (T > 1000°C), which makes the application of classical templating techniques difficult. Graphitic carbon nitride, g-C3N4, despite its fundamental and theoretical importance, is probably one of the most promising materials to complement carbon in materials science, and much effort has been put into its synthesis. A simple polyaddition/elimination reaction path at high temperature (T = 550°C) allows the polymerization of cyanamide toward graphitic carbon nitride solids. By hard templating, using nanostructured silica or aluminium oxide as nanotemplates, a variety of nanostructured graphitic carbon nitrides such as nanorods, nanotubes, and meso- and macroporous powders could be obtained by nanocasting or nanocoating.
Due to the special semiconducting properties of the graphitic carbon nitride matrix, the nanostructured graphitic carbon nitrides show unexpected catalytic activity for the activation of benzene in Friedel-Crafts-type reactions, making this material an interesting metal-free catalyst. Furthermore, owing to the chemical composition of g-C3N4 and the fact that it decomposes completely at temperatures between 600°C and 800°C even under inert atmosphere, g-C3N4 was shown to be a good nitrogen donor for the synthesis of early transition metal nitrides at high temperatures. Thus, using the nanostructured carbon nitrides as “reactive templates” or “nanoreactors”, various metal nitride nanostructures, such as nanoparticles and porous frameworks, could be obtained at high temperature. In this approach the carbon nitride nanostructure played both the role of the nitrogen source and that of the exotemplate, imprinting its size and shape on the resulting metal nitride nanostructure.
This work deals with the strain experienced by nursing staff in hospitals. It asks with which patterns of work-related behaviour and experience nurses approach their job demands, and how the manner of their personal engagement with those demands co-shapes their strain. The theoretical starting point is provided by salutogenetically oriented resource models, in particular Becker's model of mental health (Becker, 1982, 1986), according to which a person's state of health depends on how well he or she succeeds in coping with external and internal demands by means of external and internal resources. This is where the diagnostic instrument at the centre of this work, the AVEM (Arbeitsbezogenes Verhaltens- und Erlebensmuster, work-related behaviour and experience patterns; Schaarschmidt & Fischer, 1996, 2001), comes in: it captures a person's internal demands and resources and assigns them to four patterns of work-related behaviour and experience with reference to health and motivation. The hypotheses assume that, given the problematic working conditions in nursing, a withdrawal of engagement, i.e. a protective stance against unwanted demands perceived as inappropriate and against conditions over which one has little influence, predominates. Where working conditions are at least partly health-promoting and experienced as challenging, more favourable pattern constellations should occur. We expected the unfavourable tendencies to appear as early as vocational training and the first years in the profession. Pattern changes beneficial to health and personality should be achievable through targeted intervention.
Finally, we assumed that the work itself and its associated demands and working conditions are perceived in a pattern-specific manner. To answer these questions, results from several cross-sectional and longitudinal studies are drawn upon, carried out in Viennese hospitals and nursing schools as well as in German hospitals; findings from other occupational groups are presented for comparison. In addition to the AVEM, further questionnaires were used covering work-related values, the experience of resources in nursing work, experienced strain, and objective characteristics of the work task. The results confirm the hypotheses on all essential points. Compared with other occupational groups, nurses show marked restrictions in work engagement; with regard to the health risk patterns, nursing staff occupy a middle position. Pattern differentiation within the nursing population reveals the strongest differences as a function of position: the higher the position, the larger the share of the healthy pattern and the lower the tendency towards resignation. Most risk patterns are found among the nurses with the lowest qualification. Nursing students are characterized by a temporarily strong occurrence of resignative patterns of behaviour and experience and a continuous decline in engagement, a trend that continues after entry into employment. Only targeted, intensive, person-oriented interventions proved suitable for achieving pattern changes beneficial to health and personality. The work and its associated demands and conditions are perceived pattern-specifically: persons with restricted engagement or a tendency towards resignation rate essential job characteristics, to which personality- and health-promoting effects are attributed, as unimportant for themselves and report more deficits in their behaviour towards patients.
The results indicate that in the nursing profession it is above all the restraint in engagement that calls for critical scrutiny; by comparison, the problem of "burnout" appears less central. More favourable conditions for maintaining and promoting health exist where the concrete work setting offers an extended scope of tasks and action and more responsibility, findings consistent with resource models of work psychology. The findings on nursing students point to partly unfavourable aptitude among the trainees and suggest questioning the appropriateness of the demands made in nursing schools. Regarding the possibilities of changing the patterns in ways conducive to health and motivation, the results make clear that behaviour-oriented measures have little success without simultaneous condition-oriented interventions. Finally, in view of the pattern-specific perception of the work and its demands and conditions, concepts in work psychology that generally ascribe personality- and health-promoting effects to high or complex demands and extensive degrees of freedom at work require qualification from a differential perspective. The interaction found between personality and working conditions implies that behavioural prevention and structural prevention should be seen as inseparably linked.
Background: Established protein- and nucleic-acid-based methods for specific pathogen detection can only be performed under standardized laboratory conditions by trained personnel and are therefore associated with high time and cost expenditure. In nucleic-acid-based diagnostics, the introduction of isothermal amplification provides a fast and inexpensive alternative to the polymerase chain reaction (PCR). Owing to its high amplification efficiency, loop-mediated isothermal amplification (LAMP) offers a wide range of detection options suitable for both rapid-test and monitoring applications.
A central aim of this work was to improve the applicability of LAMP and to develop a new method for simple, fast and inexpensive pathogen detection using alternative DNA- or pyrophosphate-dependent detection procedures. First, direct and indirect detection methods were examined, and on this basis a procedure was developed for identifying new metal-ion-dependent fluorescent dyes for the selective detection of pyrophosphate in LAMP and other enzymatic reactions. As an alternative to DNA-based detection in digital LAMP, the previously established dyes were to be tested for pyrophosphate detection in an emulsion. Finally, a new reaction mechanism for the efficient generation of high-molecular-weight DNA under isothermal conditions was developed as an alternative to LAMP.
Results: For the detection of RNA- and DNA-based phytopathogens, real-time and end-point detection with various dyes was established in a closed system. Berberine was successfully used as a DNA-intercalating fluorescent dye in real-time LAMP, with sensitivity comparable to SYBR Green and EvaGreen. An advantage of berberine over the other dyes is that the DNA polymerase tolerates it even at high dye concentrations; berberine can therefore also be used for end-point detection in closed LAMP reactions without further adjustment of the reaction conditions. In addition, hydroxynaphthol blue (HNB), known for colorimetric end-point detection, was used for the first time for real-time fluorimetric detection of LAMP. Further metal-ion-dependent dyes for the indirect detection of LAMP via pyrophosphate were also identified. For this purpose, an iterative method was developed for selecting candidate dyes with respect to their enzyme compatibility and their spectral properties in the presence or absence of manganese ions. A combinatorial screening in microtiter-plate format was used to examine the complex concentration dependence between the individual components for a fluorimetric displacement assay. By visualizing the signal-to-noise ratio as an intensity matrix (heatmap), alizarin red S and tetracycline were first selected under simulated reaction conditions. In the subsequent enzymatic LAMP reaction, alizarin red S in particular was identified as an inexpensive, non-toxic and robust fluorescent dye, showing a pyrophosphate-dependent increase in fluorescence intensity.
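A combinatorial screen of this kind, ranking dye/concentration conditions by their signal-to-noise ratio, can be sketched as follows. The dye names come from the text, but all intensity values and concentrations are invented for illustration:

```python
# Illustrative sketch: pick the best dye/concentration pair from a
# microtiter-plate screen by the signal-to-noise ratio (SNR).

def best_condition(signal, background):
    """signal/background: dict mapping (dye, conc) -> fluorescence
    intensity. Returns the condition with the highest SNR plus all SNRs."""
    snr = {k: signal[k] / background[k] for k in signal}
    return max(snr, key=snr.get), snr

# Invented plate readings (arbitrary fluorescence units):
signal = {("alizarin_red_s", 50): 900.0, ("alizarin_red_s", 100): 1400.0,
          ("tetracycline", 50): 600.0, ("tetracycline", 100): 750.0}
background = {("alizarin_red_s", 50): 120.0, ("alizarin_red_s", 100): 140.0,
              ("tetracycline", 50): 110.0, ("tetracycline", 100): 150.0}

best, snr = best_condition(signal, background)
print(best)  # -> ('alizarin_red_s', 100)
```

Visualized over a full plate, the `snr` dictionary is exactly the intensity matrix (heatmap) described in the text.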
The previously established dyes (HNB, calcein and alizarin red S) were then successfully used for the indirect fluorimetric detection of pyrophosphate in a LAMP-optimized emulsion. The stability and homogeneity of the generated emulsion were improved by adding the emulsifier poloxamer 188. Fluorescence-microscopic analysis of the emulsion allowed a clear discrimination between positive and negative droplets, particularly with calcein and alizarin red S. Because of the complex primer design and the high probability of unspecific amplification in LAMP, a new Bst DNA-polymerase-dependent isothermal amplification reaction was developed. By integrating a specific linker structure (an abasic site or hexaethylene glycol) between two primer sequences, a bifunctional primer ensured the efficient regeneration of the primer binding sites. After specific hybridization to the template, the new primer induces refolding into a hairpin structure and simultaneously blocks polymerase activity on the opposite strand, enabling autocyclic amplification despite a constant reaction temperature. Finally, the efficiency of this "hinge-initiated primer-dependent amplification" (HIP) was improved by shortening the distance between a modified hinge primer and a PCR-like primer.
Conclusion: Owing to its high robustness and efficiency, LAMP has developed into a powerful alternative to classical PCR in molecular diagnostics. The different detection procedures improve the performance of qualitative and quantitative LAMP for field applications and for diagnostics, since the new DNA- and pyrophosphate-dependent detection methods can be used in a closed reaction and thus enable simple pathogen diagnostics. The methods presented can moreover reduce costs and save time compared with conventional approaches. An attractive goal is the further development of HIP for pathogen detection as an alternative to LAMP; the new LAMP detection procedures can be applied here as well. The use of Bst DNA-polymerase-dependent reactions furthermore enables the integration of robust isothermal amplification into microfluidic systems. By combining sample preparation, amplification and detection, future applications with short analysis times and little instrumentation are possible, particularly in pathogen diagnostics.
Especially over the last twenty years, the study of Linguistic Landscapes (LLs) has been gaining the status of an autonomous linguistic discipline. The LL of a (mostly) geographically limited area – which consists of e.g. billboards, posters, shop signs, material for election campaigns, etc. – gives deep insights into the presence or absence of languages in that particular area. An LL thus allows one not only to infer a language's dominance from its presence, but also the suppression of minorities from its absence, above all in areas where minority languages should – demographically speaking – be visible. The LLs of big cities are fruitful research areas owing to the mass of linguistic data. The first part of this paper deals with the theoretical and practical research conducted in LL studies so far; a summary of the theory, methodologies and different approaches is given. In the second part I apply this theoretical basis to my own case study, for which the LLs of two shopping streets in different areas of Hong Kong were examined in 2010. Given the long-lasting influence of British culture and mentality and the official status of the language, linguistic competence in English in Hong Kong is likely to be rather high. The case study's results are based on empirical data showing the objectively visible presence of English in both examined areas, as well as on two surveys, conducted both openly and anonymously. The surveys serve as a cross-check on the level of English competence in Hong Kong that was first estimated from the analysis of the LL. Hence, this case study offers a new approach to LL analysis which does not end with the description of the landscape's material composition (as most previous studies have done), but also includes its creators by asking in what way people's actual linguistic competence is reflected in Hong Kong's LL.
The eastern flank of the Central Andes in northwestern Argentina is characterized by mountain ranges bounded by reverse faults that form an active thick-skinned orogen with a non-systematic spatio-temporal pattern of contractional deformation. This pattern is reflected both in the dispersion of crustal seismic activity and in the location of Quaternary structures across the Eastern Cordillera and the Santa Bárbara System, configuring a diffuse orogenic front more than 200 km wide. The study of neotectonic activity in this region has gained relevance in recent years through the application of a variety of tools, including tectonic geomorphology, remote sensing, geodesy and conventional field studies. Lacustrine deposits have proven, in numerous examples, to be excellent markers of tectonic activity, given the original horizontality of their beds and their susceptibility to environmental change. For this reason, this work analyses the lacustrine deposits exposed in the central sector of the Calchaquí valleys (Cafayate region) in order to understand how Quaternary deformation is accommodated in one of the intermontane basins of the active orogenic wedge.
The strike of the Quaternary structures in the study area is subparallel to that of the faults that exhume the surrounding mountain ranges. Based on the stratigraphic, morphotectonic and structural study of the lacustrine deposits, a minimum of five deformation episodes affecting the Quaternary stratigraphic column were identified. By integrating balanced structural cross-sections with ages obtained in this work and compiled from the literature, minimum and maximum shortening rates of 0.19–2.80 and 0.21–4.47 mm/yr, respectively, were calculated for the middle-late Pleistocene. To compare these results with regional-scale measurements of active tectonics, data from geodetic stations in northwestern Argentina were compiled and used to construct a horizontal-velocity profile. The profile shows a gradual eastward decrease of the velocity vectors, indicating internal activity of the orogen in agreement with the records of seismic activity and the regional compilation of Quaternary structures.
Beyond the neotectonic characterization of this sector of the Eastern Cordillera, the stratigraphic analysis of the lacustrine deposits has refined the Quaternary geological evolution of the central sector of the Calchaquí valleys. At least seven episodes of lacustrine flooding related to the disconnection of the fluvial system from its base level were identified, giving rise to successive aggradation and erosion events. The maximum elevations reached by the palaeolakes, together with a previously published hydrological model for this region, also allowed a comparison with the regional palaeoclimatic record.
The results of this thesis represent a significant contribution to the knowledge of the tectonic and stratigraphic evolution of the central sector of the Calchaquí valleys during the Quaternary. Moreover, their integration at the regional scale contributes to a better understanding of deformation dynamics in the thick-skinned orogenic wedge of northwestern Argentina.
Countries processing raw coffee beans earn low incomes from doing so, yet must confront the serious environmental problems caused by the by-products and wastewater generated during wet coffee processing. The aim of this work was to develop alternative methods of improving the quality of the waste by-products and thus make the process economically more attractive, with valorization options that can be brought to the coffee producers.
The type of processing influences not only the constitution of green coffee but also that of the by-products and wastewater. Therefore, coffee bean samples as well as by-products and wastewater collected at different production steps were analyzed. The results show that the composition of the wastewater depends on how much and how often wastewater is recycled during processing. As for the coffee beans, the results indicate that the proteins may be affected during processing, and a positive effect of the fermentation on the solubility and accessibility of proteins appears probable. The steps of coffee processing influence the different constituents of green coffee beans which, during roasting, give rise to aroma compounds and express the characteristics of roasted coffee beans. Since this group of compounds is involved in the Maillard reaction during roasting, coffee producers could exploit this to improve the quality of green coffee beans and, ultimately, coffee cup quality.
The valorization of coffee wastes through modification to activated carbon has been considered a low-cost option for creating an adsorbent with the prospect of competing with commercial carbons. An activation protocol using spent coffee grounds and parchment was developed, and the resulting materials were assessed for their capacity to adsorb organic compounds. Spent coffee grounds and parchment proved to have adsorption efficiencies similar to commercial activated carbon.
The results of this study document significant information originating from the processing of the de-pulped to green coffee beans. Furthermore, they show that coffee parchment and spent coffee grounds can be valorized as a low-cost option to produce activated carbons. Further work needs to be directed to optimizing the activation methods to improve the quality of the materials produced, and to the viability of applying such experiments in situ to bring the coffee producer further valorization opportunities with environmental benefits.
Coffee producers would profit from establishing appropriately simple technologies to improve green coffee quality, re-use coffee by-products, and valorize wastewater.
Many technical challenges still need to be overcome to improve the quality of green coffee beans. In this work, wet Arabica coffee processing in batch and continuous modes was investigated. Coffee bean samples as well as by-products and wastewater collected at different production steps were analyzed for total phenols, antioxidant capacity, caffeine, organic acids, reducing sugars, free amino groups and protein content. The results showed that 40% of the caffeine was removed with the pulp. Green coffee beans showed the highest concentrations of organic acids and sucrose (4.96 ± 0.25 and 5.07 ± 0.39 g/100 g DW for batch and continuous processing, respectively). Batch-processed green coffee beans contained a higher amount of phenols; 5-caffeoylquinic acid (5-CQA) was the main constituent (67.1 and 66.0% for batch and continuous processing, respectively). Protein content was 15 and 13% in the green coffee beans from batch and continuous processing, respectively. A decrease of 50 to 64% in free amino groups during processing was observed, resulting in final amounts of 0.8 to 1.4% in the processed beans. Finally, the batch processing still yielded by-products and wastewater with high nutrient content, encouraging a better concept for valorization.
The valorization of coffee wastes through modification to activated carbon has been considered a low-cost route to adsorbents with the prospect of competing with commercial carbons. So far, very few studies have addressed the valorization of coffee parchment into activated carbon; moreover, low-cost and efficient activation methods require further investigation. The aim of this work was to prepare activated carbon from spent coffee grounds and parchment and to assess their adsorption performance. Co-calcination with calcium carbonate was used to prepare the activated carbons, and their adsorption capacity for organic acids, phenolic compounds and proteins was evaluated. Both spent coffee grounds and parchment showed yields of around 9.0% after the calcination and washing treatments. The adsorption of lactic acid was found to be optimal at pH 2. The maximum adsorption capacity of lactic acid with standard commercial granular activated carbon was 73.78 mg/g, while values of 32.33 and 14.73 mg/g were registered for the parchment and spent coffee grounds activated carbons, respectively. The Langmuir isotherm showed that lactic acid was adsorbed as a monolayer distributed homogeneously on the surface. Around 50% of the total phenol and protein content of coffee wastewater was adsorbed after treatment with the prepared activated carbons, while 44, 43, and up to 84% of hydrophobic compounds were removed using parchment, spent coffee grounds and commercial activated carbon, respectively; the adsorption efficiencies for hydrophilic compounds ranged between 13 and 48%. Finally, these results illustrate the potential valorization of the coffee by-products parchment and spent coffee grounds into activated carbon and their use as low-cost adsorbents for the removal of organic compounds from aqueous solutions.
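The Langmuir analysis mentioned above can be reproduced in outline: the isotherm q = q_max·K·C/(1 + K·C) is fitted via its linearized form C/q = C/q_max + 1/(K·q_max). The data below are synthetic, generated near the reported q_max of the commercial carbon; they are not the study's measurements:

```python
# Sketch: fit the Langmuir isotherm q = qmax*K*C/(1+K*C) by linear
# regression on the transformed data (C, C/q).

def fit_langmuir(C, q):
    """Least-squares line through (C, C/q); returns (qmax, K)."""
    y = [c / qi for c, qi in zip(C, q)]
    n = len(C)
    sx, sy = sum(C), sum(y)
    sxx = sum(c * c for c in C)
    sxy = sum(c * v for c, v in zip(C, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    qmax = 1.0 / slope            # slope = 1/qmax
    K = slope / intercept         # intercept = 1/(K*qmax)
    return qmax, K

# Synthetic equilibrium data from qmax = 70 mg/g, K = 0.05 L/mg:
qmax_true, K_true = 70.0, 0.05
C = [10.0, 50.0, 100.0, 200.0, 400.0]                       # mg/L
q = [qmax_true * K_true * c / (1 + K_true * c) for c in C]  # mg/g

qmax, K = fit_langmuir(C, q)
print(round(qmax, 1), round(K, 3))  # recovers 70.0 and 0.05
```

A monolayer (Langmuir-type) interpretation, as in the study, is justified when such a linearized plot is close to a straight line.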
The protein fraction, important for coffee cup quality, is modified during post-harvest treatment prior to roasting. Proteins may interact with phenolic compounds, which constitute the major metabolites of coffee, and the processing affects these interactions. This supports the hypothesis that the proteins are denatured and modified via enzymatic and/or redox activation steps. The present study was initiated to characterize changes in the protein fraction; the investigations were limited to the major storage protein of green coffee beans. Fourteen Coffea arabica samples from various processing methods and countries were used. Different extraction protocols were compared to maintain the status quo of the protein modification. The extracts contained about 4–8 µg of chlorogenic acid derivatives per mg of extracted protein. High-resolution chromatography with multiple reaction monitoring was used to detect lysine modifications in the coffee protein. Marker peptides were allocated for the storage protein of the coffee beans. Among these, the modified peptides K.FFLANGPQQGGK.E and R.LGGK.T of the α-chain and R.ITTVNSQK.I and K.VFDDEVK.Q of the β-chain were detected. The results showed a significant increase (p < 0.05) of modified peptides in wet-processed green beans compared to dry-processed ones. The present study contributes to a better understanding of the influence of the different processing methods on protein quality and its role in coffee cup quality and aroma.
From the contents:
• Challenges of state protection programmes for human rights defenders in Latin America
• Headscarf bans in private-sector employment relationships in light of EU anti-discrimination law
• Report on the activities of the United Nations Human Rights Committee in 2019 – Part I: state reports
Between 2002 and 2006, the Colombian government of Álvaro Uribe enjoyed broad international support in conducting a demobilization process for right-wing paramilitary groups, along with the implementation of transitional justice policies such as penal prosecutions and the creation of a National Commission for Reparation and Reconciliation (NCRR) to address justice, truth and reparation for victims of paramilitary violence. The demobilization process began in 2002, when the United Self-Defense Forces of Colombia (Autodefensas Unidas de Colombia, AUC) agreed to participate in a government-sponsored demobilization process. Paramilitary groups had been responsible for the vast majority of human rights violations over a period of more than 30 years. The government designed a special legal framework that envisaged great leniency for paramilitaries who had committed serious crimes, as well as reparations for victims of paramilitary violence. More than 30,000 paramilitaries demobilized under this process between January 2003 and August 2006. Law 975, also known as the "Justice and Peace Law", and Decree 128 have served as the legal framework for the demobilization and prosecution of paramilitaries. The law offered the prospect of reduced sentences to demobilized paramilitaries who had committed crimes against humanity in exchange for full confessions of crimes, the restitution of illegally obtained assets, and the release of child soldiers and of kidnapped victims, and it also provided reparations for victims of paramilitary violence. The Colombian demobilization process presents an atypical case of transitional justice; many observers have even questioned whether Colombia can be considered a case of transitional justice at all. Transitional justice measures are usually taken up after the fall of an authoritarian regime or in a post-conflict setting. The particularity of the Colombian case, however, is that transitional justice policies were introduced while the conflict still raged.
In this sense, the Colombian case illustrates one of the key tensions to be addressed: between offering perpetrators incentives to disarm and demobilize in order to prevent future crimes, and providing an adequate response to the human rights violations perpetrated over the course of an internal conflict. In particular, disarmament, demobilization and reintegration processes require a fine balance between the immunity guarantees offered to ex-combatants and the pursuit of accountability for their crimes. International law provides the legal framework defining victims' rights to justice, truth and reparations and the corresponding obligations of the state, but peace negotiations and conflicted political structures do not always allow for the fulfillment of those rights. The aim of this article is therefore to analyze what kind of transition may be occurring in Colombia by focusing on the role that transitional justice mechanisms may play in political negotiations between the Colombian government and paramilitary groups. In particular, it seeks to address to what extent such processes contribute to, or hinder, the balance between peacebuilding and accountability, and thus facilitate a genuine transitional process.
Das Verschwindenlassen
(2010)
Herausforderungen staatlicher Schutzprogramme für Menschenrechtsverteidiger*innen in Lateinamerika
(2020)
In teaching HCI (human-computer interaction), a recurring challenge is to run practical exercises that produce engaging results without getting lost in technical details and losing the HCI focus. In the module "Interaktionsdesign" at the Universität Hamburg, students design and implement prototypical interaction concepts for the game Neverball within three weeks. Unlike in most introductory HCI courses, they develop running software rather than mock-ups. To make this possible within the project time, Neverball was extended with a TCP-based interface, which removes the need for time-consuming familiarization with the game's source code and lets the students concentrate on their interaction prototypes. We describe our experiences from running the project several times and explain our implementation approach. The results are intended to help HCI instructors design similar hands-on exercises with tangible results.
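The abstract does not document the wire protocol of the Neverball TCP interface, so the following is a purely hypothetical sketch of what a student-side client could look like; the `tilt` command, its text format, and the port are our assumptions, not the course's actual protocol.

```python
import socket

def send_tilt(host, port, x, y):
    """Send one (hypothetical) newline-terminated tilt command over TCP.

    x and y would be tilt values produced by the students' input
    prototype (e.g. a sensor or gesture recognizer).
    """
    with socket.create_connection((host, port), timeout=1.0) as conn:
        conn.sendall(f"tilt {x:.3f} {y:.3f}\n".encode("ascii"))
```

Keeping the interface as plain newline-delimited text over TCP is what makes such a setup language-agnostic: prototypes written in any language can drive the game without touching its C source.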
This paper investigates the structural properties of morphosyntactically marked focus constructions, concentrating on the often neglected non-focal sentence part in African tone languages. Based on new empirical evidence from five Gur and Kwa languages, we claim that these focus expressions have to be analysed as biclausal constructions even though they do not represent clefts containing restrictive relative clauses. First, we qualify the partly overgeneralized assumptions about structural correspondences between the out-of-focus part and relative clauses; second, we show that our data do in fact support the hypothesis of a clause-coordinating pattern of the kind found in clause sequences in narration. We argue that this is a non-accidental, systematic feature and that grammaticalization may conceal such basic narrative structures.