Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. Here we formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power-law, or power-log-law tails of the memory functions. The studied system also exhibits a further unusual property: the velocity has a Gaussian one-point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
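How a random parametrisation of the noise breaks Gaussianity can be illustrated with a toy superstatistical random walk — a deliberately simplified sketch, not the paper's full generalised Langevin model: each trajectory draws its own diffusivity, so every individual path is Gaussian while the ensemble is not.

```python
import numpy as np

def random_diffusivity_walk(n_paths=20000, n_steps=100, dt=0.01, seed=0):
    """Toy model: each trajectory gets its own diffusivity D drawn from an
    exponential distribution (a random parametrisation of the noise), so an
    ensemble of Gaussian paths acquires a non-Gaussian (Laplace-like)
    displacement distribution. Illustrative only."""
    rng = np.random.default_rng(seed)
    D = rng.exponential(1.0, size=n_paths)          # random noise parameter
    steps = rng.normal(0.0, 1.0, size=(n_paths, n_steps))
    x = np.sqrt(2.0 * D[:, None] * dt) * steps      # Gaussian increments given D
    return x.sum(axis=1)                            # final positions

def excess_kurtosis(x):
    """Excess kurtosis: 0 for a Gaussian, 3 for a Laplace distribution."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

positions = random_diffusivity_walk()
print(f"excess kurtosis = {excess_kurtosis(positions):.2f}")  # markedly > 0
```

For an exponential diffusivity distribution the mixture of Gaussians is exactly Laplace, so the sampled excess kurtosis comes out near 3 rather than the Gaussian value of 0.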
Landslides are frequent natural hazards in rugged terrain, occurring when the resisting frictional force on the surface of rupture yields to the gravitational force. These forces are functions of geological and morphological factors, such as the angle of internal friction, local slope gradient or curvature, which remain static over hundreds of years; more dynamic triggering events, such as rainfall and earthquakes, compromise the force balance by temporarily reducing resisting forces or adding transient loads. This thesis investigates landslide distribution and orientation with respect to landslide triggers (e.g. rainfall) at different scales (6 to 4·10^5 km^2) and aims to link the movement of rainfall systems with the landslide distribution. It additionally explores the local impacts of extreme rainstorms on landsliding and the role of precursory stability conditions that could be induced by an earlier trigger, such as an earthquake.
Extreme rainfall is a common landslide trigger. Although several studies have assessed rainfall intensity and duration to study the distribution of the landslides thus triggered, only a few case studies have quantified spatial rainfall patterns (i.e. the orographic effect). Quantifying the regional trajectories of extreme rainfall could aid in predicting landslide-prone regions in Japan. To this end, I combined a non-linear correlation metric, namely event synchronization, with radial statistics to assess the general pattern of extreme rainfall tracks over distances of hundreds of kilometers using satellite-based rainfall estimates. Results showed that, although increases in rainfall intensity and duration correlate positively with landslide occurrence, the trajectories of typhoons and frontal storms were insufficient to explain the landslide distribution in Japan. Extreme rainfall trajectories inclined northwestwards and were concentrated at certain locations, such as the coastlines of southern Japan, a pattern not reflected in the distribution of about 5000 rainfall-triggered landslides. These landslides seemed to respond instead to mean annual rainfall rates.
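The event synchronization measure used above can be sketched in a few lines. This is a simplified fixed-lag variant of the measure introduced by Quian Quiroga and co-workers; the thesis pairs it with radial statistics and uses a dynamically adapted lag, both omitted here.

```python
import numpy as np

def event_sync(t_x, t_y, tau=1.0):
    """Simplified event synchronization between two sites.

    t_x, t_y: times of extreme-rainfall events at the two sites.
    tau: fixed tolerance lag; events closer than tau count as synchronized.
    Returns a value in [0, 1]: 0 = no synchronized events, 1 = full sync.
    """
    t_x = np.asarray(t_x, dtype=float)
    t_y = np.asarray(t_y, dtype=float)
    dt = t_x[:, None] - t_y[None, :]           # all pairwise time differences
    c_xy = np.sum((dt > 0) & (dt <= tau))      # y events shortly before x events
    c_yx = np.sum((dt < 0) & (-dt <= tau))     # x events shortly before y events
    simultaneous = np.sum(dt == 0)             # simultaneous pairs count once total
    q = (c_xy + c_yx + simultaneous) / np.sqrt(len(t_x) * len(t_y))
    return min(q, 1.0)

# Two sites whose events nearly coincide are fully synchronized:
print(event_sync([1.0, 5.0, 9.0], [1.2, 5.1, 9.3], tau=0.5))   # 1.0
# Unrelated event series yield zero:
print(event_sync([1.0, 5.0, 9.0], [20.0, 30.0, 40.0], tau=0.5))  # 0.0
```

Computing this pairwise over a grid of rainfall time series yields a synchronization field whose directional structure can then be summarized with radial statistics.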
The above findings call for investigation at a more local scale to better understand the mechanistic response of the landscape to extreme rainfall in terms of landslides. In May 2016, intense rainfall struck southern Germany, triggering high waters and landslides. The highest damage was reported in Braunsbach, which is located on the tributary-mouth fan formed by the Orlacher Bach, a ~3 km long creek that drains a catchment of about 6 km^2. I visited this catchment in June 2016 and mapped 48 landslides along the creek. Such high landslide activity was not reported in the nearby catchments within ~3300 km^2, despite similar rainfall intensity and duration based on weather radar estimates. My hypothesis was that several landslides were triggered by rainfall-induced flash floods that undercut hillslope toes along the Orlacher Bach. I found that morphometric features such as slope and curvature play an important role in landslide distribution at this micro-scale study site (<10 km^2). In addition, the high number of landslides along the Orlacher Bach could also have been boosted by damage accumulated on hillslopes through karst weathering over longer time scales.
Precursory damage to hillslopes could also be induced by past triggering events that affect landscape evolution, but this interaction is hard to assess independently of the latest trigger. For example, an earthquake might influence the evolution of a landscape for decades, beyond its direct impacts such as the landslides that immediately follow it. Here I studied the consequences of the 2016 Kumamoto Earthquake (Mw 7.1), which triggered some 1500 landslides in an area of ~4000 km^2 in central Kyushu, Japan. Topography, i.e. local slope and curvature, both amplifies and attenuates seismic waves, thus controlling the failure mechanism of such landslides (e.g. progressive failure). I found that topography fails to explain the distribution and preferred orientation of the landslides after the earthquake; instead, the landslides were concentrated northeast of the rupture area and faced mostly normal to the rupture plane. This preferred location of the landslides was dominated mainly by the directivity effect of the strike-slip earthquake, i.e. the propagation of wave energy along the fault in the rupture direction, whereas amplitude variations of the seismic radiation altered the preferred orientation. I suspect that the earthquake directivity and the asymmetry of seismic radiation damaged hillslopes at those preferred locations, increasing landslide susceptibility. Hence a future weak triggering event, e.g. scattered rainfall, could trigger further landslides on those damaged hillslopes.
Sub-seasonal thaw slump mass wasting is not consistently energy limited at the landscape scale
(2018)
Predicting future thaw slump activity requires a sound understanding of the atmospheric drivers and geomorphic controls on mass wasting across a range of timescales. On sub-seasonal timescales, sparse measurements indicate that mass wasting at active slumps is often limited by the energy available for melting ground ice, but other factors such as rainfall or the formation of an insulating veneer may also be relevant. To study the sub-seasonal drivers, we derive topographic changes from single-pass radar interferometric data acquired by the TanDEM-X satellites. The estimated elevation changes at 12 m resolution complement the commonly observed planimetric retreat rates by providing information on volume losses. Their high vertical precision (around 30 cm), frequent observations (11 days) and large coverage (5000 km^2) allow us to track mass wasting as drivers such as the available energy change during the summer of 2015 in two study regions. We find that thaw slumps in the Tuktoyaktuk coastlands, Canada, are not energy limited in June, as they undergo limited mass wasting (height loss of around 0 cm day^-1) despite the ample available energy, suggesting the widespread presence of an early season insulating snow or debris veneer. Later in summer, height losses generally increase (around 3 cm day^-1), but they do so in distinct ways. For many slumps, mass wasting tracks the available energy, a temporal pattern that is also observed at coastal yedoma cliffs on the Bykovsky Peninsula, Russia. However, the other two common temporal trajectories are asynchronous with the available energy, as they track strong precipitation events or show a sudden speed-up in late August, respectively. The observed temporal patterns are poorly related to slump characteristics like the headwall height. The contrasting temporal behaviour of nearby thaw slumps highlights the importance of complex local and temporally varying controls on mass wasting.
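The height-loss rates quoted above reduce to simple raster arithmetic once a DEM difference is in hand. The sketch below is illustrative only — the study's actual TanDEM-X interferometric processing chain is far more involved — and converts an 11-day height-change map at 12 m pixels into a volume-loss rate and a mean height loss in cm per day.

```python
import numpy as np

def mass_wasting_rate(dh, pixel_size=12.0, days=11.0):
    """Toy volume-loss estimate from a DEM-difference map.

    dh: elevation change in metres per pixel over one repeat interval
        (negative = height loss); pixel_size in metres; days = repeat cycle.
    Returns (volume loss in m^3 per day, mean height loss in cm per day).
    """
    dh = np.asarray(dh, dtype=float)
    loss = np.where(dh < 0.0, -dh, 0.0)        # keep only height losses
    volume = loss.sum() * pixel_size ** 2      # m^3 lost over the interval
    cm_per_day = 100.0 * loss.mean() / days    # mean height loss rate
    return volume / days, cm_per_day

# A 2x2 patch losing ~33 cm on three of four pixels over one 11-day cycle:
dh = np.array([[-0.33, 0.00],
               [-0.33, -0.33]])
vol_rate, height_rate = mass_wasting_rate(dh)
print(round(vol_rate, 2), round(height_rate, 2))  # 12.96 2.25
```

The resulting ~2–3 cm day^-1 mean loss matches the scale of the late-summer rates reported above.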
Necrotrophic as well as saprophytic small-spored Alternaria (A.) species are annually responsible for major losses of agricultural products, such as cereal crops, associated with the contamination of food and feedstuff with potentially health-endangering Alternaria toxins. Knowledge of the metabolic capabilities of different species-groups to form mycotoxins is important for a reliable risk assessment. 93 Alternaria strains belonging to the four species-groups Alternaria tenuissima, A. arborescens, A. alternata, and A. infectoria were isolated from winter wheat kernels harvested from fields in Germany and Russia and incubated under equal conditions. Chemical analysis by means of an HPLC-MS/MS multi-Alternaria-toxin method showed that 95% of all strains were able to form at least one of the 17 targeted non-host-specific Alternaria toxins. Simultaneous production of up to 15 (modified) Alternaria toxins by members of the A. tenuissima, A. arborescens, and A. alternata species-groups and up to seven toxins by A. infectoria strains was demonstrated. Overall, tenuazonic acid was the most extensively formed mycotoxin, followed by alternariol and alternariol monomethyl ether, whereas altertoxin I was the most frequently detected toxin. Sulfoconjugated modifications of alternariol, alternariol monomethyl ether, altenuisol and altenuene were frequently determined. Unknown perylene quinone derivatives were additionally detected. Strains of the species-group A. infectoria could be segregated from strains of the other three species-groups due to significantly lower toxin levels and the specific production of infectopyrone. Apart from infectopyrone, alterperylenol was also produced by 95% of the A. infectoria strains. Neither the concentration nor the composition of the targeted Alternaria toxins allowed a differentiation between the species-groups A. alternata, A. tenuissima and A. arborescens.
In challenging times for international law, there might be a heightened need for both analysis and prescription. The international rule of law as a connecting thread running through the global legal order is a particularly salient topic. By providing a working understanding of the content and contexts of the international rule of law, and by taking the regime of international investment law as a case study, this paper argues that assessing notions of 'rise' or 'decline' in this sphere warrants a nuanced approach that should recognise parallel positive and negative developments. Whilst prominent procedural and substantive aspects of international investment law strongly align with the requirements of the international rule of law, numerous challenges threaten the future existence of the regime and the appeal of the international rule of law more broadly. At the same time, opportunities exist to adapt the substantive decision-making processes in investor-State disputes so as to pursue the parallel goals of enhancing the rule of law at both the international and national levels. Through recognising the specificities of the interaction between the international and national spheres, arbitrators can further reinvigorate the legitimacy of the international rule of law through international investment law, thus benefitting the future of both.
This study examined psychometric properties of figure rating scales, particularly the effects of ascending silhouette ordering, in 153 children, 9 to 13 years old. Two versions of Collins’s (1991) figural rating scale were presented: the original scale (figures arranged in ascending order) and a modified version (randomized figure ordering). Ratings of current and ideal figure were elicited and body dissatisfaction was calculated. All children were randomly assigned to one of two subgroups and completed both scale versions in a different sequence. There were no significant differences in figure selection and body dissatisfaction between the two figure orderings. Regarding the selection of the current figure, results showed that girls are more affected by the silhouette ordering than boys. Our results suggest that figure rating scales are both valid and reliable, whereby correlation coefficients reveal greater stability for ideal figure selections and body dissatisfaction ratings when using the scale with ascending figure ordering.
Portal Wissen = Language
(2018)
Language is perhaps the most universal tool of human beings. It enables us to express ourselves, to communicate and understand, to help and get help, to create and share togetherness.
However, that does not completely capture the value of language. “Language belongs to the character of man,” said the English philosopher Sir Francis Bacon. If you believe the poet Johann Gottfried von Herder, a human is “only a human through language”. Ultimately, this means that we live in our world not with, but in, language. We not only describe our reality by means of language, but language is the device through which we open up the world in the first place. It is always there and shapes and influences us and the way we perceive, analyze, describe and ultimately determine everything around us.
Since it is so deeply connected with human nature, it is hardly surprising that our language has always been at the center of academic research – and not only in those fields that bear the name linguistics. Philosophy and media studies, neurology and psychology, computer science and semiotics – all of them are based on linguistic structures and their premises and possibilities.
Since July 2017, a scientific network at the University of Potsdam has been working on exactly this interface: the Collaborative Research Center “Limits of Variability in Language” (SFB 1287), funded by the German Research Foundation (DFG). Linguists, computer scientists, psychologists, and neurologists examine where language is or is not flexible. They hope to find out more about individual languages and their connections.
In this issue of Portal Wissen, we asked SFB spokeswoman Isabell Wartenburger and deputy spokesman Malte Zimmermann to talk about language, its variability and limits, and how they investigate these aspects. We also look over the shoulders of two researchers who are working on sub-projects: Germanist Heike Wiese and her team examine whether the pandemonium of the many different languages spoken at a weekly market in Berlin is creating a new language with its own rules. Linguist Doreen Georgi embarks on a typological journey around the world comparing about 30 languages to find out if they have common limits.
We also want to introduce other research projects at the University of Potsdam and the people behind them. We talk to biologists about biodiversity and ecological dynamics, and the founders of the startup “visionYOU” explain how entrepreneurship can be combined with social responsibility. Other discussions center around the effective production of antibodies and the question of whether the continued use of smartphones will eventually make us speechless. But do not worry: we did not run out of words – the magazine is full of them!
Enjoy your reading!
The Editors
This research addressed the question of whether it is possible to simplify current microcontact printing systems for the production of anisotropic building blocks or patchy particles by using common chemicals, while still maintaining reproducibility, high precision and tunability of the Janus-balance.
Chapter 2 introduced the microcontact printing materials as well as their defined electrostatic interactions. In particular, polydimethylsiloxane (PDMS) stamps, silica particles and high molecular weight polyethylenimine ink were mainly used in this research. All of these components are commercially available in large quantities and affordable, which gives this approach huge potential for further up-scaling developments. The benefits of polymeric over molecular inks were described, including their flexible influence on the printing pressure. With this alteration of the µCP concept, a new solvent-assisted particle release mechanism enabled the switch from two-dimensional surface modification to three-dimensional structure printing on colloidal silica particles, without changing printing parameters or starting materials. This effect opened the way to using the internal volume of the achieved patches for the incorporation of nano-additives, introducing additional physical properties into the patches without altering the surface chemistry.
The success of this system and its achievable range was further investigated in chapter 3, giving detailed information about patch geometry parameters including diameter, thickness and yield. For this purpose, silica particles in a size range between 1 µm and 5 µm were printed with different ink concentrations to change the Janus-balance of these single-patched particles. An air-plasma treatment was identified as a necessary intermediate step for the production of trivalent particles using "sandwich" printing, and comparative studies concerning the patch geometry of single- and double-patched particles were conducted. Additionally, the usage of structured PDMS stamps during printing was described. These results demonstrate the excellent precision of this approach and open the pathway for even greater accuracy, as further parameters, e.g. humidity and temperature during stamp loading, can be finely tuned and investigated.
The performance of these synthesized anisotropic colloids was further investigated in chapter 4, starting with behaviour studies in alcoholic and aqueous dispersions. Here, the stability of the applied patches was studied over a broad pH range, revealing a release mechanism based on disabling the electrostatic bonding between the particle surface and the polyelectrolyte ink. Furthermore, the absence of strong attractive forces between divalent particles in water was investigated using XPS measurements. These results led to the conclusion that the transfer of small PDMS oligomers onto the patch surface shields charges, preventing colloidal agglomeration. Based on this knowledge, further patch modifications for particle self-assembly were introduced, including physical approaches using magnetic nano-additives, chemical patch functionalization with avidin-biotin or the light-responsive cyclodextrin-arylazopyrazole coupling, as well as particle surface modification for the synthesis of highly amphiphilic colloids. The successful coupling, its efficiency, stability and behaviour in different solvents were evaluated to find a suitable coupling system for future assembly experiments. These results open up the possibility of more sophisticated structures formed by colloidal self-assembly.
Certain findings needed further analysis to understand their underlying mechanics, including the relatively broad patch diameter distribution and the decreasing patch thickness for smaller silica particles. Mathematical models for both effects are introduced in chapter 5. First, they demonstrate the connection between the naturally occurring particle size distribution and the broadening of the patch diameter, indicating an even higher precision for this µCP approach. Second, they explain the increase of the contact area between particle and ink surface due to denser particle packing, which leads to a decrease in printing pressure for smaller particles.
These calculations ultimately led to the development of a new mechanical microcontact printing approach that uses centrifugal forces for precise pressure control and excellent parallel alignment of the printing substrates. First results with this device, and a comparison with the previously conducted by-hand experiments, conclude this research. They furthermore display the advantages of such a device for future applications using a mechanical printing approach, especially for accessing even smaller nanoparticles with great precision and excellent yield.
In conclusion, this work demonstrates the successful adaptation of the µCP approach using commercially available and affordable silica particles and polyelectrolytes for high flexibility, reduced costs and higher scale-up value. Furthermore, it was possible to increase the modification potential by introducing three-dimensional patches providing additional functionalization volume. While keeping a high colloidal stability, different coupling systems showed the self-assembly capabilities of this toolbox for anisotropic particles.
Draft Art. 15 CCAH attempts to strike a balance between State autonomy and robust judicial supervision. It largely follows Article 22 CERD in conditioning the jurisdiction of the ICJ on prior negotiations. Hence, the substance of the clause is interpreted in light of the Court’s recent case law, especially Georgia v. Russia. Besides, several issues regarding the scope ratione temporis of the compromissory clause are discussed. The article advances several proposals to further improve the current draft, addressing the missing explicit reference to State responsibility as well as the relationship between the Court and a possible treaty body. It also proposes to recalibrate the interplay between a requirement of prior negotiations or the seising of a future treaty body, on the one hand, and provisional measures to be indicated by the Court, on the other.
A doppelalgebra is an algebra defined on a vector space with two binary linear associative operations. Doppelalgebras play a prominent role in algebraic K-theory. We consider doppelsemigroups, that is, sets with two binary associative operations satisfying the axioms of a doppelalgebra. Doppelsemigroups are a generalization of semigroups and they have relationships with such algebraic structures as interassociative semigroups, restrictive bisemigroups, dimonoids, and trioids.
In these lecture notes, numerous examples of doppelsemigroups and of strong doppelsemigroups are given. The independence of the axioms of a strong doppelsemigroup is established. A free product in the variety of doppelsemigroups is presented. We also construct a free (strong) doppelsemigroup, a free commutative (strong) doppelsemigroup, a free n-nilpotent (strong) doppelsemigroup, a free n-dinilpotent (strong) doppelsemigroup, and a free left n-dinilpotent doppelsemigroup. Moreover, the least commutative congruence, the least n-nilpotent congruence, and the least n-dinilpotent congruence on a free (strong) doppelsemigroup, as well as the least left n-dinilpotent congruence on a free doppelsemigroup, are characterized.
The book addresses graduate and post-graduate students, researchers in algebra, and interested readers.
Cell-free protein synthesis as a novel tool for directed glycoengineering of active erythropoietin
(2018)
As one of the most complex post-translational modifications, glycosylation is widely involved in cell adhesion, cell proliferation and immune response. Nevertheless, glycoproteins with an identical polypeptide backbone mostly differ in their glycosylation patterns. Due to this heterogeneity, mapping different glycosylation patterns to their associated functions is nearly impossible. In recent years, glycoengineering tools including cell line engineering, chemoenzymatic remodeling and site-specific glycosylation have attracted increasing interest. The therapeutic hormone erythropoietin (EPO) in particular has been investigated by various groups with the aim of establishing a production process that results in a defined glycosylation pattern. However, commercially available recombinant human EPO shows batch-to-batch variations in its glycoforms. Therefore, we present an alternative method for the synthesis of active glycosylated EPO with an engineered O-glycosylation site by combining eukaryotic cell-free protein synthesis and site-directed incorporation of non-canonical amino acids with subsequent chemoselective modifications.
Background: Individuals with aphasia after stroke (IWA) often present with working memory (WM) deficits. Research investigating the relationship between WM and language abilities has led to the promising hypothesis that treatments of WM could lead to improvements in language, a phenomenon known as transfer. Although recent treatment protocols have been successful in improving WM, the evidence to date is scarce, and the extent to which improvements in trained tasks of WM transfer to untrained memory tasks, spoken sentence comprehension, and functional communication is as yet poorly understood.
Aims: We aimed at (a) investigating whether WM can be improved through an adaptive n-back training in IWA (Study 1–3); (b) testing whether WM training leads to near transfer to unpracticed WM tasks (Study 1–3), and far transfer to spoken sentence comprehension (Study 1–3), functional communication (Study 2–3), and memory in daily life in IWA (Study 2–3); and (c) evaluating the methodological quality of existing WM treatments in IWA (Study 3). To address these goals, we conducted two empirical studies – a case-control study with Hungarian-speaking IWA (Study 1) and a multiple-baseline study with German-speaking IWA (Study 2) – and a systematic review (Study 3).
Methods: In Study 1 and 2 participants with chronic, post-stroke aphasia performed an adaptive, computerized n-back training. ‘Adaptivity’ was implemented by adjusting the tasks’ difficulty level according to the participants’ performance, ensuring that they always practiced at an optimal level of difficulty. To assess the specificity of transfer effects and to better understand the underlying mechanisms of transfer on spoken sentence comprehension, we included an outcome measure testing specific syntactic structures that have been proposed to involve WM processes (e.g., non-canonical structures with varying complexity).
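The adaptivity described above amounts to a simple staircase rule. The following is a minimal sketch only: the thresholds and load limits are illustrative assumptions, not the studies' exact protocol.

```python
def adapt_n_back(n, accuracy, n_min=1, n_max=6, up=0.90, down=0.70):
    """One possible staircase rule for an adaptive n-back task: raise the
    memory load n after a high-accuracy block, lower it after a low-accuracy
    block, and otherwise keep the participant at the current difficulty.
    The thresholds (90% up, 70% down) are illustrative assumptions."""
    if accuracy >= up and n < n_max:
        return n + 1
    if accuracy <= down and n > n_min:
        return n - 1
    return n

# Block-by-block accuracies drive the load across a training session:
level = 2
for acc in (0.95, 0.92, 0.60, 0.75, 0.91):
    level = adapt_n_back(level, acc)
print(level)  # 4
```

A rule of this shape keeps each participant practicing near their individual performance ceiling, which is the stated purpose of the adaptivity.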
Results: We detected a mixed pattern of training and transfer effects across individuals: five participants out of six significantly improved in the n-back training. Our most important finding is that all six participants improved significantly in spoken sentence comprehension (i.e., far transfer effects). In addition, we also found far transfer to functional communication (in two participants out of three in Study 2) and everyday memory functioning (in all three participants in Study 2), and near transfer to unpracticed n-back tasks (in four participants out of six). Pooled data analysis of Study 1 and 2 showed a significant negative relationship between initial spoken sentence comprehension and the amount of improvement in this ability, suggesting that the more severe the participants’ spoken sentence comprehension deficit was at the beginning of training, the more they improved after training. Taken together, we detected both near and far transfer effects in our studies, but the effects varied across participants. The systematic review evaluating the methodological quality of existing WM treatments in stroke IWA (Study 3) showed poor internal and external validity across the 17 included studies. Poor internal validity was mainly due to the use of inappropriate designs, lack of randomization of study phases, lack of blinding of participants and/or assessors, and insufficient sampling. Low external validity was mainly related to incomplete information on the setting, lack of an appropriate analysis or of justification for the suitability of the analysis procedure used, and lack of replication across participants and/or behaviors. Results in terms of WM, spoken sentence comprehension, and reading are promising, but further studies with more rigorous methodology and stronger experimental control are needed to determine the beneficial effects of WM intervention.
Conclusions: Results of the empirical studies suggest that WM can be improved with a computerized and adaptive WM training, and that improvements can lead to transfer effects to spoken sentence comprehension and functional communication in some individuals with chronic post-stroke aphasia. The fact that improvements in spoken sentence comprehension were not specific to certain syntactic structures (i.e., non-canonical complex sentences) suggests that WM is not involved in the online, automatic processing of syntactic information (i.e., parsing and interpretation), but plays a more general role in the later stage of spoken sentence comprehension (i.e., post-interpretive comprehension). The individual differences in treatment outcomes call for future research to clarify how far these results generalize to the population of IWA. Future studies are needed to identify mechanisms that may generalize to at least a subpopulation of IWA, as well as to investigate baseline non-linguistic cognitive and language abilities that may play a role in transfer effects and their maintenance. These may require larger yet homogeneous samples.
X-ray free-electron lasers (XFELs) and table-top sources of x-rays based upon high harmonic generation (HHG) have revolutionized the field of ultrafast x-ray atomic and molecular physics, largely due to an explosive growth in capabilities in the past decade. XFELs now provide unprecedented intensity (10^20 W cm^-2) of x-rays at wavelengths down to ~1 Ångstrom, and HHG provides unprecedented time resolution (~50 attoseconds) and a correspondingly large coherent bandwidth at longer wavelengths. For context, timescales can be referenced to the Bohr orbital period in the hydrogen atom of 150 attoseconds and the hydrogen-molecule vibrational period of 8 femtoseconds; wavelength scales can be referenced to the chemically significant carbon K-edge at a photon energy of ~280 eV (44 Ångstroms) and the bond length in methane of ~1 Ångstrom. With these modern x-ray sources one now has the ability to focus on individual atoms, even when embedded in a complex molecule, and view electronic and nuclear motion on their intrinsic scales (attoseconds and Ångstroms). These sources have enabled coherent diffractive imaging, where one can image non-crystalline objects in three dimensions on ultrafast timescales, potentially with atomic resolution. The unprecedented intensity available with XFELs has opened new fields of multiphoton and nonlinear x-ray physics where the behavior of matter under extreme conditions can be explored. The unprecedented time resolution and pulse synchronization provided by HHG sources have kindled fundamental investigations of time delays in photoionization, charge migration in molecules, and dynamics near conical intersections that are foundational to AMO physics and chemistry.
This roadmap coincides with the year in which three new XFEL facilities operating at Ångstrom wavelengths opened for users (European XFEL, SwissFEL and PAL-FEL in Korea), almost doubling the present worldwide number of XFELs, and documents the remarkable progress in HHG capabilities since its discovery roughly 30 years ago, showcasing experiments in AMO physics and other applications. Here we capture the perspectives of 17 leading groups and organize the contributions into four categories: ultrafast molecular dynamics; multidimensional x-ray spectroscopies; high-intensity x-ray phenomena; and attosecond x-ray science.
Enlisted History
(2018)
Zeev Jawitz (1847–1924) was active in all spheres of culture: history, language, literature and pedagogy, all the while striving for harmonization with the Orthodox outlook. He understood that a people returning to its homeland needed a national culture, one that was both broad and deep, and that the narrow world of the Halakhah would no longer suffice. His main work was the multi-volume Toldot Israel (History of Israel, published 1895–1924), which encompasses Jewish history from its beginning – the Patriarchs – until the end of the 19th century. His historical writing, with its emphasis on internal religious Jewish sources, the unity and continuity of Jewish history, and respect for Orthodox principles, comes as an alternative to the historiography of the celebrated historian Heinrich Graetz. The alternative that Jawitz tried to substitute for Wissenschaft des Judentums was influenced not only by the Orthodox ideology he supported, but also by his nationalist ideology. He saw himself and his disciples as the “priests of memory,” presenting the true and immanent history and character of the Jewish nation as a platform for the Jewish future in the land of Israel.
In this paper two groups supporting different views on the mechanism of light-induced polymer deformation argue about the respective underlying theoretical conceptions, in order to bring this interesting debate to the attention of the scientific community. The group of Prof. Nicolae Hurduc supports the model claiming that the cyclic isomerization of azobenzenes may cause an athermal transition of the glassy azobenzene-containing polymer into a fluid state, the so-called photo-fluidization concept. This concept is quite convenient for an intuitive understanding of the deformation process as an anisotropic flow of the polymer material. The group of Prof. Svetlana Santer supports the re-orientational model, in which the mass transport of the polymer material during deformation is attributed to the light-induced re-orientation of the azobenzene side chains and, as a consequence, of the polymer backbone, which in turn results in local mechanical stress sufficient to irreversibly deform an azobenzene-containing material even in the glassy state. For the debate we chose three polymers differing in glass transition temperature, 32 °C, 87 °C and 95 °C, representing extreme cases of flexible and rigid materials. Polymer film deformation occurring during irradiation with different interference patterns is recorded using a homemade set-up combining an optical part for the generation of interference patterns with an atomic force microscope for acquiring the kinetics of film deformation. We also demonstrate the unique ability of azobenzene-containing polymer films to switch their topography in situ and reversibly by changing the irradiation conditions. We discuss the results of reversible deformation of the three polymers induced by irradiation with intensity (IIP) and polarization (PIP) interference patterns, and with light of homogeneous intensity, in terms of two approaches: the re-orientational and the photo-fluidization concepts.
Both groups agree that the formation of opto-mechanically induced stresses is a necessary prerequisite for the deformation process; from this common starting point, the deformation itself can be characterized either as a flow or as mass transport.
Time-dependent correlation function based methods to study optical spectroscopy involving electronic transitions can be traced back to the work of Heller and coworkers. This intuitive methodology can be expected to be computationally efficient and is applied in the current work to study the vibronic absorption, emission, and resonance Raman spectra of selected organic molecules. Besides, the "non-standard" application of this approach to photoionization processes is also explored. The application section consists of four chapters as described below.
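The time-domain picture behind this methodology can be illustrated with a minimal sketch: for a single displaced harmonic vibrational mode (an assumed Huang-Rhys factor S), the absorption lineshape is the half-Fourier transform of the dipole autocorrelation function, yielding a Poisson-weighted vibronic progression. The function name, parameter values and unit system below are hypothetical illustrations, not taken from the thesis:

```python
import numpy as np

def vibronic_absorption(w_grid, w00, w_vib, S, gamma, t_max=300.0, n_t=6000):
    """Vibronic absorption lineshape from the time-domain correlation
    function of a single displaced harmonic mode (Huang-Rhys factor S):
        C(t)     = exp(S * (exp(-i*w_vib*t) - 1)) * exp(-gamma*t)
        sigma(w) ~ Re Int_0^inf exp(i*(w - w00)*t) * C(t) dt
    All quantities are in consistent, arbitrary frequency units."""
    t, dt = np.linspace(0.0, t_max, n_t, retstep=True)
    C = np.exp(S * (np.exp(-1j * w_vib * t) - 1.0)) * np.exp(-gamma * t)
    phase = np.exp(1j * np.outer(w_grid - w00, t))   # shape (n_w, n_t)
    return (phase * C).real.sum(axis=1) * dt         # simple Riemann sum

# Spectrum around the 0-0 transition: peaks appear at w00 + n*w_vib with
# Poisson weights exp(-S) * S**n / n!
w = np.linspace(-1.0, 5.0, 601)
spec = vibronic_absorption(w, w00=0.0, w_vib=1.0, S=1.0, gamma=0.05)
```

With S = 1 the 0-0 and 0-1 peaks carry equal Poisson weight, and the signal dips between them; emission spectra follow from the mirror-image correlation function in the same framework.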
In Chapter 4, the molar absorptivities and vibronic absorption/emission spectra of perylene and several of its N-substituted derivatives are investigated. By systematically varying the number and position of N atoms, it is shown that the presence of nitrogen heteroatoms has a negligible effect on the molecular structure and geometric distortions upon electronic transitions, while spectral properties are more sensitive: In particular the number of N atoms is important while their position is less decisive. Thus, N-substitution can be used to fine-tune the optical properties of perylene-based molecules.
In Chapter 5, the same methods are applied to study the vibronic absorption/emission and resonance Raman spectra of a newly synthesized donor-acceptor type molecule. The simulated absorption/emission spectra agree fairly well with experimental data, with discrepancies being attributed to solvent effects. Possible modes which may dominate the fine-structure in the vibronic spectra are proposed by analyzing the correlation function with the aid of Raman and resonance Raman spectra.
In the next two chapters, besides the above types of spectra, the methods are extended to study photoelectron spectra of several small diamondoid-related systems (molecules, radicals, and cations). Comparison of the photoelectron spectra with available experimental data suggests that the correlation function based approach can describe ionization processes reasonably well. Some of these systems, cationic species in particular, exhibit somewhat peculiar optical behavior, which presents them as possible candidates for functional devices.
Correlation function based methods in a more general sense can be very versatile. In fact, besides the above radiative processes, formulas for non-radiative processes such as internal conversion have been derived in the literature. Further implementation of the available methods is among our next goals.
By using 3-year global positioning system (GPS) measurements from December 2013 to November 2016, we provide in this study a detailed survey of the climatology of GPS signal losses of the Swarm onboard receivers. Our results show that the GPS signal losses occur preferentially at low latitudes between ±5° and ±20° magnetic latitude (MLAT) and at high latitudes above 60° MLAT in both hemispheres. These events at all latitudes are observed mainly during equinoxes and December solstice months, while being totally absent during June solstice months. At low latitudes the GPS signal losses are caused by equatorial plasma irregularities shortly after sunset, and at high latitudes they are also highly related to the large density gradients associated with ionospheric irregularities. Additionally, the high-latitude events are more often observed in the Southern Hemisphere, occurring mainly in the cusp region and along nightside auroral latitudes. The signal losses mainly happen for those GPS rays with elevation angles less than 20°, and occur more commonly when the line of sight between the GPS and Swarm satellites is aligned with the shell structure of plasma irregularities. Our results also confirm that the capability of the Swarm receiver improved after the bandwidth of the phase-locked loop (PLL) was widened, but the updates cannot entirely prevent interruptions in tracking GPS satellites caused by ionospheric plasma irregularities. Additionally, after the PLL bandwidth was increased beyond 0.5 Hz, some unexpected signal losses were observed even at middle latitudes that are not related to ionospheric plasma irregularities. Our results suggest that a PLL bandwidth of 0.5 Hz, rather than 1.0 Hz, is a more suitable value for the Swarm receiver.
This dissertation consists of four self-contained papers that deal with the implications of financial market imperfections and heterogeneity. The analysis mainly relates to the class of incomplete-markets models but covers different research topics.
The first paper deals with the distributional effects of financial integration for developing countries. Based on a simple heterogeneous-agent approach, it is shown that capital owners experience large welfare losses while only workers moderately gain due to higher wages. The large welfare losses for capital owners contrast with the small average welfare gains from representative-agent economies and indicate that a strong opposition against capital market opening has to be expected.
The second paper considers the puzzling observation of capital flows from poor to rich countries and the accompanying changes in domestic economic development. Motivated by the mixed results from the literature, we employ an incomplete-markets model with different types of idiosyncratic risk and borrowing constraints. Based on different scenarios, we analyze under what conditions the presence of financial market imperfections helps to explain the empirical findings and how these conditions may change with different model assumptions.
The third paper deals with the interplay of incomplete information and financial market imperfections in an incomplete-markets economy. In particular, it analyzes the impact of incomplete information about idiosyncratic income shocks on aggregate saving. The results show that the effect of incomplete information is not only quantitatively substantial but also qualitatively ambiguous and varies with the influence of the income risk and the borrowing constraint.
Finally, the fourth paper analyzes the influence of different types of fiscal rules on the response of key macroeconomic variables to a government spending shock. We find that a strong temporary increase in public debt contributes to stabilizing consumption and leisure in the first periods following the change in government spending, whereas a non-debt-intensive fiscal rule leads to a faster recovery of consumption, leisure, capital and output in later periods. Regarding optimal debt policy, we find that a debt-intensive fiscal rule leads to the largest aggregate welfare benefit and that the individual welfare gain is particularly high for wealth-poor agents.
While the role and consequences of being a bystander to face-to-face bullying have received some attention in the literature, to date little is known about the effects of being a bystander to cyberbullying. It is also unknown how empathy might impact the negative consequences associated with being a bystander of cyberbullying. The present study focused on examining the longitudinal association between being a bystander of cyberbullying and depression and anxiety, and the moderating role of empathy in the relationship between being a bystander of cyberbullying and subsequent depression and anxiety. There were 1,090 adolescents (M-age = 12.19; 50% female) from the United States included at Time 1, and they completed questionnaires on empathy, cyberbullying roles (bystander, perpetrator, victim), depression, and anxiety. One year later, at Time 2, 1,067 adolescents (M-age = 13.76; 51% female) completed questionnaires on depression and anxiety. Results revealed a positive association between being a bystander of cyberbullying and both depression and anxiety. Further, empathy moderated the positive relationship between being a bystander of cyberbullying and depression, but not anxiety. Implications for intervention and prevention programs are discussed.
The purpose of the present study was to examine the moderating role of parental mediation in the longitudinal association between being a bystander of cyberbullying and cyberbullying perpetration and victimization. Participants were 1067 7th and 8th graders between 12 and 15 years old (51% female) from six middle schools in predominantly middle-class neighborhoods in the Midwestern United States. Increases in being a bystander of cyberbullying were related positively to restrictive and instructive parental mediation. Restrictive parental mediation was related positively to Time 2 (T2) cyberbullying victimization, while instructive parental mediation was related negatively to T2 cyberbullying perpetration and victimization. Restrictive parental mediation was a moderator in the association between being a bystander of cyberbullying and T2 cyberbullying victimization: increases in restrictive parental mediation strengthened the positive relationship between these variables. In addition, instructive mediation moderated the association between being a bystander of cyberbullying and T2 cyberbullying victimization such that increases in this form of parental mediation weakened the association. The current findings indicate a need for parents to be aware of how they can impact adolescents’ involvement in cyberbullying as bullies and victims. In addition, greater attention should be given to developing parental intervention programs that focus on the role of parents in helping to mitigate adolescents’ likelihood of cyberbullying involvement.
At Saturn electrons are trapped in the planet's magnetic field and accelerated to relativistic energies to form the radiation belts, but how this dramatic increase in electron energy occurs is still unknown. Until now the mechanism of radial diffusion has been assumed, but we show here that in-situ acceleration through wave-particle interactions, which initial studies dismissed as ineffectual at Saturn, is in fact a vital part of the energetic particle dynamics there. We present evidence from numerical simulations based on Cassini spacecraft data that a particular plasma wave, known as the Z-mode, accelerates electrons to MeV energies inside 4 R-S (1 R-S = 60,330 km) through a Doppler-shifted cyclotron resonant interaction. Our results show that the Z-mode waves observed are not oblique as previously assumed and are much better accelerators than O-mode waves, resulting in an electron energy spectrum that closely approaches observed values without any transport effects included.
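The Doppler-shifted cyclotron resonance invoked above can be sketched numerically. The snippet below solves the first-order resonance condition for the parallel electron velocity by bisection, under simplifying assumptions (zero pitch angle, one common sign convention, wave frequency below the gyrofrequency); the wave and field numbers are illustrative placeholders, not values derived from Cassini data:

```python
import numpy as np

C = 2.998e8          # speed of light, m/s
M_E = 9.109e-31      # electron mass, kg
Q_E = 1.602e-19      # elementary charge, C

def resonant_energy_mev(f_wave_hz, k_par, b_tesla):
    """Solve the first-order (n = 1) Doppler-shifted cyclotron resonance
        omega - k_par * v = Omega_ce * sqrt(1 - (v/c)^2)
    for the parallel velocity v (zero pitch angle assumed, counter-streaming
    electrons, omega < Omega_ce), then return the kinetic energy in MeV."""
    omega = 2.0 * np.pi * f_wave_hz
    omega_ce = Q_E * b_tesla / M_E          # non-relativistic gyrofrequency
    f = lambda v: omega - k_par * v - omega_ce * np.sqrt(1.0 - (v / C) ** 2)
    lo, hi = -0.999999 * C, 0.0             # bracket the counter-streaming root
    if f(lo) * f(hi) > 0:
        raise ValueError("no resonance in bracket")
    for _ in range(200):                    # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    v = 0.5 * (lo + hi)
    gamma = 1.0 / np.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * M_E * C ** 2 / Q_E / 1e6   # MeV

# Illustrative numbers: a ~5 kHz wave with k_par = 1e-4 rad/m in a ~1000 nT
# field resonates with roughly MeV electrons
energy = resonant_energy_mev(5e3, 1e-4, 1e-6)
```

For these assumed inputs the resonant kinetic energy comes out near 1 MeV, illustrating how a kHz-band wave can exchange energy with relativistic electrons once the relativistic factor is included.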
Basaltic fissure eruptions, such as on Hawai'i or on Iceland, are thought to be driven by the lateral propagation of feeder dikes and graben subsidence. Associated solid earth processes, such as deformation and structural development, are well studied by means of geophysical and geodetic technologies. The eruptions themselves, lava fountaining and venting dynamics, in turn, have been much less investigated due to hazardous access, local dimension, fast processes, and resulting poor data availability.
This thesis provides a detailed quantitative understanding of the shape and dynamics of lava fountains and the morphological changes at their respective eruption sites. For this purpose, I apply image processing techniques to sequences of video frames, recorded by drones and fixed cameras, from two well-known fissure eruptions in Hawai'i and Iceland. In this way I extract the dimensions of the multiple lava fountains visible in the frames. By combining these results with the acquisition times of the frames, I quantify the variations in height, width and eruption velocity of the lava fountains. I then analyse these time series in both the time and frequency domains and investigate the similarities and correlations between adjacent lava fountains. Following this procedure, I am able to link the dynamics of the individual lava fountains to physical parameters of the magma transport in their feeder dyke.
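The per-frame measurement step described above can be sketched as follows. This is a hypothetical minimal version (a simple intensity threshold and a fixed metre-per-pixel scale) of what a video-analysis pipeline would do per frame, not the thesis's actual implementation; real footage additionally needs camera calibration, lens correction and masking of ash or gas plumes:

```python
import numpy as np

def fountain_dimensions(frame, threshold, m_per_px):
    """Return (height_m, width_m) of the bright (incandescent) region in a
    grayscale video frame, measured as the bounding box of thresholded
    pixels and converted to metres with a fixed image scale."""
    mask = frame >= threshold
    if not mask.any():
        return 0.0, 0.0
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    height = (rows[-1] - rows[0] + 1) * m_per_px
    width = (cols[-1] - cols[0] + 1) * m_per_px
    return height, width

# Synthetic test frame: a 40 px tall, 10 px wide bright column at 0.5 m/px
frame = np.zeros((100, 60))
frame[50:90, 20:30] = 255.0
h, w = fountain_dimensions(frame, threshold=128, m_per_px=0.5)  # 20.0 m, 5.0 m
```

Applying such a function to every frame, together with the frame timestamps, yields exactly the height and width time series that the thesis then analyses in the time and frequency domains.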
The first case study in this thesis focuses on the March 2011 Pu'u'O'o eruption, Hawai'i, where a continuous pulsating behaviour of all eight lava fountains has been observed. The lava fountains, even those from different parts of the fissure, show a similar frequency content and eruption behaviour, indicating that they are closely connected. The regular pattern in the heights of the lava fountains suggests a controlling process within the magma feeder system, such as a hydraulic connection in the underlying dyke, affecting or even controlling the pulsating behaviour.
The second case study addresses the 2014-2015 Holuhraun fissure eruption, Iceland. In this case, the feeder dyke is highlighted by the surface expressions of graben-like structures and fault systems. At the eruption site, the activity decreases from a continuous line of fire of ~60 vents to a limited number of lava fountains. This can be explained by preferential upward magma movement through vertical structures of the pre-eruptive morphology. Seismic tremors during the eruption reveal vent opening at the surface and/or pressure changes in the feeder dyke. The topography of the cinder cones, evolving during the eruption, interacts with the lava fountain behaviour. Local variations in lava fountain height and width are controlled by the conduit diameter, the depth of the lava pond and the shape of the crater. Modelling of the fountain heights shows that the long-term eruption behaviour is controlled mainly by pressure changes in the feeder dyke.
This research consists of six chapters with four papers, including two first author and two co-author papers. It establishes a new method to analyse lava fountain dynamics by video monitoring. The comparison with the seismicity, geomorphologic and structural expressions of fissure eruptions shows a complex relationship between focussed flow through dykes, the morphology of the cinder cones, and the lava fountain dynamics at the vents of a fissure eruption.
The genesis of chronic pain is explained by a biopsychosocial model. It hypothesizes an interdependency between environmental and genetic factors provoking aberrant long-term changes in biological and psychological regulatory systems. Physiological effects of psychological and physical stressors may play a crucial role in these maladaptive processes. Specifically, long-term demands on the stress response system may moderate central pain processing and influence descending serotonergic and noradrenergic signals from the brainstem, regulating nociceptive processing at the spinal level. However, the underlying mechanisms of this pathophysiological interplay still remain unclear. This paper aims to shed light on possible pathways between physical (exercise) and psychological stress and the potential neurobiological consequences in the genesis and treatment of chronic pain, highlighting evolving concepts and promising research directions. Two treatment forms (exercise and mindfulness-based stress reduction as exemplary therapies), their interaction, and the dose-response relationship will be discussed in more detail, which might pave the way to a better understanding of alterations in the pain matrix and help to develop future prevention and therapeutic concepts.
Background Low back pain (LBP) is a common pain syndrome in athletes, responsible for 28% of missed training days/year. Psychosocial factors contribute to chronic pain development. This study aims to investigate the transferability of psychosocial screening tools developed in the general population to athletes and to define athlete-specific thresholds.
Methods Data from a prospective multicentre study on LBP were collected at baseline and 1-year follow-up (n=52 athletes, n=289 recreational athletes and n=246 non-athletes). Pain was assessed using the Chronic Pain Grade questionnaire. The psychosocial Risk Stratification Index (RSI) was used to obtain prognostic information regarding the risk of chronic LBP (CLBP). Individual psychosocial risk profile was gained with the Risk Prevention Index – Social (RPI-S). Differences between groups were calculated using general linear models and planned contrasts. Discrimination thresholds for athletes were defined with receiver operating characteristics (ROC) curves.
Results Athletes and recreational athletes showed significantly lower psychosocial risk profiles and prognostic risk for CLBP than non-athletes. ROC curves suggested that discrimination thresholds for athletes differ from those for non-athletes. Both screenings demonstrated very good sensitivity (RSI: 100%; RPI-S: 75%–100%) and specificity (RSI: 76%–93%; RPI-S: 71%–93%). The RSI revealed two risk classes for pain intensity (area under the curve (AUC) 0.92 (95% CI 0.85 to 1.0)) and pain disability (AUC 0.88 (95% CI 0.71 to 1.0)).
Conclusions Both screening tools can be used for athletes. Athlete-specific thresholds will improve physicians’ decision making and allow stratified treatment and prevention.
German football stadiums are well known for their atmosphere, often described as ‘electrifying’ or ‘cracking.’ This article focuses on this atmosphere. Using a phenomenological approach, it explores how this emotionality can be understood and how geography matters while attending a match. Atmosphere in this context is conceptualized as a mood-charged space, neither object- nor subject-centered, but rather a medium of perception which cannot not exist. Based on qualitative research done in the home stadium of Hertha BSC in the German Bundesliga, this article shows that the bodily sensations experienced by spectators during a visit to the stadium are synchronized with events on the pitch and with the more or less imposing scenery. The analysis of in situ diaries reveals that spectators experience a comprehensive sense of collectivity. The study presents evidence that the occurrence of these bodily sensations is strongly connected with different aspects of spatiality. This includes sensations of constriction and expansion within the body, an awareness of one’s location within the stadium, the influence of the immediate surroundings, and cognitive here/there and inside/outside distinctions.
Epigenetic modifications, of which DNA methylation is the most stable, are a mechanism conveying environmental information to subsequent generations via the parental germ lines. The paternal contribution to adaptive processes in the offspring might be crucial, but has been widely neglected in comparison to the maternal one. To address the paternal impact on the offspring's adaptability to changes in diet composition, we investigated whether a low-protein diet (LPD) in F0 males caused epigenetic alterations in their subsequently sired sons. We therefore fed F0 male wild guinea pigs an LPD and investigated DNA methylation in sons sired before and after their father's LPD treatment in both liver and testis tissues. Our results point to a 'heritable epigenetic response' of the sons to the fathers' dietary change. Because we detected methylation changes also in the testis tissue, they are likely to be transmitted to the F2 generation. Gene-network analyses of differentially methylated genes in liver identified main metabolic pathways, indicating a metabolic reprogramming ('metabolic shift'). Epigenetic mechanisms allowing an immediate and inherited adaptation may thus be important for the survival of species in the context of a persistently changing environment, such as climate change.
Quantifying rock weakening due to decreasing calcite mineral content by numerical simulations
(2018)
The quantification of changes in geomechanical properties due to chemical reactions is of paramount importance for geological subsurface utilisation, since mineral dissolution generally reduces rock stiffness. In the present study, the effective elastic moduli of two digital rock samples, the Fontainebleau and Bentheim sandstones, are numerically determined based on micro-CT images. Reduction in rock stiffness due to the dissolution of 10% calcite cement by volume out of the pore network is quantified for three synthetic spatial calcite distributions (coating, partial filling and random) using representative sub-cubes derived from the digital rock samples. Due to the reduced calcite content, bulk and shear moduli decrease by 34% and 38% in maximum, respectively. Total porosity is clearly the dominant parameter, while spatial calcite distribution has a minor impact, except for a randomly chosen cement distribution within the pore network. Moreover, applying an initial stiffness reduced by 47% for the calcite cement results only in a slightly weaker mechanical behaviour. Using the quantitative approach introduced here substantially improves the accuracy of predictions in elastic rock properties compared to general analytical methods, and further enables quantification of uncertainties related to spatial variations in porosity and mineral distribution.
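For context, the "general analytical methods" that the digital-rock approach improves upon include simple mixture bounds on the effective moduli. The sketch below computes the Voigt (upper) and Reuss (lower) bounds for a cemented versus a calcite-free sample; the mineral moduli are commonly tabulated illustrative values and the volume fractions are assumptions for this example, not the paper's measured inputs:

```python
def voigt(fracs, moduli):
    """Voigt (iso-strain) upper bound: volume-weighted arithmetic mean."""
    return sum(f * m for f, m in zip(fracs, moduli))

def reuss(fracs, moduli):
    """Reuss (iso-stress) lower bound: volume-weighted harmonic mean.
    With dry (zero-modulus) pores it collapses to 0, so only the upper
    bound is informative for a dry porous rock."""
    if any(m == 0.0 and f > 0.0 for f, m in zip(fracs, moduli)):
        return 0.0
    return 1.0 / sum(f / m for f, m in zip(fracs, moduli))

# Illustrative bulk moduli in GPa (commonly tabulated values; dry pore = 0)
K = {"quartz": 37.0, "calcite": 70.0, "pore": 0.0}

# Cemented sample: 80% quartz, 10% calcite cement, 10% porosity
k_v_cemented = voigt([0.80, 0.10, 0.10], [K["quartz"], K["calcite"], K["pore"]])
# After dissolving the calcite: 80% quartz, 20% porosity
k_v_dissolved = voigt([0.80, 0.00, 0.20], [K["quartz"], K["calcite"], K["pore"]])
```

Such bounds ignore the pore geometry and spatial cement distribution entirely, which is precisely why micro-CT-based numerical homogenisation, as used in this study, yields more accurate stiffness reductions.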
Hyenas (family Hyaenidae), as the sister group to cats (family Felidae), represent a deeply diverging branch within the cat-like carnivores (Feliformia). With an estimated population size of <10,000 individuals worldwide, the brown hyena (Parahyaena brunnea) represents the rarest of the four extant hyena species and has been listed as Near Threatened by the IUCN. Here, we report a high-coverage genome from a captive-bred brown hyena and both mitochondrial and low-coverage nuclear genomes of 14 wild-caught brown hyena individuals from across southern Africa. We find that brown hyena harbor extremely low genetic diversity on both the mitochondrial and nuclear level, most likely resulting from a continuous and ongoing decline in effective population size that started ~1 Ma ago and dramatically accelerated towards the end of the Pleistocene. Despite the strikingly low genetic diversity, we find no evidence of inbreeding within the captive-bred individual and reveal phylogeographic structure, suggesting the existence of several potential subpopulations within the species.
The sequencing of the human genome in the early 2000s led to an increased interest in cheap and fast sequencing technologies. This interest culminated in the advent of next generation sequencing (NGS). A number of different NGS platforms have arisen since then, all promising to do the same thing, i.e. produce large amounts of genetic information for relatively low costs compared to more traditional methods such as Sanger sequencing. The capabilities of NGS meant that researchers were no longer bound to species for which a lot of previous work had already been done (e.g. model organisms and humans), enabling a shift in research towards more novel and diverse species of interest. This capability has greatly benefitted many fields within the biological sciences, one of which is evolutionary biology. Researchers have begun to move away from the study of laboratory model organisms to wild, natural populations and species, which has greatly expanded our knowledge of evolution. NGS boasts a number of benefits over more traditional sequencing approaches. The main benefit comes from the capability to generate information for drastically more loci for a fraction of the cost. This is hugely beneficial to the study of wild animals as, even when large numbers of individuals are unobtainable, the amount of data produced still allows for accurate, reliable population- and species-level results from a small selection of individuals.
The use of NGS to study species for which little to no previous research has been carried out, and the production of novel evolutionary information and reference datasets for the greater scientific community, were the focus of this thesis. Two studies in this thesis focused on producing novel mitochondrial genomes from shotgun sequencing data through iterative mapping, bypassing the need for a close relative to serve as a reference sequence. These mitochondrial genomes were then used to infer species-level relationships through phylogenetic analyses. The first of these studies involved reconstructing a complete mitochondrial genome of the bat-eared fox (Otocyon megalotis). Phylogenetic analyses of the mitochondrial genome confidently placed the bat-eared fox as sister to the clade consisting of the raccoon dog and true foxes within the family Canidae. The next study also involved reconstructing a mitochondrial genome, but in this case from the extinct Macrauchenia of South America. As this study utilised ancient DNA, it involved extensive parameter testing, quality controls and strict thresholds to obtain a near-complete mitochondrial genome devoid of the contamination known to plague ancient DNA studies. Phylogenetic analyses confidently placed Macrauchenia as sister to all living representatives of Perissodactyla, with a divergence time of ~66 million years ago. The third and final study of this thesis involved de novo assemblies of both nuclear and mitochondrial genomes from the brown and striped hyena and focussed on demographic, genetic diversity and population genomic analyses within the brown hyena. Previous studies of the brown hyena hinted at very low levels of genomic diversity and, perhaps due to this, were unable to find any notable population structure across its range. By incorporating a large number of genetic loci, in the form of complete nuclear genomes, population structure within the brown hyena was uncovered.
On top of this, genomic diversity levels were compared to a number of other species. Results showed the brown hyena to have the lowest genomic diversity out of all species included in the study which was perhaps caused by a continuous and ongoing decline in effective population size that started about one million years ago and dramatically accelerated towards the end of the Pleistocene.
The studies within this thesis show the power of NGS and its utility within evolutionary biology. The most notable capabilities outlined in this thesis involve the study of species for which no reference data are available and the production of large amounts of data, providing evolutionary answers at the species and population level that data produced using more traditional techniques simply could not.
Moving arms
(2018)
Embodied cognition postulates a bi-directional link between the human body and its cognitive functions. Whether this holds for higher cognitive functions such as problem solving is unknown. We predicted that arm movement manipulations performed by the participants could affect the problem-solving solutions. We tested this prediction in quantitative reasoning tasks that allowed two solutions to each problem (addition or subtraction). In two studies with healthy adults (N=53 and N=50), we found an effect of problem-congruent movements on problem solutions. Consistent with embodied cognition, sensorimotor information gained via right or left arm movements affects the solution in different types of problem-solving tasks.
This paper introduces a novel measure to assess similarity between event hydrographs. It is based on Cross Recurrence Plots and Recurrence Quantification Analysis, which have recently gained attention in a range of disciplines when dealing with complex systems. The method attempts to quantify the event runoff dynamics and is based on the time delay embedded phase space representation of discharge hydrographs. A phase space trajectory is reconstructed from the event hydrograph, and pairs of hydrographs are compared to each other based on the distance of their phase space trajectories. Time delay embedding allows considering the multi-dimensional relationships between different points in time within the event. Hence, the temporal succession of discharge values is taken into account, such as the impact of the initial conditions on the runoff event. We provide an introduction to Cross Recurrence Plots and discuss their parameterization. An application example based on flood time series demonstrates how the method can be used to measure the similarity or dissimilarity of events, and how it can be used to detect events with rare runoff dynamics. It is argued that this method provides a more comprehensive approach to quantifying hydrograph similarity compared to conventional hydrological signatures.
Three Essays on EFRAG
(2018)
This cumulative doctoral thesis consists of three papers that deal with the role of one specific European accounting player in the international accounting standard-setting, namely the European Financial Reporting Advisory Group (EFRAG). The first paper examines whether and how EFRAG generally fulfills its role in articulating Europe’s interests toward the International Accounting Standards Board (IASB). The qualitative data from the conducted interviews reveal that EFRAG influences the IASB’s decision making at a very early stage, long before other constituents are officially asked to comment on the IASB’s proposals. The second paper uses quantitative data and investigates the formal participation behavior of European constituents that seek to determine EFRAG’s voice. More precisely, this paper analyzes the nature of the constituents’ participation in EFRAG’s due process in terms of representation (constituent groups and geographical distribution) and the drivers of their participation behavior. EFRAG’s official decision making process is dominated by some specific constituent groups (such as preparers and the accounting profession) and by constituents from some specific countries (e.g. those with effective enforcement regimes). The third paper investigates in a first step who of the European constituents choose which lobbying channel (participation only at IASB, only at EFRAG, or at both institutions) and unveils in a second step possible reasons for their lobbying choices. The paper comprises quantitative and qualitative data. It reveals that English skills, time issues, the size of the constituent, and the country of origin are factors that can explain why the majority participates only in the IASB’s due process.
This paper describes an almost forgotten chapter in the relatively short history of Jewish-Buddhist interactions. The popularization of Buddhism in Germany in the second half of the 19th century, effected mainly by its positive appraisal in the philosophy of Arthur Schopenhauer, made it a common referent for both critics of Judaism and Christianity as well as their defenders. At the same time, Judaism was viewed by many as a historically antiquated religion, and Jewish elements in Christianity were regarded as impediments to the progress of European religiosity and culture. The Schopenhauerian conception of “pessimistic” Buddhism and “optimistic” Judaism as the two most distant religious ideas was proudly appropriated by many Jewish thinkers. These Jews portrayed Buddhism as an anti-worldly and anti-social religion of egoistic individuals who seek their own salvation (i.e., annihilation into Nothingness), the most extreme form of pessimism and asceticism, which negates all being, will, work, social structures, and transcendence. Judaism, in contrast, represented the direct opposite of all the aforementioned characteristics. In comparison to Buddhism, Judaism stood out as a religion which carried the social and psychological values most needed for a healthy modern society: decisive affirmation of the world, optimism, social activity, co-operation with others, social egalitarianism, true charitability, and religious purity free from all remnants of polytheism, asceticism, and the inefficiently excessive moral demands ascribed to both Buddhism and Christianity.
Through the analysis of texts by Ludwig Philippson, Ludwig Stein, Leo Baeck, Max Eschelbacher, Juda Bergmann, Fritz-Leopold Steinthal, Elieser David and others, this paper tries to show how the image of Buddhism as an antithesis to Judaism helped the German Jewish reform thinkers in defining the “essence of Judaism” and in proving to both Jewish and Christian audiences its enduring meaningfulness and superiority for the modern society.
Symptoms of anxiety and depression in young athletes using the Hospital Anxiety and Depression Scale
(2018)
Elite young athletes have to cope with multiple psychological demands, such as high training volume, mental and physical fatigue, spatial separation from family and friends, or time management problems, that may lead to reduced mental and physical recovery. While normative data regarding symptoms of anxiety and depression are available for the general population (Hinz and Brahler, 2011), hardly any information exists for adolescents in general and young athletes in particular. Therefore, the aim of this study was to assess overall symptoms of anxiety and depression in young athletes as well as possible sex differences. The survey was carried out within the scope of the study "Resistance Training in Young Athletes" (KINGS-Study). Between August 2015 and September 2016, 326 young athletes aged (mean +/- SD) 14.3 +/- 1.6 years completed the Hospital Anxiety and Depression Scale (HAD Scale). For the analysis of age effects on the anxiety and depression subscales, age groups were classified as follows: late childhood (12-14 years) and late adolescence (15-18 years). The participating young athletes were recruited from Olympic weight lifting, handball, judo, track and field athletics, boxing, soccer, gymnastics, ice speed skating, volleyball, and rowing. Anxiety and depression scores were (mean +/- SD) 4.3 +/- 3.0 and 2.8 +/- 2.9, respectively. On the anxiety subscale, 22 cases (6.7%) showed subclinical scores and 11 cases (3.4%) showed clinically relevant scores. On the depression subscale, 31 cases (9.5%) showed subclinical scores and 12 cases (3.7%) showed clinically relevant scores. No significant differences were found between male and female athletes (p >= 0.05). No statistically significant differences in the HADS scores were found between male athletes in late childhood and late adolescence (p >= 0.05). To the best of our knowledge, this is the first report describing questionnaire-based indicators of symptoms of anxiety and depression in young athletes.
Our data imply the need for sports medical as well as sports psychiatric support for young athletes. In addition, our results demonstrate that the chronological age classification did not influence HAD Scale outcomes. Future research should focus on sports medical and sports psychiatric interventional approaches with the goal of preventing anxiety and depression and teaching coping strategies to young athletes.
Background/Aims: Angiogenesis plays a key role during embryonic development. The vascular endothelin (ET) system is involved in the regulation of angiogenesis. Lipopolysaccharides (LPS) can induce angiogenesis. The effects of ET blockers on baseline and LPS-stimulated angiogenesis during embryonic development have so far remained unknown. Methods: The blood vessel density (BVD) of chorioallantoic membranes (CAMs) treated with saline (control), LPS, and/or the ETA receptor blocker BQ123 and the ETB receptor blocker BQ788 was quantified and analyzed using the IPP 6.0 image analysis program. Moreover, the expression of ET-1, ET-2, ET-3, ET receptor A (ETRA), ET receptor B (ETRB) and VEGFR2 mRNA during embryogenesis was analyzed by semi-quantitative RT-PCR. Results: All components of the ET system are detectable during chicken embryogenesis. LPS increased angiogenesis substantially. This process was completely blocked by combined treatment with the ETA receptor blocker BQ123 and the ETB receptor blocker BQ788. This effect was accompanied by a decrease in ETRA, ETRB, and VEGFR2 gene expression. However, baseline angiogenesis was not affected by combined ETA/ETB receptor blockade. Conclusion: During chicken embryogenesis, LPS-stimulated angiogenesis, but not baseline angiogenesis, is sensitive to combined ETA/ETB receptor blockade. (C) 2018 The Author(s) Published by S. Karger AG, Basel
High-latitude treeless ecosystems represent spatially highly heterogeneous landscapes with small net carbon fluxes and a short growing season. Reliable observations and process understanding are critical for projections of the carbon balance of the climate-sensitive tundra. Space-borne remote sensing is the only tool to obtain spatially continuous and temporally resolved information on vegetation greenness and activity in remote circumpolar areas. However, confounding effects from persistent clouds, low sun elevation angles, numerous lakes, widespread surface inundation, and the sparseness of the vegetation render this highly challenging. Here, we conduct an extensive analysis of the timing of peak vegetation productivity as shown by satellite observations of complementary indicators of plant greenness and photosynthesis. We focus on productivity during the peak of the growing season, as it strongly affects the total annual carbon uptake. The suite of indicators is as follows: (1) MODIS-based vegetation indices (VIs) as proxies for the fraction of incident photosynthetically active radiation (PAR) that is absorbed (fPAR), (2) VIs combined with estimates of PAR as a proxy of the total absorbed radiation (APAR), (3) sun-induced chlorophyll fluorescence (SIF), serving as a proxy for photosynthesis, (4) vegetation optical depth (VOD), indicative of total water content, and (5) empirically upscaled modelled gross primary productivity (GPP). Averaged over the pan-Arctic, we find a clear order of the annual peaks: APAR ≤ GPP < SIF < VIs/VOD. SIF as an indicator of photosynthesis is maximised around the time of highest annual temperatures. The modelled GPP peaks at a similar time to APAR. The time lag of the annual peak between APAR and instantaneous SIF fluxes indicates that the SIF data do contain information on the light-use efficiency of tundra vegetation, but further detailed studies are necessary to verify this.
Delayed peak greenness compared to peak photosynthesis is consistently found across years and land-cover classes. A particularly late peak of the normalised difference vegetation index (NDVI) in regions with very small seasonality in greenness and a high amount of lakes probably originates from artefacts. Given the very short growing season in circumpolar areas, the average time difference in maximum annual photosynthetic activity and greenness or growth of 3 to 25 days (depending on the data sets chosen) is important and needs to be considered when using satellite observations as drivers in vegetation models.
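The peak-timing comparison can be illustrated with a toy calculation on synthetic time series (all values are made up; real analyses must contend with noisy, cloud-gapped satellite data):

```python
# Toy illustration: day-of-year (DOY) of the annual maximum of a
# smoothed index time series, and the lag between two indicators.
# The synthetic curves below peak at DOY 200 (greenness) and 185
# (photosynthesis) by construction.
import numpy as np

doy = np.arange(1, 366)
ndvi = 0.2 + 0.5 * np.exp(-((doy - 200) / 30.0) ** 2)  # peaks at DOY 200
sif = 0.1 + 0.9 * np.exp(-((doy - 185) / 25.0) ** 2)   # peaks at DOY 185

def peak_doy(series, window=15):
    """DOY of the maximum after boxcar smoothing."""
    kernel = np.ones(window) / window
    smooth = np.convolve(series, kernel, mode="same")
    return doy[np.argmax(smooth)]

lag = peak_doy(ndvi) - peak_doy(sif)  # greenness lags photosynthesis
```

The 15-day lag in this toy example sits in the middle of the 3 to 25 day range reported above.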
This essay sets out to theorize the “new” Arctic Ocean as a pivot from which our standard map of the world is currently being reconceptualized. Drawing on theories from the fields of Atlantic and Pacific studies, I argue that the changing Arctic, characterized by melting ice and increased accessibility, must be understood both as a space of transit that connects Atlantic and Pacific worlds in unprecedented ways, and as an oceanic world and contact zone in its own right. I examine both functions of the Arctic via a reading of the dispute over the Northwest Passage (which emphasizes the Arctic as a space of transit) and the contemporary assessment of new models of sovereignty in the Arctic region (which concentrates on the circumpolar Arctic as an oceanic world). However, both of these debates frequently exclude indigenous positions on the Arctic. By reading Canadian Inuit theories on the Arctic alongside the more prominent debates, I argue for a decolonizing reading of the Arctic inspired by Inuit articulations of the “Inuit Sea.” In such a reading, Inuit conceptions provide crucial interventions into theorizing the Arctic. They also, in turn, contribute to discussions on indigeneity, sovereignty, and archipelagic theory in Atlantic and Pacific studies.
Hatred directed at members of groups due to their origin, race, gender, religion, or sexual orientation is not new, but it has taken on a new dimension in the online world. To date, very little is known about online hate among adolescents. It is also unknown how online disinhibition might influence the association between being bystanders and being perpetrators of online hate. Thus, the present study focused on examining the associations among being bystanders of online hate, being perpetrators of online hate, and the moderating role of toxic online disinhibition in the relationship between being bystanders and perpetrators of online hate. In total, 1480 students aged between 12 and 17 years old were included in this study. Results revealed positive associations between being online hate bystanders and perpetrators, regardless of whether adolescents had or had not been victims of online hate themselves. The results also showed an association between toxic online disinhibition and online hate perpetration. Further, toxic online disinhibition moderated the relationship between being bystanders of online hate and being perpetrators of online hate. Implications for prevention programs and future research are discussed.
Although school climate and self-efficacy have received some attention in the literature as correlates of students’ willingness to intervene in bullying, to date, very little is known about the potential mediating role of self-efficacy in the relationship between classroom climate and students’ willingness to intervene in bullying. To this end, the present study analyzes whether the relationship between classroom cohesion (as one facet of classroom climate) and students’ willingness to intervene in bullying situations is mediated by self-efficacy in social conflicts. This study is based on a representative stratified random sample of two thousand and seventy-one students (51.3% male), between the ages of twelve and seventeen, from twenty-four schools in Germany. Results showed that between 43% and 48% of students reported that they would not intervene in bullying. A mediation test using the structural equation modeling framework revealed that classroom cohesion and self-efficacy in social conflicts were directly associated with students’ willingness to intervene in bullying situations. Furthermore, classroom cohesion was indirectly associated with higher levels of students’ willingness to intervene in bullying situations through self-efficacy in social conflicts. We thus conclude that: (1) It is crucial to increase students’ willingness to intervene in bullying; (2) efforts to increase students’ willingness to intervene in bullying should promote students’ confidence in dealing with social conflicts and interpersonal relationships; and (3) self-efficacy plays an important role in understanding the relationship between classroom cohesion and students’ willingness to intervene in bullying. Recommendations are provided to help increase adolescents’ willingness to intervene in bullying and for future research.
This paper investigates the transferability of calibrated HBV model parameters under stable and contrasting conditions in terms of flood seasonality and flood generating processes (FGP) in five Norwegian catchments with mixed snowmelt/rainfall regimes. We apply a series of generalized (differential) split-sample tests using a 6-year moving window over (i) the entire runoff observation periods, and (ii) two subsets of runoff observations distinguished by the seasonal occurrence of annual maximum floods during either spring or autumn. The results indicate a general model performance loss due to the transfer of calibrated parameters to independent validation periods of −5 to −17%, on average. However, there is no indication that contrasting flood seasonality exacerbates performance losses, which contradicts the assumption that optimized parameter sets for snowmelt-dominated floods (during spring) perform particularly poorly on validation periods with rainfall-dominated floods (during autumn) and vice versa.
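The performance-loss bookkeeping behind such split-sample tests can be sketched as follows. The discharge values are synthetic and the efficiency metric (Nash–Sutcliffe efficiency) is a common hydrological choice; the study itself applies the HBV model to Norwegian catchments.

```python
# Sketch of split-sample bookkeeping: Nash-Sutcliffe efficiency (NSE)
# on calibration and validation periods, and the relative performance
# loss from transferring calibrated parameters. Data are synthetic.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs_cal = np.array([1.0, 2.0, 4.0, 3.0, 1.5])   # calibration period
sim_cal = np.array([1.1, 2.1, 3.8, 2.9, 1.6])
obs_val = np.array([0.8, 1.9, 4.5, 2.8, 1.2])   # validation period
sim_val = np.array([1.0, 2.3, 3.9, 2.4, 1.5])   # transferred parameters

nse_cal, nse_val = nse(obs_cal, sim_cal), nse(obs_val, sim_val)
loss_pct = 100.0 * (nse_val - nse_cal) / nse_cal  # negative = loss
```

A negative `loss_pct` corresponds to the −5 to −17% average performance loss reported above.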
Poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) ferroelectric thin films of different molar ratios have been studied with regard to data memory applications. To this end, films with thicknesses of 200 nm and less have been spin-coated from solution. Observations gained from single layers have been extended to multilayer capacitors and three-terminal transistor devices.
Besides conventional hysteresis measurements, the measurement of dielectric non-linearities has been used as a main tool of characterisation. Being a very sensitive and non-destructive method, non-linearity measurements are well suited for polarisation readout and property studies. Samples have been excited using a high quality, single-frequency sinusoidal voltage with an amplitude significantly smaller than the coercive field of the samples. The response was then measured at the excitation frequency and its higher harmonics. Using the measurement results, the linear and non-linear dielectric permittivities ɛ₁, ɛ₂ and ɛ₃ have been determined. The permittivities have been used to derive the temperature-dependent polarisation behaviour as well as the polarisation state and the order of the phase transitions.
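The principle of the non-linearity measurement can be illustrated numerically: a single-frequency drive applied to a weakly non-linear dielectric produces higher harmonics whose amplitudes encode ɛ₁, ɛ₂ and ɛ₃. The following sketch uses made-up permittivity values and an idealised, noise-free polynomial response, not the measured material behaviour:

```python
# Sketch: harmonic content of a weakly non-linear dielectric response
# D = eps1*E + eps2*E^2 + eps3*E^3 under a single-frequency drive.
# Permittivity values are hypothetical; units are arbitrary.
import numpy as np

eps1, eps2, eps3 = 10.0, 0.5, 0.05   # hypothetical eps_1, eps_2, eps_3
E0, f, n = 1.0, 5.0, 4096
t = np.arange(n) / n                  # one-second window, f in cycles
E = E0 * np.sin(2 * np.pi * f * t)
D = eps1 * E + eps2 * E**2 + eps3 * E**3

# Amplitude spectrum; bin k corresponds to k cycles per window.
spec = np.abs(np.fft.rfft(D)) / (n / 2)
h1, h2, h3 = spec[int(f)], spec[int(2 * f)], spec[int(3 * f)]
# Trigonometric identities give: h1 = eps1 + 3/4*eps3,
# h2 = eps2/2, h3 = eps3/4 — so the harmonics recover the eps_i.
```

Inverting these relations (h2 for ɛ₂, h3 for ɛ₃, then h1 for ɛ₁) is the essence of reading out the non-linear permittivities from the measured harmonic response.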
The coercive field in VDF-TrFE copolymers is high compared to that of their ceramic competitors. Therefore, the film thickness had to be reduced significantly. Considering a switching voltage of 5 V and a coercive field of 50 MV/m, the film thickness has to be 100 nm or below. If the thickness becomes substantially smaller than the other dimensions, surface and interface layer effects become more pronounced. For thicker films of P(VDF-TrFE) with a molar fraction of 56/44, a second-order phase transition without a thermal hysteresis in an ɛ₁(T) temperature cycle had been predicted and observed. This, however, could not be confirmed by the measurements of thinner films. A shift of transition temperatures as well as a temperature-independent, non-switchable polarisation and a thermal hysteresis were observed for P(VDF-TrFE) 56/44. The impact of static electric fields on the polarisation and the phase transition has therefore been studied and simulated, showing that all aforementioned phenomena, including a linear temperature dependence of the polarisation, might originate from intrinsic electric fields.
In further experiments, the knowledge gained from single layer capacitors has been extended to bilayer copolymer thin films of different molar composition. Bilayers have been deposited by successive cycles of spin coating from solution. The single layers and their bilayer combination have been studied individually in order to prove the layers' stability. The individual layers were found to be physically stable. While the bilayers qualitatively reproduced the main ɛ₁(T) properties of the single layers, the quantitative values could not be explained by a simple series connection of capacitors. Furthermore, a linear behaviour of the polarisation throughout the measured temperature range has been observed. This was found to match the behaviour predicted for a constant electric field.
Retention time is an important quantity for memory applications. Hence, the retention behaviour of VDF-TrFE copolymer thin films has been determined using dielectric non-linearities. The polarisation loss in poled P(VDF-TrFE) samples has been found to be less than 20% when recorded over several days. The loss increases significantly if the samples have been poled with lower amplitudes, causing an unsaturated polarisation. The main loss was attributed to injected charges. Additionally, measurements of dielectric non-linearities have proven to be a sensitive and non-destructive tool to measure the retention behaviour.
Finally, a ferroelectric field effect transistor using mainly organic materials (FerrOFET) has been successfully studied. DiNaphtho[2,3-b:2',3'-f]Thieno[3,2-b]Thiophene (DNTT) has proven to be a stable, suitable organic semiconductor to build up ferroelectric memory devices. Furthermore, an oxidised aluminium bottom electrode and additional dielectric layers, i.e. parylene C, have proven to reduce the leakage current and therefore enhance the performance significantly.
The development of self-adaptive software requires the engineering of an adaptation engine that controls the underlying adaptable software by a feedback loop. State-of-the-art approaches prescribe the feedback loop in terms of the number of feedback loops, how the activities (e.g., monitor, analyze, plan, and execute (MAPE)) and the knowledge are structured into a feedback loop, and the type of knowledge. Moreover, the feedback loop is usually hidden in the implementation or framework and therefore not visible in the architectural design. Additionally, an adaptation engine often employs runtime models that either represent the adaptable software or capture strategic knowledge such as reconfiguration strategies. State-of-the-art approaches do not systematically address the interplay of such runtime models, which would otherwise allow developers to freely design the entire feedback loop.
This thesis presents ExecUtable RuntimE MegAmodels (EUREMA), an integrated model-driven engineering (MDE) solution that rigorously uses models for engineering feedback loops. EUREMA provides a domain-specific modeling language to specify and an interpreter to execute feedback loops. The language allows developers to freely design a feedback loop concerning the activities and runtime models (knowledge) as well as the number of feedback loops. It further supports structuring the feedback loops in the adaptation engine that follows a layered architectural style. Thus, EUREMA makes the feedback loops explicit in the design and enables developers to reason about design decisions.
To address the interplay of runtime models, we propose the concept of a runtime megamodel, which is a runtime model that contains other runtime models as well as activities (e.g., MAPE) working on the contained models. This concept is the underlying principle of EUREMA. The resulting EUREMA (mega)models are kept alive at runtime and they are directly executed by the EUREMA interpreter to run the feedback loops. Interpretation provides the flexibility to dynamically adapt a feedback loop. In this context, EUREMA supports engineering self-adaptive software in which feedback loops run independently or in a coordinated fashion within the same layer as well as on top of each other in different layers of the adaptation engine. Moreover, we consider preliminary means to evolve self-adaptive software by providing a maintenance interface to the adaptation engine.
This thesis discusses in detail EUREMA by applying it to different scenarios such as single, multiple, and stacked feedback loops for self-repairing and self-optimizing the mRUBiS application. Moreover, it investigates the design and expressiveness of EUREMA, reports on experiments with a running system (mRUBiS) and with alternative solutions, and assesses EUREMA with respect to quality attributes such as performance and scalability.
The conducted evaluation provides evidence that EUREMA as an integrated and open MDE approach for engineering self-adaptive software seamlessly integrates the development and runtime environments using the same formalism to specify and execute feedback loops, supports the dynamic adaptation of feedback loops in layered architectures, and achieves an efficient execution of feedback loops by leveraging incrementality.
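The MAPE-K feedback-loop structure that EUREMA models can be sketched generically in a few lines. This is an invented illustration (class and metric names are hypothetical), not EUREMA itself, which specifies such loops in its modeling language and executes them with an interpreter:

```python
# Generic MAPE-K feedback loop sketch (hypothetical names).
class AdaptableSoftware:
    """Toy managed system: a fixed load shared among workers."""
    def __init__(self):
        self.workers, self.load = 1, 10

    def metrics(self):
        return {"load_per_worker": self.load / self.workers}

class FeedbackLoop:
    def __init__(self, system, threshold=5.0):
        self.system, self.threshold = system, threshold
        self.knowledge = {}  # shared runtime model (the K in MAPE-K)

    def monitor(self):
        self.knowledge["metrics"] = self.system.metrics()

    def analyze(self):
        m = self.knowledge["metrics"]
        self.knowledge["overloaded"] = m["load_per_worker"] > self.threshold

    def plan(self):
        self.knowledge["plan"] = (
            "scale_out" if self.knowledge["overloaded"] else "noop")

    def execute(self):
        if self.knowledge["plan"] == "scale_out":
            self.system.workers += 1

    def run_once(self):
        self.monitor(); self.analyze(); self.plan(); self.execute()

sys_ = AdaptableSoftware()
loop = FeedbackLoop(sys_)
loop.run_once()  # overloaded (10 per worker) -> scale out
loop.run_once()  # now 5 per worker, within threshold -> noop
```

EUREMA's contribution is to make exactly this structure, including the runtime models in `knowledge`, explicit and executable at the architectural level rather than buried in code like this.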
Deoxyribonucleic acid (DNA) is the carrier of human genetic information and is exposed every day to environmental influences such as the ultraviolet (UV) fraction of sunlight. The photostability of DNA against UV light is astonishing. Even though the DNA bases have a strong absorption maximum at around 260 nm (4.77 eV), their quantum yield of photoproducts remains very low. If the photon energies exceed the ionization energy (IE) of the nucleobases (~8-9 eV), the DNA can be severely damaged. Photoexcitation and -ionization reactions occur, which can induce strand breaks in the DNA. The efficiencies of the excitation- and ionization-induced strand breaks in the target DNA sequences are represented by cross sections. If Si is used as a substrate material in the VUV irradiation experiments, secondary electrons with an energy below 3.6 eV are generated from the substrate. These low-energy electrons (LEEs) are known to induce dissociative electron attachment (DEA) in DNA, and with it DNA strand breakage, very efficiently. LEEs play an important role in cancer radiation therapy, since they are generated secondarily along the radiation track of ionizing radiation.
In the framework of this thesis, different single stranded DNA sequences were irradiated with 8.44 eV vacuum UV (VUV) light and cross sections for single strand breaks (SSB) were determined. Several sequences were also exposed to secondary LEEs, which additionally contributed to the SSBs. First, the cross sections for SSBs were determined depending on the type of nucleobases. Both types of DNA sequences, mono-nucleobase and mixed sequences, showed very similar results upon VUV radiation. The additional influence of secondarily generated LEEs resulted, in contrast, in a clear trend for the SSB cross sections: the polythymine sequence had the highest cross section for SSBs, which can be explained by strong anionic resonances in this energy range. Furthermore, SSB cross sections were determined as a function of sequence length. This resulted in an increase in the strand breaks to the same extent as the increase in the geometrical cross section. The longest DNA sequence (20 nucleotides) investigated in this series, however, showed smaller cross section values for SSBs, which can be explained by conformational changes in the DNA. Moreover, several DNA sequences that included the radiosensitizers 5-Bromouracil (5BrU) and 8-Bromoadenine (8BrA) were investigated and the corresponding SSB cross sections were determined. It was shown that 5BrU reacts very strongly to VUV radiation, leading to high strand break yields, which in turn showed a strong sequence dependency. 8BrA, on the other hand, showed no sensitization to the applied VUV radiation, since almost no increase in strand break yield was observed in comparison to non-modified DNA sequences.
In order to be able to identify the mechanisms of radiation damage by photons, the IEs of certain DNA sequences were further explored using photoionization tandem mass spectrometry. By varying the DNA sequence, both the IEs depending on the type of nucleobase as well as on the DNA strand length could be identified and correlated to the SSB cross sections. The influence of the IE on the photoinduced reaction in the brominated DNA sequences could be excluded.
The aim of this doctoral thesis was to establish a technique for the analysis of biomolecules with infrared matrix-assisted laser desorption/ionization (IR-MALDI) ion mobility (IM) spectrometry. The main components of the work were the characterization of the IR-MALDI process, the development and characterization of different ion mobility spectrometers, the use of IR-MALDI-IM spectrometry as a robust, standalone spectrometer, and the development of a collision cross-section estimation approach for peptides based on molecular dynamics and thermodynamic reweighting.
First, the IR-MALDI source was studied with atmospheric pressure ion mobility spectrometry and shadowgraphy. It consisted of a metal capillary, at the tip of which a self-renewing droplet of analyte solution was met by an IR laser beam. A relationship between peak shape, ion desolvation, diffusion and extraction pulse delay time (pulse delay) was established. First order desolvation kinetics were observed and related to peak broadening by diffusion, both influenced by the pulse delay. The transport mechanisms in IR-MALDI were then studied by relating different laser impact positions on the droplet surface to the corresponding ion mobility spectra. Two different transport mechanisms were determined: phase explosion due to the laser pulse and electrical transport due to delayed ion extraction. The velocity of the ions stemming from the phase explosion was then measured by ion mobility and shadowgraphy at different time scales and distances from the source capillary, showing an initially very high but rapidly decaying velocity. Finally, the anatomy of the dispersion plume was observed in detail with shadowgraphy and general conclusions over the process were drawn.
Understanding the IR-MALDI process enabled the optimization of the different IM spectrometers at atmospheric and reduced pressure (AP and RP, respectively). At reduced pressure, both an AP and an RP IR-MALDI source were used. The influence of the pulsed ion extraction parameters (pulse delay, width and amplitude) on peak shape, resolution and area was systematically studied in both AP and RP IM spectrometers and discussed in the context of the IR-MALDI process. Under RP conditions, the influence of the closing field and of the pressure was also examined for both AP and RP sources. For the AP ionization RP IM spectrometer, the influence of the inlet field (IF) in the source region was also examined. All of these studies led to the determination of the optimal analytical parameters as well as to a better understanding of the initial ion cloud anatomy.
The analytical performance of the spectrometer was then studied. Limits of detection (LOD) and linear ranges were determined under static and pulsed ion injection conditions and interpreted in the context of the IR-MALDI mechanism. Applications in the separation of simple mixtures were also illustrated, demonstrating good isomer separation capabilities and the advantages of singly charged peaks. The possibility to couple high performance liquid chromatography (HPLC) to IR-MALDI-IM spectrometry was also demonstrated. Finally, the reduced pressure spectrometer was used to study the effect of high reduced field strength on the mobility of polyatomic ions in polyatomic gases.
The last focus point was on the study of peptide ions. A dataset obtained with electrospray IM spectrometry was characterized and used for the calibration of a collision cross-section (CCS) determination method based on molecular dynamics (MD) simulations at high temperature. Instead of producing candidate structures which are evaluated one by one, this semi-automated method uses the simulation as a whole to determine a single average collision cross-section value by reweighting the CCS of a few representative structures. The method was compared to the intrinsic size parameter (ISP) method and to experimental results. Additional MD data obtained from the simulations was also used to further analyze the peptides and understand the experimental results, an advantage with regard to the ISP method. Finally, the CCS of peptide ions analyzed by IR-MALDI were also evaluated with both ISP and MD methods and the results compared to experiment, resulting in a first validation of the MD method. Thus, this thesis brings together the soft ionization technique that is IR-MALDI, which produces mostly singly charged peaks, with ion mobility spectrometry, which can distinguish between isomers, and a collision cross-section determination method which also provides structural information on the analyte at hand.
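The reweighting step can be illustrated as a Boltzmann-weighted average of the CCS values of a few representative structures. All numbers below are hypothetical, and this is only the core averaging idea; the thesis calibrates its method against experimental electrospray IM data.

```python
# Sketch: Boltzmann reweighting of per-structure collision cross
# sections (CCS) into a single average at temperature T.
# CCS and energy values are made-up illustrative numbers.
import math

def reweighted_ccs(ccs_values, energies_kj, T=300.0):
    """Boltzmann-weighted average CCS; energies in kJ/mol."""
    R = 8.314e-3  # gas constant in kJ/(mol*K)
    e0 = min(energies_kj)  # shift for numerical stability
    w = [math.exp(-(e - e0) / (R * T)) for e in energies_kj]
    return sum(c * wi for c, wi in zip(ccs_values, w)) / sum(w)

ccs = [245.0, 252.0, 260.0]   # A^2, hypothetical representative structures
energy = [0.0, 2.0, 8.0]      # kJ/mol, relative conformer energies
avg = reweighted_ccs(ccs, energy)
```

Low-energy conformers dominate the weighted average, so `avg` lies below the unweighted mean of the three CCS values.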
Feeling Half-Half?
(2018)
Growing up in multicultural environments, Turkish-heritage individuals in Europe face specific challenges in combining their multiple cultural identities to form a coherent sense of self. Drawing from social identity complexity, this study explores four modes of combining cultural identities and their variation in relational contexts. Problem-centered interviews with Turkish-heritage young adults in Austria revealed the preference for complex, supranational labels, such as multicultural. Furthermore, most participants described varying modes of combining cultural identities over time and across relational contexts. Social exclusion experiences throughout adolescence related to perceived conflict of cultural identities, whereas multicultural peer groups supported perceived compatibility of cultural identities. Findings emphasize the need for complex, multidimensional approaches to study ethnic minorities’ combination of cultural identities.
ShapeRotator
(2018)
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
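The core geometric operation, removing random translation and rotation from 3D landmark coordinates so that only shape differences remain, can be sketched with the Kabsch algorithm. This is a Python illustration of the general technique, not ShapeRotator's code (ShapeRotator is an R tool):

```python
# Sketch: rigid alignment of 3-D landmark configurations (Kabsch
# algorithm) to remove random translation and rotation.
import numpy as np

def rigid_align(A, B):
    """Rotate and translate landmark set B onto A; returns aligned B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)   # remove translation
    H = (B - cb).T @ (A - ca)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal proper rotation
    return (B - cb) @ R.T + ca

# Four landmarks; B is a rotated + translated copy of A, so alignment
# should recover A exactly (up to floating-point error).
A = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = np.pi / 4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
B = A @ Rz.T + np.array([2.0, -1.0, 3.0])
aligned = rigid_align(A, B)
```

After such an alignment, residual coordinate differences between specimens reflect shape (and, if not rescaled, size) rather than arbitrary placement in space.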
The phenomenon of male-to-male sexual assault undoubtedly occurs, both in domestic and conflict contexts. There is a small but growing discourse supporting the analysis of this phenomenon; however, it remains significantly limited, and its growth is disproportionate to the concerns it warrants. International law, NGO, and State actors are largely responsible for this inhibition, predominantly attributable to their intent to preserve the feminist and patriarchal values on which their institutions are founded. The strength with which the feminist discourse has embedded itself into the agendas of relevant actors is obstructing attempts at unbiased analysis of gender-based violence and the development of a discourse dedicated to understanding male sexual assault. There appears to be a prevailing sector-wide perception that females are the only victims of sexual violence and that creating space for a discussion of male sexual assault will detract worth from the feminist discourse on female sexual assault. This paper discusses the means by which the sector's ignorance towards male sexual assault manifests and the harmful implications of ignoring this phenomenon. The author uses contextual analyses from development, international law, and cultural examples.
Recent research suggests that the P3b may be closely related to the activation of the locus coeruleus-norepinephrine (LC-NE) system. To further study this potential association, we applied a novel technique, non-invasive transcutaneous vagus nerve stimulation (tVNS), which is speculated to increase noradrenaline levels. Using a within-subject cross-over design, 20 healthy participants received continuous tVNS and sham stimulation on two consecutive days (stimulation counterbalanced across participants) while performing a visual oddball task. During stimulation, oval non-targets (standard), normal-head (easy) and rotated-head (difficult) targets, as well as novel stimuli (scenes), were presented. As an indirect marker of noradrenergic activation, we also collected salivary alpha-amylase (sAA) before and after stimulation. Results showed larger P3b amplitudes for target, relative to standard, stimuli, irrespective of stimulation condition. Exploratory post hoc analyses, however, revealed that, in comparison to standard stimuli, easy (but not difficult) targets produced larger P3b (but not P3a) amplitudes during active tVNS compared to sham stimulation. For sAA levels, although the main analyses did not show differential effects of stimulation, direct testing revealed that tVNS (but not sham stimulation) increased sAA levels after stimulation. Additionally, larger differences between tVNS and sham stimulation in P3b magnitudes for easy targets were associated with a larger increase in sAA levels after tVNS, but not after sham stimulation. Despite this preliminary evidence for a modulatory influence of tVNS on the P3b, which may be partly mediated by activation of the noradrenergic system, additional research in this field is clearly warranted. Future studies need to clarify whether tVNS also facilitates other processes, such as learning and memory, and whether tVNS can be used as a therapeutic tool.
Since 2015, the European Union has struggled to deal with the influx of refugees coming into its territories. The number of institutions involved in designing a competent response approach, combined with the unilateral and uncoordinated state reactions, has left it unclear where to look when searching for answers and new alternatives. Can the United Nations High Commissioner for Refugees (UNHCR) take a leading role in solving this and future crises? After a brief recapitulation of the crisis, an analysis of UNHCR's statute, relationship to international law, and doctrine will put this question to the test while exploring options that are not only available but also feasible in a system where politics trump both legality and morality. If UNHCR is to play an active role in future refugee policies and become the lead agency it once was, a new, daring, and innovative approach has to emerge in order to readapt to the power relations that prevail in the twenty-first century.
Reversed predator
(2018)
Ecoevolutionary feedbacks in predator–prey systems have been shown to qualitatively alter predator–prey dynamics. As a striking example, defense–offense coevolution can reverse predator–prey cycles, so predator peaks precede prey peaks rather than vice versa. However, this has only rarely been shown in either model studies or empirical systems. Here, we investigate whether this rarity is a fundamental feature of reversed cycles by exploring under which conditions they should be found. For this, we first identify potential conditions and parameter ranges most likely to result in reversed cycles by developing a new measure, the effective prey biomass, which combines prey biomass with prey and predator traits, and represents the prey biomass as perceived by the predator. We show that predator dynamics always follow the dynamics of the effective prey biomass with a classic ¼‐phase lag. From this key insight, it follows that in reversed cycles (i.e., ¾‐lag), the dynamics of the actual and the effective prey biomass must be in antiphase with each other, that is, the effective prey biomass must be highest when actual prey biomass is lowest, and vice versa. Based on this, we predict that reversed cycles should be found mainly when oscillations in actual prey biomass are small and thus have limited impact on the dynamics of the effective prey biomass, which are mainly driven by trait changes. We then confirm this prediction using numerical simulations of a coevolutionary predator–prey system, varying the amplitude of the oscillations in prey biomass: Reversed cycles are consistently associated with regions of parameter space leading to small‐amplitude prey oscillations, offering a specific and highly testable prediction for conditions under which reversed cycles should occur in natural systems.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic Operating the Cloud. Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. Co-located with the event is the HPI’s Future SOC Lab day, which offers an additional attractive and conducive environment for scientific and industry related discussions. Operating the Cloud aims to be a platform for productive interactions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
In these proceedings, the results of the fifth HPI cloud symposium Operating the Cloud 2017 are published. We thank the authors for exciting presentations and insights into their current work and research. Moreover, we look forward to more interesting submissions for the upcoming symposium in 2018.
Both social perception and temperament in young infants have been related to social functioning later in life. Previous functional Near-Infrared Spectroscopy (fNIRS) data (Lloyd-Fox et al., 2009) showed larger blood-oxygenation changes for social compared to non-social stimuli in the posterior temporal cortex of five-month-old infants. We sought to replicate and extend these findings by using fNIRS to study the neural basis of social perception in relation to infant temperament (Negative Affect) in 37 five-to-eight-month-old infants.
Infants watched short videos displaying either hand and facial movements of female actors (social dynamic condition) or moving toys and machinery (non-social dynamic condition), while fNIRS data were collected over temporal brain regions. Negative Affect was measured using the Infant Behavior Questionnaire.
Results showed significantly larger blood-oxygenation changes in the right posterior-temporal region in the social compared to the non-social condition. Furthermore, this differential activation was smaller in infants showing higher Negative Affect.
Our results replicate those of Lloyd-Fox et al. and confirmed that five-to-eight-month-old infants show cortical specialization for social perception. Furthermore, the decreased cortical sensitivity to social stimuli in infants showing high Negative Affect may be an early biomarker for later difficulties in social interaction.
Solar activity and its consequences affect space weather and Earth's climate. Solar activity exhibits cyclic behaviour with a period of about 11 years. The properties of the solar cycle are governed by the dynamo operating in the interior of the Sun, and they differ from cycle to cycle. Extending the knowledge of solar cycle properties into the past is essential for understanding the solar dynamo and forecasting space weather. It can be acquired through the analysis of historical sunspot drawings. Sunspots are dark areas on the solar surface associated with strong magnetic fields. They are the oldest and longest-observed features of solar activity.
One of the longest available records of sunspot drawings is the collection made by Samuel Heinrich Schwabe during 1825–1867. The sunspot sizes measured from the digitized Schwabe drawings are not to scale and need to be converted into physical sunspot areas. We employed a statistical approach, assuming that the area distribution of sunspots was the same in the 19th century as in the 20th century. Umbral areas for about 130,000 sunspots observed by Schwabe were obtained. The annually averaged sunspot areas correlate reasonably well with the sunspot number. Tilt angles and polarity separations of sunspot groups were calculated assuming the groups to be bipolar; there is, of course, no polarity information in the observations. We derived an average tilt angle by attempting to exclude unipolar groups, using a minimum separation of the two surmised polarities and an outlier-rejection method that follows the evolution of each group and detects the moment when it turns unipolar as it decays. As a result, the tilt angles, although displaying considerable natural scatter, are on average 5.85° ± 0.25°, with the leading polarity located closer to the equator, in good agreement with tilt angles obtained from 20th-century data sets. Sources of uncertainty in the tilt angle determination are discussed and need to be addressed whenever different data sets are combined.
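The tilt angle and polarity separation of a bipolar group can be computed from the heliographic positions of its two polarities. The sketch below is illustrative only (the sign convention and the example coordinates are assumptions, not the thesis' exact procedure): the leading polarity is taken to be westward, and the tilt is positive when the following polarity lies poleward of the leading one, the sense described by Joy's law in the northern hemisphere.

```python
import math

def tilt_and_separation(lat_lead, lon_lead, lat_follow, lon_follow):
    """Tilt angle (deg) and polarity separation (deg) of a bipolar group.

    Hypothetical convention: leading polarity westward (larger heliographic
    longitude); positive tilt when the follower is poleward of the leader.
    Longitude differences are scaled by cos(latitude) to approximate
    great-circle distance on the solar surface.
    """
    mean_lat = math.radians((lat_lead + lat_follow) / 2.0)
    dlat = lat_follow - lat_lead
    dlon = (lon_lead - lon_follow) * math.cos(mean_lat)
    tilt = math.degrees(math.atan2(dlat, dlon))
    separation = math.hypot(dlat, dlon)
    return tilt, separation

# Hypothetical group: leading spot at (14.0°, 310.0°), follower at (15.2°, 304.0°)
tilt, sep = tilt_and_separation(14.0, 310.0, 15.2, 304.0)
print(f"tilt = {tilt:.1f} deg, separation = {sep:.1f} deg")
```

Averaging such per-group tilt angles, after rejecting groups suspected to be unipolar, yields the cycle-mean values quoted in the text.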
Digital images of observations printed in the books Rosa Ursina and Prodromus pro sole mobili by Christoph Scheiner, as well as the drawings from Scheiner's letters to Marcus Welser, are analyzed to obtain information on the positions and sizes of sunspots that appeared before the Maunder minimum. In most cases, the given orientation of the ecliptic is used to set up the heliographic coordinate system for the drawings. Positions and sizes are measured manually by displaying the drawings on a computer screen. Very early drawings have no indication of the solar orientation. A rotational matching using common spots on adjacent days is used in some cases, while in other cases the assumption that the images were aligned with a zenith–horizon coordinate system appeared to be the most likely. In total, 8167 sunspots were measured. A distribution of sunspot latitudes versus time (butterfly diagram) is obtained for Scheiner's observations. The observations of 1611 are very inaccurate, but the drawings of 1612 have at least an indication of the solar orientation, and the remaining spot positions from 1618–1631 have good to very good accuracy. We also computed 697 tilt angles of apparent bipolar sunspot groups observed in the period 1618–1631. We find that their average tilt angle of nearly 4° does not differ significantly from 20th-century values.
The properties of the solar cycle seem to be related to the tilt angles of sunspot groups, and the tilt angle is an important parameter in surface flux transport models. The tilt angles of bipolar sunspot groups from various historical sets of solar drawings, including those by Schwabe and Scheiner, are analyzed. Data by Scheiner, Hevelius, Staudacher, Zucconi, Schwabe, and Spörer deliver a series of average tilt angles spanning a period of 270 years, in addition to previously found values for 20th-century data obtained by other authors. We find that the average tilt angles before the Maunder minimum were not significantly different from modern values. However, the average tilt angles of the period roughly 50 years after the Maunder minimum, namely for cycles 0 and 1, were much lower and near zero. The typical tilt angles before the Maunder minimum suggest that abnormally low tilt angles were not responsible for driving the solar cycle into a grand minimum.
With the Schwabe (1826–1867) and Spörer (1866–1880) sunspot data, the butterfly diagram of sunspot groups extends back to 1826. A recently developed method, based on the long gaps in sunspot group occurrences at different latitudinal bands, is used to separate the wings of the butterfly diagram. The cycle-to-cycle variation in the start (F), end (L), and highest (H) latitudes of the wings with respect to the strength of the wings is analyzed. On the whole, the wings of stronger cycles tend to start at higher latitudes and have a greater extent. The time spans of the wings and the time difference between the wings in the northern hemisphere display a quasi-periodicity of 5–6 cycles. The average wing overlap is zero in the southern hemisphere, whereas it is 2–3 months in the north. A marginally significant oscillation of about 10 solar cycles is found in the asymmetry of the L latitudes. This latest, extended database of butterfly wings provides new observational constraints on the spatio-temporal distribution of sunspot occurrences over the solar cycle for solar dynamo models.
Hot localised charge carriers on the Si(111)-7×7 surface are modelled by small charged clusters. Such resonances induce non-local desorption of chlorobenzene, i.e. more than 10 nm away from the injection site, in scanning tunnelling microscope experiments. We recently used such a cluster model to characterise resonance localisation and vibrational activation for positive and negative resonances. In this work, we investigate to what extent the model depends on details of the cluster used or on the quantum chemistry methods, and try to identify the smallest possible cluster suitable for a description of the neutral surface and the ion resonances. Furthermore, a detailed analysis for different chemisorption orientations is performed. While some properties, such as estimates of the resonance energy or absolute values of atomic charges, show such a dependency, the main findings are very robust with respect to changes in the model and/or the chemisorption geometry.
Previous research informs us about facilitators of employees' promotive voice. Yet little is known about what determines whether a specific idea for constructive change brought up by an employee will be approved or rejected by a supervisor. Drawing on interactionist theories of motivation and personality, we propose that a supervisor will be least likely to support an idea when it threatens the supervisor's power motive and when it is perceived to serve the employee's own striving for power. The prosocial versus egoistic intentions attributed to the idea presenter are proposed to mediate the latter effect. We conducted three scenario-based studies in which supervisors evaluated fictitious ideas voiced by employees that, if implemented, would have power-related consequences for them as supervisors. Results show that the higher a supervisor's explicit power motive, the less likely they were to support a power-threatening idea (Study 1, N = 60). Moreover, idea support was less likely when the idea was proposed by an employee who was described as high (rather than low) in power motivation (Study 2, N = 79); attributed prosocial intentions mediated this effect. Study 3 (N = 260) replicates these results.
Climate change, along with socio-economic development, will increase the economic impacts of floods. While the factors that influence flood risk to private property have been extensively studied, the risk that natural disasters pose to public infrastructure, and the resulting implications for public-sector budgets, have received less attention. We address this gap by developing a two-stage model framework, which first assesses the flood risk to public infrastructure in Austria. Combining exposure and vulnerability information at the building level with inundation maps, we project an increase in riverine flood damage, which progressively burdens public budgets. Second, the risk estimates are integrated into an insurance model, which analyzes three different compensation arrangements in terms of the monetary burden they place on future governments' budgets and the respective volatility of payments. Formalized insurance compensation arrangements offer incentives for risk reduction measures, which lower the burden on public budgets by reducing the vulnerability of buildings exposed to flooding. They also significantly reduce the volatility of payments and thereby improve the predictability of flood damage expenditures. These features indicate that more formalized insurance arrangements are an improvement over the purely public compensation arrangement currently in place in Austria.
The movement of organisms has formed our planet like few other processes. Movements shape populations, communities, and entire ecosystems, and guarantee fundamental ecosystem functions and services, like seed dispersal and pollination. Global, regional, and local anthropogenic impacts influence animal movements across ecosystems all around the world. In particular, land-use modifications like habitat loss and fragmentation disrupt movements between habitats, with profound consequences ranging from increased disease transmission to reduced species richness and abundance. However, neither the influence of anthropogenic change on animal movement processes nor the resulting effects on ecosystems are well understood. We therefore need a coherent understanding of organismal movement processes and their underlying mechanisms to predict and prevent altered animal movements and their consequences for ecosystem functions.
In this thesis I aim to understand the influence of anthropogenic land-use change on animal movement processes and their underlying mechanisms. In particular, I am interested in the synergistic influence of large-scale landscape structure and fine-scale habitat features on basic-level movement behaviours (e.g. the daily amount of time spent running, foraging, and resting) and the higher-level movements that emerge from them (home range formation). Based on my findings, I identify the likely consequences of altered animal movements that lead to the loss of species richness and abundance.
The study system of my thesis is the hare in agricultural landscapes. European brown hares (Lepus europaeus) are well suited to the study of animal movements in agricultural landscapes, as hares are hemerophiles and prefer open habitats. They have historically thrived in agricultural landscapes, but their numbers are in decline. Agricultural areas are undergoing strong land-use changes due to increasing food demand and fast-developing agricultural technologies; they are already the largest land-use class, covering 38% of the world's terrestrial surface. To account for the relevance of landscape structure to animal movement behaviour, I selected two differently structured agricultural landscapes: a simple landscape in Northern Germany with large fields and few landscape elements (e.g. hedges and tree stands), and a complex landscape in Southern Germany with small fields and many landscape elements.
I fitted hares with GPS devices (hourly fixes) containing high-resolution internal accelerometers (4-min samples), providing an almost continuous record of the animals' behaviours via acceleration analysis. I used the spatial and behavioural information in combination with remote sensing data (the normalized difference vegetation index, or NDVI, a proxy for resource availability), generating an almost complete picture of what each animal was doing, when, why, and where. Apart from landscape structure (represented by the two differently structured study areas), I specifically tested whether the following fine-scale habitat features influence animal movements: resources, agricultural management events, habitat diversity, and habitat structure.
My results show that, irrespective of the movement process or mechanism and the type of fine-scale habitat feature, landscape structure was the overarching variable influencing hare movement behaviour. High resource variability forces hares to enlarge their home ranges, but only in the simple landscape, not in the complex one. Agricultural management events result in home range shifts in both landscapes, but force hares to increase their home ranges only in the simple landscape. The preference for habitat patches with low vegetation, and the avoidance of high vegetation, were also stronger in the simple landscape. High and dense crop fields temporarily restricted hare movements to very local and small habitat patch remnants. Such insuperable barriers can separate habitat patches that were previously connected by mobile links; hence, the transport of nutrients and genetic material is temporarily disrupted. This mechanism also operates on a global scale, as human-induced changes, from habitat loss and fragmentation to expanding monocultures, cause a reduction in animal movements worldwide.
The mechanisms behind these findings show that higher-level movements, like increased home ranges, emerge from underlying basic-level movements, like the behavioural modes. Increasing landscape simplicity first acts on the behavioural modes, i.e. hares run and forage more but have less time to rest. Hence, the emergence of increased home range sizes in simple landscapes is based on an increased proportion of time spent running and foraging, largely due to longer travelling times between distant habitats and scarce resource items in the landscape. This relationship was especially strong during the reproductive phase, demonstrating the importance of high-quality habitat for reproduction and the need, in low-quality areas, to prioritize self-maintenance. These changes in movement behaviour may trigger a cascade of processes that starts with more time allocated to running and foraging, results in increased energy expenditure, and may lead to a decline in individual fitness. A decrease in individual fitness and reproductive output will ultimately affect population viability, leading to local extinctions.
In conclusion, I show that landscape structure has one of the most important effects on hare movement behaviour. Synergistic effects of landscape structure and fine-scale habitat features first affect and modify basic-level movement behaviours; these effects can scale up to altered higher-level movements and may even lead to declines in species richness and abundance and the disruption of ecosystem functions. Understanding the connection between movement mechanisms and processes can help to predict and prevent anthropogenically induced changes in movement behaviour. Given the paramount importance of landscape structure, I strongly recommend decreasing the size of agricultural fields and increasing crop diversity. On the small scale, conservation policies should ensure the year-round provision of areas with low vegetation height and high-quality forage, for example by creating wildflower strips and additional (semi-)natural habitat patches. This will not only help to increase the populations of European brown hares and other farmland species, but also protect the continuity of mobile links and their intrinsic value for sustaining important ecosystem functions and services.
In its Burmych and Others v. Ukraine judgment of October 2017, the European Court of Human Rights dismissed more than 12,000 applications on the grounds that they were not only repetitive in nature, but also mutatis mutandis identical to applications covered by a previous pilot judgment rendered against Ukraine. This raises fundamental issues as to the role of the Court within the human rights protection system established by the ECHR, as well as issues concerning the interrelationship between the Court and the Committee of Ministers.
Flooding is an imminent natural hazard threatening most river deltas, including the Mekong Delta. Appropriate flood management is thus required for the sustainable development of these often densely populated regions. Recently, traditional event-based hazard control has shifted towards a risk management approach in many regions, driven by intensive research leading to new legal regulation of flood management. However, a large-scale flood risk assessment does not exist for the Mekong Delta. In particular, flood risk to paddy rice cultivation, the most important economic activity in the delta, has not yet been assessed. The present study was therefore developed to provide the very first insight into delta-scale flood damages and risks to rice cultivation. The flood hazard was quantified by probabilistic flood hazard maps of the whole delta, derived using bivariate extreme value statistics, synthetic flood hydrographs, and a large-scale hydraulic model. The flood risk to paddy rice was then quantified considering cropping calendars, rice phenology, and harvest times, based on a time series of the enhanced vegetation index (EVI) derived from MODIS satellite data and a published rice flood damage function. The proposed concept provided flood risk maps to paddy rice for the Mekong Delta in terms of expected annual damage. Owing to its generic approach, the concept can be used as a blueprint for regions facing similar problems. Furthermore, the changes in flood risk to paddy rice caused by the land-use changes currently under discussion in the Mekong Delta were estimated. Two land-use scenarios, either intensifying or reducing rice cropping, were considered, and the changes in risk were presented in spatially explicit flood risk maps. The basic risk maps could serve as guidance for the authorities in developing spatially explicit flood management and mitigation plans for the delta. The land-use change risk maps could further be used for adaptive risk management plans and as a basis for a cost-benefit analysis of the discussed land-use change scenarios. Additionally, the damage and risk maps may support the recently initiated agricultural insurance programme in Vietnam.
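Expected annual damage, the risk metric used above, is the area under the risk curve: the integral of damage over annual exceedance probability p = 1/T. A minimal sketch with purely illustrative numbers (not data from the study):

```python
import numpy as np

# Hypothetical damage estimates for floods of given return periods
# (illustrative values only, not results from the study).
return_periods = np.array([2.0, 10.0, 50.0, 100.0, 500.0])  # years
damages = np.array([0.0, 12.0, 45.0, 70.0, 160.0])          # monetary units

# Expected annual damage (EAD): integrate damage over annual exceedance
# probability p = 1/T, approximated with the trapezoidal rule.
p = 1.0 / return_periods[::-1]   # ascending exceedance probabilities
d = damages[::-1]                # damages reordered to match
ead = float(np.sum(np.diff(p) * (d[:-1] + d[1:]) / 2.0))
print(f"EAD = {ead:.3f} per year")  # 6.175 for these numbers
```

The same integration, applied pixel by pixel over the damage maps of the synthetic flood scenarios, yields a spatially explicit EAD map.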
In two-dimensional reaction-diffusion systems, local curvature perturbations on traveling waves are typically damped out and vanish. However, if the inhibitor diffuses much faster than the activator, transversal instabilities can arise, leading from flat to folded, spatio-temporally modulated waves and to spreading spiral turbulence. Here, we propose a scheme to induce or inhibit these instabilities via a spatio-temporal feedback loop. In a piecewise-linear version of the FitzHugh-Nagumo model, transversal instabilities and spiral turbulence in the uncontrolled system are shown to be suppressed in the presence of control, thereby stabilizing plane wave propagation. Conversely, in numerical simulations with the modified Oregonator model for the photosensitive Belousov-Zhabotinsky reaction, which does not exhibit transversal instabilities on its own, we demonstrate the feasibility of inducing transversal instabilities and study the emerging wave patterns in a well-controlled manner.
Hazards and accessibility
(2018)
The assessment of natural hazards and risk has traditionally been built upon the estimation of threat maps, which depict the potential danger posed by a particular hazard throughout a given area. But when a hazard event strikes, infrastructure is a significant factor that can determine whether the situation becomes a disaster. The vulnerability of the population in a region depends not only on the area's local threat, but also on the geographical accessibility of the area. This makes threat maps by themselves insufficient for supporting real-time decision-making, especially for tasks that involve the use of the road network, such as the management of relief operations, aid distribution, or the planning of evacuation routes. To overcome this problem, this paper proposes a multidisciplinary approach divided into two parts. First, we fuse satellite-based threat data with open infrastructure data from OpenStreetMap, introducing a threat-based routing service. Second, we visualize this data through cartographic generalization and schematization, which emphasizes critical areas along roads in a simple way and allows users to visually evaluate the impact natural hazards may have on infrastructure. We develop and illustrate this methodology with a case study of landslide threat for an area in Colombia.
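The idea of a threat-based routing service can be sketched as a shortest-path search in which each road edge's cost is its length inflated by a threat level. The weighting scheme, the `alpha` parameter, and the toy network below are hypothetical, not the paper's actual formulation:

```python
import heapq

def threat_aware_route(graph, source, target, alpha=5.0):
    """Dijkstra shortest path with threat-inflated edge costs.

    graph: {node: [(neighbour, length_km, threat), ...]}, threat in [0, 1].
    Edge cost = length * (1 + alpha * threat), a hypothetical weighting.
    """
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, length, threat in graph.get(u, []):
            nd = d + length * (1.0 + alpha * threat)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct path from predecessor map
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[target]

# Toy network: the direct road via B is short but crosses a landslide zone.
g = {
    "A": [("B", 2.0, 0.9), ("C", 3.0, 0.0)],
    "B": [("D", 2.0, 0.9)],
    "C": [("D", 3.0, 0.0)],
}
path, cost = threat_aware_route(g, "A", "D")
print(path)  # ['A', 'C', 'D'] -- the longer but safer detour
```

In practice the graph would come from OpenStreetMap road data and the per-edge threat values from the satellite-based threat maps.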
Objective: Several different measures of heart rate variability, and particularly of respiratory sinus arrhythmia, are widely used in research and clinical applications. For many purposes it is important to know which features of heart rate variability are directly related to respiration and which are caused by other aspects of cardiac dynamics. Approach: Inspired by ideas from the theory of coupled oscillators, we use simultaneous measurements of respiratory and cardiac activity to perform a nonlinear disentanglement of the heart rate variability into the respiratory-related component and the rest. Main results: The theoretical consideration is illustrated by the analysis of 25 data sets from healthy subjects. In all cases we show how the disentanglement is manifested in the different measures of heart rate variability. Significance: The suggested technique can be exploited as a universal preprocessing tool, both for the analysis of respiratory influence on the heart rate and in cases when effects of other factors on the heart rate variability are in focus.
Signals stored in sediment
(2018)
Tectonic and climatic boundary conditions determine the amount and the characteristics (size distribution and composition) of sediment that is generated and exported from mountain regions. On millennial timescales, rivers adjust their morphology such that the incoming sediment (Qs,in) can be transported downstream by the available water discharge (Qw). Changes in climatic and tectonic boundary conditions thus trigger an adjustment of the downstream river morphology. Understanding the sensitivity of river morphology to perturbations in boundary conditions is therefore of major importance, for example, for flood assessments, infrastructure and habitats. Although we have a general understanding of how rivers evolve over longer timescales, the prediction of channel response to changes in boundary conditions on a more local scale and over shorter timescales remains a major challenge. To better predict morphological channel evolution, we need to test (i) how channels respond to perturbations in boundary conditions and (ii) how signals reflecting the persisting conditions are preserved in sediment characteristics. This information can then be applied to reconstruct how local river systems have evolved over time.
In this thesis, I address those questions by combining targeted field data collection in the Quebrada del Toro (Southern Central Andes of NW Argentina) with cosmogenic nuclide analysis and remote sensing data. In particular, I (1) investigate how information on hillslope processes is preserved in the 10Be concentration (geochemical composition) of fluvial sediments and how those signals are altered during downstream transport. I complement the field-based approach with physical experiments in the laboratory, in which I (2) explore how changes in sediment supply (Qs,in) or water discharge (Qw) generate distinct signals in the amount of sediment discharge at the basin outlet (Qs,out). With the same set of experiments, I (3) study the adjustments of alluvial channel morphology to changes in Qw and Qs,in, with a particular focus in fill-terrace formation. I transfer the findings from the experiments to the field to (4) reconstruct the evolution of a several-hundred meter thick fluvial fill-terrace sequence in the Quebrada del Toro. I create a detailed terrace chronology and perform reconstructions of paleo-Qs and Qw from the terrace deposits. In the following paragraphs, I summarize my findings on each of these four topics.
First, I sampled detrital sediment at the outlets of tributaries and along the main stem in the Quebrada del Toro, analyzed the sediments' 10Be concentration ([10Be]), and compared the data to a detailed hillslope-process inventory. The often observed non-linear increase in catchment-mean denudation rate (inferred from [10Be] in fluvial sediment) with catchment-median slope, which has commonly been explained by an adjustment in landslide frequency, coincided with a shift in the main type of hillslope processes. In addition, the [10Be] in fluvial sediments varied with grain size. I defined the normalized sand-gravel-index (NSGI) as the 10Be-concentration difference between the sand and gravel fractions divided by their summed concentrations. The NSGI increased with median catchment slope and coincided with a shift in the prevailing hillslope processes active in the catchments, making the NSGI a potential proxy for reconstructing the evolution of hillslope processes over time from sedimentary deposits. However, the NSGI recorded hillslope processes less well in regions of reduced hillslope-channel connectivity and, in addition, can be altered during downstream transport by lateral sediment input, size-selective sediment transport, and abrasion.
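The NSGI definition above is a simple normalized difference and can be written out directly; the concentration values in the example are hypothetical, chosen only to illustrate the sign convention:

```python
def nsgi(c_sand, c_gravel):
    """Normalized sand-gravel-index: the 10Be-concentration difference
    between the sand and gravel fractions divided by their summed
    concentrations, as defined in the text. Result lies in [-1, 1]."""
    return (c_sand - c_gravel) / (c_sand + c_gravel)

# Illustrative (hypothetical) concentrations in atoms per gram of quartz:
print(nsgi(2.0e5, 1.0e5))  # sand fraction more concentrated -> positive index
print(nsgi(1.0e5, 1.0e5))  # equal concentrations -> index of zero
```

Being a ratio of a difference to a sum, the index is dimensionless and bounded, which makes values comparable across catchments with very different absolute nuclide concentrations.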
Second, my physical experiments revealed that sediment discharge at the basin outlet (Qs,out) varied in response to changes in Qs,in or Qw. While changes in Qw caused a distinct signal in Qs,out during the transient adjustment phase of the channel to new boundary conditions, signals related to changes in Qs,in were buffered during the transient phase and likely only become apparent once the channel is adjusted to the new conditions. The temporal buffering is related to the negative feedback between Qs,in and channel-slope adjustments. In addition, I inferred from this result that signals extracted from the geochemical composition of sediments (e.g., [10Be]) are more likely to represent modern-day conditions during times of aggradation, whereas the signal will be temporally buffered due to mixing with older, remobilized sediment during times of channel incision.
Third, the same set of experiments revealed that river incision, channel narrowing and terrace cutting were initiated by an increase in Qw, a decrease in Qs,in, or a drop in base level. The lag time between the external perturbation and terrace cutting determined (1) how well terrace surfaces preserved the channel profile prior to the perturbation and (2) the degree of reworking of terrace-surface material. Short lag times and well-preserved profiles occurred in cases with a rapid onset of incision. Lag times were synchronous along the entire channel after upstream perturbations (Qw, Qs,in), whereas base-level fall triggered an upstream-migrating knickzone, such that lag times increased with distance upstream. Terraces formed after upstream perturbations (Qw, Qs,in) were always steeper than the active channel under the new equilibrium conditions, whereas in the base-level fall experiment the slopes of the terrace surfaces and the modern channel were similar. Hence, slope comparisons between terrace surfaces and the modern channel can give insights into the mechanism of terrace formation.
Fourth, my detailed terrace-formation chronology indicated that cut-and-fill episodes in the Quebrada del Toro followed a ~100-kyr cyclicity, with the oldest terraces ~500 kyr old. The terraces formed due to variability in upstream Qw and Qs. Reconstructions of paleo-Qs over the last 500 kyr, which were restricted to times of sediment deposition, indicated only minor (up to four-fold) variations in paleo-denudation rates. Reconstructions of paleo-Qw were limited to the times around the onset of river incision and revealed discharge enhanced by 10 to 85% compared to today. Such increases in Qw are in agreement with other quantitative paleo-hydrological reconstructions from the Eastern Andes, but have the advantage of extending further back in time.
Students of computer science enter university education with very different competencies, experience and knowledge. 145 datasets of first-year computer science students, collected via learning management systems and combining exam outcomes with learning-dispositions data (e.g., student dispositions, previous experiences and attitudes measured through self-reported surveys), have been exploited to identify indicators that predict academic success and hence to enable effective interventions for an extremely heterogeneous group of students.
PaRDeS, the journal of the German Association for Jewish Studies, aims at exploring the fruitful and multifarious cultures of Judaism as well as their relations to their environment within diverse areas of research. In addition, the journal promotes Jewish Studies within academic discourse and reflects on its historic and social responsibilities.
Due to the lack of acceptance of Wissenschaft des Judentums in academia, modern Jewish scholarship in the nineteenth century organized itself along networks of institutions such as rabbinical seminaries, contacts with related disciplines like Oriental Studies, and personal relationships. This last pathway of communication was essential for the cohesion of modern Jewish scholarship. Therefore, my essay portrays the correspondence between David Kaufmann and Leopold Zunz as an example of this channel of communication. By analyzing the exchange of letters and personal encounters between the two scholars, particular attention is paid to the following questions: How have the letters been transmitted to the present day? What were the main topics of the correspondence between these representatives of two generations of Wissenschaft des Judentums? What were the positions of Kaufmann and Zunz towards the present and future of modern Jewish scholarship? How did Kaufmann become the first biographer of Zunz?
Ferroic materials have attracted a lot of attention over the years due to their wide range of applications in sensors, actuators, and memory devices. Their technological applications originate from their unique properties such as ferroelectricity and piezoelectricity. In order to optimize these materials, it is necessary to understand the coupling between their nanoscale structure and transient response, which are related to the atomic structure of the unit cell.
In this thesis, synchrotron X-ray diffraction is used to investigate the structure of ferroelectric thin film capacitors during application of a periodic electric field. Combining electrical measurements with time-resolved X-ray diffraction on a working device allows for visualization of the interplay between charge flow and structural motion. This constitutes the core of this work. The first part of this thesis discusses the electrical and structural dynamics of a ferroelectric Pt/Pb(Zr0.2,Ti0.8)O3/SrRuO3 heterostructure during charging, discharging, and polarization reversal. After polarization reversal a non-linear piezoelectric response develops on a much longer time scale than the RC time constant of the device. The reversal process is inhomogeneous and induces a transient disordered domain state. The structural dynamics under sub-coercive field conditions show that this disordered domain state can be remanent and can be erased with an appropriate voltage pulse sequence. The frequency-dependent dynamic characterization of a Pb(Zr0.52,Ti0.48)O3 layer, at the morphotropic phase boundary, shows that at high frequency, the limited domain wall velocity causes a phase lag between the applied field and both the structural and electrical responses. An external modification of the RC time constant of the measurement delays the switching current and widens the electromechanical hysteresis loop while achieving a higher compressive piezoelectric strain within the crystal.
In the second part of this thesis, time-resolved reciprocal space maps of multiferroic BiFeO3 thin films were measured to identify the domain structure and investigate the development of an inhomogeneous piezoelectric response during the polarization reversal. The presence of 109° domains is evidenced by the splitting of the Bragg peak.
The last part of this work investigates the effect of an optically excited ultrafast strain or heat pulse propagating through a ferroelectric BaTiO3 layer, where we observed an additional current response due to the laser pulse excitation of the metallic bottom electrode of the heterostructure.
High-throughput sequence data retrieved from ancient or otherwise degraded samples have led to unprecedented insights into the evolutionary history of many species, but the analysis of such sequences also poses specific computational challenges. The most commonly used approach involves mapping sequence reads to a reference genome. However, this process becomes increasingly challenging with elevated genetic distance between target and reference, or in the presence of contaminant sequences with high sequence similarity to the target species. The evaluation and testing of mapping efficiency and stringency are thus paramount for the reliable identification and analysis of ancient sequences. In this paper, we present ‘TAPAS’ (Testing of Alignment Parameters for Ancient Samples), a computational tool that enables the systematic testing of mapping tools for ancient data by simulating sequence data reflecting the properties of an ancient dataset and performing test runs using the mapping software and parameter settings of interest. We showcase TAPAS by using it to assess and improve the mapping strategy for a degraded sample from a banded linsang (Prionodon linsang), for which no closely related reference is currently available. This enables a 1.8-fold increase in the number of mapped reads without sacrificing mapping specificity. The increase in mapped reads effectively reduces the need for additional sequencing, thus making more economical use of time, resources, and sample material.
Synthesis, assembly and thermo-responsivity of polymer-functionalized magnetic cobalt nanoparticles (2018)
This thesis mainly covers the synthesis, surface modification, magnetic-field-induced assembly and thermo-responsive functionalization of superparamagnetic Co NPs initially stabilized by the hydrophobic small molecules oleic acid (OA) and trioctylphosphine oxide (TOPO), as well as the synthesis of both superparamagnetic and ferromagnetic Co NPs using end-functionalized polystyrene as stabilizer.
Co NPs, due to their excellent magnetic and catalytic properties, have great potential for application in various fields, such as ferrofluids, catalysis, and magnetic resonance imaging (MRI). Superparamagnetic Co NPs are especially interesting, since they exhibit zero coercivity: they become magnetized in an external magnetic field and reach their saturation magnetization rapidly, but no magnetic moment remains after removal of the applied field. Therefore, they do not agglomerate in the body when used in biomedical applications. Thermal decomposition of metallic precursors at high temperature is one of the most important methods for preparing monodisperse magnetic NPs, providing tunability in size and shape. Hydrophobic ligands like OA, TOPO and oleylamine are often used both to control the growth of the NPs and to protect them from agglomeration. The as-prepared magnetic NPs can be used in biological applications once they are transferred into water. Moreover, their supercrystal assemblies have potential for high-density data storage and electronic devices. In addition to small molecules, polymers can also be used as surfactants for the synthesis of ferromagnetic and superparamagnetic NPs by changing the reaction conditions. Chapter 2 therefore gives an overview of the basic concepts of synthesis, surface modification and self-assembly of magnetic nanoparticles, illustrated with examples from recent work.
The hydrophobic surface of Co NPs synthesized with small-molecule surfactants limits their biological applications, which require a hydrophilic or aqueous environment. Surface modification (e.g., ligand exchange) is a general strategy for either phase transfer or surface functionalization. Therefore, in chapter 3, a ligand exchange process was conducted to functionalize the surface of Co NPs. PNIPAM is one of the most popular smart polymers; its lower critical solution temperature (LCST) is around 32 °C, with a reversible conformational change between hydrophobic and hydrophilic states. Novel nanocomposites of superparamagnetic Co NPs and thermo-responsive PNIPAM are therefore of great interest. Thus, well-defined superparamagnetic Co NPs were first synthesized through the thermolysis of cobalt carbonyl using OA and TOPO as surfactants. A functional ATRP initiator, containing an amine (as anchoring group) and a 2-bromopropionate group (SI-ATRP initiator), was used to replace the original ligands. This process is rapid and facile for efficient surface functionalization, and afterwards the Co NPs could be dispersed in the polar solvent DMF without aggregation. FT-IR spectroscopy showed that the TOPO was completely replaced, but a small amount of OA remained on the surface. A TGA measurement allowed the grafting density of the initiator to be calculated as around 3.2 initiators/nm². Surface-initiated ATRP was then conducted to polymerize NIPAM on the surface of the Co NPs, rendering the nanocomposites water-dispersible. A temperature-dependent dynamic light scattering study showed the aggregation behavior of PNIPAM-coated Co NPs upon heating, and this process was proven to be reversible. The combination of superparamagnetic and thermo-responsive properties in these hybrid nanoparticles is promising for future applications, e.g. in biomedicine.
In chapter 4, the magnetic-field-induced assembly of superparamagnetic cobalt nanoparticles both on solid substrates and at liquid-air interfaces was investigated. OA- and TOPO-coated Co NPs were synthesized via the thermolysis of cobalt carbonyl and dispersed in either hexane or toluene. The Co NP dispersion was dropped onto substrates (e.g., TEM grid, silicon wafer) and onto liquid-air (water-air or ethylene glycol-air) interfaces. Due to the attractive dipolar interaction, 1-D chains formed in the presence of an external magnetic field. Both the concentration of the dispersion and the strength of the magnetic field affect the assembly behavior of superparamagnetic Co NPs, so the influence of these two parameters on the morphology of the assemblies was studied. The 1-D chains were shorter and more flexible at either lower concentration of the Co NP dispersion or lower strength of the external magnetic field, due to thermal fluctuation. By increasing either the concentration of the dispersion or the strength of the applied magnetic field, the chains became longer, thicker and straighter. The likely reason is that a high concentration led to a high fraction of short dipolar chains, whose interactions produced longer and thicker chains under the applied field. On the other hand, as the magnetic field increased, the induced moments of the magnetic nanoparticles grew larger and dominated over thermal fluctuation: the short chains connected to each other and grew in length, thicker chains formed through chain-chain interaction, and the induced moments tended to align in one direction, making the chains straighter.
Comparing the assemblies on substrates, at the water-air interface and at the ethylene glycol-air interface, the assembly of Co NPs from hexane dispersion at the ethylene glycol-air interface showed the most regular and homogeneous chain structures, owing to the better spreading of the dispersion on the ethylene glycol subphase than on the water subphase or on substrates. The magnetic-field-induced assembly of superparamagnetic nanoparticles could provide a powerful approach for applications in data storage and electronic devices.
Chapter 5 presents the synthesis of superparamagnetic and ferromagnetic cobalt nanoparticles through a dual-stage thermolysis of cobalt carbonyl (Co2(CO)8) using polystyrene as surfactant. Amine end-functionalized polystyrene surfactants with different molecular weights were prepared via the atom transfer radical polymerization (ATRP) technique. The molecular weights of the polystyrenes were determined by gel permeation chromatography (GPC) and matrix-assisted laser desorption/ionization time-of-flight (MALDI-ToF) mass spectrometry. The results showed that, when the molecular weight distribution is narrow (Mw/Mn < 1.2), GPC and MALDI-ToF MS provide very similar results: for example, a molecular weight of 10600 Da was obtained by MALDI-ToF MS, while GPC gave 10500 g/mol (Mw/Mn = 1.17). However, if the polymer has a broad molecular weight distribution, MALDI-ToF MS cannot provide an accurate value; this was exemplified by a polymer with a molecular weight of 3130 Da measured by MALDI-ToF MS, while GPC showed 2300 g/mol (Mw/Mn = 1.38). The size, size distribution and magnetic properties of the hybrid particles varied with the molecular weight and concentration of the polymer surfactants. TEM characterization showed that the size of cobalt nanoparticles stabilized with polystyrene of lower molecular weight (Mn = 2300 g/mol) varied from 12-22 nm, while the sizes obtained with middle (Mn = 4500 g/mol) and higher molecular weight (Mn = 10500 g/mol) polystyrene showed little change. Magnetic measurements showed that the small cobalt particles (12 nm) were superparamagnetic, while larger particles (21 nm) were ferromagnetic and assembled into 1-D chains. Grafting densities calculated from thermogravimetric analysis showed that a higher grafting density of polystyrene was obtained with lower molecular weight (Mn = 2300 g/mol) than with higher molecular weight (Mn = 10500 g/mol).
Due to the larger steric hindrance, polystyrene of higher molecular weight cannot form a dense shell on the surface of the nanoparticles, which results in a lower grafting density. Wide-angle X-ray scattering measurements revealed the ε-cobalt crystalline phase for both the superparamagnetic Co NPs coated with polystyrene (Mn = 2300 g/mol) and the ferromagnetic Co NPs coated with polystyrene (Mn = 10500 g/mol). Furthermore, a stability study showed that PS-Co NPs prepared with higher polymer concentration and higher polymer molecular weight exhibited better stability.
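The GPC-versus-MALDI comparison above hinges on the number-average (Mn) and weight-average (Mw) molar masses and their ratio, the dispersity. A minimal sketch of how these averages follow from a discrete chain distribution (the bimodal sample below is hypothetical, not data from the thesis):

```python
def molar_mass_averages(masses, counts):
    """Number-average (Mn) and weight-average (Mw) molar mass of a discrete
    chain distribution, plus the dispersity Mw/Mn.

    Mn = sum(n_i * M_i) / sum(n_i)
    Mw = sum(n_i * M_i**2) / sum(n_i * M_i)
    """
    n_total = sum(counts)
    w_total = sum(n * m for n, m in zip(counts, masses))
    mn = w_total / n_total
    mw = sum(n * m * m for n, m in zip(counts, masses)) / w_total
    return mn, mw, mw / mn

# Hypothetical bimodal sample: many short chains plus a heavy tail.
# Broad distributions push Mw/Mn well above 1, which is the regime where
# MALDI-ToF MS tends to misrepresent the average relative to GPC.
mn, mw, pdi = molar_mass_averages([2000, 5000], [80, 20])
print(round(mn), round(mw), round(pdi, 2))  # → 2600 3154 1.21
```

Because Mw weights each chain by its mass, the heavy fraction dominates Mw but not Mn, so the gap between the two averages is itself the measure of breadth.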
Although hydrologic models provide hypothesis testing of the complex dynamics occurring in catchments, freshwater quality modeling is still incipient at many subtropical headwaters. In Brazil, only a few modeling studies assess freshwater nutrients, limiting policies on hydrologic ecosystem services. This paper aims to compare freshwater quality scenarios under different land-use and land-cover (LULC) changes, one of them related to ecosystem-based adaptation (EbA), in Brazilian headwaters. Using the spatially semi-distributed Soil and Water Assessment Tool (SWAT) model, nitrate, total phosphorus (TP) and sediment were modeled in catchments ranging from 7.2 to 1037 km². These headwaters were eligible areas of Brazilian payment for ecosystem services (PES) projects in the Cantareira water supply system, which supplies water to 9 million people in the Sao Paulo metropolitan region (SPMR). We considered SWAT modeling of three LULC scenarios: (i) a recent past scenario (S1), with historical LULC in 1990; (ii) a current land-use scenario (S2), with LULC for the period 2010-2015 with field validation; and (iii) a future land-use scenario with PES (S2 + EbA). This latter scenario proposed forest-cover restoration through EbA following the river basin plan by 2035. These three LULC scenarios were tested with a selected record of rainfall and evapotranspiration observed in 2006-2014, a period that included extreme droughts. To assess hydrologic services, we proposed the hydrologic service index (HSI), a new composite metric comparing water pollution levels (WPL) for reference catchments, related to the grey water footprint (greyWF) and water yield. On the one hand, water quality simulations allowed for the regionalization of greyWF at spatial scales under the LULC scenarios; based on a critical threshold, the HSI identified catchments as more or less sustainable.
On the other hand, conservation practices simulated through the S2 + EbA scenario envisaged not only additional and viable best management practices (BMP), but also preventive decision-making at the headwaters of water supply systems.
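The grey water footprint underlying the HSI is conventionally computed as the pollutant load divided by the receiving water's assimilation capacity. A hedged sketch following the standard greyWF definition (not necessarily the paper's exact implementation; all values below are hypothetical):

```python
def grey_wf(load_kg_per_yr, c_max_kg_per_m3, c_nat_kg_per_m3):
    """Grey water footprint (m3/yr): the freshwater volume needed to dilute
    a pollutant load to the ambient standard c_max, given the natural
    background concentration c_nat (standard greyWF definition)."""
    return load_kg_per_yr / (c_max_kg_per_m3 - c_nat_kg_per_m3)

def water_pollution_level(grey_wf_m3_per_yr, runoff_m3_per_yr):
    """WPL: greyWF relative to available runoff. WPL > 1 means the
    catchment's assimilation capacity is exceeded."""
    return grey_wf_m3_per_yr / runoff_m3_per_yr

# Hypothetical TP load of 1200 kg/yr, ambient standard 0.1 mg/L
# (= 0.1e-3 kg/m3), natural background 0.02 mg/L, runoff 5e7 m3/yr.
gwf = grey_wf(1200, 0.1e-3, 0.02e-3)
print(round(gwf))                                   # m3/yr of dilution water
print(round(water_pollution_level(gwf, 5.0e7), 3))  # dimensionless WPL
```

A WPL below 1, as in this toy case, corresponds to a catchment the HSI would flag as sustainable relative to the critical threshold.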
Objective: The aim of the present study was to examine the effect of Cold Water Immersion (CWI) on the recovery of physical performance, hematological stress markers and perceived wellness (i.e., Hooper scores) following a simulated Mixed Martial Arts (MMA) competition.
Methods: Participants completed two experimental sessions in a counter-balanced order (CWI or passive recovery for the control condition: CON) after a simulated MMA competition (3 × 5-min MMA rounds separated by 1 min of passive rest). During CWI, athletes were required to submerge their bodies, except the trunk, neck and head, in a seated position in a temperature-controlled bath (~10 °C) for 15 min. During CON, athletes were required to remain seated for 15 min in the same room at ambient temperature. Venous blood samples (creatine kinase, cortisol, and testosterone concentrations) were collected at rest (PRE-EX, i.e., before the MMA competition), immediately following the competition (POST-EX), immediately following recovery (POST-R) and 24 h post competition (POST-24), whilst physical fitness measures (squat jump, countermovement jump and 5- and 10-m sprints) were collected at PRE-EX, POST-R and POST-24, and perceptual measures (well-being Hooper index: fatigue, stress, delayed onset muscle soreness (DOMS), and sleep) at PRE-EX and POST-24.
Results: The main results indicate that POST-R sprint (5- and 10-m) performances were 'likely to very likely' (d = 0.64 and 0.65) impaired by prior CWI. However, moderate improvements in 10-m sprint performance were 'likely' evident at POST-24 after CWI compared with CON (d = 0.53). Additionally, the use of CWI 'almost certainly' resulted in a large overall improvement in Hooper scores (d = 1.93). Specifically, CWI 'almost certainly' improved sleep quality (d = 1.36), stress (d = 1.56) and perceived fatigue (d = 1.51), and 'likely' resulted in a moderate decrease in DOMS (d = 0.60).
Conclusion: The use of CWI resulted in an enhanced recovery of 10-m sprint performance, as well as improved perceived wellness 24-h following simulated MMA competition.
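The d values reported above are standardized mean differences. A minimal sketch of Cohen's d with a pooled standard deviation (the sprint times below are hypothetical illustrations, not the study's data):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in group means divided by the pooled
    (sample) standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = (((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical 10-m sprint times (s) at POST-24: control vs CWI.
# Lower time = better performance, so a positive d here favors CWI.
con = [1.84, 1.86, 1.82, 1.85, 1.88]
cwi = [1.78, 1.80, 1.76, 1.79, 1.77]
print(round(cohens_d(con, cwi), 2))
```

Conventional benchmarks treat d ≈ 0.2 as small, 0.5 as moderate and 0.8 as large, which is how descriptors such as "moderate" (d = 0.53) and "large" (d = 1.93) map onto the reported effects.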
Losses due to floods have dramatically increased over the past decades, and losses of companies, comprising direct and indirect losses, have a large share of the total economic losses. Thus, there is an urgent need to gain more quantitative knowledge about flood losses, particularly losses caused by business interruption, in order to mitigate the economic loss of companies. However, business interruption caused by floods is rarely assessed because of a lack of sufficiently detailed data. A survey was undertaken to explore processes influencing business interruption, which collected information on 557 companies affected by the severe flood in June 2013 in Germany. Based on this data set, the study aims to assess the business interruption of directly affected companies by means of a Random Forests model. Variables that influence the duration and costs of business interruption were identified by the variable importance measures of Random Forests. Additionally, Random Forest-based models were developed and tested for their capacity to estimate business interruption duration and associated costs. The water level was found to be the most important variable influencing the duration of business interruption. Other important variables, relating to the estimation of business interruption duration, are the warning time, perceived danger of flood recurrence and inundation duration. In contrast, the amount of business interruption costs is strongly influenced by the size of the company, as assessed by the number of employees, emergency measures undertaken by the company and the fraction of customers within a 50 km radius. These results provide useful information and methods for companies to mitigate their losses from business interruption. However, the heterogeneity of companies is relatively high, and sector-specific analyses were not possible due to the small sample size. 
Therefore, further sector-specific analyses on the basis of more flood loss data of companies are recommended.
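The variable-importance ranking described above can be reproduced in outline with scikit-learn's impurity-based importances. A hedged sketch on synthetic data (the variable names, units and relationships below are hypothetical stand-ins for the survey variables, not the study's dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: business-interruption duration driven mainly by
# water level, with weaker contributions from warning time and
# inundation duration (coefficients are arbitrary for illustration).
rng = np.random.default_rng(42)
n = 500
water_level = rng.uniform(0, 3, n)      # m
warning_time = rng.uniform(0, 48, n)    # h
inundation = rng.uniform(0, 14, n)      # days
duration = (10 * water_level + 0.1 * warning_time
            + 0.5 * inundation + rng.normal(0, 1, n))

X = np.column_stack([water_level, warning_time, inundation])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, duration)

# Impurity-based importances sum to 1; the dominant driver ranks first.
for name, imp in zip(["water_level", "warning_time", "inundation"],
                     rf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

Permutation importance is a common alternative when predictors are correlated, since impurity-based importances can then be biased; with independent synthetic predictors as here, the simple measure suffices.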
There are numerous situations in which people ask for something or make a request, e.g. asking a favor, asking for help or requesting compliance with specific norms. For this reason, how to ask for something in order to increase people's willingness to fulfill such requests is one of the most important questions for people working in fields as diverse as charitable giving, marketing, management and policy making.
This dissertation consists of four chapters that deal with the effects of small changes in the decision-making environment on altruistic decision-making and compliance behavior. Most notably, written communication as an influencing factor is the focus of the first three chapters. The starting point was the question of how to devise a request in order to maximize its chance of success (Chapter 1). The results of the first chapter gave rise to the ideas for the second and third chapters. Chapter 2 analyzes how communication by a neutral third party, i.e. a text from the experimenters that either reminds potential benefactors of their responsibility or highlights their freedom of choice, affects altruistic decision-making. Chapter 3 elaborates on the effect of thanking people in advance when asking them for help. While not as closely related to the first three chapters, the fourth chapter likewise deals with the question of how compliance (here: compliance with norms and rules) is affected by subtle manipulations of the decision environment. This chapter analyzes the effect of default settings in a tax return on tax compliance.
In order to study the research questions outlined above, controlled experiments were conducted. Chapter 1, which analyzes the effect of text messages on the decision to give something to another person, employs a mini-dictator game. The recipient sends a free-form text message to the dictator before the latter makes a binary decision whether or not to give part of her or his endowment to the recipient. We find that putting effort into the message by writing a long note without spelling mistakes increases dictators’ willingness to give. Moreover, writing in a humorous way and mentioning reasons why the money is needed pays off. Furthermore, men and women seem to react differently to some message categories. Only men react positively to efficiency arguments, while only women react to messages that emphasize the dictator’s power and responsibility.
Building on this last result, Chapter 2 attempts to disentangle the effect of reminding potential benefactors of their responsibility for the potential beneficiary and the effect of highlighting their decision power and freedom of choice on altruistic decision-making by studying the effects of two different texts on giving in a dictator game. We find that only men react positively to a text that stresses their responsibility for the recipient by giving more to her or him, whereas only women seem to react positively to a text that emphasizes their decision power and freedom of choice.
Chapter 3 focuses on the compliance with a request. In the experiment, participants are asked to provide a detailed answer to an open question. Compliance is measured by the effort participants spend on answering the question. The treatment variable is whether or not they see the text “thanks in advance.” We find that participants react negatively by putting less effort into complying with the request in response to the phrase “thanks in advance.”
Chapter 4 studies the effect of prefilled tax returns with mostly inaccurate default values on tax compliance. In a laboratory experiment, participants earn income by performing a real-effort task and must subsequently file a tax return for three consecutive rounds. In the main treatment, the tax return is prefilled with a default value, resulting from participants’ own performance in previous rounds, which varies in its relative size. The results suggest that there is no lasting effect of a default value on tax honesty, neither for relatively low nor relatively high defaults. However, participants who face a default that is lower than their true income in the first round evade significantly and substantially more taxes in this round than participants in the control treatment without a default.
Fluvial terraces, floodplains, and alluvial fans are the main landforms to store sediments and to decouple hillslopes from eroding mountain rivers. Such low-relief landforms are also preferred locations for humans to settle in otherwise steep and poorly accessible terrain. Abundant water and sediment as essential sources for buildings and infrastructure make these areas amenable places to live at. Yet valley floors are also prone to rare and catastrophic sedimentation that can overload river systems by abruptly increasing the volume of sediment supply, thus causing massive floodplain aggradation, lateral channel instability, and increased flooding. Some valley-fill sediments should thus record these catastrophic sediment pulses, allowing insights into their timing, magnitude, and consequences.
This thesis pursues this theme and focuses on a prominent ~150 km2 valley fill in the Pokhara Valley just south of the Annapurna Massif in central Nepal. The Pokhara Valley is conspicuously broad and gentle compared to the surrounding dissected mountain terrain,
and is filled with locally more than 70 m of clastic debris. The area’s main river, Seti Khola, descends from the Annapurna Sabche Cirque at 3500-4500 m asl down to 900 m asl where it incises into this valley fill. Humans began to settle on this extensive
fan surface in the 1750’s when the Trans-Himalayan trade route connected the Higher Himalayas, passing Pokhara city, with the subtropical lowlands of the Terai. High and unstable river terraces and steep gorges undermined by fast flowing rivers with highly seasonal (monsoon-driven) discharge, a high earthquake risk, and a growing population make the Pokhara Valley an ideal place to study the recent geological and geomorphic history of its sediments and the implication for natural hazard appraisals.
The objective of this thesis is to quantify the timing, the sedimentologic and geomorphic processes as well as the fluvial response to a series of strong sediment pulses. I report
diagnostic sedimentary archives, lithofacies of the fan terraces, their geochemical provenance, radiocarbon-age dating and the stratigraphic relationship between them. All these various and independent lines of evidence show consistently that multiple sediment pulses filled the Pokhara Valley in medieval times, most likely in connection with, if not triggered by, strong seismic ground shaking. The geomorphic and sedimentary evidence is
consistent with catastrophic fluvial aggradation tied to the timing of three medieval Himalayan earthquakes in ~1100, 1255, and 1344 AD. Sediment provenance and calibrated radiocarbon-age data are key to distinguishing three individual sediment pulses, as these are not evident from their sedimentology alone. I explore various measures of adjustment and fluvial response of the river system following these massive aggradation pulses. Using proxies such as net volumetric erosion, incision and erosion rates, clast provenance on active river banks, geomorphic markers such as re-exhumed tree trunks in growth position, and knickpoint locations in tributary valleys, I estimate the response of the river network in the Pokhara Valley to earthquake disturbance over several centuries. Estimates of the volumes removed since catastrophic valley filling began require average net sediment yields of up to 4200 t km−2 yr−1, rates that are consistent with those reported for Himalayan rivers. The lithological composition of active channel-bed load differs from that of local bedrock, indicating that rivers have adjusted by 30-50%, depending on the tributary catchment, locally incising at rates of 160-220 mm yr−1. In many tributaries of the Seti Khola, most of the contemporary river load comes from a Higher Himalayan source, thus excluding local hillslopes as sources. This imbalance in sediment provenance emphasizes how the medieval sediment pulses must have rapidly travelled up to 70 km downstream and invaded the lower reaches of the tributaries for up to 8 km upstream, thereby blocking the local drainage and thus reinforcing, or locally creating, floodplain lakes still visible in the landscape today.
Understanding the formation, origin, mechanisms, and geomorphic processes of this valley fill is crucial for understanding landscape evolution and response to catastrophic sediment pulses. Several earthquake-triggered long-runout rock-ice avalanches or catastrophic dam bursts in the Higher Himalayas are the only plausible mechanisms that explain both the geomorphic and sedimentary legacy documented here. In any case, the Pokhara Valley was most likely hit by a cascade of extremely rare processes over some two centuries starting in the early 11th century. Nowhere in the Himalayas do we find valley fills of comparable size and an equally well documented depositional history, making the Pokhara Valley one of the most extensively dated valley fills in the Himalayas to date. Judging from the growing record of historic Himalayan earthquakes in Nepal that have been traced and dated in fault trenches, this thesis shows that sedimentary archives can directly aid reconstructions and predictions of both earthquake triggers and impacts from a sedimentary-response perspective. Knowledge of the timing, evolution, and response of the Pokhara Valley and its river system to earthquake-triggered sediment pulses is important for addressing the seismic and geomorphic risk for the city of Pokhara. This thesis demonstrates how geomorphic evidence of catastrophic valley infill can help to independently verify paleoseismological fault-trench records and may prompt a rethinking of post-seismic hazard assessments in active mountain regions.
Primary progressive multiple sclerosis (PPMS) shows a highly variable disease progression with poor prognosis and a characteristic accumulation of disabilities in patients. These hallmarks of PPMS make it difficult to diagnose and currently impossible to treat efficiently. This study aimed to identify plasma metabolite profiles that allow diagnosis of PPMS and its differentiation from the relapsing-remitting subtype (RRMS), primary neurodegenerative disease (Parkinson’s disease, PD), and healthy controls (HCs), and that change significantly during the disease course and could serve as surrogate markers of multiple sclerosis (MS)-associated neurodegeneration over time. We applied untargeted high-resolution metabolomics to plasma samples to identify PPMS-specific signatures, validated our findings in independent sex- and age-matched PPMS and HC cohorts, and built discriminatory models by partial least squares discriminant analysis (PLS-DA). This signature was compared to sex- and age-matched RRMS patients, patients with PD, and HCs. Finally, we investigated these metabolites in a longitudinal cohort of PPMS patients over a 24-month period. PLS-DA yielded predictive models for classification along with a set of 20 PPMS-specific informative metabolite markers. These metabolites suggest disease-specific alterations in glycerophospholipid and linoleic acid pathways. Notably, the glycerophospholipid LysoPC(20:0) significantly decreased during the observation period. These findings show potential for diagnosis and disease course monitoring, and might serve as biomarkers to assess treatment efficacy in future clinical trials of neuroprotective MS therapies.
TerraSAR-X time series fill a gap in spaceborne snowmelt monitoring of small Arctic catchments
(2018)
The timing of snowmelt is an important turning point in the seasonal cycle of small Arctic catchments. The TerraSAR-X (TSX) satellite mission is a synthetic aperture radar system (SAR) with high potential to measure the high spatiotemporal variability of snow cover extent (SCE) and fractional snow cover (FSC) on the small catchment scale. We investigate the performance of multi-polarized and multi-pass TSX X-Band SAR data in monitoring SCE and FSC in small Arctic tundra catchments of Qikiqtaruk (Herschel Island) off the Yukon Coast in the Western Canadian Arctic. We applied a threshold-based segmentation on ratio images between TSX images with wet snow and a dry snow reference, and tested the performance of two different thresholds. We quantitatively compared TSX- and Landsat 8-derived SCE maps using confusion matrices and analyzed the spatiotemporal dynamics of snowmelt from 2015 to 2017 using TSX, Landsat 8 and in-situ time-lapse data. Our data showed that the quality of SCE maps from TSX X-Band data is strongly influenced by polarization and to a lesser degree by incidence angle. VH-polarized TSX data performed best in deriving SCE when compared to Landsat 8. TSX-derived SCE maps from VH polarization detected late-lying snow patches that were not detected by Landsat 8. Results of a local assessment of TSX FSC against the in-situ data showed that TSX FSC accurately captured the temporal dynamics of different snowmelt regimes that were related to topographic characteristics of the studied catchments. Both in-situ and TSX FSC showed a longer snowmelt period in a catchment with higher contributions of steep valleys and a shorter snowmelt period in a catchment with higher contributions of upland terrain. Landsat 8 had fundamental data gaps during the snowmelt period in all three years due to cloud cover.
The results also revealed that choosing a positive threshold of 1 dB, which additionally detects ice layers formed by diurnal temperature variations, resulted in a more accurate estimation of snow cover than a negative threshold that detects wet snow alone. We find that TSX X-Band data in VH polarization performs at a quality comparable to Landsat 8 in deriving SCE maps when a positive threshold is used. We conclude that VH-polarized TSX data can be used to accurately monitor snowmelt events at high temporal and spatial resolution, overcoming limitations of Landsat 8, which, due to cloud-related data gaps, generally only indicated the onset and end of snowmelt.
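The core of the SCE mapping described above is a threshold on the ratio between a melt-season acquisition and a dry-snow reference; in dB the band ratio becomes a simple difference, and wet snow shows up as a backscatter drop. The sketch below demonstrates the idea on synthetic data (the scene layout, noise levels, and the 4 dB wet-snow drop are illustrative assumptions, not TSX calibration values); it uses the negative-threshold, wet-snow-only variant.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for TSX backscatter (dB): a dry-snow reference scene
# and a melt-season scene in which the left half of the image is wet snow.
# Wet snow absorbs the X-band signal, lowering backscatter by several dB.
shape = (100, 100)
reference_db = rng.normal(-12.0, 0.5, shape)          # dry-snow reference
melt_db = reference_db + rng.normal(0.0, 0.3, shape)  # acquisition to classify
melt_db[:, :50] -= 4.0                                # wet snow: ~4 dB drop

# Ratio image: in dB, the band ratio becomes a simple difference.
ratio_db = melt_db - reference_db

# Negative threshold (-2 dB): classify wet snow only.
wet_snow = ratio_db < -2.0

# Fractional snow cover (FSC) over the scene for this simple wet-snow mask.
fsc = wet_snow.mean()
```

The positive-threshold variant discussed in the abstract would instead flag pixels whose backscatter rises above the reference (e.g. `ratio_db > 1.0`), capturing refrozen ice layers in addition to wet snow.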
Arctic warming has implications for the functioning of terrestrial Arctic ecosystems, global climate and socioeconomic systems of northern communities. A research gap exists in high spatial resolution monitoring and understanding of the seasonality of permafrost degradation, spring snowmelt and vegetation phenology. This thesis explores the diversity and utility of dense TerraSAR-X (TSX) X-Band time series for monitoring ice-rich riverbank erosion, snowmelt, and phenology of Arctic vegetation at long-term study sites in the central Lena Delta, Russia and on Qikiqtaruk (Herschel Island), Canada. In the thesis the following three research questions are addressed:
• Is TSX time series capable of monitoring the dynamics of rapid permafrost degradation in ice-rich permafrost on an intra-seasonal scale and can these datasets in combination with climate data identify the climatic drivers of permafrost degradation?
• Can multi-pass and multi-polarized TSX time series adequately monitor seasonal snow cover and snowmelt in small Arctic catchments and how does it perform compared to optical satellite data and field-based measurements?
• Do TSX time series reflect the phenology of Arctic vegetation and how does the recorded signal compare to in-situ greenness data from RGB time-lapse camera data and vegetation height from field surveys?
To answer the research questions, three years of TSX backscatter data from 2013 to 2015 for the Lena Delta study site and from 2015 to 2017 for the Qikiqtaruk study site were analysed quantitatively and qualitatively, complemented by optical satellite data and in-situ time-lapse imagery.
The dynamics of intra-seasonal ice-rich riverbank erosion in the central Lena Delta, Russia, were quantified using TSX backscatter data at 2.4 m spatial resolution in HH polarization and validated with 0.5 m spatial resolution optical satellite data and field-based time-lapse camera data. Cliff top lines were automatically extracted from TSX intensity images using threshold-based segmentation and vectorization and combined in a geoinformation system with manually digitized cliff top lines from the optical satellite data and rates of erosion extracted from time-lapse cameras. The results suggest that the cliff top eroded at a constant rate throughout the entire erosional season. Linear mixed models confirmed that erosion was coupled with air temperature and precipitation at an annual scale; seasonal fluctuations did not influence 22-day erosion rates. The results highlight the potential of HH-polarized X-Band backscatter data for high-temporal-resolution monitoring of rapid permafrost degradation.
The distinct signature of wet snow in backscatter intensity images of TSX data was exploited to generate wet snow cover extent (SCE) maps on Qikiqtaruk at high temporal resolution. TSX SCE showed high similarity to Landsat 8-derived SCE when using cross-polarized VH data. Fractional snow cover (FSC) time series were extracted from TSX and optical SCE and compared to FSC estimations from in-situ time-lapse imagery. The TSX products showed strong agreement with the in-situ data and significantly improved the temporal resolution compared to the Landsat 8 time series. The final combined FSC time series revealed two topography-dependent snowmelt patterns that corresponded to in-situ measurements. Additionally, TSX was able to detect snow patches later into the season than Landsat 8, underlining the advantage of TSX for the detection of old snow. The TSX-derived snow information provided valuable insights into snowmelt dynamics on Qikiqtaruk previously not available.
The sensitivity of TSX to vegetation structure associated with phenological changes was explored on Qikiqtaruk. Backscatter and coherence time series were compared to greenness data extracted from in-situ digital time-lapse cameras and detailed vegetation parameters on 30 areas of interest. Supporting previous results, vegetation height corresponded to backscatter intensity in co-polarized HH/VV at an incidence angle of 31°. The dry, tall-shrub-dominated ecological class showed increasing backscatter with increasing greenness when using the cross-polarized VH/HH channel at 32° incidence angle, likely driven by volume scattering of emerging and expanding leaves. Ecological classes with more prostrate vegetation and higher bare-ground contributions showed decreasing backscatter trends over the growing season in the co-polarized VV/HH channels, likely a result of surface drying rather than a vegetation structure signal. The results from shrub-dominated areas are promising and provide a complementary data source for high-temporal-resolution monitoring of vegetation phenology.
Overall this thesis demonstrates that dense time series of TSX with optical remote sensing and in-situ time-lapse data are complementary and can be used to monitor rapid and seasonal processes in Arctic landscapes at high spatial and temporal resolution.
There is evidence for cortical contribution to the regulation of human postural control. Interference from concurrently performed cognitive tasks supports this notion, and the lateral prefrontal cortex (lPFC) has been suggested to play a prominent role in the processing of purely cognitive as well as cognitive-postural dual tasks. The degree of cognitive-motor interference varies greatly between individuals, but it is unresolved whether individual differences in the recruitment of specific lPFC regions during cognitive dual tasking are associated with individual differences in cognitive-motor interference. Here, we investigated inter-individual variability in a cognitive-postural multitasking situation in healthy young adults (n = 29) in order to relate it to inter-individual variability in lPFC recruitment during cognitive multitasking. For this purpose, a one-back working memory task was performed either as a single task or as a dual task in order to vary cognitive load. Participants performed these cognitive single and dual tasks either during upright stance on a balance pad that was placed on top of a force plate or during fMRI measurement with little to no postural demands. We hypothesized dual one-back task performance to be associated with lPFC recruitment when compared to single one-back task performance. In addition, we expected individual variability in lPFC recruitment to be associated with postural performance costs during concurrent dual one-back performance. As expected, behavioral performance costs in postural sway during dual one-back performance varied largely between individuals, and so did lPFC recruitment during dual one-back performance. Most importantly, individuals who recruited the right mid-lPFC to a larger degree during dual one-back performance also showed greater postural sway, as measured by larger performance costs in total center of pressure displacements.
This effect was selective to the high-load dual one-back task and suggests a crucial role of the right lPFC in allocating resources during cognitive-motor interference. Our study provides further insight into the mechanisms underlying cognitive-motor multitasking and its impairments.
The forcing from the anthropogenic heat flux (AHF), i.e. the dissipation of primary energy consumed by the human civilisation, produces a direct climate warming. Today, the globally averaged AHF is negligibly small compared to the indirect forcing from greenhouse gas emissions. Locally or regionally, though, it has a significant impact. Historical observations show a constant exponential growth of worldwide energy production. A continuation of this trend might be fueled or even amplified by the exploration of new carbon-free energy sources like fusion power. In such a scenario, the impacts of the AHF become a relevant factor for anthropogenic post-greenhouse gas climate change on the global scale, as well.
This master thesis aims at estimating the climate impacts of such a growing AHF forcing. In the first part of this work, the AHF is built into simple, conceptual zero- and one-dimensional Energy Balance Models (EBMs), providing quick order-of-magnitude estimates of the temperature impact. In the one-dimensional EBM, the ice-albedo feedback from enhanced ice melting due to the AHF increases the temperature impact significantly compared to the zero-dimensional EBM.
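The zero-dimensional EBM idea can be sketched in a few lines: at equilibrium, absorbed solar radiation plus the AHF forcing balances outgoing longwave radiation through an effective emissivity. The sketch below uses standard textbook parameter values; the present-day AHF of ~0.035 W/m² (roughly 18 TW of primary energy over the Earth's 5.1e14 m² surface) is my own back-of-envelope estimate, not a number taken from the thesis, and the model deliberately omits the ice-albedo feedback that the one-dimensional EBM adds.

```python
# Zero-dimensional energy balance: (1 - albedo) * S0/4 + F_ahf = eps * sigma * T^4.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0            # solar constant, W m^-2
ALBEDO = 0.3           # planetary albedo
EPS = 0.62             # effective emissivity mimicking the greenhouse effect

def equilibrium_temperature(f_ahf):
    """Solve eps*sigma*T^4 = (1-albedo)*S0/4 + f_ahf for T (in K)."""
    absorbed = (1.0 - ALBEDO) * S0 / 4.0 + f_ahf
    return (absorbed / (EPS * SIGMA)) ** 0.25

t_base = equilibrium_temperature(0.0)
t_ahf = equilibrium_temperature(0.035)       # rough present-day AHF (assumption)
t_ahf_x10 = equilibrium_temperature(0.35)    # hypothetical tenfold growth

dT_today = t_ahf - t_base                    # ~0.01 K
dT_x10 = t_ahf_x10 - t_base                  # ~0.1 K
```

Even this crude balance reproduces the order of magnitude quoted in the abstract: ~0.01 K for today's AHF and ~0.1 K for a tenfold increase.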
Additionally, the forcing is built into a climate model of intermediate complexity, CLIMBER-3α. This allows for the investigation of the effect of localised AHF and gives further insight into the impact of the AHF on processes such as ocean heat uptake, sea-ice and snow pattern changes, and the ocean circulation.
The global mean temperature response from the AHF today is of the order of 0.010–0.016 K in all reasonable model configurations tested. A transient tenfold increase of this forcing heats up the Earth system by an additional 0.1–0.2 K in the presented models. Further growth can also affect the tipping probability of certain climate elements.
Most renewable energy sources do not or only partially contribute to the AHF forcing as the energy from these sources dissipates anyway. Hence, the transition to a (carbon-free) renewable energy mix, which, in particular, does not rely on nuclear power, eliminates the local and global climate impacts from the increasing AHF forcing, independent of the growth of energy production.
Gershom Scholem (1897–1982) portrayed modern Zionist historical scholarship as both a rejection and a corrective fulfillment of earlier eras of Wissenschaft des Judentums. Through attacks on his scholarly predecessors, Scholem detailed his vision for the potential of this renaissance of Wissenschaft to entail both objective research and a commitment to treating Judaism as a “living organism,” an approach that would ultimately ensure the scholarship could deliver value to the Jewish community. This article will explore the tensions that arise from Scholem’s commitments, his occasional admissions of these tensions, and his attempts to overcome them.
Potato (Solanum tuberosum L.) is one of the most important food crops worldwide. Current potato varieties are highly susceptible to drought stress. In view of global climate change, selection of cultivars with improved drought tolerance and high yield potential is of paramount importance. Drought tolerance breeding of potato is currently based on direct selection according to yield and phenotypic traits and requires multiple trials under drought conditions. Marker‐assisted selection (MAS) is cheaper, faster and reduces classification errors caused by noncontrolled environmental effects. We analysed 31 potato cultivars grown under optimal and reduced water supply in six independent field trials. Drought tolerance was determined as tuber starch yield. Leaf samples from young plants were screened for preselected transcript and nontargeted metabolite abundance using qRT‐PCR and GC‐MS profiling, respectively. Transcript marker candidates were selected from a published RNA‐Seq data set. A Random Forest machine learning approach extracted metabolite and transcript markers for drought tolerance prediction with low error rates of 6% and 9%, respectively. Moreover, by combining transcript and metabolite markers, the prediction error was reduced to 4.3%. Feature selection from Random Forest models allowed model minimization, yielding a minimal combination of only 20 metabolite and transcript markers that were successfully tested for their reproducibility in 16 independent agronomic field trials. We demonstrate that a minimum combination of transcript and metabolite markers sampled at early cultivation stages predicts potato yield stability under drought largely independent of seasonal and regional agronomic conditions.
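The marker-extraction step described above, a Random Forest trained on transcript and metabolite features with subsequent feature selection, can be sketched as follows. The data here are entirely synthetic and hypothetical (sample counts, feature counts, and effect sizes are invented), and the impurity-based importance ranking with an out-of-bag error estimate is one common scikit-learn workflow, not necessarily the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Toy stand-in for combined transcript + metabolite features measured on
# cultivars labelled drought-tolerant (1) or sensitive (0); all hypothetical.
n, p = 120, 60
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, p))
X[:, :4] += y[:, None] * 1.2          # four class-informative markers

forest = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
forest.fit(X, y)

# Feature selection: rank by impurity-based importance and keep a minimal panel,
# mirroring the idea of minimizing the model to a small marker combination.
ranking = np.argsort(forest.feature_importances_)[::-1]
panel = ranking[:4]
oob_error = 1.0 - forest.oob_score_   # out-of-bag estimate of prediction error
```

The minimal panel would then be re-validated on independent field trials, as the study does with its 20-marker combination.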
Brownian yet non-Gaussian dynamics has been observed in a variety of systems: processes characterised by a linear growth in time of the mean squared displacement, while the probability density function of the particle displacement is distinctly non-Gaussian, and often of exponential (Laplace) shape. This apparently ubiquitous behaviour, observed in very different physical systems, has been interpreted as resulting from diffusion in inhomogeneous environments and mathematically represented through a variable, stochastic diffusion coefficient. Indeed, different models describing a fluctuating diffusivity have been studied. Here we present a new view of the stochastic basis describing time-dependent random diffusivities within a broad spectrum of distributions. Concretely, our study is based on the very generic class of generalised Gamma distributions. Two models for particle spreading in such random-diffusivity settings are studied. The first belongs to the class of generalised grey Brownian motion, while the second follows from the idea of diffusing diffusivities. The two processes exhibit significant characteristics which reproduce experimental results from different biological and physical systems. We promote these two physical models for the description of stochastic particle motion in complex environments.
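The hallmark combination of a linear mean squared displacement with a Laplace-shaped displacement PDF is easy to reproduce in a minimal superstatistical toy model: give each trajectory its own diffusivity drawn from an exponential distribution (a special case of the generalised Gamma family used in the abstract). This sketch is an illustration of the phenomenon, not the paper's generalised grey Brownian motion or diffusing-diffusivity model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Each trajectory gets a fixed diffusivity D ~ Exp(d_mean). Conditionally on D
# the displacement is Gaussian, but the ensemble PDF is Laplace, while the
# ensemble MSD still grows linearly: "Brownian yet non-Gaussian" diffusion.
n_traj = 50_000
d_mean = 1.0
D = rng.exponential(d_mean, n_traj)

def displacement(t):
    """Ensemble of 1-D displacements x(t) ~ N(0, 2*D*t), one per trajectory."""
    return rng.normal(0.0, np.sqrt(2.0 * D * t))

x1 = displacement(1.0)
x2 = displacement(2.0)

msd1, msd2 = np.mean(x1**2), np.mean(x2**2)    # ~2*d_mean*t, i.e. linear in t
kurtosis = np.mean(x1**4) / np.mean(x1**2)**2  # Gaussian: 3, Laplace: 6
```

The kurtosis near 6 signals the exponential tails, while `msd2 ≈ 2 * msd1` confirms normal (linear-in-time) diffusion.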
Rabbi Eliyahu Eliezer Dessler (1892–1953) is often portrayed as antagonistic to secular studies. However, his writings reveal more of an intellectual hierarchy that places Torah wisdom at the top and all other wisdom a distant second. R. Dessler expended great effort promoting Torah scholarship while generally refraining from disparaging secular studies. Looking at the writings of his predecessors in the Mussar (moralist) movement, one can see that there was no disapproval of worldly education there, either: in fact, R. Dessler and his predecessors were well educated in many secular disciplines. This essay seeks to place R. Dessler’s attitude toward Wissenschaft des Judentums within the context of his life’s mission to advance talmudic study and his consequent unwillingness to countenance anything that detracted from furthering the learning of Torah. I argue that, whereas his extreme opposition to Wissenschaft was the result of his aversion to its aims, methods and conclusions, his nuanced relationship to Orthodox Wissenschaft was the result of the hierarchy through which he viewed secular as opposed to talmudic study.
More than a billion people rely on water from rivers sourced in High Mountain Asia (HMA), a significant portion of which is derived from snow and glacier melt. Rural communities are heavily dependent on the consistency of runoff, and are highly vulnerable to shifts in their local environment brought on by climate change. Despite this dependence, the impacts of climate change in HMA remain poorly constrained due to poor process understanding, complex terrain, and insufficiently dense in-situ measurements.
HMA's glaciers contain more frozen water than any region outside of the poles. Their extensive retreat is a highly visible and much studied marker of regional and global climate change. However, in many catchments, snow and snowmelt represent a much larger fraction of the yearly water budget than glacial meltwaters. Despite their importance, climate-related changes in HMA's snow resources have not been well studied.
Changes in the volume and distribution of snowpack have complex and extensive impacts on both local and global climates. Eurasian snow cover has been shown to impact the strength and direction of the Indian Summer Monsoon -- which is responsible for much of the precipitation over the Indian Subcontinent -- by modulating earth-surface heating. Shifts in the timing of snowmelt have been shown to limit the productivity of major rangelands, reduce streamflow, modify sediment transport, and impact the spread of vector-borne diseases. However, a large-scale regional study of climate impacts on snow resources had yet to be undertaken.
Passive Microwave (PM) remote sensing is a well-established empirical method of studying snow resources over large areas. Since 1987, there have been consistent daily global PM measurements which can be used to derive an estimate of snow depth, and hence snow-water equivalent (SWE) -- the amount of water stored in snowpack. The SWE estimation algorithms were originally developed for flat and even terrain -- such as the Russian and Canadian Arctic -- and have rarely been used in complex terrain such as HMA.
This dissertation first examines factors present in HMA that could impact the reliability of SWE estimates. Forest cover, absolute snow depth, long-term average wind speeds, and hillslope angle were found to be the strongest controls on SWE measurement reliability. While forest density and snow depth are factors accounted for in modern SWE retrieval algorithms, wind speed and hillslope angle are not. Despite uncertainty in absolute SWE measurements and differences in the magnitude of SWE retrievals between sensors, single-instrument SWE time series were found to be internally consistent and suitable for trend analysis.
Building on this finding, this dissertation tracks changes in SWE across HMA using a statistical decomposition technique. An aggregate decrease in SWE was found (10.6 mm/yr), despite large spatial and seasonal heterogeneities. Winter SWE increased in almost half of HMA, despite general negative trends throughout the rest of the year. The elevation distribution of these negative trends indicates that while changes in SWE have likely impacted glaciers in the region, climate change impacts on these two pieces of the cryosphere are somewhat distinct.
Following the discussion of relative changes in SWE, this dissertation explores changes in the timing of the snowmelt season in HMA using a newly developed algorithm. The algorithm is shown to accurately track the onset and end of the snowmelt season (70% within 5 days of a control dataset, 89% within 10). Using a 29-year time series, changes in the onset, end, and duration of snowmelt are examined. While nearly the entirety of HMA has experienced an earlier end to the snowmelt season, large regions of HMA have seen a later start to the snowmelt season. Snowmelt periods have also decreased in almost all of HMA, indicating that the snowmelt season is generally shortening and ending earlier across HMA.
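A snowmelt-timing detector of the kind described above can be sketched as a simple rule on a daily SWE series: melt onset is the first day of a sustained decline, and melt end is the first snow-free day after onset. The detection rules and the synthetic seasonal curve below are illustrative assumptions, not the algorithm developed in the dissertation.

```python
import numpy as np

def snowmelt_timing(swe, sustained_days=5):
    """Return (onset_day, end_day) of the melt season from a daily SWE series."""
    swe = np.asarray(swe, dtype=float)
    declining = np.diff(swe) < 0.0
    onset = None
    for day in range(len(declining) - sustained_days + 1):
        # onset: first run of `sustained_days` consecutive declining days
        if declining[day:day + sustained_days].all():
            onset = day
            break
    if onset is None:
        return None, None
    snow_free = np.flatnonzero(swe[onset:] <= 0.0)
    end = onset + int(snow_free[0]) if snow_free.size else None
    return onset, end

# Synthetic year: linear accumulation to day 120, a plateau, then melt-out
# at 4 mm/day starting on day 150 (snow-free by day 180).
days = np.arange(240)
swe = np.clip(np.minimum(days * 1.0, 120.0) - np.maximum(days - 150, 0) * 4.0,
              0.0, None)

onset, end = snowmelt_timing(swe)
```

A real detector would additionally need to tolerate noise and mid-winter melt events, e.g. by smoothing the series or requiring a minimum total SWE loss.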
By examining shifts in both the spatio-temporal distribution of SWE and the timing of the snowmelt season across HMA, we provide a detailed accounting of changes in HMA's snow resources. The overall trend in HMA is towards less SWE storage and a shorter snowmelt season. However, long-term and regional trends conceal distinct seasonal, temporal, and spatial heterogeneity, indicating that changes in snow resources are strongly controlled by local climate and topography, and that inter-annual variability plays a significant role in HMA's snow regime.
It is well-documented that strength training (ST) improves measures of muscle strength in young athletes. Less is known on transfer effects of ST on proxies of muscle power and the underlying dose-response relationships. The objectives of this meta-analysis were to quantify the effects of ST on lower limb muscle power in young athletes and to provide dose-response relationships for ST modalities such as frequency, intensity, and volume. A systematic literature search of electronic databases identified 895 records. Studies were eligible for inclusion if (i) healthy trained children (girls aged 6–11 y, boys aged 6–13 y) or adolescents (girls aged 12–18 y, boys aged 14–18 y) were examined, (ii) ST was compared with an active control, and (iii) at least one proxy of muscle power [squat jump (SJ) and countermovement jump height (CMJ)] was reported. Weighted mean standardized mean differences (SMDwm) between subjects were calculated. Based on the findings from 15 statistically aggregated studies, ST produced significant but small effects on CMJ height (SMDwm = 0.65; 95% CI 0.34–0.96) and moderate effects on SJ height (SMDwm = 0.80; 95% CI 0.23–1.37). The sub-analyses revealed that the moderating variable expertise level (CMJ height: p = 0.06; SJ height: N/A) did not significantly influence ST-related effects on proxies of muscle power. “Age” and “sex” moderated ST effects on SJ (p = 0.005) and CMJ height (p = 0.03), respectively. With regard to the dose-response relationships, findings from the meta-regression showed that none of the included training modalities predicted ST effects on CMJ height. For SJ height, the meta-regression indicated that the training modality “training duration” significantly predicted the observed gains (p = 0.02), with longer training durations (>8 weeks) showing larger improvements. This meta-analysis clearly proved the general effectiveness of ST on lower-limb muscle power in young athletes, irrespective of the moderating variables. 
Dose-response analyses revealed that longer training durations (>8 weeks) are more effective for improving SJ height. No such training modalities were found for CMJ height. Thus, there appear to be other training modalities besides those included in our analyses that may affect SJ and particularly CMJ height. Rating of perceived exertion, movement velocity, or force-velocity profiles could be promising tools for monitoring lower-limb muscle power development in young athletes.
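The SMD effect sizes aggregated above belong to the standardized-mean-difference family: Cohen's d divides the between-group mean difference by the pooled standard deviation, and Hedges' correction removes small-sample bias. The sketch below shows the standard formulas on invented numbers; it is a worked illustration of the metric, not a reproduction of the meta-analysis.

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d with pooled SD, plus Hedges' small-sample-corrected g."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    correction = 1.0 - 3.0 / (4.0 * (n_t + n_c) - 9.0)  # Hedges' correction
    return d, d * correction

# Hypothetical CMJ heights (cm): strength-training group vs. active controls.
d, g = smd(mean_t=32.0, sd_t=4.0, n_t=20, mean_c=29.5, sd_c=4.0, n_c=20)
```

Weighting such per-study effect sizes by their inverse variance yields the pooled SMDwm values reported in the abstract.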
The evaluation and verification of landscape evolution models (LEMs) has long been limited by a lack of suitable observational data and statistical measures which can fully capture the complexity of landscape changes. This lack of data limits the use of the objective-function-based evaluation that is prolific in other modelling fields, and restricts the application of sensitivity analyses to the models and the consequent assessment of model uncertainties. To overcome this deficiency, a novel model function approach has been developed, with each model function representing an aspect of model behaviour, which allows for the application of sensitivity analyses. The model function approach is used to assess the relative sensitivity of the CAESAR-Lisflood LEM to a set of model parameters by applying the Morris method sensitivity analysis for two contrasting catchments. The test revealed that the model was most sensitive to the choice of the sediment transport formula for both catchments, and that each parameter influenced model behaviours differently, with model functions relating to internal geomorphic changes responding in a different way to those relating to the sediment yields from the catchment outlet. The model functions proved useful for evaluating the sensitivity of LEMs in the absence of data and methods for an objective function approach.
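The Morris method used above screens parameters by computing elementary effects: one-at-a-time perturbations of each parameter, repeated from many random base points, whose mean absolute effect (mu*) ranks parameter influence. The sketch below implements a simplified variant (all perturbations from the same base point per repetition, rather than chained along a trajectory) on a toy function standing in for an LEM run; the model, step size, and repetition count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    # Toy response standing in for one LEM model-function output:
    # x[0] dominates, x[1] is weak, x[2] acts only through an interaction.
    return 5.0 * x[0] + 0.5 * x[1] + 2.0 * x[0] * x[2]

def morris_mu_star(model, n_params, n_reps=50, delta=0.25):
    """mu* screening: mean absolute elementary effect per parameter."""
    effects = np.zeros((n_reps, n_params))
    for r in range(n_reps):
        x = rng.uniform(0.0, 1.0 - delta, n_params)   # random base point
        y0 = model(x)
        for i in rng.permutation(n_params):           # one-at-a-time steps
            x_step = x.copy()
            x_step[i] += delta
            effects[r, i] = (model(x_step) - y0) / delta
    return np.abs(effects).mean(axis=0)               # mu*

mu_star = morris_mu_star(model, n_params=3)
ranking = np.argsort(mu_star)[::-1]                   # most influential first
```

In the full Morris design the steps are chained along trajectories through the parameter space, and the spread of the elementary effects (sigma) is inspected alongside mu* to flag nonlinear or interacting parameters.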
Numbers are omnipresent in daily life. They vary in display format and in their meaning so that it does not seem self-evident that our brains process them more or less easily and flexibly. The present thesis addresses mental number representations in general, and specifically the impact of finger counting on mental number representations. Finger postures that result from finger counting experience are one of many ways to convey numerical information. They are, however, probably the one where the numerical content becomes most tangible. By investigating the role of fingers in adults’ mental number representations the four presented studies also tested the Embodied Cognition hypothesis which predicts that bodily experience (e.g., finger counting) during concept acquisition (e.g., number concepts) stays an immanent part of these concepts. The studies focussed on different aspects of finger counting experience. First, consistency and further details of spontaneously used finger configurations were investigated when participants repeatedly produced finger postures according to specific numbers (Study 1). Furthermore, finger counting postures (Study 2), different finger configurations (Study 2 and 4), finger movements (Study 3), and tactile finger perception (Study 4) were investigated regarding their capability to affect number processing. Results indicated that active production of finger counting postures and single finger movements as well as passive perception of tactile stimulation of specific fingers co-activated associated number knowledge and facilitated responses towards corresponding magnitudes and number symbols. Overall, finger counting experience was reflected in specific effects in mental number processing of adult participants. This indicates that finger counting experience is an immanent part of mental number representations.
Findings are discussed in the light of a novel model. The MASC (Model of Analogue and Symbolic Codes) combines and extends two established models of number and magnitude processing. Especially a symbolic motor code is introduced as an essential part of the model. It comprises canonical finger postures (i.e., postures that are habitually used to represent numbers) and finger-number associations. The present findings indicate that finger counting functions both as a sensorimotor magnitude and as a symbolic representational format and that it thereby directly mediates between physical and symbolic size. The implications are relevant both for basic research regarding mental number representations and for pedagogic practices regarding the effectiveness of finger counting as a means to acquire a fundamental grasp of numbers.
The 1920s witnessed a growing appearance of individual American Jews – largely from wealthy and prominent families – who received training by Asian teachers and pursued Buddhist practices in Asian-founded Buddhist groups. Some of these American Jews gained prominence and leadership status in Buddhist communities and also ran their own semi-established Buddhist groups, with limited success. The social position and material success of these Jewish Buddhists allowed them the time and means to study and practice Buddhism. This paper illustrates these developments through the story of Julius Goldwater, a member of the prominent German Jewish family that included Senator Barry Goldwater. After encountering Buddhism in Hawaii and being ordained in Kyoto, Goldwater moved to Los Angeles to become one of the first European-American Jodo Shinshu ministers in America. This paper demonstrates how he was an early convert, teacher, and wartime proponent of American Buddhism.
Together with the gradual change of mean values, ongoing climate change is projected to increase the frequency and amplitude of temperature and precipitation extremes in many regions of Europe. The impacts of such, mostly short-term, extraordinary climate situations on terrestrial ecosystems are of central interest in recent climate change research, because it cannot be assumed a priori that known dependencies between climate variables and ecosystems scale linearly. So far, however, a method to quantify such impacts in terms of simultaneities of event time series has been lacking.
In the course of this manuscript, the new statistical approach of Event Coincidence Analysis (ECA) as well as its R implementation is introduced, a methodology that allows assessing whether or not two types of event time series exhibit similar sequences of occurrences. Applications of the method are presented that analyse climate impacts on different temporal and spatial scales: the impact of extraordinary expressions of various climatic variables on tree stem variations (subdaily and local scale), the impact of extreme temperature and precipitation events on the flowering time of European shrub species (weekly and country scale), the impact of extreme temperature events on ecosystem health in terms of NDVI (weekly and continental scale), and the impact of El Niño and La Niña events on precipitation anomalies (seasonal and global scale).
The applications presented in this thesis refine relationships already known from classical methods and also deliver substantial new findings to the scientific community: the widely known positive correlation between flowering time and temperature, for example, is confirmed to hold for the tails of the distributions, while the widely assumed positive dependency between stem diameter variation and temperature is shown not to hold for very warm and very cold days. The larger-scale investigations underline the sensitivity of anthropogenically shaped landscapes towards temperature extremes in Europe and provide a comprehensive global ENSO impact map for strong precipitation events.
Finally, by publishing the R implementation of the method, this thesis enables other researchers to investigate similar research questions using Event Coincidence Analysis.
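The core quantity of ECA, the coincidence rate, can be sketched in a few lines. The snippet below is an illustrative re-implementation in Python of the precursor coincidence rate only (the thesis's published implementation is in R); the function name, the window convention, and the toy event series are assumptions for demonstration, not the thesis's code.

```python
import numpy as np

def event_coincidence_rate(series_a, series_b, delta_t=1, tau=0):
    """Fraction of events in series_a preceded by at least one event in
    series_b within the window [t - tau - delta_t, t - tau].

    series_a, series_b: arrays of event times (e.g. day indices).
    delta_t: coincidence window length; tau: time lag.
    """
    series_b = np.asarray(series_b)
    hits = 0
    for t in series_a:
        # an event in B "coincides" if it falls in the preceding window
        if np.any((series_b >= t - tau - delta_t) & (series_b <= t - tau)):
            hits += 1
    return hits / len(series_a)

# toy example: heat events and ecosystem responses two days later
heat = [3, 10, 20]
response = [5, 12, 30]
rate = event_coincidence_rate(response, heat, delta_t=1, tau=2)  # 2 of 3 hit
```

In practice the empirical rate is compared against its distribution under an independence null model (e.g. Poisson-distributed surrogate events) to test for significance.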
Natural extreme events are an integral part of nature on planet Earth. Usually, these events are hazardous to humans only when humans are exposed to them; in that case, however, natural hazards can have devastating impacts on human societies. Especially hydro-meteorological hazards have a high damage potential, in the form of, e.g., riverine and pluvial floods, winter storms, hurricanes, and tornadoes, which can occur all over the globe. With an increasingly warm climate, an increase in extreme weather that can trigger natural hazards is also to be expected. Yet not only changing natural systems but also changing societal systems contribute to an increasing risk associated with these hazards, comprising increasing exposure and possibly also increasing vulnerability to the impacts of natural events. Thus, appropriate risk management is required to adapt all parts of society to existing and upcoming risks at various spatial scales. One essential part of risk management is risk assessment, including the estimation of economic impacts. However, reliable methods for estimating the economic impacts of hydro-meteorological hazards are still missing. This thesis therefore deals with the question of how the reliability of hazard damage estimates can be improved, represented, and propagated across all spatial scales. The question is investigated using the specific example of economic impacts on companies as a result of riverine floods in Germany.
Flood damage models aim to describe the damage processes during a given flood event; in other words, they describe the vulnerability of a specific object to a flood. Such models can be based on empirical data sets collected after flood events. In this thesis, tree-based models trained with survey data are used for the estimation of direct economic flood impacts on individual objects. These machine learning models, whose performance grows with the size of the data sets used to derive them, are found to outperform state-of-the-art damage models. However, despite the improvements gained by using multiple variables and more data points, large prediction errors remain at the object level. The occurrence of these high errors was explained by a further investigation using distributions derived from the tree-based models: direct economic impacts on individual objects cannot be modeled by a normal distribution. Yet most state-of-the-art approaches assume a normal distribution and take mean values as point estimators; the predictions are then unlikely values within the distributions, resulting in high errors. At larger spatial scales, more objects are considered in the damage estimation, which leads to a better fit of the damage estimates to a normal distribution. Consequently, the performance of the point estimators also gets better, although large errors can still occur due to the variance of the normal distribution. It is therefore recommended to use distributions instead of point estimates in order to represent the reliability of damage estimates.
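The shift from point estimates to distributions can be illustrated with an ensemble of regression trees: each tree's prediction is one sample of an empirical predictive distribution. The following is a generic sketch using scikit-learn on synthetic data; the predictor variables, the damage function, and all parameter values are invented for illustration and are not the thesis's model or survey data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# synthetic stand-in for survey data: (water level [m], building area [m^2])
X = rng.uniform([0.0, 50.0], [3.0, 500.0], size=(500, 2))
# right-skewed "damage" with multiplicative lognormal noise
y = 1000.0 * X[:, 0] * np.sqrt(X[:, 1]) * rng.lognormal(0.0, 0.5, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

x_new = np.array([[1.5, 200.0]])
# the per-tree predictions form an empirical predictive distribution
per_tree = np.array([tree.predict(x_new)[0] for tree in model.estimators_])
point = per_tree.mean()                      # the usual point estimate
interval = np.percentile(per_tree, [5, 95])  # a credible range instead
```

Reporting `interval` alongside (or instead of) `point` conveys the reliability of the estimate, which a single mean value hides.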
In addition, current approaches mostly ignore the uncertainty associated with the characteristics of the hazard and the exposed objects. For a given flood event, e.g., the estimation of the water level at a certain building is prone to uncertainties. Current approaches define exposed objects mostly through land use data sets, which often show inconsistencies that introduce additional uncertainties. Furthermore, state-of-the-art approaches suffer from a lack of consistency when predicting damage at different spatial scales, because different types of exposure data sets are used for model derivation and application. To address these issues, a novel object-based method was developed in this thesis. The method enables a seamless estimation of hydro-meteorological hazard damage across spatial scales, including uncertainty quantification. The application and validation of the method yielded plausible estimates at all spatial scales without overestimating the uncertainty.
The application of the method is made possible mainly by newly available data sets containing individual buildings, which allow flood-affected objects to be identified by overlaying the data sets with water masks. However, identifying affected objects with two different water masks revealed large differences in the number of identified objects. More effort is thus needed for their identification, since the number of affected objects determines the order of magnitude of the economic flood impacts to a large extent.
In general, the method represents the uncertainties associated with the three components of risk, namely hazard, exposure, and vulnerability, in the form of probability distributions. The object-based approach enables a consistent propagation of these uncertainties in space. Aside from the propagation of damage estimates and their uncertainties across spatial scales, a propagation between models estimating direct and indirect economic impacts was demonstrated. This allows the uncertainties associated with the direct economic impacts to be included in the estimation of the indirect economic impacts. Consequently, the modeling procedure facilitates the representation of the reliability of the estimated total economic impacts. Representing the reliability of the estimates prevents reasoning based on the false certainty that might be attributed to point estimates. The developed approach therefore facilitates meaningful flood risk management and adaptation planning.
The successful post-event application and the representation of the uncertainties also qualify the method for use in future risk assessments. The developed method thus makes it possible to represent the assumptions made in future risk assessments, which is crucial information for future risk management. This is an important step forward, since the representation of the reliability associated with all components of risk is currently lacking in all state-of-the-art methods for assessing future risk.
In conclusion, the use of object-based methods that deliver results in the form of distributions instead of point estimates is recommended. Improving model performance by means of multi-variable models and additional data points is possible, but the gains are small. Uncertainties associated with all components of damage estimation should be included and represented in the results. Furthermore, the findings of the thesis suggest that, at larger scales, the influence of the uncertainty associated with vulnerability is smaller than that associated with hazard and exposure. This leads to the conclusion that, to increase the reliability of flood damage estimations and risk assessments, the improvement and active inclusion of hazard and exposure, including their uncertainties, are needed in addition to improvements of the models describing the vulnerability of the objects.
This study investigates the variabilities of the semidiurnal solar and lunar tides of the equatorial electrojet (EEJ) during the 2003, 2006, 2009, and 2013 major sudden stratospheric warming (SSW) events. For this purpose, ground-magnetometer recordings at the equatorial observatories in Huancayo and Fúquene are utilized. Results show a major enhancement in the amplitude of the EEJ semidiurnal lunar tide in each of the four warming events. The EEJ semidiurnal solar tidal amplitude shows an amplification prior to the onset of the warmings, a reduction during the deceleration of the zonal mean zonal wind at 60° N and 10 hPa, and a second enhancement a few days after the peak reversal of the zonal mean zonal wind during all four SSWs. Results also reveal that the amplitude of the EEJ semidiurnal lunar tide becomes comparable to or even greater than the amplitude of the EEJ semidiurnal solar tide during all these warming events. The present study also compares the EEJ semidiurnal solar and lunar tidal changes with the variability of the migrating semidiurnal solar (SW2) and lunar (M2) tides in neutral temperature and zonal wind obtained from numerical simulations at E-region heights. The enhancements of the EEJ semidiurnal lunar tide agree better with those of the M2 tide than the enhancements of the EEJ semidiurnal solar tide agree with those of the SW2 tide, in both neutral temperature and zonal wind at E-region altitudes.
Understanding wave environments is critical for understanding how particles are accelerated and lost in space. This study shows that chorus wave power is significantly increased in the vicinity of Europa and Ganymede, which have induced and internal magnetic fields, respectively. The observed enhancements are persistent and exceed median values of wave activity by up to 6 orders of magnitude for Ganymede. The produced waves may have a pronounced effect on the acceleration and loss of particles in the Jovian magnetosphere and other astrophysical objects. The generated waves are capable of significantly modifying the energetic particle environment, accelerating particles to very high energies, or producing depletions in phase space density. Observations of Jupiter's magnetosphere provide a unique opportunity to observe how objects with an internal magnetic field can interact with particles trapped in the magnetic fields of larger-scale objects.
Admixture is the hybridization between populations within one species. It can increase plant fitness and population viability by alleviating inbreeding depression and increasing genetic diversity. However, populations are often adapted to their local environments, and admixture with distant populations could break down local adaptation by diluting the locally adapted genomes. Thus, admixed genotypes might be selected against and be outcompeted by locally adapted genotypes in the local environments. To investigate the costs and benefits of admixture, we compared the performance of admixed and within-population F1 and F2 generations of the European plant Lythrum salicaria in a reciprocal transplant experiment at three European field sites over a 2-year period. Despite strong differences between sites and plant populations for most of the measured traits, including herbivory, we found limited evidence for local adaptation. The effects of admixture depended on experimental site and plant population, and were positive for some traits. Plant growth and fruit production of some populations increased in admixed offspring, and this effect was strongest at larger parental distances. These effects were only detected at two of our three sites. Our results show that, in the absence of local adaptation, admixture may boost plant performance, and that this is particularly apparent in stressful environments. We suggest that admixture between foreign and local genotypes can potentially be considered in nature conservation to restore populations and/or increase population viability, especially in small inbred or maladapted populations.
Background: Flooding during seasonal monsoons affects millions of hectares of rice-cultivated areas across Asia. Submerged rice plants die within a week due to lack of oxygen and light and due to excessive elongation growth to escape the water. Submergence tolerance was first reported in an aus-type rice landrace, FR13A, and the ethylene-responsive transcription factor (TF) gene SUB1A-1 was identified as the major tolerance gene. Intolerant rice varieties generally lack the SUB1A gene, but some intermediately tolerant varieties, such as IR64, carry the allelic variant SUB1A-2. Differential effects of the two alleles have so far not been addressed. As a first step, we have therefore quantified and compared the expression of nearly 2500 rice TF genes between IR64 and its derived tolerant near-isogenic line IR64-Sub1, which carries the SUB1A-1 allele. Gene expression was studied in internodes, where the main difference in expression between the two alleles was previously shown.
Results: Nineteen and twenty-six TF genes were identified that responded to submergence in IR64 and IR64-Sub1, respectively. Only one gene was found to be submergence-responsive in both, suggesting different regulatory pathways under submergence in the two genotypes. These differentially expressed genes (DEGs) mainly included MYB, NAC, TIFY and Zn-finger TFs, and most genes were downregulated upon submergence. In IR64, but not in IR64-Sub1, SUB1B and SUB1C, which are also present in the Sub1 locus, were identified as submergence-responsive. Four TFs were not submergence-responsive but exhibited constitutive, genotype-specific differential expression. Most of the identified submergence-responsive DEGs are associated with regulatory hormonal pathways, i.e. gibberellins (GA), abscisic acid (ABA), and jasmonic acid (JA), apart from ethylene. An in-silico promoter analysis of the two genotypes revealed the presence of allele-specific single nucleotide polymorphisms, giving rise to ABRE, DRE/CRT, CARE and Site II cis-elements, which can partly explain the observed differential TF gene expression.
Conclusion: This study identified new gene targets with the potential to further enhance submergence tolerance in rice and provides insights into novel aspects of SUB1A-mediated tolerance.
As national efforts to reduce CO2 emissions intensify, policy-makers need increasingly specific, subnational information about the sources of CO2 and the potential reductions and economic implications of different possible policies. This is particularly true in China, a large and economically diverse country that has rapidly industrialized and urbanized and that has pledged under the Paris Agreement that its emissions will peak by 2030. We present new, city-level estimates of CO2 emissions for 182 Chinese cities, decomposed into 17 different fossil fuels, 46 socioeconomic sectors, and 7 industrial processes. We find that more affluent cities have systematically lower emissions per unit of gross domestic product (GDP), supported by imports from less affluent, industrial cities located nearby. In turn, clusters of industrial cities are supported by nearby centers of coal or oil extraction. Whereas policies directly targeting manufacturing and electric power infrastructure would drastically undermine the GDP of industrial cities, consumption-based policies might allow emission reductions to be subsidized by those with greater ability to pay. In particular, sector-based analysis of each city suggests that technological improvements could be a practical and effective means of reducing emissions while maintaining growth and the current economic structure and energy system. We explore city-level emission reductions under three scenarios of technological progress to show that substantial reductions (up to 31%) are possible by updating a disproportionately small fraction of existing infrastructure.
We provide a detailed stochastic description of the swimming motion of an E. coli bacterium in two dimensions, where we resolve tumble events in time. For this purpose, we set up two Langevin equations for the orientation angle and the speed dynamics. Calculating moments, distribution, and autocorrelation functions from both Langevin equations and matching them to the same quantities determined from data recorded in experiments, we infer the swimming parameters of E. coli: the tumble rate λ, the tumble time r⁻¹, the swimming speed v₀, the strength of speed fluctuations σ, the relative height of speed jumps η, the thermal value of the rotational diffusion coefficient D₀, and the enhanced rotational diffusivity during tumbling, D_T. Conditioning the observables on the swimming direction relative to the gradient of a chemoattractant, we infer the chemotaxis strategies of E. coli. We confirm the classical strategy of a lower tumble rate for swimming up the gradient, but also find a smaller mean tumble angle (angle bias). The latter is realized by shorter tumbles as well as a slower diffusive reorientation. We also find that speed fluctuations are increased by about 30% when swimming up the gradient compared to the reversed direction.
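The run-and-tumble picture behind these Langevin equations can be sketched with a minimal Euler–Maruyama integration: the orientation angle diffuses with coefficient D₀ during runs and with the enhanced value D_T during tumbles, which are initiated at rate λ and last an exponentially distributed time. All parameter values below are illustrative assumptions (not the inferred ones from the study), and the speed dynamics is reduced to a crude on/off switch during tumbles.

```python
import numpy as np

# illustrative parameter values, not the inferred ones from the study
lam, tumble_time = 1.0, 0.1        # tumble rate [1/s], mean tumble duration [s]
v0, D0, DT = 20.0, 0.05, 5.0       # speed [um/s], rot. diffusivities [rad^2/s]
dt, steps = 1e-3, 50_000

rng = np.random.default_rng(1)
theta, pos = 0.0, np.zeros(2)
tumbling, t_left = False, 0.0
traj = np.empty((steps, 2))

for i in range(steps):
    if tumbling:
        t_left -= dt
        if t_left <= 0:
            tumbling = False
    elif rng.random() < lam * dt:          # Poisson tumble initiation
        tumbling = True
        t_left = rng.exponential(tumble_time)
    D = DT if tumbling else D0             # enhanced diffusion while tumbling
    theta += np.sqrt(2 * D * dt) * rng.standard_normal()
    speed = 0.0 if tumbling else v0        # crude on/off speed during tumbles
    pos += speed * dt * np.array([np.cos(theta), np.sin(theta)])
    traj[i] = pos
```

The full model additionally resolves the speed fluctuations (σ, η) and would condition such trajectories on a chemoattractant gradient to extract the strategies described above.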
The present article offers a mixed-method perspective on the investigation of determinants of effectiveness in quality assurance at higher education institutions. We collected survey data from German higher education institutions to analyse the degree to which quality managers perceive their approaches to quality assurance as effective. Based on these data, we develop an ordinary least squares regression model which explains perceived effectiveness through structural variables and certain quality assurance-related activities of quality managers. The results show that support by higher education institutions’ higher management and cooperation with other education institutions are relevant preconditions for larger perceived degrees of quality assurance effectiveness. Moreover, quality managers’ role as promoters of quality assurance exhibits significant correlations with perceived effectiveness. In contrast, sanctions and the perception of quality assurance as another administrative burden reveal negative correlations.
Spotlight on islands
(2018)
Groups of proximate continental islands may conceal more tangled phylogeographic patterns than oceanic archipelagos as a consequence of repeated sea level changes, which allow populations to experience gene flow during periods of low sea level stands and isolation by vicariant mechanisms during periods of high sea level stands. Here, we describe for the first time an ancient and diverging lineage of the Italian wall lizard Podarcis siculus from the western Pontine Islands. We used nuclear and mitochondrial DNA sequences of 156 individuals with the aim of unraveling their phylogenetic position, while microsatellite loci were used to test several a priori insular biogeographic models of migration with empirical data. Our results suggest that the western Pontine populations colonized the islands early during their Pliocene volcanic formation, while populations from the eastern Pontine Islands seem to have been introduced recently. The inter-island genetic makeup indicates an important role of historical migration, probably due to glacial land bridges connecting islands followed by a recent vicariant mechanism of isolation. Moreover, the most supported migration model predicted higher gene flow among islands which are geographically arranged in parallel. Considering the threatened status of small insular endemic populations, we suggest this new evolutionarily independent unit be given priority in conservation efforts.
Stuck in the past?
(2018)
After the Civil War, the Spanish army functioned as a guardian of domestic order but suffered from antiquated material and limited financial means. These factors have been described as the fundamental reasons for the army’s low potential wartime capability. This article draws on British and German sources to demonstrate how Spanish military culture prevented increased effectiveness and organisational change. Claiming that the army merely lacked funding and modern equipment falls considerably short of grasping the complexities of military effectiveness and organisational cultures, and might prove fatal for current attempts to develop foreign armed forces in conflict or post-conflict zones.