Municipal children's and youth councils are one way of involving young people in their municipality. This master's thesis examines the municipal children's and youth councils in the state of Brandenburg, about which little is known in the academic literature. The thesis gives an overview of the existing councils and shows which children and young people participate and how far the councils have developed. Building on Hart's ladder of participation, two case studies in Senftenberg and Oranienburg further pursue the question of whether these children's and youth councils constitute genuine participation.
Language and Arithmetic
(2018)
We examined cross-domain semantic priming effects between arithmetic and language. We paired subtractions with their linguistic equivalent, exception phrases (EPs) with positive quantifiers (e.g., “everybody except John”), while pairing additions with their linguistic equivalent, EPs with negative quantifiers (e.g., “nobody except John”; Moltmann, 1995). We hypothesized that EPs with positive quantifiers prime subtractions and inhibit additions, while EPs with negative quantifiers prime additions and inhibit subtractions. Furthermore, we expected similar priming and inhibition effects from arithmetic into semantics. Our design allowed for a bidirectional analysis by using one trial's target as the prime for the next trial. Two experiments failed to show significant priming effects in either direction. Implications and possible shortcomings are explored in the general discussion.
Thomas Morus: Utopia
(2018)
Thomas More's Utopia reflects intensively on the conditions prevailing in an ideal state. This Neo-Latin work is well suited for Latin classes: reading it shows pupils, on the one hand, that the Latin language lived on after the fall of the Roman Empire, and, on the other, it prompts them to reflect on exemplary social orders and sensitises them to the aspects that must be considered in doing so. In this way they develop an awareness of the foundations of harmonious coexistence. This reader offers extensive, didactically prepared material that gives Latin pupils genuine reading pleasure and that teachers can use in class without great effort. The publication thus closes a gap that has so far existed for the Utopia and raises the hope that the work will gain a permanent place in Latin teaching.
The formation and breaching of natural dammed lakes have shaped landscapes, especially in seismically active high-mountain regions. Dammed lakes are both a potential water resource and a hazard in case of dam breaching. Central Asia has mostly arid and semi-arid climates. In some semi-arid regions of the world, rock glaciers already store more water than ice glaciers, but their distribution and advance mechanisms are still debated in current research. Their impact on water availability in Central Asia will likely increase as temperatures rise and glaciers diminish.
This thesis provides insight into the relative age distribution of selected Kyrgyz and Kazakh rock glaciers and their individual lobes, derived from lichenometric dating. The sizes of roughly 8000 lichen specimens were used to approximate the exposure age of the underlying debris surface. We showed that rock-glacier movement differs significantly on small scales. This has several implications for climatic inferences from rock glaciers. First, reactivation of their lobes does not necessarily point to climatic changes, or at least to out-of-equilibrium conditions. Second, the elevations of rock-glacier toes can no longer be considered general indicators of the limit of sporadic mountain permafrost, as they have traditionally been used.
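The size-to-age mapping at the heart of lichenometric dating can be sketched as follows; the growth-curve shape (an initial "great growth" phase followed by linear growth) and all numbers are hypothetical placeholders, not values calibrated in the thesis.

```python
def lichen_age(diameter_mm, great_growth_mm=14.0, great_growth_years=20.0,
               linear_rate_mm_per_yr=0.35):
    """Estimate surface exposure age (years) from a lichen thallus diameter.

    Assumes rapid initial 'great growth' followed by linear growth; in
    practice these parameters must be calibrated on surfaces of known age
    for a given region and lichen species.
    """
    if diameter_mm <= great_growth_mm:
        return great_growth_years * diameter_mm / great_growth_mm
    return great_growth_years + (diameter_mm - great_growth_mm) / linear_rate_mm_per_yr

# The largest lichen on a debris lobe bounds the surface age from below:
diameters = [22.5, 31.0, 18.2]
age = max(lichen_age(d) for d in diameters)
```

Comparing such age estimates between neighbouring lobes is what allows movement differences on small scales to be resolved.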
In the mountainous and seismically active region of Central Asia, natural dams, besides rock glaciers, also play a key role in controlling water and sediment influx into river valleys. Moreover, rock glaciers advancing into valleys appear capable of influencing the stream network, damming rivers, or impounding lakes. This influence has not previously been addressed. We quantitatively explored these controls using a new inventory of 1300 Central Asian rock glaciers. Elevation, potential incoming solar radiation, and the size of rock glaciers and their feeder basins played key roles in predicting dam appearance. Bayesian techniques were used to credibly distinguish between lichen sizes on rock glaciers and their lobes, and to find the parameters of a rock-glacier system that most credibly express the potential to build natural dams.
To place these studies in the region's history of natural dams, a combination of dating former lake levels and outburst-flood modelling addresses the history and possible outburst-flood hypotheses of Issyk Kul in Kyrgyzstan, the second largest mountain lake in the world. Megafloods from breached earthen or glacial dams were found to be a likely explanation for some of the lake's strongly fluctuating water levels. However, our detailed analysis of candidate lake sediments and outburst-flood deposits also showed that more localised dam breaks to the west of Issyk Kul could have left similar geomorphic and sedimentary evidence in this Central Asian mountain landscape. We thus caution against readily invoking megafloods as the main cause of lake-level drops of Issyk Kul. In summary, this thesis addresses some new pathways for studying rock glaciers and natural dams, with several practical implications for studies on mountain permafrost and natural hazards.
Räume, Linien, Punkte
(2018)
School practical studies are increasingly regarded as the "centrepiece" of teacher education; however, little is yet known about their effects on the development of students' competencies. Based on the concept of standards and competencies in school practical studies, the PSI subproject "Kompetenzerwerb in Schulpraktischen Studien – Spiralcurriculum" undertakes a first empirical analysis of the five educational-science and subject-didactic practical courses in the Potsdam model of teacher education. A selected cohort of student teachers is accompanied through all five school practical studies in the bachelor's and master's phases and surveyed on their competency acquisition using an online questionnaire. The study is distinguished by its cross-sectional and longitudinal research design, which the following contribution presents by outlining the survey instrument, a comprehensive, multi-part online questionnaire. Against the background of the theoretical project model, the development of the questionnaire is supported with information on its statistical quality and complemented by perspectives of the research project.
Understanding how humans move their eyes is an important part of understanding the functioning of the visual system. Analyzing eye movements from observations of natural scenes on a computer screen is a step towards understanding human visual behavior in the real world. When analyzing eye-movement data from scene-viewing experiments, the important questions are where (fixation locations), how long (fixation durations) and when (ordering of fixations) participants fixate on an image. By answering these questions, computational models can be developed which predict human scanpaths. Models serve as a tool to understand the underlying cognitive processes while observing an image, especially the allocation of visual attention.
The goal of this thesis is to provide new contributions to characterize and model human scanpaths on natural scenes. The results from this thesis will help to understand and describe certain systematic eye-movement tendencies, which are mostly independent of the image. One eye-movement tendency I focus on throughout this thesis is the tendency to fixate more in the center of an image than on the outer parts, called the central fixation bias. Another tendency, which I will investigate thoroughly, is the characteristic distribution of angles between successive eye movements.
The results serve to evaluate and improve a previously published model of scanpath generation from our laboratory, the SceneWalk model. Overall, six experiments were conducted for this thesis which led to the following five core results:
i) A spatial inhibition of return can be found in scene-viewing data. This means that locations which have already been fixated are afterwards avoided for a certain time interval (Chapter 2).
ii) The initial fixation position when observing an image has a long-lasting influence of up to five seconds on further scanpath progression (Chapter 2 & 3).
iii) The often described central fixation bias on images depends strongly on the duration of the initial fixation. Long-lasting initial fixations lead to a weaker central fixation bias than short fixations (Chapter 2 & 3).
iv) Human observers adjust their basic eye-movement parameters, like fixation durations and saccade amplitudes, to the visual properties of a target they look for in visual search (Chapter 4).
v) The angle between two adjacent saccades is an indicator for the selectivity of the upcoming saccade target (Chapter 4).
All results emphasize the importance of systematic behavioral eye-movement tendencies and dynamic aspects of human scanpaths in scene viewing.
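The central fixation bias discussed above can be quantified, for instance, as the mean distance of fixation locations from the image centre; the following is an illustrative measure, not the analysis used in the thesis.

```python
import math

def central_bias(fixations, width, height):
    """Mean fixation distance from the image centre, normalised by the
    half-diagonal, so 0 means all fixations at the centre and 1 means all
    at the corners. Smaller values indicate a stronger central bias.
    """
    cx, cy = width / 2, height / 2
    half_diag = math.hypot(cx, cy)
    dists = [math.hypot(x - cx, y - cy) for x, y in fixations]
    return sum(dists) / (len(dists) * half_diag)

# Fixations clustered near the centre score lower than spread-out ones:
central = [(512, 384), (500, 400), (520, 380)]
spread = [(100, 100), (900, 700), (512, 384)]
```

Comparing this score between early and late fixations, or between short and long initial fixations, is one way to expose the time course described in results ii) and iii).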
In the first years of life, children differ greatly from adults in the temporal organization of their speech gestures in fluent language production. However, dissent remains as to the maturational direction of such organization. The present study sheds new light on this process by tracking the development of anticipatory vowel-to-vowel coarticulation in a cross-sectional investigation of 62 German children (from 3.5 to 7 years of age) and 13 adults. It focuses on gestures of the tongue, a complex organ whose spatiotemporal control is indispensable for speech production. The goal of the study was threefold: 1) investigate whether children as well as adults initiate the articulation for a target vowel in advance of its acoustic onset, 2) test if the identity of the intervocalic consonant matters and finally, 3) describe age-related developments of these lingual coarticulatory patterns. To achieve this goal, ultrasound tongue imaging was used to record lingual movements and quantify changes in coarticulation degree as a function of consonantal context and age. Results from linear mixed effects models indicate that like adults, children initiate vowels' lingual gestures well ahead of their acoustic onset. Second, while the identity of the intervocalic consonant affects the degree of vocalic anticipation in adults, it does not in children at any age. Finally, the degree of vowel-to-vowel coarticulation is significantly higher in all cohorts of children than in adults. However, among children, a developmental decrease of vocalic coarticulation is only found for sequences including the alveolar stop /d/ which requires finer spatiotemporal coordination of the tongue's subparts compared to labial and velar stops.
Altogether, results suggest greater gestural overlap in child than in adult speech and support the view of a non-uniform and protracted maturation of lingual coarticulation calling for thorough considerations of the articulatory intricacies from which subtle developmental differences may originate.
The modern British intelligence architecture emerged during the first half of the twentieth century. At the same time, British society underwent an unprecedented democratisation. This study seeks to show how even supposedly arcane areas of state action are embedded in public processes of negotiation, and therefore, for the first time, systematically reconstructs public and expert discourses on Britain's intelligence services in the age of the world wars.
A computer science degree deliberately teaches not only theoretical foundations and programming skills but also how modern software is developed in practice. A form of project work is often chosen to give students experience that is as realistic as possible. Students develop, individually or in small teams, software products for selected problems. Besides the subject matter, group dynamics also bring interdisciplinary competencies into focus. This contribution presents an interview study with lecturers of software project courses at RWTH Aachen and concentrates on the design of these courses and the promotion of interdisciplinary competencies according to a competency profile for software engineers.
Today, more than half of the world’s population lives in urban areas. With a high density of population and assets, urban areas are not only the economic, cultural and social hubs of every society, they are also highly susceptible to natural disasters. As a consequence of rising sea levels and an expected increase in extreme weather events caused by a changing climate in combination with growing cities, flooding is an increasing threat to many urban agglomerations around the globe.
To mitigate the destructive consequences of flooding, appropriate risk management and adaptation strategies are required. So far, flood risk management in urban areas is almost exclusively focused on managing river and coastal flooding. Often overlooked is the risk from small-scale rainfall-triggered flooding, where the rainfall intensity of rainstorms exceeds the capacity of urban drainage systems, leading to immediate flooding. Referred to as pluvial flooding, this flood type exclusive to urban areas has caused severe losses in cities around the world. Without further intervention, losses from pluvial flooding are expected to increase in many urban areas due to an increase of impervious surfaces compounded with an aging drainage infrastructure and a projected increase in heavy precipitation events. While this requires the integration of pluvial flood risk into risk management plans, so far little is known about the adverse consequences of pluvial flooding due to a lack of both detailed data sets and studies on pluvial flood impacts. As a consequence, methods for reliably estimating pluvial flood losses, needed for pluvial flood risk assessment, are still missing.
Therefore, this thesis investigates how pluvial flood losses to private households can be reliably estimated, based on an improved understanding of the drivers of pluvial flood loss. For this purpose, detailed data from pluvial flood-affected households was collected through structured telephone and web surveys following pluvial flood events in Germany and the Netherlands.
Pluvial flood losses to households are the result of complex interactions between impact characteristics such as the water depth and a household's resistance as determined by its risk awareness, preparedness, emergency response, building properties and other influencing factors. Both exploratory analysis and machine-learning approaches were used to analyze differences in resistance and impacts between households and their effects on the resulting losses. The comparison of case studies showed that the awareness of pluvial flooding among private households is quite low. Low awareness not only challenges the effective dissemination of early warnings, but was also found to influence the implementation of private precautionary measures. The latter were predominantly implemented by households with previous experience of pluvial flooding. Even cases where previous flood events affected a different part of the same city did not lead to an increase in preparedness of the surveyed households, highlighting the need to account for small-scale variability in both impact and resistance parameters when assessing pluvial flood risk.
While it was concluded that the combination of low awareness, ineffective early warning and the fact that only a minority of buildings were adapted to pluvial flooding impaired the coping capacities of private households, the often low water levels still enabled households to mitigate or even prevent losses through a timely and effective emergency response.
These findings were confirmed by the detection of loss-influencing variables, showing that cases in which households were able to prevent any loss to the building structure are predominantly explained by resistance variables such as the household's risk awareness, while the degree of loss is mainly explained by impact variables.
Based on the important loss-influencing variables detected, different flood loss models were developed. As for river flood loss models, the empirical data from the preceding data collection was used to train flood loss models describing the relationship between impact and resistance parameters and the resulting loss to building structures. Different approaches were adapted from river flood loss models, using both models with the water depth as the only predictor of building structure loss and models incorporating additional variables from the preceding variable detection routine.
The high predictive errors of all compared models showed that point predictions are not suitable for estimating losses on the building level, as they severely impair the reliability of the estimates. For that reason, a new probabilistic framework based on Bayesian inference was introduced that is able to provide predictive distributions instead of single loss estimates. These distributions not only give a range of probable losses, they also provide information on how likely a specific loss value is, representing the uncertainty in the loss estimate.
Using probabilistic loss models, it was found that the certainty and reliability of a loss estimate on the building level is not only determined by the use of additional predictors as shown in previous studies, but also by the choice of response distribution defining the shape of the predictive distribution. Here, a mixture of a beta and a Bernoulli distribution, accounting for households that are able to prevent losses to their building's structure, was found to provide significantly more certain and reliable estimates than previous approaches using Gaussian or non-parametric response distributions.
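The beta-Bernoulli mixture response can be sketched as a zero-inflated beta distribution over the relative building loss in [0, 1]; the mixture weight and beta parameters below are arbitrary placeholders, not fitted values from the thesis.

```python
import random

def sample_loss_ratio(p_zero, alpha, beta_, rng=random):
    """Draw one relative loss from a Bernoulli-beta mixture: with
    probability p_zero the household prevents structural loss (ratio 0),
    otherwise the loss ratio follows a beta(alpha, beta_) distribution.
    """
    if rng.random() < p_zero:
        return 0.0
    return rng.betavariate(alpha, beta_)

# A predictive distribution is then a set of samples rather than a point:
rng = random.Random(42)
samples = [sample_loss_ratio(0.4, 2.0, 8.0, rng) for _ in range(10_000)]
share_zero = sum(s == 0.0 for s in samples) / len(samples)
```

The sample set carries the full uncertainty of the estimate: quantiles of `samples` give a probable loss range, and `share_zero` the estimated probability of preventing any structural loss.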
The successful model transfer and post-event application to estimate building structure loss in Houston, TX, caused by pluvial flooding during Hurricane Harvey confirmed previous findings, and demonstrated the potential of the newly developed multi-variable beta model for future risk assessments. The highly detailed input data set constructed from openly available data sources containing over 304,000 affected buildings in Harris County further showed the potential of data-driven, building-level loss models for pluvial flood risk assessment.
In conclusion, pluvial flood losses to private households are the result of complex interactions between impact and resistance variables, which should be represented in loss models. The local occurrence of pluvial floods requires loss estimates at high spatial resolutions, i.e. at the building level, where losses are variable and uncertainties are high.
Therefore, probabilistic loss estimates describing the uncertainty of the estimate should be used instead of point predictions. While the performance of probabilistic models on the building level is mainly driven by the choice of response distribution, multi-variable models are recommended for two reasons:
First, additional resistance variables improve the detection of cases in which households were able to prevent structural losses.
Second, the added variability of additional predictors provides a better representation of the uncertainties when loss estimates from multiple buildings are aggregated.
This leads to the conclusion that data-driven probabilistic loss models on the building level allow for a reliable loss estimation at an unprecedented level of detail, with a consistent quantification of uncertainties on all aggregation levels. This makes the presented approach suitable for a wide range of applications, from decision support in spatial planning to impact-based early warning systems.
The Sun is the nearest star to the Earth. It consists of an interior and an atmosphere. The convection zone is the outermost layer of the solar interior. A flux rope may emerge as a coherent structure from the convection zone into the solar atmosphere or be formed by magnetic reconnection in the atmosphere. A flux rope is a bundle of magnetic field lines twisting around an axis field line, creating a helical shape by which dense filament material can be supported against gravity. The flux rope is also considered the key structure of the most energetic phenomena in the solar system, such as coronal mass ejections (CMEs) and flares. These magnetic flux ropes can produce severe geomagnetic storms. In particular, to improve the ability to forecast space weather, it is important to enrich our knowledge about the dynamic formation of flux ropes and the underlying physical mechanisms that initiate their eruption, such as a CME.
A confined eruption consists of a filament eruption and usually an associated flare, but does not evolve into a CME; rather, the moving plasma is halted in the solar corona and is usually seen to fall back. The first detailed observations of a confined filament eruption were obtained on 2002 May 27 by the TRACE satellite in the 195 Å band. In Chapter 3, we therefore focus on a flux rope instability model. A twisted flux rope can become unstable by entering the kink instability regime. We show that the kink instability, which occurs if the twist of a flux rope exceeds a critical value, is capable of initiating an eruption. This model is tested against the well-observed confined eruption of 2002 May 27 in a parametric magnetohydrodynamic (MHD) simulation study that comprises all phases of the event. Very good agreement with the essential observed properties is obtained, except for a relatively poor match of the initial filament height.
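The kink criterion, twist exceeding a critical value, can be illustrated for a uniformly twisted cylindrical rope; the numbers and the threshold of 3.5π (a value reported for some line-tied configurations, used here purely as a placeholder) are illustrative and not the parameters of the simulation study.

```python
import math

def end_to_end_twist(length, r, b_phi, b_z):
    """End-to-end twist angle Phi = L * B_phi / (r * B_z) of a uniformly
    twisted cylindrical flux rope of length L, evaluated at radius r.
    Dimensionless as long as length/r and B_phi/B_z use consistent units.
    """
    return length * b_phi / (r * b_z)

def kink_unstable(twist, critical_twist=3.5 * math.pi):
    """True if the twist exceeds the (configuration-dependent) threshold."""
    return twist > critical_twist

# Placeholder rope: 100 Mm long, field ratio B_phi/B_z = 2.5/4.0 at r = 5 Mm
phi = end_to_end_twist(length=100e6, r=5e6, b_phi=2.5, b_z=4.0)
```

The actual critical twist depends on the rope's aspect ratio, line tying and surrounding field, which is why the simulation study varies these parameters rather than applying a single number.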
Therefore, in Chapter 4, we submerge the center point of the flux rope deeper below the photosphere to obtain a flatter coronal rope section and a better match with the initial height profile of the erupting filament. This implies a more realistic inclusion of the photospheric line tying. All basic assumptions and the other parameter settings are kept the same as in Chapter 3. This complement of the parametric study shows that the flux rope instability model can yield an even better match with the observational data. In Chapters 3 and 4 we also focus on the magnetic reconnection during the confined eruption, demonstrating that it occurs in two distinct locations and phases that correspond to the observed brightenings and changes of topology, and consider the fate of the erupting flux, which can reform a (less twisted) flux rope.
The Sun also produces series of homologous eruptions, i.e. eruptions which occur repetitively in the same active region and are of similar morphology. Therefore, in Chapter 5, we employ the reformed flux rope as a new initial condition to investigate the possibility of subsequent homologous eruptions. Free magnetic energy is built up by imposing motions at the bottom boundary, such as converging motions, which lead to flux cancellation. We apply converging motions in the sunspot area, so that small parts of the flux from sunspots of opposite polarity are transported toward the polarity inversion line (PIL), where they cancel. The reconnection associated with the cancellation process forms more helical magnetic flux around the reformed flux rope, which leads to a second and a third eruption. In this study, we obtain the first MHD simulation results of a homologous sequence of eruptions that shows a transition from a confined to two ejective eruptions, based on the reformation of a flux rope after each eruption.
Degeneration of the intervertebral disc, triggered by ageing, mechanical stress, traumatic injury, infection, inflammation and other factors, has a significant role in the development of low back pain. Back pain not only has a high prevalence, but also a major socio-economic impact. With the ageing population, its occurrence and costs are expected to grow even more in the future. Disc degeneration is characterized by matrix breakdown, loss of proteoglycans and thus water content, disc height loss and an increase in inflammatory molecules. The accumulation of cytokines such as interleukin (IL)-1β, IL-8 or tumor necrosis factor (TNF)-α, together with age-related immune deficiency, leads to so-called inflammaging: low-grade, chronic inflammation with a crucial role in pain development. Despite the relevance of these molecular processes, current therapies target symptoms, but not underlying causes. This review describes the biological and biomechanical changes that occur in a degenerated disc, discusses the connection between disc degeneration and inflammaging, highlights factors that enhance the inflammatory processes in disc pathologies and suggests future research avenues.
Intervertebral disc (IVD) cells are naturally exposed to high osmolarity and complex mechanical loading, which drive microenvironmental osmotic changes. Age- and degeneration-induced degradation of the IVD's extracellular matrix causes osmotic imbalance, which, together with an altered function of cellular receptors and signalling pathways, instigates local osmotic stress. Cellular responses to osmotic stress include osmoadaptation and activation of pro-inflammatory pathways. This review summarises the current knowledge on how IVD cells sense local osmotic changes and translate these signals into physiological or pathophysiological responses, with a focus on inflammation. Furthermore, it discusses the expression and function of putative membrane osmosensors (e.g. solute carrier transporters, transient receptor potential channels, aquaporins and acid-sensing ion channels) and osmosignalling mediators [e.g. tonicity response element-binding protein/nuclear factor of activated T-cells 5 (TonEBP/NFAT5), nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB)] in healthy and degenerated IVDs. Finally, an overview of the potential therapeutic targets for modifying osmosensing and osmosignalling in degenerated IVDs is provided.
The New Economic Geography (NEG) provides a modelling framework for discussing the distribution of firms and workers across regions. This contribution examines which spatial distributions of mobile workers and firms result in an NEG model when the size of a region, and hence the land available to it, the distance to be covered for transporting goods within regions, and competition for land between housing, industry and agriculture are taken into account. The question of the resulting welfare effects is also addressed.
This publication-based thesis, which consists of seven published articles, summarizes my contributions to the research field of laser-excited ultrafast structural dynamics. The coherent and incoherent lattice dynamics on microscopic length scales are detected by ultrashort optical and X-ray pulses. Understanding the complex physical processes is essential for future improvements of technological applications. For this purpose, tabletop sources and large-scale facilities, e.g. synchrotrons, are employed to study the structural dynamics of longitudinal acoustic strain waves and heat transport. The investigated effects cover timescales from hundreds of femtoseconds up to several microseconds.
The main part of this thesis is dedicated to the investigation of tailored phonon wave packets propagating in perovskite nanostructures. Tailoring is achieved either by laser excitation of nanostructured bilayer samples or by a temporal series of laser pulses. Due to the propagation of longitudinal acoustic phonons, the out-of-plane lattice spacing of a thin film insulator-metal bilayer sample is modulated on an ultrafast timescale. This leads to an ultrafast modulation of the X-ray diffraction efficiency which is employed as a phonon Bragg switch to shorten hard X-ray pulses emitted from a 3rd generation synchrotron.
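The effect of an acoustic strain pulse on the diffraction efficiency can be pictured via Bragg's law: a relative change of the out-of-plane lattice spacing detunes the Bragg angle. A minimal sketch with placeholder material numbers (not values from the thesis):

```python
import math

def bragg_angle(wavelength, d, n=1):
    """Bragg angle theta (radians) from n * lambda = 2 * d * sin(theta)."""
    return math.asin(n * wavelength / (2 * d))

# A longitudinal strain eta changes the out-of-plane spacing d -> d*(1+eta),
# detuning a crystal that sits on the unstrained Bragg condition:
wavelength = 1.0e-10   # 1 Angstrom hard X-rays (placeholder)
d0 = 3.9e-10           # placeholder perovskite lattice spacing in metres
eta = 1e-3             # 0.1 % photoinduced strain (placeholder)
shift = bragg_angle(wavelength, d0) - bragg_angle(wavelength, d0 * (1 + eta))
```

Because the angular shift moves the crystal off the narrow Bragg peak, the transient strain modulates the diffracted intensity, which is the operating principle behind using the film as a phonon Bragg switch.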
In addition, we have observed nonlinear mixing of high amplitude phonon wave packets which originates from an anharmonic interatomic potential. A chirped optical pulse sequence excites a narrow band phonon wave packet with specific momentum and energy. The second harmonic generation of these phonon wave packets is followed by ultrafast X-ray diffraction. Phonon upconversion takes place because the high amplitude phonon wave packet modulates the acoustic properties of the crystal, which leads to self-steepening and to the successive generation of higher harmonics of the phonon wave packet.
Furthermore, we have demonstrated ultrafast strain in the direction parallel to the sample surface. Two consecutive so-called transient grating excitations, displaced in space and time, are used to coherently control thermal gradients and surface acoustic modes. The amplitudes of the coherent and incoherent surface excursion are disentangled by time-resolved X-ray reflectivity measurements. We calibrate the absolute amplitude of the thermal and acoustic surface excursion with measurements of longitudinal phonon propagation. In addition, we develop a diffraction model which allows the surface excursion to be measured on an absolute length scale with sub-Ångström precision. Finally, I demonstrate full coherent control of an excited surface deformation by amplifying and suppressing thermal and coherent excitations at the surface of a laser-excited yttrium manganite sample.
The rapid development and integration of information technologies over the last decades has influenced all areas of our life, including the business world. Yet not only are modern enterprises becoming digitalised; security and criminal threats are also moving into the digital sphere. To withstand these threats, modern companies must be aware of all activities within their computer networks.
The keystone for such continuous security monitoring is a Security Information and Event Management (SIEM) system that collects and processes all security-related log messages from the entire enterprise network. However, digital transformations and technologies, such as network virtualisation and the widespread usage of mobile communications, lead to a constantly increasing number of monitored devices and systems. As a result, the amount of data that has to be processed by a SIEM system is increasing rapidly. Besides that, in-depth security analysis of the captured data requires the application of rather sophisticated outlier detection algorithms that have a high computational complexity. Existing outlier detection methods often suffer from performance issues and are not directly applicable to high-speed and high-volume analysis of heterogeneous security-related events, which is a major challenge for modern SIEM systems.
This thesis provides several solutions to these challenges. First, it proposes a new SIEM system architecture for high-speed processing of security events, implementing parallel, in-memory and in-database processing principles. The proposed architecture also utilises the most efficient log format for high-speed data normalisation. Next, the thesis offers several novel high-speed outlier detection methods, including a generic Hybrid Outlier Detection that can be used efficiently for Big Data analysis. Finally, a dedicated User Behaviour Outlier Detection is proposed for better threat detection and analysis of particular user behaviour cases.
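As an illustration of the kind of statistical outlier detection such a SIEM pipeline applies, the following sketch flags anomalous event counts with a simple z-score rule; the function name, threshold, and criterion are illustrative assumptions, not the thesis's actual Hybrid Outlier Detection algorithm.

```python
from statistics import mean, stdev

def flag_outliers(event_counts, threshold=2.0):
    """Return indices of counts lying more than `threshold` sample
    standard deviations from the mean (a plain z-score rule).
    Illustrative stand-in only: the thesis's Hybrid Outlier Detection
    combines several detectors for streaming Big Security Data."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(event_counts)
            if abs(x - mu) / sigma > threshold]

# A burst of 250 events against a baseline of ~10 stands out
logins_per_minute = [10, 12, 11, 9, 10, 250, 11, 10]
print(flag_outliers(logins_per_minute))  # -> [5]
```

In a real SIEM, such a rule would run incrementally over a sliding window of normalised events rather than over a static list.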
The proposed architecture and methods were evaluated in terms of both performance and accuracy and compared with a classical architecture and existing algorithms. These evaluations were performed on multiple data sets, including simulated data, a well-known public intrusion detection data set, and real data from a large multinational enterprise. The evaluation results demonstrate the high performance and efficacy of the developed methods.
All concepts proposed in this thesis were integrated into the prototype of the SIEM system, capable of high-speed analysis of Big Security Data, which makes this integrated SIEM platform highly relevant for modern enterprise security applications.
The importance of plasmonic heating for the plasmondriven photodimerization of 4-nitrothiophenol
(2018)
Metal nanoparticles form potent nanoreactors, driven by the optical generation of energetic electrons and nanoscale heat. The relative influence of these two factors on nanoscale chemistry is strongly debated. This article discusses the temperature dependence of the dimerization of 4-nitrothiophenol (4-NTP) into 4,4′-dimercaptoazobenzene (DMAB) adsorbed on gold nanoflowers, studied by Surface-Enhanced Raman Scattering (SERS). Raman thermometry shows significant optical heating of the particles. The ratio of the Stokes and anti-Stokes Raman signals moreover demonstrates that the molecular temperature during the reaction rises beyond the average crystal lattice temperature of the plasmonic particles. The product bands have an even higher temperature than the reactant bands, which suggests that the reaction proceeds preferentially at thermal hot spots. In addition, kinetic measurements of the reaction during external heating of the reaction environment reveal a considerable rise of the reaction rate with temperature. Despite these significant heating effects, a comparison of SERS spectra recorded after heating the sample with an external heater to spectra recorded after prolonged illumination shows that the reaction is strictly photo-driven. While the temperature increase is comparable in both cases, the dimerization occurs only in the presence of light. Intensity-dependent measurements at fixed temperatures confirm this finding.
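The Raman thermometry mentioned above rests on the Boltzmann relation between anti-Stokes and Stokes intensities; the sketch below inverts it to estimate a temperature. The example ratio and the omission of the frequency-dependent scattering prefactor are simplifying assumptions for illustration, not values from the study.

```python
import math

H = 6.62607015e-34    # Planck constant, J s
C_CM = 2.99792458e10  # speed of light, cm/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def raman_temperature(anti_stokes_to_stokes, wavenumber_cm):
    """Invert I_aS / I_S = exp(-h c nu / (kB T)) for T, given the
    intensity ratio and the band position in cm^-1. The omega^4
    scattering prefactor is neglected for simplicity."""
    vib_energy = H * C_CM * wavenumber_cm  # vibrational quantum, J
    return -vib_energy / (KB * math.log(anti_stokes_to_stokes))

# Hypothetical ratio for a band near 1340 cm^-1 (NO2 stretch region)
t_kelvin = raman_temperature(0.01, 1340.0)  # roughly 420 K
```

A larger anti-Stokes/Stokes ratio implies a hotter vibrational population, which is how a rising molecular temperature shows up in the SERS data.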
The article examines the work of Rabbi Yitzhak Isaac Halevy, arguably the most significant Orthodox response to the Wissenschaft des Judentums school of historiography. Halevy himself exemplified the Orthodox struggle against Wissenschaft, yet his work expressed a commitment to modern historiographical discipline that suggested an internalization of some of the very same premises adopted by Wissenschaft. While criticizing the representatives of Wissenschaft, Halevy was, at the same time, fighting for the internalization of its innovative characteristics into Orthodox society. He saw himself as a leader of a movement working towards the development of Orthodox Jewish studies and his application of modern historiographic principles from an Orthodox worldview as creating critical Orthodox historiography. Halevy’s approach promotes an understanding of Orthodoxy as a complex phenomenon, of which the struggle against modern secularization is just one of many characteristics.
The central question of this thesis is: Does the debt brake secure fiscal sustainability in Germany? To answer it, the study first examines the anticipatory effects that the introduction of the debt brake had on the German federal states in the period 2010-16. The observed consolidation performance and the consolidation incentive or pressure existing in 2009 were evaluated for each state using a scorecard developed specifically for this purpose. Multiple regression analysis was then used to analyse how the scorecard factors influence the states' consolidation performance. It was found that almost 90% of the variation is explained by the independent variables budgetary position, debt burden, revenue growth and pension burden, and that the debt brake likely played a rather subordinate role in the 2009-2016 consolidation episode. Subsequently, data gathered in 65 expert interviews were used to analyse the limits to the new fiscal rule's effectiveness and the risks that could complicate or prevent compliance with the debt brake in the future: municipal debt, off-budget entities (FEUs), contingent liabilities in the form of guarantees for financial institutions, and pension obligations. The frequently voiced criticisms that the debt brake acts as a brake on the economy and on investment are also examined and rejected. Finally, potential future developments regarding the debt brake and public administration in Germany, as well as the states' consolidation efforts, are discussed.
We report two corpus analyses to examine the impact of animacy, definiteness, givenness and type of referring expression on the ordering of double objects in the spontaneous speech of German-speaking two- to four-year-old children and the child-directed speech of their mothers. The first corpus analysis revealed that definiteness, givenness and type of referring expression influenced word order variation in child language and child-directed speech when the type of referring expression distinguished between pronouns and lexical noun phrases. These results correspond to previous child language studies in English (e.g., de Marneffe et al. 2012). Extending the scope of previous studies, our second corpus analysis examined the role of different pronoun types on word order. It revealed that word order in child language and child-directed speech was predictable from the types of pronouns used. Different types of pronouns were associated with different sentence positions but also showed a strong correlation to givenness and definiteness. Yet, the distinction between pronoun types diminished the effects of givenness so that givenness had an independent impact on word order only in child-directed speech but not in child language. Our results support a multi-factorial approach to word order in German. Moreover, they underline the strong impact of the type of referring expression on word order and suggest that it plays a crucial role in the acquisition of the factors influencing word order variation.
Ismar Elbogen (1874–1943) and Franz Rosenzweig (1886–1929) were both pioneers in Jewish thought and culture. Elbogen authored the most comprehensive study of Jewish liturgy, while Rosenzweig's magnum opus, The Star of Redemption, has emerged as one of the twentieth century's most innovative and elusive works of Jewish thought. Even though Rosenzweig is not known for his work on, or appreciation for, the Wissenschaft des Judentums, this article explores this overlooked aspect of his thought through the influence of Ismar Elbogen. Commentaries on Rosenzweig's views on prayer are numerous, yet none mentions the work of Elbogen, a notable omission. By comparing Elbogen's work on Jewish liturgy with Rosenzweig's writings on prayer in the Star, we are able to demonstrate how methods seminal to the Wissenschaft des Judentums helped articulate several of Rosenzweig's most innovative contributions to Jewish thought.
We present an optically addressed, non-pixelated spatial light modulator. The system is based on reversible photoalignment of an LC cell using a novel red-light-sensitive azobenzene photoalignment layer. It is an electrode-free device that manipulates the liquid crystal orientation, and consequently the polarization, via light without artifacts caused by electrodes. The capability to miniaturize the spatial light modulator allows its integration into a microscope objective. This includes a miniaturized 200-channel optical addressing system based on a VCSEL array and hybrid refractive-diffractive beam shapers. As an application example, its use as an analog phase contrast modulator integrated into a microscope objective is shown. (C) 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Schools as acculturative and developmental contexts for youth of immigrant and refugee background
(2018)
Schools are important for the academic and socio-emotional development, as well as acculturation, of immigrant- and refugee-background youth. We highlight individual differences which shape their unique experiences, while considering three levels of the school context in terms of how they may affect adaptation outcomes: (1) interindividual interactions in the classroom (such as peer relations, student-teacher relations, teacher beliefs, and teaching practices), (2) characteristics of the classroom or school (such as ethnic composition and diversity climate), and (3) relevant school- and nation-level policies (such as diversity policies and school tracking). Given the complexity of the topic, there is a need for more research taking an integrated and interdisciplinary perspective to address migration-related issues in the school context. Teacher beliefs and the normative climate in schools seem particularly promising points for intervention, which may be easier to change than structural aspects of the school context. More inclusive schools are also an important step toward more peaceful interethnic relations in diverse societies.
The improvement of power is an objective in the training of athletes. To identify effective exercise methods, basic research on the mechanisms of muscular activity is required. The purpose of this study is to investigate whether a muscular pre-activation prior to an external impulse-like force impact has an effect on the maximal explosive eccentric Adaptive Force (xpAFeccmax). This power capability combines different probable power-enhancing mechanisms. To measure the xpAFeccmax, an innovative pneumatic device was used. During measurement, the subject tries to hold an isometric position as long as possible. In the moment in which the subject's maximal isometric holding strength is exceeded, the muscle action merges into eccentric action. This process is very close to motions in sports, where an adaptation of the neuromuscular system is required, e.g., to force impacts caused by uneven surfaces during skiing. To investigate the effect of pre-activation on the xpAFeccmax of the quadriceps femoris muscle, n = 20 subjects completed three different pre-activation levels in randomized order (level 1: 0.4 bar, level 2: 0.8 bar, level 3: 1.2 bar). After adjusting the standardized pre-pressure by pushing against the interface, an impulse-like load impacted on the distal tibia of the subject, during which the xpAFeccmax was detected. The maximal voluntary isometric contraction (MVIC) was also measured. The torque values of the xpAFeccmax were compared across the pre-activation levels. The results show a significant positive relation between the pre-activation of the quadriceps femoris muscle and the xpAFeccmax (male: p = 0.000, η² = 0.683; female: p = 0.000, η² = 0.907). The average percentage increase of torque amounted to +28.15 ± 25.4% between MVIC and xpAFeccmax at pre-pressure level 1, +12.09 ± 7.9% for the xpAFeccmax comparing pre-pressure levels 1 vs. 2, and +2.98 ± 4.2% comparing levels 2 and 3.
A higher but not maximal muscular activation prior to a fast impacting eccentric load seems to produce an immediate increase of force outcome. Different possible physiological explanatory approaches and the use as a potential training method are discussed.
Previous research has shown that electrical muscle activity is able to synchronize between muscles of one subject. The ability of the mechanical muscle oscillations measured by mechanomyography (MMG) to synchronize has not been described sufficiently. Likewise, the behavior of myofascial oscillations during muscular interaction of two human subjects has not yet been considered. The purpose of this study is to investigate the myofascial oscillations intra- and interpersonally. To this end, the mechanical muscle oscillations of the triceps and the abdominal external oblique muscles were measured by MMG, and the triceps tendon was measured by mechanotendography (MTG), during isometric interaction of two subjects (n = 20) performed at 80% of the MVC using their arm extensors. The coherence of the MMG/MTG signals was analyzed with a coherence wavelet transform and compared with randomly matched signal pairs. Each signal pairing shows significant coherent behavior. On average, the coherent phases of the n = 485 real pairings extend over 82 ± 39% of the total duration of the isometric interaction, whereas coherent phases of randomly matched signal pairs cover 21 ± 12% of the total duration (n = 39). The difference between real and randomly matched pairs is significant (U = 113.0, p = 0.000, r = 0.73). The results show that the neuromuscular system seems to be able to synchronize with another neuromuscular system during muscular interaction and to generate coherent behavior of the mechanical muscular oscillations. Potential explanatory approaches are discussed.
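The idea of quantifying coupling between two muscle signals can be sketched with a Fourier-based magnitude-squared coherence; the study itself used a coherence wavelet transform, and the synthetic signals and parameters below are invented for illustration.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1000.0                       # sampling rate in Hz
t = np.arange(0.0, 5.0, 1.0 / fs)

# Two noisy "MMG" traces sharing a common 10 Hz oscillation,
# standing in for the muscle signals of two interacting subjects
shared = np.sin(2.0 * np.pi * 10.0 * t)
sig_a = shared + 0.5 * rng.standard_normal(t.size)
sig_b = shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence; values near 1 indicate strong coupling
f, cxy = coherence(sig_a, sig_b, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(cxy)]     # frequency of the strongest coupling
```

Unlike this time-averaged estimate, a wavelet coherence additionally resolves *when* the two signals are coherent, which is what allows the study to report coherent phases as a percentage of the interaction's duration.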
Abstract
Background
The unisexual Amazon molly (Poecilia formosa) originated from a hybridization between two sexual species, the sailfin molly (Poecilia latipinna) and the Atlantic molly (Poecilia mexicana). The Amazon molly reproduces clonally via sperm-dependent parthenogenesis (gynogenesis), in which the sperm of closely related species triggers embryogenesis of the apomictic oocytes, but typically does not contribute genetic material to the next generation. We compare for the first time the gonadal transcriptome of the Amazon molly to those of both ancestral species, P. mexicana and P. latipinna.
Results
We sequenced the gonadal transcriptomes of P. formosa and its parental species P. mexicana and P. latipinna using Illumina RNA-sequencing (paired-end, 100 bp). De novo assembly of about 50 million raw read pairs for each species was performed using Trinity, yielding 106,922 transcripts for P. formosa, 115,175 for P. latipinna, and 133,025 for P. mexicana after eliminating contamination. On the basis of sequence similarity comparisons to other teleost species and the UniProt databases, functional annotation, and differential expression analysis, we demonstrate the similarity of the transcriptomes among the three species. More than 40% of the transcripts of each species were functionally annotated, and about 70% were assigned to orthologous genes of a closely related species. Differential expression analysis between the sexual and unisexual species uncovered 2035 up-regulated and 564 down-regulated genes in P. formosa. This was validated for six representative genes by qRT-PCR.
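The kind of fold-change call behind such up-/down-regulated counts can be sketched minimally; the pseudocount, cutoff, function name, and example values are illustrative assumptions, and the study used a dedicated differential expression analysis rather than this bare rule.

```python
import math

def classify_genes(expr_unisexual, expr_sexual, lfc_cutoff=1.0):
    """Call genes up-/down-regulated in the unisexual species by
    log2 fold change with a +1 pseudocount; purely illustrative,
    not the study's actual differential expression pipeline."""
    calls = {}
    for gene, u in expr_unisexual.items():
        s = expr_sexual.get(gene, 0)
        lfc = math.log2((u + 1) / (s + 1))
        if lfc >= lfc_cutoff:
            calls[gene] = "up"
        elif lfc <= -lfc_cutoff:
            calls[gene] = "down"
        else:
            calls[gene] = "unchanged"
    return calls

# Hypothetical normalized expression values for three genes
calls = classify_genes({"rec8": 3, "spo11": 2, "gapdh": 50},
                       {"rec8": 40, "spo11": 30, "gapdh": 48})
```

Real pipelines additionally model count dispersion across replicates and test for statistical significance before calling a gene differentially expressed.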
Conclusions
We identified more than 130 genes related to meiosis and reproduction in the apomictically reproducing P. formosa. Overall expression of these genes appears down-regulated in the P. formosa transcriptome compared with both ancestral species (i.e., 106 genes down-regulated, 29 up-regulated). A further 35 meiosis- and reproduction-related genes were not found in the P. formosa transcriptome but were expressed only in the sexual species. Our data support the hypothesis of a general down-regulation of meiosis-related genes in the apomictic Amazon molly. Furthermore, the obtained dataset and identified gene catalog will serve as a resource for future research on the molecular mechanisms behind the reproductive mode of this unisexual species.
Objective
We investigated the potential role of indirect benefits for female mate preferences in a highly promiscuous species of live-bearing fishes, the sailfin molly Poecilia latipinna, using an integrative approach that combines methods from animal behavior, life-history evolution, and genetics. Males of this species contribute only sperm to reproduction, and consequently females do not receive any direct benefits. Despite this, females typically show clear mate preferences. It has been suggested that females can increase their reproductive success through indirect benefits from choosing males of higher quality.
Results
Although preference for large body size has been reported as an honest signal of genetic quality, in this study female preference was unaffected by male body size. Nonetheless, larger males did sire more offspring, but with no effect on offspring quality. This study presents a methodical innovation by combining preference testing with life history measurements, such as the determination of the dry weight of fish embryos, and paternity analyses on single fish embryos.
Ice-wedge polygons are common features of northeastern Siberian lowland periglacial tundra landscapes. To deduce the formation and alternation of ice-wedge polygons in the Kolyma Delta and in the Indigirka Lowland, we studied shallow cores, up to 1.3 m deep, from polygon center and rim locations. The formation of well-developed low-center polygons with elevated rims and wet centers is indicated by the onset of peat accumulation, increased organic matter contents, and changes in vegetation cover from Poaceae-, Alnus-, and Betula-dominated pollen spectra to dominant Cyperaceae and Botryococcus presence, and Carex and Drepanocladus revolvens macrofossils. Testate amoeba (thecamoebian) data support such a change from wetland to open-water conditions in polygon centers through a shift from dominant eurybiontic and sphagnobiontic to hydrobiontic species assemblages. The peat accumulation indicating low-center polygon formation started between 2380 ± 30 and 1676 ± 32 years before present (BP) in the Kolyma Delta. We recorded an opposite change from open-water to wetland conditions, caused by rim degradation and subsequent high-center polygon formation, in the Indigirka Lowland between 2144 ± 33 and 1632 ± 32 years BP. The late Holocene records of polygon landscape development reveal changes in local hydrology and soil moisture.
Background: Compulsive exercise (CE) is a frequent symptom in patients with eating disorders (EDs). It includes, in addition to quantitatively excessive exercise behaviour, a driven aspect and specific motives of exercise. CE is generally associated with worse therapy outcomes. The aims of the study were to compare self-reported quantity of exercise, compulsiveness of exercise as well as motives for exercise between patients with anorexia nervosa (AN), bulimia nervosa (BN) and healthy controls (HC). Additionally, we wanted to explore predictors of compulsive exercise (CE) in each group.
Methods: We investigated 335 female participants (n = 226 inpatients, n = 109 HC) and assessed self-reported quantity of exercise, compulsiveness of exercise (Compulsive Exercise Test), motives for exercise (Exercise Motivations Inventory-2), ED symptoms (Eating Disorder Inventory-2), obsessive-compulsiveness (Obsessive-Compulsive Inventory-Revised), general psychopathology (Brief Symptom Inventory-18) and depression (Beck Depression Inventory-2).
Results: Both patients with AN and BN exercised significantly more hours per week and showed significantly higher CE than HC; no differences were found between patients with AN and BN. Patients with EDs and HC also differed in part in their motives for exercise. Specific motives among patients with EDs were enjoyment, challenge, recognition and weight management, in contrast to ill-health avoidance and affiliation in HC. Patients with AN and BN differed only with regard to exercising for appearance reasons, on which patients with BN scored higher. The most relevant predictor of CE across groups was exercise for weight and shape reasons.
Conclusions: Exercise behaviours and motives differ between patients with EDs and HC. CE was pronounced in both patients with AN and BN. Therefore, future research should focus not only on CE in patients with AN, but also on CE in patients with BN. Similarities in CE in patients with AN and BN support a transdiagnostic approach during the development of interventions specifically targeting CE in patients with EDs.
To succeed as a computer scientist when entering the profession, it is often not enough to possess isolated knowledge of technical and theoretical foundations, programming languages, tools, and self- and time management. Rather, graduates should be able to apply this knowledge in a practically interlinked way. At university, however, students are rarely given the opportunity to practise these different areas of computer science in an integrated manner. To this end, we have been developing and implementing a teaching and learning concept to support practical software development courses for more than two decades. It offers prospective software developers and project managers an environment in which they can acquire new, practically relevant knowledge, test themselves in practice, and apply their knowledge concretely, with a particular emphasis on working in teams. The concept presented here can be transferred to similar courses and, thanks to its modular design, can be modified and extended.
A Failed Encounter
(2018)
The acquaintance between Leopold von Buch and Johann Wolfgang von Goethe was marked by misunderstandings and mutual scepticism. Personal conversations on geological topics failed, and letters were sent late or never arrived. Goethe rejected Buch as an "ultra-vulcanist", while Buch considered Goethe to have little geological competence. Their encounter thus ended in refused conversations, ambiguous compliments, and a correspondence of only two letters in total, which are reproduced here.
The article describes the surface modification of 3D-printed poly(lactic acid) (PLA) scaffolds with calcium phosphate (CP)/gelatin and CP/chitosan hybrid coating layers. The presence of gelatin or chitosan significantly enhances CP co-deposition and the adhesion of the mineral layer on the PLA scaffolds. The hydrogel/CP coating layers are fairly thick, and the mineral is a mixture of brushite, octacalcium phosphate, and hydroxyapatite. Mineral formation is uniform throughout the printed architectures, and all steps (printing, hydrogel deposition, and mineralization) are in principle amenable to automation. Overall, the process reported here therefore has high application potential for the controlled synthesis of biomimetic coatings on polymeric biomaterials.
Ecological and physiological factors lead to different contamination patterns in individual marine mammals. The objective of the present study was to assess whether variations in contamination profiles are indicative of the social structures of young male sperm whales, as they might reflect variation in feeding preferences and/or in the feeding grounds utilized. We used a total of 61 variables associated with organic compound and trace element concentrations measured in muscle, liver, kidney and blubber obtained from 24 sperm whales that stranded in the North Sea in January and February 2016. Combining contaminant and genetic data, there is evidence for at least two cohorts of different origin among these stranded sperm whales: one from the Canary Island region and one from the northern part of the Atlantic. While genetic data unravel relatedness and kinship, contamination data integrate over the areas where the animals occurred during their lifetime. Especially in long-lived animals with a large migratory potential, such as sperm whales, contamination data may carry highly relevant information about aggregation through time and space.
Georg Otto Schneider, a general practitioner born in Frankfurt (Oder) on 15 June 1875 who practised for many years in his adopted home of Potsdam, was one of the most important representatives of the medical profession in the first half of the twentieth century. His name is closely linked to a consistent, liberal professional policy and to the development and preservation of professional self-administration within the medical profession in Brandenburg and in Germany as a whole. As a leading member of several provincial and nationwide associations, Schneider worked across four historical epochs for the free exercise and autonomous administration of the medical profession.
In the German Empire, Schneider's professional-political activity was initially limited to the regional level. In 1912 he initiated the establishment of a protective association for the physicians of the Potsdam district, which he chaired for more than ten years. In the Weimar Republic, Schneider then rose to become a key figure in health and medical professional policy. In 1920 he revived the Physicians' Association for the Province of Brandenburg, and from 1928 he additionally headed the Brandenburg Chamber of Physicians in personal union. Two years earlier he had already taken over the management of the Deutscher Ärztevereinsbund. Following the Nazi seizure of power, Schneider resigned from all offices by mid-1934; his efforts to preserve professional autonomy were in vain. The situation initially looked different after the end of the Second World War. In the Soviet occupation zone, Schneider chaired the physicians' section of the Free German Trade Union Federation in Brandenburg and defended the possibilities of independent professional administration. In addition, from 1946 until his death on 26 October 1949, he was parliamentary group leader of the Liberal Democratic Party in the Brandenburg state parliament.
Against the background of Georg Schneider's life and work, the dissertation examines continuities and ruptures in the organisational structures of the medical profession, from the German Empire through the Weimar era and National Socialism to the period of Soviet occupation. The study contrasts the effects of the respective political, socio-economic and societal developments on the medical profession with the corresponding reactions of its professional representatives, above all Georg Schneider. In doing so, it asks to what extent the physicians' organisational structures adapted to each regime and what influence Schneider, as an individual, was able to exert within the larger institutions.
Various developments of recent decades have shown the relevance of the discourse on so-called "sustainable development". Sustainable development is being accorded ever greater importance, and education is regarded as one of the most important forces for advancing it. This bachelor's thesis therefore examines what understanding pupils have of the concept of sustainability. First, the theoretical background of sustainable development and "education for sustainable development" is clarified. On the basis of this theoretical foundation, a guided interview is developed. Using Mayring's summarising content analysis, conclusions about the pupils' understanding are drawn from the results. Finally, on the basis of the results and their interpretation, suggestions are made as to how the pupils' understanding could be broadened. In the study, six pupils in year ten of a comprehensive school were interviewed. It was found that only four of the six respondents had an understanding of sustainability, and even these pupils understood it largely in terms of ecological and social aspects. Personal interest, relevance to everyday life, and classroom teaching were identified as reasons for both outcomes.
Legislative majorities in parliamentary systems, with their dualism of governing camp and opposition parties, do not form freely. Rather, their coordination takes place in a field of tension between the actors' programmatic positions and their opportunistic competition with one another. The thesis breaks this problem down into three concrete research questions, in the context of which it examines the patterns of conflict between actors in legislative majority coordination under majority governments in the German state parliaments: (1) To what extent does it depend on programmatic positions, or on the opportunistic competition of the new dualism between governing camp and opposition parties, whether opposition parties and the governing camp cooperate or conflict in forming legislative majorities? (2) To what extent do different programmatic positions and opportunistic considerations lead to conflict rather than cooperation between coalition actors in forming joint legislative majorities? The latter question is then also embedded in the context of the Federal Republic's cooperative federalism: (3) To what extent is the formation of legislative majorities in the implementation of federal laws accompanied by more conflict in mixed coalitions (consisting of parties that face each other in competing camps at the federal level) than in governing coalitions that are congruent across levels?
Theoretically, a rationalist model of the basic incentives for action in the formation of legislative majorities in the German state parliaments is developed. On this basis, the thesis addresses how actors strategically weigh programmatic and opportunistic incentives for conflict and cooperation. It then derives concrete determinants, which are tested predominantly, but not exclusively, by quantitative methods. The analysis draws on a largely newly compiled legislative database of 3,359 legislative processes from 23 legislative periods between 1990 and 2013 in the states of Hamburg, Hesse, Mecklenburg-Western Pomerania, North Rhine-Westphalia and Saxony-Anhalt.
The analysis of the conflict patterns between opposition parties and the governing camp shows that an opposition party's programmatic distance from the governing camp matters for opposition behaviour; but so do opportunistic aspects (for example, more competitive opposition behaviour can be observed when the last election resulted in a complete change of government). Opposition behaviour appears to be quite fine-grained: differences occur not only between legislative periods but also within them, between actors and between bills. The analysis of general coalition conflict indicates that a considerable share of coalition conflict is structurally determined. If a governing coalition is the preferred coalition of the participating parties, coalition conflict is reduced; the same holds for a larger majority margin of the governing camp. Moreover, there are indications that the implementation of federal laws under mixed coalitions involves more coalition conflict, when the coalition partners differentiate themselves at the federal level, than implementation under congruent coalitions.
The contribution of the thesis is multifaceted. First, it helps to better understand the strategies of actors in the legislative process. Second, at a normative level, it adds a better understanding of possible adverse effects of the new dualism under majority governments. Third, taken together, the thesis is intended to illuminate the mechanics of the parliamentary systems in the states themselves and to allow a better normative assessment, against the background of decades-old debates about the best system and format of government for the German states as subnational entities. The third research question also enriches this debate with a new aspect: knowledge of the extent to which the implementation of federal laws in the states is associated with a "coalition governance" problem, depending on the coalition pattern across levels, adds a new and noteworthy facet to research on federal decision-making in the Federal Republic. It reveals a federally induced mechanical impairment of majority coordination in the state parliaments themselves, one that inhibits the potential federal flexibility in implementing federal laws. This paves the way for new debates on how more legislative voting flexibility could be achieved in the German states than under the majority coalition governments that have been customary to date.
In the historic centre of the medieval city of Capua, a group of sacred buildings has survived: the three churches of S. Salvatore "Maggiore" a Corte, S. Giovanni a Corte and S. Michele a Corte, whose connection to the city's Lombard princely court is revealed not only by their matching attribution by name but also by their spatial disposition within the urban fabric. This book subjects the surviving building fabric to a fundamental analysis in order to establish which components can be assigned to the oldest construction phases and can thus be regarded as dating from the Lombard period. A detailed study of the associated architectural sculpture complements this first part on equal terms. Contextualising the results helps to generate a picture of the art and architecture of the tenth century, a period poor in monuments in southern Italy, and permits conclusions about the intellectual background against which the three court churches were built.
Objective: The aim was to examine the developmental course of children with arithmetic disorders or arithmetic weaknesses. In addition to persistence, the effects of arithmetic problems on future arithmetic performance and on school success were examined. Method: Results of standardized arithmetic and intelligence tests are available for 2909 students in grades 2 to 5. A subset of these children was re-examined after 37 and 68 months. Results: The prevalence of arithmetic disorders was 1.4%; arithmetic weaknesses occurred in 11.2% of cases. Arithmetic problems showed medium to high persistence. In arithmetic, students with arithmetic weakness lagged a good one standard deviation behind control children of average intelligence and about half a standard deviation behind control children of below-average intelligence. The general school success of students with arithmetic weakness (defined by mathematics grade, German grade and school type) resembled that of the below-average-intelligence control group and lagged behind the school success of control children of average intelligence. Participants with arithmetic problems who were older at baseline (grades 4 to 5) had a poorer prognosis than children who were in grade 2 or 3 at baseline. Conclusions: Arithmetic problems constitute a serious developmental risk. Longitudinal studies that follow children with strictly defined arithmetic disorder into adulthood and identify predictors of differentially successful courses are urgently needed.
This thesis focuses on the development of a sensor platform for biochemical applications based on an optical detection principle. During development, two complementary concepts were pursued: first, a sensor based on photonic crystals and waveguide structures, and second, a fiber-based sensor containing chemically modified fiber Bragg gratings. In both sensor concepts, the optical detection principle exploits the resulting refractive index change as the measurable physicochemical quantity.
The phenomenon of photonic crystals known from nature, found for instance in opals and in butterflies, was described as early as 1887 by Lord Rayleigh. He described the optical properties of periodic multilayer films, which can be understood as a simplified model of a one-dimensional photonic crystal. The periodicity of the refractive index change results in an optical filter for frequencies in a certain spectral range, in which light propagation is then no longer possible. If, however, this system is perturbed by a defect in the refractive index periodicity, so that two perfectly periodic systems result, light propagation nevertheless becomes possible for one particular frequency. This results in a narrowband signal in the transmission spectrum. The allowed frequency depends, among other things, on the refractive index contrast of the periodic system; that is, changing the refractive index of one layer leads to a spectral shift of the allowed frequency, which is why this sensor concept can be exploited for biochemical sensing [1]. The development of the photonic crystal-based sensor was a cooperation with the industrial partner "Nanoplus GmbH". Within the doctoral thesis, simulations and practical work on the sensor design were carried out, together with work on a first model setup for biochemical applications.
For the fiber-based sensor, fiber Bragg gratings were written into the fiber core. In 1978, Hill et al. discovered that such grating structures act as optical filters, just like photonic crystals [2]. The gratings consist of modifications of the refractive index in the fiber core. Over the following forty years, various inscription techniques and grating structures were developed, so the properties of the respective grating structures vary. One such grating structure is the fiber Bragg grating, whose grating period, i.e. the spacing of the refractive index modifications, lies in the nanometer to micrometer range. Due to the small grating period, a backward-propagating wave is generated in the core for one particular frequency or wavelength, the Bragg wavelength. This results in a narrowband signal in both the transmission and the reflection spectrum. The resonance wavelength is proportional to the grating period and the effective refractive index, which depends on the refractive index of the core and of the material surrounding the core. This technique is therefore well suited for physicochemical sensing. Within this work, the gratings were written into the fibers using a relatively new fabrication method [3]. Subsequently, the focus was on the development of a biosensor: first, a protocol for etching the fiber with hydrofluoric acid was developed that makes the system sensitive to the surrounding refractive index. Finally, a model setup was realized in which a model system, here the detection of C-reactive protein by means of specific single-stranded DNA aptamers, was successfully tested and quantified.
[1] Mandal, S.; Erickson, D. Nanoscale Optofluidic Sensor Arrays. Opt. Express 2008, 16 (3), 1623–1631.
[2] Hill, K. O.; Fujii, Y.; Johnson, D. C.; Kawasaki, B. S. Photosensitivity in Optical Fiber Waveguides: Application to Reflection Filter Fabrication. Appl. Phys. Lett. 1978, 32 (10), 647–649.
[3] Martínez, A.; Dubov, M.; Khrushchev, I.; Bennion, I. Direct Writing of Fibre Bragg Gratings by Femtosecond Laser. Electron. Lett. 2004, 40 (19), 1170.
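The resonance condition underlying the fiber sensor described above can be made concrete with a short sketch of the first-order Bragg relation λ_B = 2·n_eff·Λ. The numerical values below (effective index, grating period) are illustrative assumptions for a telecom-band grating, not values taken from the thesis:

```python
# Sketch of the first-order Bragg condition: lambda_B = 2 * n_eff * Lambda,
# where n_eff is the effective refractive index and Lambda the grating period.
# All numbers here are assumed for illustration.

def bragg_wavelength(n_eff: float, grating_period_nm: float) -> float:
    """Return the Bragg wavelength in nm for a first-order grating."""
    return 2.0 * n_eff * grating_period_nm

# Assumed telecom-band values: n_eff ~ 1.447, period ~ 535.6 nm
lam = bragg_wavelength(1.447, 535.6)
print(f"Bragg wavelength: {lam:.1f} nm")  # ~1550 nm

# A change of the surrounding refractive index shifts n_eff and hence the
# resonance; this spectral shift is the sensing signal of the etched FBG.
shift = bragg_wavelength(1.447 + 1e-4, 535.6) - lam
print(f"Shift for delta n_eff = 1e-4: {shift * 1000:.0f} pm")
```

The sensitivity scales with the grating period: the same index change produces a proportionally larger wavelength shift for longer-period gratings.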
Femtosecond-pulsed laser written and etched fiber Bragg gratings for fiber-optical biosensing
(2018)
We present the development of a label-free, highly sensitive fiber-optical biosensor for online detection and quantification of biomolecules. Here, the advantages of etched fiber Bragg gratings (eFBG) were exploited, since they induce a narrowband Bragg wavelength peak in the reflection operation mode. The gratings were fabricated point-by-point via a nonlinear absorption process of a highly focused femtosecond-pulsed laser, without the need for prior coating removal or specific fiber doping. The sensitivity of the Bragg wavelength peak to the surrounding refractive index (SRI), as needed for biochemical sensing, was realized by fiber cladding removal using hydrofluoric acid etching. For evaluation of biosensing capabilities, eFBG fibers were biofunctionalized with a single-stranded DNA aptamer specific for binding the C-reactive protein (CRP). The CRP-sensitive eFBG fiber-optical biosensor showed a very low limit of detection of 0.82 pg/L, with a dynamic range of CRP detection from approximately 0.8 pg/L to 1.2 µg/L. The biosensor showed high specificity to CRP even in the presence of interfering substances. These results suggest that the proposed biosensor is capable of quantifying CRP from trace amounts of clinical samples. In addition, the adaptation of this eFBG fiber-optical biosensor for the detection of other relevant analytes can easily be realized.
Systems biology aims at investigating biological systems in their entirety by gathering and analyzing large-scale data sets about the underlying components. Computational systems biology approaches use these large-scale data sets to create models at different scales and cellular levels, and are concerned with generating and testing hypotheses about biological processes. However, such approaches inevitably lead to computational challenges due to the high dimensionality of the data and the differences in the dimension of data from different cellular layers.
This thesis focuses on the investigation and development of computational approaches to analyze metabolite profiles in the context of cellular networks. This leads to determining what aspects of the network functionality are reflected in the metabolite levels. With these methods at hand, this thesis aims to answer three questions: (1) how observability of biological systems is manifested in metabolite profiles and if it can be used for phenotypical comparisons; (2) how to identify couplings of reaction rates from metabolic profiles alone; and (3) which regulatory mechanism that affect metabolite levels can be distinguished by integrating transcriptomics and metabolomics read-outs.
I showed that sensor metabolites, identified by an approach from observability theory, are more correlated to each other than non-sensors. The greater correlations between sensor metabolites were detected both with publicly available metabolite profiles and with synthetic data simulated from a medium-scale kinetic model. I demonstrated through robustness analysis that the correlation was due to the position of the sensor metabolites in the network and persisted irrespective of the experimental conditions. Sensor metabolites are therefore potential candidates for phenotypical comparisons between conditions through targeted metabolic analysis.
Furthermore, I demonstrated that the coupling of metabolic reaction rates can be investigated from a purely data-driven perspective, assuming that metabolic reactions can be described by mass action kinetics. Employing metabolite profiles from domesticated and wild wheat and tomato species, I showed that the process of domestication is associated with a loss of regulatory control on the level of reaction rate coupling. I also found that the same metabolic pathways in Arabidopsis thaliana and Escherichia coli exhibit differences in the number of reaction rate couplings.
I designed a novel method for the identification and categorization of transcriptional effects on metabolism by combining data on gene expression and metabolite levels. The approach determines the partial correlation of metabolites while controlling for the principal components of the transcript levels. The principal components contain the majority of the transcriptomic information, allowing the effect of the transcriptional layer to be partialled out of the metabolite profiles. Depending on whether the correlation between metabolites persists upon controlling for the effect of the transcriptional layer, the approach allows us to classify metabolite pairs as being associated due to post-transcriptional or transcriptional regulation, respectively. I showed that this classification of metabolite pairs into those associated due to transcriptional or post-transcriptional regulation is in agreement with existing literature and with findings from a Bayesian inference approach.
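The partialling-out step can be sketched on synthetic data as follows. This is a minimal illustration of the general technique (partial correlation with the leading transcript principal components as controls), not the thesis pipeline; the toy metabolite and transcript profiles are assumptions:

```python
import numpy as np

# Toy data: 50 samples, 200 transcripts; two metabolites that co-vary only
# through the first transcript principal component.
rng = np.random.default_rng(0)
n_samples, n_transcripts = 50, 200
transcripts = rng.normal(size=(n_samples, n_transcripts))
u, s, vt = np.linalg.svd(transcripts - transcripts.mean(0), full_matrices=False)
pc1 = u[:, 0]
met_a = 2.0 * pc1 + 0.1 * rng.normal(size=n_samples)
met_b = -1.5 * pc1 + 0.1 * rng.normal(size=n_samples)

def partial_corr(x, y, controls):
    """Correlate the residuals of x and y after regressing out `controls`."""
    C = np.column_stack([np.ones(len(x)), controls])
    rx = x - C @ np.linalg.lstsq(C, x, rcond=None)[0]
    ry = y - C @ np.linalg.lstsq(C, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

raw_r = np.corrcoef(met_a, met_b)[0, 1]
part_r = partial_corr(met_a, met_b, u[:, :5])  # control for top 5 PCs

# The raw correlation is strong (transcriptionally driven) and largely
# vanishes once the transcript PCs are controlled for; such a pair would
# be classified as transcriptionally regulated.
print(f"raw r = {raw_r:.2f}, partial r = {part_r:.2f}")
```

A pair whose correlation survives the control would instead be attributed to post-transcriptional regulation.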
The approaches developed, implemented, and investigated in this thesis open novel ways to jointly study metabolomics and transcriptomics data as well as to place metabolic profiles in the network context. The results from these approaches have the potential to provide further insights into the regulatory machinery in a biological system.
Schomburgk’s Chook
(2018)
Focusing on the politics of museums, collections and the untold stories of the scientific 'specimens' that travelled between Germany and Australia, this article reconstructs the historical, interpersonal and geopolitical contexts that made it possible for the stuffed skin of an Australian malleefowl to become part of the collections of Berlin's Museum für Naturkunde. The author enquires into the kinds of contexts that are habitually considered irrelevant when a specimen of natural history is treated as an object of taxonomic information only. In the case of this particular specimen, human and non-human history become entangled in ways that link the fate of this one small Australian bird to the German revolutionary generation of 1848, to Germany's nineteenth-century colonial aspirations, to settler–Indigenous relations, to the cruel realities that underpinned the production of scientific knowledge in colonial Australia, and to a present-day interest in reconstructing Indigenous knowledges.
Historical narratives play an important role in constructing contemporary notions of citizenship. They are sites on which ideas of the nation are not only reaffirmed but also contested and reframed. In contemporary Germany, dominant narratives of the country’s modern history habitually focus on the legacy of the Third Reich and tend to marginalize the country’s rich and highly complex histories of immigration. The article addresses this commemorative void in relation to Berlin’s urban landscape. It explores how the city’s multilayered architecture provides locations for the articulation of marginal memories—and hence sites of urban citizenship—that are often denied to immigrant communities on a national scale. Through a detailed examination of a small celebration in 1965 that marked the anniversary of the founding of the modern Turkish republic, the article engages with the layers of history that coalesce around such sites in Berlin.
In a letter to his friend, the banker Alexander Mendelssohn, that lacks a precise date, Humboldt expressed his dismay at a brazen theft of gold, silver and gemstones from the Mineralogical Museum in Berlin. With the help of newspaper reports on this sensational crime, Humboldt's letter could be dated precisely. An essay by the Berlin mineralogist Günter Hoppe, first published in 1983 and following up on this find, describes the crime and its resolution.
The hydrolytic stability of polymers to be used for coatings in aqueous environments, for example, to confer anti-fouling properties, is crucial. However, long-term exposure studies on such polymers are virtually missing. In this context, we synthesized a set of nine polymers that are typically used for low-fouling coatings, comprising the well-established poly(oligoethylene glycol methylether methacrylate), poly(3-(N-2-methacryloylethyl-N,N-dimethyl) ammoniopropanesulfonate) (“sulfobetaine methacrylate”), and poly(3-(N-3-methacryamidopropyl-N,N-dimethyl)ammoniopropanesulfonate) (“sulfobetaine methacrylamide”) as well as a series of hitherto rarely studied polysulfabetaines, which had been suggested to be particularly hydrolysis-stable. Hydrolysis resistance upon extended storage in aqueous solution is followed by ¹H NMR at ambient temperature in various pH regimes. Whereas the monomers suffered slow (in PBS) to very fast hydrolysis (in 1 M NaOH), the polymers, including the polymethacrylates, proved to be highly stable. No degradation of the carboxyl ester or amide was observed after one year in PBS, 1 M HCl, or in sodium carbonate buffer of pH 10. This demonstrates their basic suitability for anti-fouling applications. Poly(sulfobetaine methacrylamide) proved even to be stable for one year in 1 M NaOH without any signs of degradation. The stability is ascribed to a steric shielding effect. The hemisulfate group in the polysulfabetaines, however, was found to be partially labile.
The lateral and vertical temperature distribution in Oman is so far only poorly understood, particularly in the area between Muscat and the Batinah coast, which is the area of this study and which is composed of Cenozoic sediments deposited in a foreland basin of the Makran Thrust Zone. Temperature logs (T-logs) were run and physical rock properties of the sediments were analyzed to understand the temperature distribution, thermal and hydraulic properties, and heat-transport processes within the sedimentary cover of northern Oman. An advective component is evident in the otherwise conduction-dominated geothermal play system, caused by both topography- and density-driven flow. Calculated temperature gradients (T-gradients) in two wells that represent conductive conditions are 18.7 and 19.5 °C km⁻¹, corresponding to about 70–90 °C at 2000–3000 m depth. This indicates a geothermal potential that can be used for energy-intensive applications such as cooling or water desalination. Sedimentation in the foreland basin was initiated after the obduction of the Semail Ophiolite in the late Campanian and reflects the complex history of alternating transgressive and regressive sequences with erosion of the Oman Mountains. Thermal and hydraulic parameters of the basin's heterogeneous clastic and carbonate sedimentary sequence were analyzed. Surface heat-flow values of 46.4 and 47.9 mW m⁻² were calculated from the T-logs and the calculated thermal conductivity values in two wells. The results of this study serve as a starting point for assessing different geothermal applications that may be suitable for northern Oman.
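The conductive relations behind these numbers can be sketched with Fourier's law, q = k · dT/dz, and a linear extrapolation of temperature with depth. The thermal conductivity and mean surface temperature below are assumptions chosen to be consistent with the reported figures, not values quoted from the paper:

```python
# Fourier's law for vertical conductive heat flow, plus a linear
# temperature extrapolation. Conductivity k and surface temperature are
# assumed values for illustration.

def heat_flow_mW_m2(gradient_C_per_km: float, conductivity_W_mK: float) -> float:
    """Surface heat flow q = k * dT/dz; (W/(m K)) * (K/km) yields mW/m^2."""
    return conductivity_W_mK * gradient_C_per_km

def temperature_at_depth(surface_T_C: float, gradient_C_per_km: float, depth_m: float) -> float:
    """Linear extrapolation of temperature with depth."""
    return surface_T_C + gradient_C_per_km * depth_m / 1000.0

# Reported conductive gradients: 18.7 and 19.5 C/km; assumed k ~ 2.45 W/(m K)
for grad in (18.7, 19.5):
    q = heat_flow_mW_m2(grad, 2.45)
    t = temperature_at_depth(30.0, grad, 3000)  # assumed ~30 C surface temperature
    print(f"grad={grad} C/km -> q ~ {q:.1f} mW/m^2, T(3 km) ~ {t:.0f} C")
```

With these assumed inputs the sketch reproduces heat flows near the reported 46–48 mW m⁻² and temperatures in the upper part of the reported 70–90 °C range at 3 km.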
Stuck in the past?
(2018)
After the Civil War the Spanish army functioned as a guardian of domestic order, but suffered from antiquated materiel and scant financial means. These factors have been described as fundamental reasons for the army's low potential wartime capability. This article draws on British and German sources to demonstrate how Spanish military culture prevented greater effectiveness and organisational change. Claiming that the army merely lacked funding and modern equipment falls considerably short of grasping the complexities of military effectiveness and organisational cultures, and might prove fatal for current attempts to develop foreign armed forces in conflict or post-conflict zones.
On 6 June 1982, Israel invaded Lebanon to fight the Palestine Liberation Organization (PLO). Between August 1982 and February 1984, the US, France, Britain and Italy deployed a Multinational Force (MNF) to Beirut. Its task was to act as an interposition force to bolster the government and to bring peace to the people. The mission is often forgotten or merely remembered in connection with the bombing of the US Marines' barracks. However, an analysis of the Italian contingent shows that the MNF was not doomed to fail and could accomplish its task when operational and diplomatic efforts were coordinated. The Italian commander in Beirut, General Franco Angioni, followed a successful approach that sustained neutrality, respectful behaviour and minimal force, which resulted in a qualified success of the Italian efforts.
Forging an Italian hero?
(2018)
Over the last two decades, Amedeo Guillet (1909–2010) has been turned into a public and military hero. His exploits as a guerrilla leader in Italian East Africa in 1941 have been exaggerated to forge a narrative of an honourable resistance against overwhelming odds. Thereby, Guillet has been showcased as a romanticized colonial explorer who was an apolitical and timeless Italian officer. He has been compared to Lawrence of Arabia in order to raise his international visibility, while his genuine Italian brand is perpetuated domestically. By elevating him to an official role model, the Italian Army has gained a focal point for military heroism that was also acceptable in the public memory as the embodiment of a ‘glorious’ defeat narrative.
To be prepared for life in the digital society, everyone today needs substantial foundations in computer science in a variety of situations. The importance of computer science is growing not only in ever more areas of our daily lives but also in ever more fields of training. To prepare young people for their future lives and/or future careers, various universities offer computer science modules for students of other disciplines. The materials of those courses form an extensive data pool for identifying, through an empirical approach, the aspects of computer science that matter for students of other subjects. In the following, 70 modules on computer science education for students of other disciplines are analysed. The materials, comprising publications, syllabi and timetables, are first examined using qualitative content analysis according to Mayring and then evaluated quantitatively. Based on this analysis, objectives, central topics and types of tools employed are identified.
The 8th Fachtagung für Hochschuldidaktik der Informatik (HDI, conference on computer science education in higher education) took place in Frankfurt in September 2018, together with the Deutsche E-Learning Fachtagung Informatik (DeLFI), under the joint motto "Digitalisierungswahnsinn? - Wege der Bildungstransformationen" ("Digitalisation madness? Paths of educational transformation").
The HDI addresses all questions of computer science education in higher education. This year's focal points included:
- Analysis of the content of, and the competencies to be aimed for in, computer science courses
- Learning to program and getting started in software development
- Special topics: data science, theoretical computer science and scientific working methods
The conference addresses selected questions from these thematic areas, which are treated in depth through talks by renowned experts and through submitted contributions.
Spotlight on islands
(2018)
Groups of proximate continental islands may conceal more tangled phylogeographic patterns than oceanic archipelagos as a consequence of repeated sea level changes, which allow populations to experience gene flow during periods of low sea level stands and isolation by vicariant mechanisms during periods of high sea level stands. Here, we describe for the first time an ancient and diverging lineage of the Italian wall lizard Podarcis siculus from the western Pontine Islands. We used nuclear and mitochondrial DNA sequences of 156 individuals with the aim of unraveling their phylogenetic position, while microsatellite loci were used to test several a priori insular biogeographic models of migration with empirical data. Our results suggest that the western Pontine populations colonized the islands early during their Pliocene volcanic formation, while populations from the eastern Pontine Islands seem to have been introduced recently. The inter-island genetic makeup indicates an important role of historical migration, probably due to glacial land bridges connecting islands followed by a recent vicariant mechanism of isolation. Moreover, the most supported migration model predicted higher gene flow among islands which are geographically arranged in parallel. Considering the threatened status of small insular endemic populations, we suggest this new evolutionarily independent unit be given priority in conservation efforts.
This dissertation in the history of education, a double biography, presents the wide-ranging reform engagement of the sisters Adelheid and Marie Torhorst in the education and training system of the Weimar Republic. The concepts of "reform" and "engagement" form the central thematic signatures of this source-based approach to the two sisters. The study addresses their professional biographies in their respective spheres of activity in educational policy and educational practice, in the midst of the first "genuine" German democracy. In particular, it aims to widen the circle of women who represent, in the history of education, a constructive shaping of the (continuing) education system in the sense of a necessary, but unrealized, process of modernization and democratization in that period. Examining their work, hitherto largely unknown in historical research on education, makes it possible to do (more) justice to the multilayered meanings of school reform(s) and progressive education. The study also intends to broaden the horizon of the historiography of education, above all with regard to the realization, in educational policy and school practice, of essential reforms in secondary education and training institutions.
As actors in both municipal politics and school practice, the sisters helped shape the new practice and the new demands of the democratic form of government. Reaching beyond her municipal sphere of responsibility, Adelheid Torhorst campaigned actively for a fully secular German school and training system during her membership in the Bund der Freien Schulgesellschaften (BFS) from 1924 to 1931. In their respective fields of activity, both women had to learn that their increasingly socialist ideas about the German educational landscape remained illusions. Rather, they increasingly recognized a connection between the established power structures; societal progress, which in their eyes took shape in a socially permeable and secular education system, required above all structural change. For this, however, there were no societal or political majorities.
The double-biographical perspective, focused on what remains of this reform engagement, sensitizes us to present-day controversies in educational policy. A critical and reflective view begins with an appreciation of the qualifying German education and training landscape; it values liberal achievements, such as parents' freedom to decide whether their children attend religious education, as a privilege of a democratic, socially open society. In a challenging future, courageous actors with progressive reform potential will be needed more than ever. The pioneering engagement of the Torhorst sisters stood in the context of an idea of school and societal progress that both positively shapes and sustains modernity, but that also stands for its crises and conflicts. In today's education and training system, new tensions and needs for reform likewise arise again and again, which must be newly regulated by appropriate educational policy guidelines "from above"; they are filled with life by engagement "from below".
The present article offers a mixed-method perspective on the investigation of determinants of effectiveness in quality assurance at higher education institutions. We collected survey data from German higher education institutions to analyse the degree to which quality managers perceive their approaches to quality assurance as effective. Based on this data, we develop an ordinary least squares regression model which explains perceived effectiveness through structural variables and certain quality assurance-related activities of quality managers. The results show that support by higher education institutions' higher management and cooperation with other education institutions are relevant preconditions for larger perceived degrees of quality assurance effectiveness. Moreover, quality managers' role as promoters of quality assurance exhibits significant correlations with perceived effectiveness. In contrast, sanctions and the perception of quality assurance as another administrative burden reveal negative correlations.
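The kind of OLS model described above can be sketched as follows. The predictor names (management support, cooperation, promoter role, sanctions, perceived burden) are assumptions read off the abstract, and the data are synthetic; this is a generic illustration of OLS on survey-style predictors, not the authors' model:

```python
import numpy as np

# Synthetic data for an OLS sketch: 120 respondents, 5 standardized
# predictors, outcome = perceived effectiveness. Coefficient signs mimic
# the abstract (positive for support/cooperation/promoter role, negative
# for sanctions and perceived burden); magnitudes are arbitrary.
rng = np.random.default_rng(1)
n = 120
X = rng.normal(size=(n, 5))  # support, cooperation, promoter, sanctions, burden
beta_true = np.array([0.4, 0.3, 0.5, -0.2, -0.3])
y = 3.0 + X @ beta_true + rng.normal(scale=0.5, size=n)

# OLS with intercept: beta_hat = argmin ||y - [1 X] beta||^2
design = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(design, y, rcond=None)

labels = ["intercept", "support", "cooperation", "promoter", "sanctions", "burden"]
for name, b in zip(labels, beta_hat):
    print(f"{name:12s} {b:+.2f}")
```

With standardized predictors, the fitted coefficients recover the assumed signs, mirroring the pattern of positive and negative correlates reported in the abstract.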
We provide a detailed stochastic description of the swimming motion of an E. coli bacterium in two dimensions, where we resolve tumble events in time. For this purpose, we set up two Langevin equations for the orientation angle and speed dynamics. Calculating moments, distribution and autocorrelation functions from both Langevin equations and matching them to the same quantities determined from data recorded in experiments, we infer the swimming parameters of E. coli: the tumble rate λ, the tumble time r⁻¹, the swimming speed v₀, the strength of speed fluctuations σ, the relative height of speed jumps η, the thermal value of the rotational diffusion coefficient D₀, and the enhanced rotational diffusivity during tumbling D_T. Conditioning the observables on the swimming direction relative to the gradient of a chemoattractant, we infer the chemotaxis strategies of E. coli. We confirm the classical strategy of a lower tumble rate when swimming up the gradient but also a smaller mean tumble angle (angle bias). The latter is realized by shorter tumbles as well as slower diffusive reorientation. We also find that speed fluctuations are increased by about 30% when swimming up the gradient compared to the reversed direction.
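A minimal simulation of the two-state run-and-tumble picture described above can be sketched as follows: the orientation angle diffuses with coefficient D₀ during runs and with the enhanced coefficient D_T during tumbles, tumbles start at rate λ and end at rate r. The parameter values are assumed for illustration, not the values inferred in the paper, and speed fluctuations are omitted for brevity:

```python
import math
import random

# Euler-Maruyama sketch of 2D run-and-tumble motion: the two-state
# (run/tumble) switching is a Markov process; the orientation angle
# undergoes rotational diffusion whose strength depends on the state.
random.seed(42)
dt, steps = 1e-3, 100_000   # 100 s of simulated time
lam, r = 1.0, 5.0           # tumble rate and inverse mean tumble time [1/s] (assumed)
D0, DT = 0.05, 3.0          # rotational diffusivities [rad^2/s] (assumed)
v0 = 20.0                   # swimming speed [um/s] (assumed)

theta, x, y = 0.0, 0.0, 0.0
tumbling = False
for _ in range(steps):
    # switch between run and tumble states
    if tumbling:
        if random.random() < r * dt:
            tumbling = False
    elif random.random() < lam * dt:
        tumbling = True
    D = DT if tumbling else D0
    theta += math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)
    if not tumbling:  # assume negligible translation while tumbling
        x += v0 * math.cos(theta) * dt
        y += v0 * math.sin(theta) * dt

print(f"net displacement after {steps * dt:.0f} s: {math.hypot(x, y):.1f} um")
```

Chemotaxis strategies of the kind inferred in the paper would enter by letting λ, r, or D_T depend on the swimming direction relative to the chemoattractant gradient.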
As national efforts to reduce CO2 emissions intensify, policy-makers need increasingly specific, subnational information about the sources of CO2 and the potential reductions and economic implications of different possible policies. This is particularly true in China, a large and economically diverse country that has rapidly industrialized and urbanized and that has pledged under the Paris Agreement that its emissions will peak by 2030. We present new, city-level estimates of CO2 emissions for 182 Chinese cities, decomposed into 17 different fossil fuels, 46 socioeconomic sectors, and 7 industrial processes. We find that more affluent cities have systematically lower emissions per unit of gross domestic product (GDP), supported by imports from less affluent, industrial cities located nearby. In turn, clusters of industrial cities are supported by nearby centers of coal or oil extraction. Whereas policies directly targeting manufacturing and electric power infrastructure would drastically undermine the GDP of industrial cities, consumption-based policies might allow emission reductions to be subsidized by those with greater ability to pay. In particular, sector-based analysis of each city suggests that technological improvements could be a practical and effective means of reducing emissions while maintaining growth and the current economic structure and energy system. We explore city-level emission reductions under three scenarios of technological progress to show that substantial reductions (up to 31%) are possible by updating a disproportionately small fraction of existing infrastructure.
Background: Flooding during seasonal monsoons affects millions of hectares of rice-cultivated areas across Asia. Submerged rice plants die within a week due to lack of oxygen, light and excessive elongation growth to escape the water. Submergence tolerance was first reported in an aus-type rice landrace, FR13A, and the ethylene-responsive transcription factor (TF) gene SUB1A-1 was identified as the major tolerance gene. Intolerant rice varieties generally lack
the SUB1A gene but some intermediate tolerant varieties, such as IR64, carry the allelic variant SUB1A-2. Differential effects of the two alleles have so far not been addressed. As a first step, we have therefore quantified and compared the expression of nearly 2500 rice TF genes between IR64 and its derived tolerant near isogenic line IR64-Sub1, which carries the SUB1A-1 allele. Gene expression was studied in internodes, where the main difference in expression between
the two alleles was previously shown.
Results: Nineteen and twenty-six TF genes were identified that responded to submergence in IR64 and IR64-Sub1,
respectively. Only one gene was found to be submergence-responsive in both, suggesting different regulatory pathways under submergence in the two genotypes. These differentially expressed genes (DEGs) mainly included MYB, NAC, TIFY and Zn-finger TFs, and most genes were downregulated upon submergence. In IR64, but not in IR64-Sub1,
SUB1B and SUB1C, which are also present in the Sub1 locus, were identified as submergence-responsive. Four TFs were not submergence-responsive but exhibited constitutive, genotype-specific differential expression. Most of the identified submergence-responsive DEGs are associated with regulatory hormonal pathways, i.e. gibberellins (GA), abscisic acid (ABA), and jasmonic acid (JA), apart from ethylene. An in-silico promoter analysis of the two genotypes revealed the
presence of allele-specific single nucleotide polymorphisms, giving rise to ABRE, DRE/CRT, CARE and Site II cis-elements, which can partly explain the observed differential TF gene expression.
Conclusion: This study identified new gene targets with the potential to further enhance submergence tolerance in rice and provides insights into novel aspects of SUB1A-mediated tolerance.
Admixture is the hybridization between populations within one species. It can increase plant fitness and population viability by alleviating inbreeding depression and increasing genetic diversity. However, populations are often adapted to their local environments and admixture with distant populations could break down local adaptation by diluting the locally adapted genomes. Thus, admixed genotypes might be selected against and be outcompeted by locally adapted genotypes in the local environments. To investigate the costs and benefits of admixture, we compared the performance of admixed and within-population F1 and F2 generations of the European plant Lythrum salicaria in a reciprocal transplant experiment at three European field sites over a 2-year period. Despite strong differences between site and plant populations for most of the measured traits, including herbivory, we found limited evidence for local adaptation. The effects of admixture depended on experimental site and plant population, and were positive for some traits. Plant growth and fruit production of some populations increased in admixed offspring and this was strongest with larger parental distances. These effects were only detected in two of our three sites. Our results show that, in the absence of local adaptation, admixture may boost plant performance, and that this is particularly apparent in stressful environments. We suggest that admixture between foreign and local genotypes can potentially be considered in nature conservation to restore populations and/or increase population viability, especially in small inbred or maladapted populations.
Understanding of wave environments is critical for the understanding of how particles are accelerated and lost in space. This study shows that in the vicinity of Europa and Ganymede, which have induced and internal magnetic fields, respectively, chorus wave power is significantly increased. The observed enhancements are persistent and exceed median values of wave activity by up to 6 orders of magnitude for Ganymede. Produced waves may have a pronounced effect on the acceleration and loss of particles in the Jovian magnetosphere and other astrophysical objects. The generated waves are capable of significantly modifying the energetic particle environment, accelerating particles to very high energies, or producing depletions in phase space density. Observations of Jupiter's magnetosphere provide a unique opportunity to observe how objects with an internal magnetic field can interact with particles trapped in magnetic fields of larger scale objects.
The variabilities of the semidiurnal solar and lunar tides of the equatorial electrojet (EEJ) are investigated during the 2003, 2006, 2009 and 2013 major sudden stratospheric warming (SSW) events in this study. For this purpose, ground-magnetometer recordings at the equatorial observatories in Huancayo and Fúquene are utilized. Results show a major enhancement in the amplitude of the EEJ semidiurnal lunar tide in each of the four warming events. The EEJ semidiurnal solar tidal amplitude shows an amplification prior to the onset of warmings, a reduction during the deceleration of the zonal mean zonal wind at 60° N and 10 hPa, and a second enhancement a few days after the peak reversal of the zonal mean zonal wind during all four SSWs. Results also reveal that the amplitude of the EEJ semidiurnal lunar tide becomes comparable to, or even greater than, the amplitude of the EEJ semidiurnal solar tide during all these warming events. The present study also compares the EEJ semidiurnal solar and lunar tidal changes with the variability of the migrating semidiurnal solar (SW2) and lunar (M2) tides in neutral temperature and zonal wind obtained from numerical simulations at E-region heights. A better agreement is found between the enhancements of the EEJ semidiurnal lunar tide and the M2 tide than between the enhancements of the EEJ semidiurnal solar tide and the SW2 tide, in both the neutral temperature and zonal wind at E-region altitudes.
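Tidal amplitudes of the kind described above are typically extracted from magnetometer records by harmonic regression at the known tidal periods. A minimal sketch of that idea: the 12.00 h (solar) and 12.42 h (lunar) periods are the real semidiurnal tidal periods, but the signal and its amplitudes are synthetic stand-ins, not EEJ data.

```python
import numpy as np

# Synthetic hourly "EEJ" series with a semidiurnal solar (12.00 h) and a
# weaker semidiurnal lunar (12.42 h) component; amplitudes are invented.
t = np.arange(0, 24 * 30, 1.0)  # 30 days of hourly samples
y = (20 * np.cos(2 * np.pi * t / 12.0)
     + 5 * np.cos(2 * np.pi * t / 12.42 + 0.3))

# Least-squares harmonic fit: one cosine/sine pair per tidal period.
cols = []
for period in (12.0, 12.42):
    cols += [np.cos(2 * np.pi * t / period), np.sin(2 * np.pi * t / period)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Amplitude of each tide from its cosine/sine coefficients.
solar_amp = np.hypot(coef[0], coef[1])
lunar_amp = np.hypot(coef[2], coef[3])
print(solar_amp, lunar_amp)  # recovers ~20 and ~5
```

Separating the two periods requires a record longer than their beat period (about 15 days), which is why a 30-day window is used here.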
Natural extreme events are an integral part of nature on planet Earth. Usually these events are considered hazardous only when humans are exposed to them; in that case, however, natural hazards can have devastating impacts on human societies. Hydro-meteorological hazards in particular have a high damage potential, e.g. in the form of riverine and pluvial floods, winter storms, hurricanes and tornadoes, which can occur all over the globe. As the climate warms, an increase in extreme weather that can trigger natural hazards is also to be expected. Yet not only changing natural systems but also changing societal systems contribute to an increasing risk associated with these hazards, through increasing exposure and possibly also increasing vulnerability to the impacts of natural events. Thus, appropriate risk management is required to adapt all parts of society to existing and upcoming risks at various spatial scales. One essential part of risk management is risk assessment, including the estimation of economic impacts. However, reliable methods for estimating the economic impacts of hydro-meteorological hazards are still missing. This thesis therefore deals with the question of how the reliability of hazard damage estimates can be improved, represented and propagated across all spatial scales. This question is investigated using the specific example of economic impacts on companies as a result of riverine floods in Germany.
Flood damage models aim to describe the damage processes during a given flood event. In other words, they describe the vulnerability of a specific object to a flood. The models can be based on empirical data sets collected after flood events. In this thesis, tree-based models trained with survey data are used for the estimation of direct economic flood impacts on the objects. It is found that these machine learning models, in conjunction with increasing sizes of the data sets used to derive them, outperform state-of-the-art damage models. However, despite the performance improvements induced by using multiple variables and more data points, large prediction errors remain at the object level. The occurrence of these high errors was explained by a further investigation using distributions derived from the tree-based models. The investigation showed that direct economic impacts to individual objects cannot be modeled by a normal distribution. Yet most state-of-the-art approaches assume a normal distribution and take mean values as point estimators. Consequently, the predictions are unlikely values within the distributions, resulting in high errors. At larger spatial scales more objects are considered for the damage estimation. This leads to a better fit of the damage estimates to a normal distribution. Consequently, the performance of the point estimators also gets better, although large errors can still occur due to the variance of the normal distribution. It is recommended to use distributions instead of point estimates in order to represent the reliability of damage estimates.
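The point that mean-value point estimators are "unlikely values" for skewed object-level damages can be illustrated with a toy heavy-tailed damage distribution. The lognormal form and its parameters here are purely illustrative, not fitted to the thesis' survey data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical object-level flood damages: heavy-tailed, not normal
# (parameters chosen for illustration only).
damages = rng.lognormal(mean=8.0, sigma=1.5, size=100_000)

mean_est = damages.mean()        # classical point estimator
median_est = np.median(damages)  # a more "typical" value

# For a right-skewed distribution, most objects lie below the mean:
frac_above_mean = (damages > mean_est).mean()

print(f"mean:   {mean_est:,.0f}")
print(f"median: {median_est:,.0f}")
print(f"share of objects with damage above the mean: {frac_above_mean:.1%}")
```

With this skew, roughly three quarters of the simulated objects have damages below the mean, which is exactly why reporting the full distribution rather than a single point estimate better represents the reliability of an object-level prediction.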
In addition, current approaches mostly ignore the uncertainty associated with the characteristics of the hazard and of the exposed objects. For a given flood event, e.g. the estimation of the water level at a certain building is prone to uncertainties. Current approaches define exposed objects mostly through land use data sets. These data sets often show inconsistencies, which introduce additional uncertainties. Furthermore, state-of-the-art approaches suffer from a lack of consistency when predicting damage at different spatial scales, because different types of exposure data sets are used for model derivation and application. To address these issues, a novel object-based method was developed in this thesis. The method enables a seamless estimation of hydro-meteorological hazard damage across spatial scales, including uncertainty quantification. The application and validation of the method resulted in plausible estimations at all spatial scales without overestimating the uncertainty.
The application of the method is made possible mainly by newly available data sets containing individual buildings, since these allow flood-affected objects to be identified by overlaying the data sets with water masks. However, identifying affected objects with two different water masks revealed huge differences in the number of objects identified. More effort is thus needed for their identification, since the number of affected objects largely determines the order of magnitude of the economic flood impacts.
In general, the method represents the uncertainties associated with the three components of risk, namely hazard, exposure and vulnerability, in the form of probability distributions. The object-based approach enables a consistent propagation of these uncertainties in space. Aside from the propagation of damage estimates and their uncertainties across spatial scales, a propagation between models estimating direct and indirect economic impacts was demonstrated. This enables the inclusion of uncertainties associated with the direct economic impacts within the estimation of the indirect economic impacts. Consequently, the modeling procedure facilitates the representation of the reliability of estimated total economic impacts. Representing the estimates' reliability prevents reasoning based on a false certainty, which might otherwise be attributed to point estimates. The developed approach therefore facilitates meaningful flood risk management and adaptation planning.
The successful post-event application and the representation of the uncertainties also qualify the method for use in future risk assessments. The developed method thus enables the representation of the assumptions made for future risk assessments, which is crucial information for future risk management. This is an important step forward, since the representation of the reliability associated with all components of risk is currently lacking in all state-of-the-art methods assessing future risk.
In conclusion, the use of object-based methods giving results in the form of distributions instead of point estimates is recommended. Improving model performance by means of multi-variable models and additional data points is possible, but the gains are small. Uncertainties associated with all components of damage estimation should be included and represented within the results. Furthermore, the findings of the thesis suggest that, at larger scales, the influence of the uncertainty associated with the vulnerability is smaller than that of the uncertainties associated with the hazard and the exposure. This leads to the conclusion that, for an increased reliability of flood damage estimations and risk assessments, the improvement and active inclusion of hazard and exposure, including their uncertainties, is needed in addition to improvements of the models describing the vulnerability of the objects.
Together with the gradual change of mean values, ongoing climate change is projected to increase the frequency and amplitude of temperature and precipitation extremes in many regions of Europe. The impacts of such, in most cases short-term, extraordinary climate situations on terrestrial ecosystems are of central interest in recent climate change research, because it cannot be assumed per se that known dependencies between climate variables and ecosystems scale linearly. So far, however, a method to quantify such impacts in terms of simultaneities of event time series has been lacking.
In the course of this manuscript, the new statistical approach of Event Coincidence Analysis (ECA) as well as its R implementation is introduced: a methodology that allows assessing whether or not two types of event time series exhibit similar sequences of occurrences. Applications of the method are presented, analyzing climate impacts on different temporal and spatial scales: the impact of extraordinary expressions of various climatic variables on tree stem variations (subdaily and local scale), the impact of extreme temperature and precipitation events on the flowering time of European shrub species (weekly and country scale), the impact of extreme temperature events on ecosystem health in terms of NDVI (weekly and continental scale), and the impact of El Niño and La Niña events on precipitation anomalies (seasonal and global scale).
The applications presented in this thesis refine relationships already known from classical methods and also deliver substantial new findings to the scientific community: the widely known positive correlation between flowering time and temperature, for example, is confirmed to be valid for the tails of the distributions, while the widely assumed positive dependency between stem diameter variation and temperature is shown to be invalid for very warm and very cold days. The larger-scale investigations underline the sensitivity of anthropogenically shaped landscapes towards temperature extremes in Europe and provide a comprehensive global ENSO impact map for strong precipitation events.
Finally, by publishing the R implementation of the method, this thesis shall enable other researchers to further investigate similar research questions using Event Coincidence Analysis.
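The core quantity of Event Coincidence Analysis, the fraction of events in one series that are matched by an event in another series within a tolerance window, can be sketched in a few lines. This is Python rather than the published R implementation, and it omits the significance testing a full ECA includes; the event times are invented.

```python
import numpy as np

def coincidence_rate(a_times, b_times, delta_t):
    """Fraction of events in `a_times` that coincide with at least one
    event in `b_times` within a symmetric tolerance window +/- delta_t."""
    a = np.asarray(a_times, dtype=float)
    b = np.asarray(b_times, dtype=float)
    # Pairwise time differences: hit if any |t_a - t_b| <= delta_t.
    hits = np.abs(a[:, None] - b[None, :]) <= delta_t
    return hits.any(axis=1).mean()

# Toy example: extreme-temperature days vs. plant-response days.
temp_events = [10, 25, 40, 70]
response_events = [11, 41, 90]

rate = coincidence_rate(temp_events, response_events, delta_t=2)
print(rate)  # 2 of 4 temperature events are matched -> 0.5
```

In a real ECA the observed rate is compared against the rate expected for independent event series (e.g. via Poisson assumptions or surrogates) to decide whether the coincidences are significant.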
The 1920s witnessed a growing appearance of individual American Jews –
largely from wealthy and prominent families – who received training by Asian teachers and pursued Buddhist practices in Asian-founded Buddhist groups. Some of these American Jews gained prominence and leadership status in Buddhist communities and also ran their own semi-established Buddhist groups, with limited success. The social position and material success of these Jewish Buddhists allowed them the time and means to study and practice Buddhism. This paper illustrates these developments through the story of Julius Goldwater, a member of the prominent German Jewish family that included Senator Barry Goldwater. After encountering Buddhism in Hawaii and being ordained in Kyoto, Goldwater moved to Los Angeles to become one of the first European-American Jodo Shinshu ministers in America. This paper demonstrates how he was an early convert, teacher, and wartime proponent of American Buddhism.
Legal Regulation of Financial Control in the Sphere of Monetary Circulation in the Russian Federation
(2018)
Numbers are omnipresent in daily life. They vary in display format and in their meaning so that it does not seem self-evident that our brains process them more or less easily and flexibly. The present thesis addresses mental number representations in general, and specifically the impact of finger counting on mental number representations. Finger postures that result from finger counting experience are one of many ways to convey numerical information. They are, however, probably the one where the numerical content becomes most tangible. By investigating the role of fingers in adults’ mental number representations the four presented studies also tested the Embodied Cognition hypothesis which predicts that bodily experience (e.g., finger counting) during concept acquisition (e.g., number concepts) stays an immanent part of these concepts. The studies focussed on different aspects of finger counting experience. First, consistency and further details of spontaneously used finger configurations were investigated when participants repeatedly produced finger postures according to specific numbers (Study 1). Furthermore, finger counting postures (Study 2), different finger configurations (Study 2 and 4), finger movements (Study 3), and tactile finger perception (Study 4) were investigated regarding their capability to affect number processing. Results indicated that active production of finger counting postures and single finger movements as well as passive perception of tactile stimulation of specific fingers co-activated associated number knowledge and facilitated responses towards corresponding magnitudes and number symbols. Overall, finger counting experience was reflected in specific effects in mental number processing of adult participants. This indicates that finger counting experience is an immanent part of mental number representations.
Findings are discussed in the light of a novel model. The MASC (Model of Analogue and Symbolic Codes) combines and extends two established models of number and magnitude processing. Especially a symbolic motor code is introduced as an essential part of the model. It comprises canonical finger postures (i.e., postures that are habitually used to represent numbers) and finger-number associations. The present findings indicate that finger counting functions both as a sensorimotor magnitude and as a symbolic representational format and that it thereby directly mediates between physical and symbolic size. The implications are relevant both for basic research regarding mental number representations and for pedagogic practices regarding the effectiveness of finger counting as a means to acquire a fundamental grasp of numbers.
The evaluation and verification of landscape evolution models (LEMs) has long been limited by a lack of suitable observational data and of statistical measures that can fully capture the complexity of landscape changes. This lack of data limits the use of the objective-function-based evaluation prolific in other modelling fields, and restricts the application of sensitivity analyses in the models and the consequent assessment of model uncertainties. To overcome this deficiency, a novel model function approach has been developed, with each model function representing an aspect of model behaviour, which allows for the application of sensitivity analyses. The model function approach is used to assess the relative sensitivity of the CAESAR-Lisflood LEM to a set of model parameters by applying the Morris method sensitivity analysis for two contrasting catchments. The test revealed that the model was most sensitive to the choice of the sediment transport formula for both catchments, and that each parameter influenced model behaviours differently, with model functions relating to internal geomorphic changes responding differently from those relating to the sediment yields from the catchment outlet. The model functions proved useful for evaluating the sensitivity of LEMs in the absence of the data and methods required for an objective function approach.
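The Morris method mentioned above screens parameters via one-at-a-time "elementary effects". A minimal sketch with a toy model standing in for the LEM; the trajectory design is simplified relative to standard Morris sampling, and the model and parameter roles are invented for illustration.

```python
import numpy as np

def morris_elementary_effects(f, n_params, n_traj=50, delta=0.1, seed=0):
    """One-at-a-time elementary effects (simplified Morris screening).
    Returns mu* (mean |EE|) and sigma (std of EE) per parameter."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_traj, n_params))
    for tr in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=n_params)  # random base point
        base = f(x)
        for i in range(n_params):
            x_step = x.copy()
            x_step[i] += delta                         # perturb one parameter
            ee[tr, i] = (f(x_step) - base) / delta     # elementary effect
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy model: parameter 0 has a strong linear effect, parameter 1 a weak
# nonlinear one, parameter 2 none (stand-ins for LEM parameters).
f = lambda x: 10 * x[0] + x[1] ** 2
mu_star, sigma = morris_elementary_effects(f, n_params=3)
print(mu_star)  # parameter 0 dominates, parameter 2 is inert
```

Ranking parameters by mu* identifies the influential ones; a large sigma relative to mu* flags nonlinearity or interactions, which is how the method distinguishes the different behavioural responses described above.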
It is well-documented that strength training (ST) improves measures of muscle strength in young athletes. Less is known on transfer effects of ST on proxies of muscle power and the underlying dose-response relationships. The objectives of this meta-analysis were to quantify the effects of ST on lower limb muscle power in young athletes and to provide dose-response relationships for ST modalities such as frequency, intensity, and volume. A systematic literature search of electronic databases identified 895 records. Studies were eligible for inclusion if (i) healthy trained children (girls aged 6–11 y, boys aged 6–13 y) or adolescents (girls aged 12–18 y, boys aged 14–18 y) were examined, (ii) ST was compared with an active control, and (iii) at least one proxy of muscle power [squat jump (SJ) and countermovement jump height (CMJ)] was reported. Weighted mean standardized mean differences (SMDwm) between subjects were calculated. Based on the findings from 15 statistically aggregated studies, ST produced significant but small effects on CMJ height (SMDwm = 0.65; 95% CI 0.34–0.96) and moderate effects on SJ height (SMDwm = 0.80; 95% CI 0.23–1.37). The sub-analyses revealed that the moderating variable expertise level (CMJ height: p = 0.06; SJ height: N/A) did not significantly influence ST-related effects on proxies of muscle power. “Age” and “sex” moderated ST effects on SJ (p = 0.005) and CMJ height (p = 0.03), respectively. With regard to the dose-response relationships, findings from the meta-regression showed that none of the included training modalities predicted ST effects on CMJ height. For SJ height, the meta-regression indicated that the training modality “training duration” significantly predicted the observed gains (p = 0.02), with longer training durations (>8 weeks) showing larger improvements. This meta-analysis clearly proved the general effectiveness of ST on lower-limb muscle power in young athletes, irrespective of the moderating variables. 
Dose-response analyses revealed that longer training durations (>8 weeks) are more effective at improving SJ height. No such training modalities were found for CMJ height. Thus, there appear to be other training modalities, besides the ones included in our analyses, that may have an effect on SJ and particularly CMJ height. ST monitoring through rating of perceived exertion, movement velocity or force-velocity profile could be a promising monitoring tool for lower-limb muscle power development in young athletes.
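The SMD effect sizes aggregated above are computed per study from group means, standard deviations and sample sizes. A sketch of one common formulation (Hedges' g with small-sample correction); the example numbers are invented, not taken from the included studies.

```python
import math

def smd_hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Between-subject standardized mean difference with Hedges'
    small-sample bias correction (one common formulation)."""
    # Pooled standard deviation of the two groups.
    s_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled                 # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction
    return d * correction

# Hypothetical study: CMJ height (cm) after ST vs. an active control.
g = smd_hedges_g(m1=32.1, sd1=4.0, n1=20, m2=29.8, sd2=4.2, n2=20)
print(round(g, 2))  # ~0.55, a small-to-moderate effect
```

In the meta-analysis, such per-study values are then weighted (e.g. by inverse variance) to obtain the pooled SMDwm reported above; that weighting step is not shown here.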
More than a billion people rely on water from rivers sourced in High Mountain Asia (HMA), a significant portion of which is derived from snow and glacier melt. Rural communities are heavily dependent on the consistency of runoff, and are highly vulnerable to shifts in their local environment brought on by climate change. Despite this dependence, the impacts of climate change in HMA remain poorly constrained due to poor process understanding, complex terrain, and insufficiently dense in-situ measurements.
HMA's glaciers contain more frozen water than any region outside of the poles. Their extensive retreat is a highly visible and much studied marker of regional and global climate change. However, in many catchments, snow and snowmelt represent a much larger fraction of the yearly water budget than glacial meltwaters. Despite their importance, climate-related changes in HMA's snow resources have not been well studied.
Changes in the volume and distribution of snowpack have complex and extensive impacts on both local and global climates. Eurasian snow cover has been shown to impact the strength and direction of the Indian Summer Monsoon -- which is responsible for much of the precipitation over the Indian Subcontinent -- by modulating earth-surface heating. Shifts in the timing of snowmelt have been shown to limit the productivity of major rangelands, reduce streamflow, modify sediment transport, and impact the spread of vector-borne diseases. However, a large-scale regional study of climate impacts on snow resources had yet to be undertaken.
Passive Microwave (PM) remote sensing is a well-established empirical method of studying snow resources over large areas. Since 1987, there have been consistent daily global PM measurements which can be used to derive an estimate of snow depth, and hence snow-water equivalent (SWE) -- the amount of water stored in snowpack. The SWE estimation algorithms were originally developed for flat and even terrain -- such as the Russian and Canadian Arctic -- and have rarely been used in complex terrain such as HMA.
This dissertation first examines factors present in HMA that could impact the reliability of SWE estimates. Forest cover, absolute snow depth, long-term average wind speeds, and hillslope angle were found to be the strongest controls on SWE measurement reliability. While forest density and snow depth are factors accounted for in modern SWE retrieval algorithms, wind speed and hillslope angle are not. Despite uncertainty in absolute SWE measurements and differences in the magnitude of SWE retrievals between sensors, single-instrument SWE time series were found to be internally consistent and suitable for trend analysis.
Building on this finding, this dissertation tracks changes in SWE across HMA using a statistical decomposition technique. An aggregate decrease in SWE was found (10.6 mm/yr), despite large spatial and seasonal heterogeneities. Winter SWE increased in almost half of HMA, despite general negative trends throughout the rest of the year. The elevation distribution of these negative trends indicates that while changes in SWE have likely impacted glaciers in the region, climate change impacts on these two pieces of the cryosphere are somewhat distinct.
Following the discussion of relative changes in SWE, this dissertation explores changes in the timing of the snowmelt season in HMA using a newly developed algorithm. The algorithm is shown to accurately track the onset and end of the snowmelt season (70% within 5 days of a control dataset, 89% within 10). Using a 29-year time series, changes in the onset, end, and duration of snowmelt are examined. While nearly the entirety of HMA has experienced an earlier end to the snowmelt season, large regions of HMA have seen a later start to the snowmelt season. Snowmelt periods have also decreased in almost all of HMA, indicating that the snowmelt season is generally shortening and ending earlier across HMA.
By examining shifts in both the spatio-temporal distribution of SWE and the timing of the snowmelt season across HMA, we provide a detailed accounting of changes in HMA's snow resources. The overall trend in HMA is towards less SWE storage and a shorter snowmelt season. However, long-term and regional trends conceal distinct seasonal, temporal, and spatial heterogeneity, indicating that changes in snow resources are strongly controlled by local climate and topography, and that inter-annual variability plays a significant role in HMA's snow regime.
Rabbi Eliyahu Eliezer Dessler (1892–1953) is often portrayed as antagonistic to secular studies. However, his writings show more of an intellectual hierarchy that places Torah wisdom at the top and all other wisdom a distant second. R. Dessler expended great effort promoting Torah scholarship while generally refraining from disparaging secular studies. Looking at the writings of his predecessors in the Mussar (moralist) movement, one can see that there was no disapproval of worldly education there, either: in fact, R. Dessler and his predecessors were well-educated in many secular disciplines. This essay places R. Dessler's attitude toward Wissenschaft des Judentums within the context of his life's mission to advance talmudic study and his consequent unwillingness to countenance anything that detracted from furthering the learning of Torah. I argue that, whereas his extreme opposition to Wissenschaft was the result of his aversion to its aims, methods and conclusions, his nuanced relationship to Orthodox Wissenschaft was the result of the hierarchy through which he viewed secular as opposed to talmudic study.
Brownian yet non-Gaussian dynamics has been observed in a variety of systems. These are processes characterised by a linear growth in time of the mean squared displacement, yet the probability density function of the particle displacement is distinctly non-Gaussian, and often of exponential (Laplace) shape. This apparently ubiquitous behaviour, observed in very different physical systems, has been interpreted as resulting from diffusion in inhomogeneous environments and mathematically represented through a variable, stochastic diffusion coefficient. Indeed, different models describing a fluctuating diffusivity have been studied. Here we present a new view of the stochastic basis describing time-dependent random diffusivities within a broad spectrum of distributions. Concretely, our study is based on the very generic class of the generalised Gamma distribution. Two models for the particle spreading in such random diffusivity settings are studied. The first belongs to the class of generalised grey Brownian motion while the second follows from the idea of diffusing diffusivities. The two processes exhibit significant characteristics which reproduce experimental results from different biological and physical systems. We promote these two physical models for the description of stochastic particle motion in complex environments.
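The random-diffusivity idea can be illustrated with a superstatistical toy simulation: each particle diffuses normally, but with its own diffusivity drawn from a broad distribution (an exponential here, a special case of the generalised Gamma class). The mean squared displacement stays linear while the displacement PDF becomes Laplace rather than Gaussian. The parameters are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each particle gets its own diffusivity D ~ Exp(1); conditionally on D,
# its displacement at time t is Gaussian with variance 2*D*t.
n_particles, t = 100_000, 1.0
D = rng.exponential(scale=1.0, size=n_particles)
x = rng.normal(loc=0.0, scale=np.sqrt(2 * D * t))

# MSD is still "Brownian": <x^2> = 2 <D> t = 2 here ...
msd = (x ** 2).mean()

# ... but the displacement PDF is Laplace, not Gaussian: the excess
# kurtosis of a Laplace distribution is 3 (Gaussian: 0).
excess_kurtosis = (x ** 4).mean() / (x ** 2).mean() ** 2 - 3
print(msd, excess_kurtosis)
```

A Gaussian mixed over an exponentially distributed variance is exactly a Laplace distribution, which is why the simulated excess kurtosis comes out near 3 despite the perfectly linear MSD.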
Potato (Solanum tuberosum L.) is one of the most important food crops worldwide. Current potato varieties are highly susceptible to drought stress. In view of global climate change, selection of cultivars with improved drought tolerance and high yield potential is of paramount importance. Drought tolerance breeding of potato is currently based on direct selection according to yield and phenotypic traits and requires multiple trials under drought conditions. Marker‐assisted selection (MAS) is cheaper, faster and reduces classification errors caused by noncontrolled environmental effects. We analysed 31 potato cultivars grown under optimal and reduced water supply in six independent field trials. Drought tolerance was determined as tuber starch yield. Leaf samples from young plants were screened for preselected transcript and nontargeted metabolite abundance using qRT‐PCR and GC‐MS profiling, respectively. Transcript marker candidates were selected from a published RNA‐Seq data set. A Random Forest machine learning approach extracted metabolite and transcript markers for drought tolerance prediction with low error rates of 6% and 9%, respectively. Moreover, by combining transcript and metabolite markers, the prediction error was reduced to 4.3%. Feature selection from Random Forest models allowed model minimization, yielding a minimal combination of only 20 metabolite and transcript markers that were successfully tested for their reproducibility in 16 independent agronomic field trials. We demonstrate that a minimum combination of transcript and metabolite markers sampled at early cultivation stages predicts potato yield stability under drought largely independent of seasonal and regional agronomic conditions.
Gershom Scholem (1897–1982) portrayed modern Zionist historical scholarship as both a rejection and a corrective fulfillment of earlier eras of Wissenschaft des Judentums. Through attacks on his scholarly predecessors, Scholem detailed his vision for the potential of this renaissance of Wissenschaft to entail both objective research and a commitment to treating Judaism as a “living organism,” an approach that would ultimately ensure the scholarship could deliver value to the Jewish community. This article will explore the tensions that arise from Scholem’s commitments, his occasional admissions of these tensions, and his attempts to overcome them.
The forcing from the anthropogenic heat flux (AHF), i.e. the dissipation of primary energy consumed by human civilisation, produces direct climate warming. Today, the globally averaged AHF is negligibly small compared to the indirect forcing from greenhouse gas emissions. Locally or regionally, though, it has a significant impact. Historical observations show a constant exponential growth of worldwide energy production. A continuation of this trend might be fueled or even amplified by the exploration of new carbon-free energy sources like fusion power. In such a scenario, the impacts of the AHF become a relevant factor for anthropogenic post-greenhouse gas climate change on the global scale as well.
This master's thesis aims at estimating the climate impacts of such a growing AHF forcing. In the first part of this work, the AHF is built into simple, conceptual zero- and one-dimensional Energy Balance Models (EBMs), providing quick order-of-magnitude estimates of the temperature impact. In the one-dimensional EBM, the ice-albedo feedback from enhanced ice melting due to the AHF increases the temperature impact significantly compared to the zero-dimensional EBM.
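A zero-dimensional EBM of the kind used in the first part can be written down in a few lines. The parameter values below are standard textbook choices, and the present-day global-mean AHF of ~0.03 W m⁻² is an assumption of this sketch, not the thesis' calibration:

```python
# Minimal zero-dimensional energy balance model with an anthropogenic
# heat flux (AHF) term added to the absorbed solar radiation.
S0 = 1361.0          # solar constant, W m^-2
ALBEDO = 0.3         # planetary albedo
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
EPS = 0.612          # effective emissivity, tuned so T(0) is ~288 K

def equilibrium_temperature(ahf=0.0):
    """Global-mean temperature balancing absorbed solar radiation plus AHF."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0 + ahf
    return (absorbed / (EPS * SIGMA)) ** 0.25

# Assumed present-day global-mean AHF of ~0.03 W m^-2, and a tenfold increase:
dT_today = equilibrium_temperature(0.03) - equilibrium_temperature(0.0)
dT_tenfold = equilibrium_temperature(0.3) - equilibrium_temperature(0.0)
```

This no-feedback estimate lands near the lower end of the ranges reported in the abstract (roughly 0.01 K today, roughly 0.09 K for the tenfold case); the one-dimensional EBM's ice-albedo feedback raises the response.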
Additionally, the forcing is built into a climate model of intermediate complexity, CLIMBER-3α. This allows for the investigation of the effect of localised AHF and gives further insights into the impact of the AHF on processes like the ocean heat uptake, sea ice and snow pattern changes, and the ocean circulation.
The global mean temperature response from the AHF today is of the order of 0.010–0.016 K in all reasonable model configurations tested. A transient tenfold increase of this forcing heats up the Earth system additionally by roughly 0.1–0.2 K in the presented models. Further growth can also affect the tipping probability of certain climate elements.
Most renewable energy sources contribute only partially, if at all, to the AHF forcing, as the energy they harvest would dissipate anyway. Hence, the transition to a (carbon-free) renewable energy mix, which, in particular, does not rely on nuclear power, eliminates the local and global climate impacts of the increasing AHF forcing, independent of the growth of energy production.
Reviewed work:
Tania Fabricius, Die Aufarbeitung von in Kolonialkriegen begangenem Unrecht: Anwendbarkeit und Anwendung internationaler Regeln des bewaffneten Konflikts und nationalen Militärrechts auf Geschehnisse in europäischen Kolonialgebieten in Afrika. Schriften zum Völkerrecht 223, Berlin: Duncker & Humblot, 2017, 405 pages, ISBN 978-3-428-15011-3
The aim of this Bachelor's thesis is to examine the function and effect of irony in critical public commentary on politics. How can an increased use of irony in German media coverage influence how people think and speak about political events, their free formation of opinions and judgments, and ultimately their decision-making? As the result of a qualitative document analysis, an operationalized scheme is developed that allows the precise classification of different uses of irony in the public media at the text level and, through its argumentative function and comic force, identifies the influence that irony exerts on the respective policy debate.
To pin down more precisely the ambivalent role of irony in language use, where it can appear both as a useful means of expressing the contradictions of complex socio-political affairs and as a reaction of powerlessness to their irresolvability, the thesis first approaches irony via its epistemological history and its basic rhetorical definition. Irony always says one thing and at the same time something else. It thus opens up a field of meaning in the tension between different or even opposing poles and admits several incompatible interpretations. This particular stylistic device can therefore be used for a differentiated, multi-perspective view, or precisely to avoid clear positions and to keep open a retreat into alternative readings of a statement.
Furthermore, three major epochal currents can be distinguished that understood or developed irony as a comprehensive intellectual attitude and epistemological position: the philosophical technique of dissimulation of Socratic irony, the poetic-aesthetic modes of representation of Romantic irony, and the critical self-reflection of modern irony are discussed in turn. These currents probe whether an ironic attitude turns out to be a differentiated approach to the complex truth of the human condition or, on the contrary, an irrational escape or dead end.
The second pillar of the analysis considers the comic effect of irony and the significance of laughter for human beings, the only living creatures capable of such a reaction. When do people laugh, and what do they express by it: helplessness at the limits of their social behavioral repertoire, or sovereignty in dealing with an unfamiliar situation? Comedy must be situated in its media uses, from entertainment culture to serious journalism, and irony must be clearly distinguished from other comic figures such as satire or sarcasm. Its comic component is counterbalanced by its share in the tragic, and the two at times interact.
Returning to the influence of an ironic choice of words or attitude on commentary, and thus on the perception of political events, both initial theses are confirmed: irony can be a helpful means of expression or a symptom of societal powerlessness. Rhetorically, it serves as a weapon in political discourse or as a means of ridiculing the opposing position. By always opening up several levels of meaning and simultaneously bringing other or even opposing ways of thinking and being into view, irony has the potential to call political as well as cultural ideals and guidelines into question anew. Its destructive power to expose contradictions thus contains an enlightening function for unmasking errors or opening up alternative approaches. On the other hand, an ironizing view and assessment of politics can also point to a society's loss of reality or identity, namely when the reality of life loses its persuasive force in the face of entirely different yet equally conceivable forms of organization and world views. Modern irony, in turn, offers the chance to take up this relativity of values and norms constructively as a key experience of one's own time.
The undertaking of this Bachelor's thesis culminates in making all these diverse types and levels of irony determinable in a single analytical scheme according to operationalized criteria from linguistics, rhetoric, literary studies, philosophy, and aesthetics. In a second step, the scheme applies these research findings to a political-science classification of the function and effect of irony as used in critical public commentary on politics by text-based media. This may lay a first foundation stone for a political theory of irony. In future, the scheme could be used in political-science research for a comprehensive, quantitative empirical study of the use of irony in press contributions of the German media.
From the contents:
▪ Towards human-rights protection against deportation for the seriously ill? Remarks on the ECtHR's Paposhvili judgment
▪ The ECtHR and the derogation of human rights: are states of emergency becoming accepted permanent conditions in times of terrorism?
▪ ECtHR: Wolter and Sarfert v. Germany: unequal treatment of children born within and outside of marriage in inheritance law
There is evidence for cortical contribution to the regulation of human postural control. Interference from concurrently performed cognitive tasks supports this notion, and the lateral prefrontal cortex (lPFC) has been suggested to play a prominent role in the processing of purely cognitive as well as cognitive-postural dual tasks. The degree of cognitive-motor interference varies greatly between individuals, but it is unresolved whether individual differences in the recruitment of specific lPFC regions during cognitive dual tasking are associated with individual differences in cognitive-motor interference. Here, we investigated inter-individual variability in a cognitive-postural multitasking situation in healthy young adults (n = 29) in order to relate it to inter-individual variability in lPFC recruitment during cognitive multitasking. For this purpose, a one-back working memory task was performed either as a single task or as a dual task in order to vary cognitive load. Participants performed these cognitive single and dual tasks either during upright stance on a balance pad that was placed on top of a force plate or during fMRI measurement with little to no postural demands. We hypothesized dual one-back task performance to be associated with lPFC recruitment when compared to single one-back task performance. In addition, we expected individual variability in lPFC recruitment to be associated with postural performance costs during concurrent dual one-back performance. As expected, behavioral performance costs in postural sway during dual one-back performance varied largely between individuals, and so did lPFC recruitment during dual one-back performance. Most importantly, individuals who recruited the right mid-lPFC to a larger degree during dual one-back performance also showed greater postural sway, as measured by larger performance costs in total center of pressure displacements. This effect was selective to the high-load dual one-back task and suggests a crucial role of the right lPFC in allocating resources during cognitive-motor interference. Our study provides further insight into the mechanisms underlying cognitive-motor multitasking and its impairments.
Arctic warming has implications for the functioning of terrestrial Arctic ecosystems, global climate and socioeconomic systems of northern communities. A research gap exists in high spatial resolution monitoring and understanding of the seasonality of permafrost degradation, spring snowmelt and vegetation phenology. This thesis explores the diversity and utility of dense TerraSAR-X (TSX) X-Band time series for monitoring ice-rich riverbank erosion, snowmelt, and phenology of Arctic vegetation at long-term study sites in the central Lena Delta, Russia and on Qikiqtaruk (Herschel Island), Canada. In the thesis the following three research questions are addressed:
• Are TSX time series capable of monitoring the dynamics of rapid permafrost degradation in ice-rich permafrost on an intra-seasonal scale, and can these datasets, in combination with climate data, identify the climatic drivers of permafrost degradation?
• Can multi-pass and multi-polarized TSX time series adequately monitor seasonal snow cover and snowmelt in small Arctic catchments and how does it perform compared to optical satellite data and field-based measurements?
• Do TSX time series reflect the phenology of Arctic vegetation and how does the recorded signal compare to in-situ greenness data from RGB time-lapse camera data and vegetation height from field surveys?
To answer the research questions, three years of TSX backscatter data from 2013 to 2015 for the Lena Delta study site and from 2015 to 2017 for the Qikiqtaruk study site were used in quantitative and qualitative analyses, complemented by optical satellite data and in-situ time-lapse imagery.
The dynamics of intra-seasonal ice-rich riverbank erosion in the central Lena Delta, Russia were quantified using TSX backscatter data at 2.4 m spatial resolution in HH polarization and validated with 0.5 m spatial resolution optical satellite data and field-based time-lapse camera data. Cliff top lines were automatically extracted from TSX intensity images using threshold-based segmentation and vectorization and combined in a geoinformation system with manually digitized cliff top lines from the optical satellite data and rates of erosion extracted from time-lapse cameras. The results suggest that the cliff top eroded at a constant rate throughout the entire erosional season. Linear mixed models confirmed that erosion was coupled with air temperature and precipitation at an annual scale; seasonal fluctuations did not influence 22-day erosion rates. The results highlight the potential of HH-polarized X-Band backscatter data for high temporal resolution monitoring of rapid permafrost degradation.
The distinct signature of wet snow in backscatter intensity images of TSX data was exploited to generate wet snow cover extent (SCE) maps on Qikiqtaruk at high temporal resolution. TSX SCE showed high similarity to Landsat 8-derived SCE when using cross-polarized VH data. Fractional snow cover (FSC) time series were extracted from TSX and optical SCE and compared to FSC estimations from in-situ time-lapse imagery. The TSX products showed strong agreement with the in-situ data and significantly improved the temporal resolution compared to the Landsat 8 time series. The final combined FSC time series revealed two topography-dependent snowmelt patterns that corresponded to in-situ measurements. Additionally, TSX was able to detect snow patches later into the season than Landsat 8, underlining the advantage of TSX for the detection of old snow. The TSX-derived snow information provided valuable insights into snowmelt dynamics on Qikiqtaruk previously not available.
The sensitivity of TSX to vegetation structure associated with phenological changes was explored on Qikiqtaruk. Backscatter and coherence time series were compared to greenness data extracted from in-situ digital time-lapse cameras and to detailed vegetation parameters on 30 areas of interest. Supporting previous results, vegetation height corresponded to backscatter intensity in the co-polarized HH/VV channel at an incidence angle of 31°. The dry, tall-shrub-dominated ecological class showed increasing backscatter with increasing greenness when using the cross-polarized VH/HH channel at 32° incidence angle, likely driven by volume scattering of emerging and expanding leaves. Ecological classes with more prostrate vegetation and higher bare-ground contributions showed decreasing backscatter trends over the growing season in the co-polarized VV/HH channels, likely a result of surface drying rather than a vegetation-structure signal. The results from shrub-dominated areas are promising and provide a complementary data source for high-temporal monitoring of vegetation phenology.
Overall this thesis demonstrates that dense time series of TSX with optical remote sensing and in-situ time-lapse data are complementary and can be used to monitor rapid and seasonal processes in Arctic landscapes at high spatial and temporal resolution.
TerraSAR-X time series fill a gap in spaceborne snowmelt monitoring of small Arctic catchments
(2018)
The timing of snowmelt is an important turning point in the seasonal cycle of small Arctic catchments. The TerraSAR-X (TSX) satellite mission is a synthetic aperture radar system (SAR) with high potential to measure the high spatiotemporal variability of snow cover extent (SCE) and fractional snow cover (FSC) on the small catchment scale. We investigate the performance of multi-polarized and multi-pass TSX X-Band SAR data in monitoring SCE and FSC in small Arctic tundra catchments of Qikiqtaruk (Herschel Island) off the Yukon Coast in the Western Canadian Arctic. We applied a threshold based segmentation on ratio images between TSX images with wet snow and a dry snow reference, and tested the performance of two different thresholds. We quantitatively compared TSX- and Landsat 8-derived SCE maps using confusion matrices and analyzed the spatiotemporal dynamics of snowmelt from 2015 to 2017 using TSX, Landsat 8 and in situ time lapse data. Our data showed that the quality of SCE maps from TSX X-Band data is strongly influenced by polarization and to a lesser degree by incidence angle. VH polarized TSX data performed best in deriving SCE when compared to Landsat 8. TSX derived SCE maps from VH polarization detected late lying snow patches that were not detected by Landsat 8. Results of a local assessment of TSX FSC against the in situ data showed that TSX FSC accurately captured the temporal dynamics of different snow melt regimes that were related to topographic characteristics of the studied catchments. Both in situ and TSX FSC showed a longer snowmelt period in a catchment with higher contributions of steep valleys and a shorter snowmelt period in a catchment with higher contributions of upland terrain. Landsat 8 had fundamental data gaps during the snowmelt period in all 3 years due to cloud cover. 
The results also revealed that choosing a positive threshold of 1 dB, which additionally detects ice layers formed by diurnal temperature variations, resulted in a more accurate estimation of snow cover than a negative threshold that detects wet snow alone. We find that TSX X-Band data in VH polarization perform at a quality comparable to Landsat 8 in deriving SCE maps when a positive threshold is used. We conclude that VH-polarized TSX data can be used to accurately monitor snowmelt events at high temporal and spatial resolution, overcoming limitations of Landsat 8, which, due to cloud-related data gaps, generally only indicated the onset and end of snowmelt.
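The threshold-based segmentation step can be illustrated with a minimal sketch. This is not the authors' implementation; the toy arrays, catchment mask, and simplified sign convention are assumptions for illustration:

```python
import numpy as np

def snow_cover_extent(scene_db, dry_ref_db, threshold_db):
    """Segment snow from the dB difference between an acquisition and a
    dry-snow reference (equivalent to thresholding the intensity ratio).
    A negative threshold flags the classic wet-snow backscatter drop; a
    positive threshold instead flags backscatter increases such as ice
    layers from diurnal refreezing."""
    ratio_db = scene_db - dry_ref_db
    if threshold_db < 0:
        return ratio_db < threshold_db
    return ratio_db > threshold_db

def fractional_snow_cover(sce, catchment_mask):
    """Fraction of snow-classified pixels inside the catchment."""
    return float(sce[catchment_mask].mean())

# Toy example: the upper half of a 4 x 4 scene brightens by +2 dB.
dry = np.zeros((4, 4))
wet = np.zeros((4, 4))
wet[:2, :] = 2.0
sce = snow_cover_extent(wet, dry, threshold_db=1.0)
fsc = fractional_snow_cover(sce, np.ones((4, 4), dtype=bool))
```

A real workflow would additionally apply geocoding, speckle filtering, and a layover/shadow mask before thresholding.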
Primary progressive multiple sclerosis (PPMS) shows a highly variable disease progression with poor prognosis and a characteristic accumulation of disabilities in patients. These hallmarks of PPMS make it difficult to diagnose and currently impossible to efficiently treat. This study aimed to identify plasma metabolite profiles that allow diagnosis of PPMS and its differentiation from the relapsing remitting subtype (RRMS), primary neurodegenerative disease (Parkinson’s disease, PD), and healthy controls (HCs) and that significantly change during the disease course and could serve as surrogate markers of multiple sclerosis (MS)-associated neurodegeneration over time. We applied untargeted high-resolution metabolomics to plasma samples to identify PPMS-specific signatures, validated our findings in independent sex- and age-matched PPMS and HC cohorts and built discriminatory models by partial least square discriminant analysis (PLS-DA). This signature was compared to sex- and age-matched RRMS patients, to patients with PD and HC. Finally, we investigated these metabolites in a longitudinal cohort of PPMS patients over a 24-month period. PLS-DA yielded predictive models for classification along with a set of 20 PPMS-specific informative metabolite markers. These metabolites suggest disease-specific alterations in glycerophospholipid and linoleic acid pathways. Notably, the glycerophospholipid LysoPC(20:0) significantly decreased during the observation period. These findings show potential for diagnosis and disease course monitoring, and might serve as biomarkers to assess treatment efficacy in future clinical trials for neuroprotective MS therapies.
Professional GT endurance racing drivers must be able to meet the high motor and cognitive demands of a race without loss of performance. At high speed, they must remain focused and concentrated at all times, reacting to their car, the track, and their opponents. In addition, drivers are further challenged by the necessary in-car communication with the engineers and mechanics in the pit lane. Data on the actual physical strain and on frequently occurring complaints and/or injuries of professional athletes are scarce. To achieve the best possible performance in the car during a race, it is necessary to know not only the physical strain but also the common clinical conditions. On this basis, optimal prevention or the necessary therapy for the fastest possible reintegration into the sport can be derived and developed. Through regular health monitoring, the present thesis records frequent complaints and/or injuries in GT endurance motorsport in order to derive a preventive (exercise-therapy) and therapeutic concept. In addition, based on an assessment of the athletes' physical capacity and the strain experienced in the race car, a possible training concept is to be developed as a function of the season.
Over a period of 15 years (2003-2017), 37 male GT endurance motorsport athletes were examined 353 times as part of a health monitoring program. Athletes received sports-medical care for at most 14 and at least 1 year. This examination, conducted twice a year, essentially comprised a sports-medical examination to assess fitness for the sport and to record physical capacity. Beyond the health monitoring, care was additionally provided at the race track to further record the athletes' complaints, illnesses, and injuries during their sport-specific exertion. In summary, the athletes show low prevalences and incidences of clinical conditions and complaints. Prevalences differ between the health examinations and the care at the race track. The most frequent complaints are orthopedic and internal-medical: infections of the upper respiratory tract and allergies are the most common, alongside complaints of the lower extremities and the spine. Accordingly, primarily physiotherapeutic and exercise-therapy consequences are derived; drug therapy takes place essentially during race care. To reduce orthopedic and internal-medical complaints, preventive measures should be emphasized more. Physical capacity remains essentially stable across the examination years for endurance, strength, and sensorimotor performance. Endurance capacity can be classified as good to very good relative to the demands of the sport. Strength and sensorimotor capacity show sport-specific differences and should be considered relative to body weight.
A sports-medicine and exercise-therapy concept would accordingly have to include a regular medical examination focusing on orthopedics, internal medicine, and otorhinolaryngology. In addition, regular assessment of physical capacity should be included in order to derive training content or preventive measures as effectively as possible. Given the extensive travel and the year-round season, a training camp held once or twice a year, in the sense of basic and build-up training to optimize capacity, could complete the concept. Moreover, medical care at the races appears necessary.
Fluvial terraces, floodplains, and alluvial fans are the main landforms to store sediments and to decouple hillslopes from eroding mountain rivers. Such low-relief landforms are also preferred locations for humans to settle in otherwise steep and poorly accessible terrain. Abundant water and sediment as essential sources for buildings and infrastructure make these areas amenable places to live at. Yet valley floors are also prone to rare and catastrophic sedimentation that can overload river systems by abruptly increasing the volume of sediment supply, thus causing massive floodplain aggradation, lateral channel instability, and increased flooding. Some valley-fill sediments should thus record these catastrophic sediment pulses, allowing insights into their timing, magnitude, and consequences.
This thesis pursues this theme and focuses on a prominent ~150 km2 valley fill in the Pokhara Valley just south of the Annapurna Massif in central Nepal. The Pokhara Valley is conspicuously broad and gentle compared to the surrounding dissected mountain terrain, and is filled with locally more than 70 m of clastic debris. The area's main river, the Seti Khola, descends from the Annapurna Sabche Cirque at 3500-4500 m asl down to 900 m asl, where it incises into this valley fill. Humans began to settle on this extensive fan surface in the 1750s, when the Trans-Himalayan trade route connected the Higher Himalayas, passing Pokhara city, with the subtropical lowlands of the Terai. High and unstable river terraces and steep gorges undermined by fast-flowing rivers with highly seasonal (monsoon-driven) discharge, a high earthquake risk, and a growing population make the Pokhara Valley an ideal place to study the recent geological and geomorphic history of its sediments and the implications for natural hazard appraisals.
The objective of this thesis is to quantify the timing, the sedimentologic and geomorphic processes, as well as the fluvial response to a series of strong sediment pulses. I report diagnostic sedimentary archives, lithofacies of the fan terraces, their geochemical provenance, radiocarbon dating, and the stratigraphic relationships between them. All these various and independent lines of evidence consistently show that multiple sediment pulses filled the Pokhara Valley in medieval times, most likely in connection with, if not triggered by, strong seismic ground shaking. The geomorphic and sedimentary evidence is consistent with catastrophic fluvial aggradation tied to the timing of three medieval Himalayan earthquakes in ~1100, 1255, and 1344 AD. Sediment provenance and calibrated radiocarbon ages are the key to distinguishing three individual sediment pulses, as these are not evident from their sedimentology alone. I explore various measures of adjustment and fluvial response of the river system following these massive aggradation pulses. Using proxies such as net volumetric erosion, incision and erosion rates, clast provenance on active river banks, geomorphic markers such as re-exhumed tree trunks in growth position, and knickpoint locations in tributary valleys, I estimate the response of the river network in the Pokhara Valley to earthquake disturbance over several centuries. Estimates of the volumes removed since catastrophic valley filling began require average net sediment yields of up to 4200 t km−2 yr−1, rates that are consistent with those reported for Himalayan rivers. The lithological composition of the active channel-bed load differs from that of local bedrock material, confirming that rivers have adjusted by 30-50%, depending on the tributary catchment, locally incising at rates of 160-220 mm yr−1. In many tributaries of the Seti Khola, most of the contemporary river load comes from a Higher Himalayan source, thus excluding local hillslopes as sources. This imbalance in sediment provenance emphasizes how the medieval sediment pulses must have rapidly traversed up to 70 km downstream and invaded the downstream reaches of the tributaries up to 8 km upstream, thereby blocking the local drainage and thus reinforcing, or locally creating new, floodplain lakes still visible in the landscape today.
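The conversion behind a net sediment yield figure can be made explicit. The volume, bulk density, area, and time span below are illustrative assumptions chosen only to show the arithmetic, not the thesis' measured values:

```python
def sediment_yield(volume_m3, bulk_density_t_per_m3, area_km2, years):
    """Average net sediment yield (t km^-2 yr^-1) implied by removing a
    given sediment volume from a catchment over a given time span."""
    return volume_m3 * bulk_density_t_per_m3 / (area_km2 * years)

# e.g. removing 0.25 km^3 of fill from a ~150 km^2 valley over ~800 years
# at an assumed bulk density of 2.0 t m^-3:
y_t_km2_yr = sediment_yield(0.25e9, 2.0, 150.0, 800.0)
```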
Understanding the formation, origin, mechanisms, and geomorphic processes of this valley fill is crucial for understanding the landscape evolution and response to catastrophic sediment pulses. Several earthquake-triggered long-runout rock-ice avalanches or a catastrophic dam burst in the Higher Himalayas are the only plausible mechanisms to explain both the geomorphic and sedimentary legacy that I document here. In any case, the Pokhara Valley was most likely hit by a cascade of extremely rare processes over some two centuries, starting in the early 11th century. Nowhere in the Himalayas do we find valley fills of comparable size and equally well documented depositional history, making the Pokhara Valley one of the most extensively dated valley fills in the Himalayas to date. Judging from the growing record of historic Himalayan earthquakes in Nepal that were traced and dated in fault trenches, this thesis shows that sedimentary archives can be used to directly aid reconstructions and predictions of both earthquake triggers and impacts from a sedimentary-response perspective. The knowledge about the timing, evolution, and response of the Pokhara Valley and its river system to earthquake-triggered sediment pulses is important for addressing the seismic and geomorphic risk for the city of Pokhara. This thesis demonstrates how geomorphic evidence of catastrophic valley infill can help to independently verify paleoseismological fault-trench records and may prompt re-thinking of post-seismic hazard assessments in active mountain regions.
There are numerous situations in which people ask for something or make a request, e.g. asking a favor, asking for help, or requesting compliance with specific norms. For this reason, how to ask for something in order to increase people's willingness to fulfill such requests is one of the most important questions for many people working in various fields of responsibility such as charitable giving, marketing, management, or policy making.
This dissertation consists of four chapters that deal with the effects of small changes in the decision-making environment on altruistic decision-making and compliance behavior. Most notably, written communication as an influencing factor is the focus of the first three chapters. The starting point was the question of how to devise a request in order to maximize its chance of success (Chapter 1). The results of the first chapter gave rise to the ideas for the second and third chapters. Chapter 2 analyzes how communication by a neutral third party, i.e. a text from the experimenters that either reminds potential benefactors of their responsibility or highlights their freedom of choice, affects altruistic decision-making. Chapter 3 elaborates on the effect of thanking people in advance when asking them for help. While not as closely related to the other chapters as the first three are to each other, the fourth chapter likewise deals with the question of how compliance (here: compliance with norms and rules) is affected by subtle manipulations of the environment in which decisions are made. This chapter analyzes the effect of default settings in a tax return on tax compliance.
In order to study the research questions outlined above, controlled experiments were conducted. Chapter 1, which analyzes the effect of text messages on the decision to give something to another person, employs a mini-dictator game. The recipient sends a free-form text message to the dictator before the latter makes a binary decision whether or not to give part of her or his endowment to the recipient. We find that putting effort into the message by writing a long note without spelling mistakes increases dictators’ willingness to give. Moreover, writing in a humorous way and mentioning reasons why the money is needed pays off. Furthermore, men and women seem to react differently to some message categories. Only men react positively to efficiency arguments, while only women react to messages that emphasize the dictator’s power and responsibility.
Building on this last result, Chapter 2 attempts to disentangle the effect of reminding potential benefactors of their responsibility for the potential beneficiary and the effect of highlighting their decision power and freedom of choice on altruistic decision-making by studying the effects of two different texts on giving in a dictator game. We find that only men react positively to a text that stresses their responsibility for the recipient by giving more to her or him, whereas only women seem to react positively to a text that emphasizes their decision power and freedom of choice.
Chapter 3 focuses on the compliance with a request. In the experiment, participants are asked to provide a detailed answer to an open question. Compliance is measured by the effort participants spend on answering the question. The treatment variable is whether or not they see the text “thanks in advance.” We find that participants react negatively by putting less effort into complying with the request in response to the phrase “thanks in advance.”
Chapter 4 studies the effect of prefilled tax returns with mostly inaccurate default values on tax compliance. In a laboratory experiment, participants earn income by performing a real-effort task and must subsequently file a tax return for three consecutive rounds. In the main treatment, the tax return is prefilled with a default value, resulting from participants’ own performance in previous rounds, which varies in its relative size. The results suggest that there is no lasting effect of a default value on tax honesty, neither for relatively low nor relatively high defaults. However, participants who face a default that is lower than their true income in the first round evade significantly and substantially more taxes in this round than participants in the control treatment without a default.
Drawing on the knowledge of his time, Alexander von Humboldt occasionally formulated open questions and voiced conjectures in the conclusions to his measurements of soil, air, and water. The aim of this contribution is to examine early publications to determine whether the assumptions of that era can be confirmed by today’s scientific knowledge. A description of the “environmental situation” around 1800 is followed by a discussion of his instructions for influencing soil fertility and increasing yields. In his investigations of methane-containing mine gases and the Earth’s atmosphere, Humboldt recognized several effects familiar from today’s climate debate (e.g., the role of trace gases in the properties of gas mixtures, and the existence of atmospheric material cycles). Less well known are Humboldt’s comprehensive practical instructions for the construction of the 50 km long drainage tunnel “Meissner Erbstolln,” covering geological, technical, economic, and sociological aspects. The central role of “dynamic” equilibria is illustrated by the present ecological state of Lake Valencia (Venezuela).
Empirical studies of cloze items for assessing mastery of a programming language’s syntax
(2018)
Cloze items based on program code can be used to test knowledge of a programming language’s syntax without posing complex programming tasks whose solution requires additional competencies. This contribution documents the use of ten such items in a first-semester university lecture on programming with Java. Both experiences with the construction of the items and empirical data from their use are discussed. The contribution thereby highlights, in particular, the challenges of constructing valid instruments for measuring competence in programming education. Although the results on the quality of the items are limited and partly preliminary, they suggest that constructing and using such items is feasible and can contribute to competence measurement.
This thesis presents routes for obtaining various phenolic substances, such as lignin, diarylheptanoids, and 4-(3-oxobutyl)phenol (raspberry ketone), from the trunk of the silver birch (Betula pendula). Methacrylation of 4-(3-oxobutyl)phenol yielded a monomer that can be polymerized by free-radical bulk and solution polymerization as well as by enzymatic polymerization.
An initial isolation of components was achieved by extracting the inner wood and the bark with methanol. The methanol-insoluble components of the inner wood and bark were subsequently extracted with selected ionic liquids. A procedure was developed for selectively separating the components extracted with these ionic liquids into cellulose, hemicellulose, lignin, and components extractable with ethyl acetate. This made it possible to compare both the ionic liquids used and the inner wood and bark with respect to their extraction behavior.
Furthermore, several strategies were presented for isolating a total of three diarylheptanoid species from the methanolic bark extract. One of the diarylheptanoids found (5-hydroxy-1,7-bis(4-hydroxyphenyl)-3-heptanone) was cleaved via a retro-aldol reaction into 4-(3-oxobutyl)phenol (raspberry ketone) and 3-(4-hydroxyphenyl)propanal.
The use of 4-(3-oxobutyl)phenol as a monomer component was investigated. For this purpose, 4-(3-oxobutyl)phenyl methacrylate was synthesized, and routes for its purification by column chromatography and recrystallization were demonstrated. Poly(4-(3-oxobutyl)phenyl methacrylate) (PObpMA) and poly(benzyl methacrylate) (PBzMA) were then prepared by bulk and solution polymerization. Under identical reaction conditions, the yields of PObpMA and PBzMA are comparable. In contrast, the degree of polymerization of PObpMA obtained from free-radical bulk polymerization is greater than that of PBzMA by a factor of 3.7. Under identical reaction conditions, the glass transition temperatures of PObpMA lie above those of PBzMA for both free-radical bulk polymerization and solution polymerization. In addition, the polymerization of 4-(3-oxobutyl)phenyl methacrylate and benzyl methacrylate at room temperature with an initiator system consisting of horseradish peroxidase, acetylacetone, and hydrogen peroxide was described. The products obtained with the enzymatic initiator system closely matched those from solution polymerizations initiated with azobis(isobutyronitrile).
This contribution outlines a model intended to foster the development of digital competencies in teacher-training programs. Although the competence model is developed from the perspective of German-language didactics, it also addresses cross-disciplinary requirements in the areas of information literacy, media-technical competencies, skills in media analysis and reflection, and language-action competence. The aim is to present the particular demands placed on prospective teachers as mediators of digital competencies. The model of this mediation competence described here serves to anchor digital teaching-learning concepts as an essential component of modern teacher education.
Losses due to floods have increased dramatically over the past decades, and the losses of companies, comprising direct and indirect losses, account for a large share of the total economic losses. Thus, there is an urgent need for more quantitative knowledge about flood losses, particularly losses caused by business interruption, in order to mitigate the economic loss of companies. However, business interruption caused by floods is rarely assessed because of a lack of sufficiently detailed data. A survey was undertaken to explore the processes influencing business interruption, collecting information on 557 companies affected by the severe flood of June 2013 in Germany. Based on this data set, the study assesses the business interruption of directly affected companies by means of a Random Forests model. Variables that influence the duration and costs of business interruption were identified via the variable importance measures of Random Forests. Additionally, Random Forest-based models were developed and tested for their capacity to estimate business interruption duration and the associated costs. The water level was found to be the most important variable influencing the duration of business interruption; other important variables for estimating this duration are the warning time, the perceived danger of flood recurrence, and the inundation duration. In contrast, the amount of business interruption costs is strongly influenced by the size of the company (measured by the number of employees), the emergency measures undertaken by the company, and the fraction of customers within a 50 km radius. These results provide useful information and methods for companies to mitigate their losses from business interruption. However, the heterogeneity of companies is relatively high, and sector-specific analyses were not possible due to the small sample size. Therefore, further sector-specific analyses based on larger flood loss data sets from companies are recommended.
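The variable-importance approach described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: it trains a scikit-learn Random Forest regressor on synthetic data whose predictor names (water_level, warning_time, inundation_duration, n_employees) merely echo the variables mentioned above, and then ranks them by impurity-based importance.

```python
# Hedged sketch: Random Forest variable importance for a flood-impact
# regression problem. All data here are synthetic; only the predictor
# names are borrowed from the abstract for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
features = ["water_level", "warning_time", "inundation_duration", "n_employees"]
X = rng.uniform(0.0, 1.0, size=(n, len(features)))

# Synthetic target: interruption duration driven mainly by water level,
# weakly by warning time, plus noise (a stand-in, not the survey data).
y = 10.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0.0, 0.5, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# Impurity-based importances, the kind of measure used to rank
# predictors of business-interruption duration in the study.
ranking = sorted(zip(features, model.feature_importances_),
                 key=lambda pair: -pair[1])
for name, importance in ranking:
    print(f"{name:22s} {importance:.3f}")
```

On this synthetic data the ranking recovers water_level as the dominant predictor, mirroring the study's finding; with real survey data one would additionally validate the model (e.g., out-of-bag error or cross-validation) before interpreting the importances.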