Gene expression is the process of making functional gene products (e.g., proteins or specialized RNAs) from instructions encoded in the genetic information (e.g., DNA). This process is heavily regulated, allowing cells to produce the gene products necessary for survival and to adapt production to different cell environments. Gene expression is subject to regulation at several levels, including transcription, mRNA degradation, translation and protein degradation. When intact, this system maintains cell homeostasis, keeping the cell alive and adaptable to different environments; malfunctions can result in disease states and cell death. In this dissertation, we explore several aspects of gene expression control by analyzing data from biological experiments. Most of the following work uses a common mathematical framework based on Markov chain models to test hypotheses, predict system dynamics or elucidate network topology. Our work lies at the intersection of mathematics and biology and showcases the power of statistical data analysis and mathematical modeling for the validation and discovery of biological phenomena.
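A standard Markov chain model of this kind is the two-state "telegraph" model of transcription, in which a promoter switches stochastically between ON and OFF and mRNA is produced only in the ON state. The sketch below simulates it with Gillespie's algorithm; the model choice and all rate constants are illustrative assumptions, not taken from the dissertation.

```python
import random

def simulate_telegraph(k_on=0.1, k_off=0.2, k_tx=5.0, k_deg=1.0,
                       t_end=500.0, seed=1):
    """Gillespie simulation of the two-state (telegraph) model: the
    promoter toggles OFF<->ON, mRNA is transcribed only while ON and
    degrades first-order. Returns the mRNA copy number at t_end."""
    rng = random.Random(seed)
    t, on, m = 0.0, False, 0
    while t < t_end:
        rates = [k_off if on else k_on,   # promoter switching
                 k_tx if on else 0.0,     # transcription
                 k_deg * m]               # mRNA degradation
        total = sum(rates)
        t += rng.expovariate(total)       # time to the next reaction
        r = rng.uniform(0.0, total)       # choose which reaction fires
        if r < rates[0]:
            on = not on
        elif r < rates[0] + rates[1]:
            m += 1
        else:
            m -= 1
    return m
```

With these (hypothetical) rates the stationary mean copy number is k_on/(k_on + k_off) · k_tx/k_deg ≈ 1.7, and individual runs fluctuate strongly around it, which is exactly the burstiness such Markov chain models are used to capture.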
The aim of this doctoral thesis was the development and evaluation of a skills-based primary prevention program (Mainz School Training for Eating Disorder Prevention, MaiStep) for partial and full-syndrome eating disorders. Its efficacy was assessed via a primary (reduction of existing eating disorder symptoms) and a secondary (associated psychopathology) outcome parameter 3 and 12 months after delivery of the training. The randomized controlled trial comprised two intervention groups and one active control group. 1,654 adolescents (female/male: 781/873; mean age: 13.1±0.7; BMI: 20.0±3.5) were recruited for the study at randomly selected schools in Rhineland-Palatinate. The development of the prevention program was based on a systematic literature review of 63 scientific studies on the prevention of eating disorders in childhood and adolescence. One intervention group was led by psychologists and a second by teachers; the addiction- and stress-prevention program delivered in the active control group was led by teachers. At the 3-month follow-up, MaiStep showed no significant effects compared with the active control group. After 12 months, however, multiple significant effects emerged between the intervention groups and the active control group. For the primary outcome, significantly fewer adolescents with partial anorexia nervosa (χ²(2) = 8.74, p = .01**) and/or partial bulimia nervosa (χ²(2) = 7.25, p = .02*) were found in the intervention groups.
For the secondary outcomes, significant changes emerged between the intervention groups and the active control group on the Eating Disorder Inventory (EDI-2) subscales Drive for Thinness (F(2, 355) = 3.94, p = .02*) and Perfectionism (F(2, 355) = 4.19, p = .01**), as well as on the Body Image Avoidance Questionnaire (BIAQ) (F(2, 525) = 18.79, p = .01**). MaiStep can thus be regarded as a successful program for reducing partial eating disorders in the 13- to 15-year-old age group. Despite different mechanisms of action, teachers proved just as successful as psychologists in delivering the training.
Trial registration: MaiStep is registered at the German Clinical Trials Register (DRKS00005050).
Knowledge of the local structure of rare earth elements (REE) in silicate and aluminosilicate melts is of fundamental interest for the geochemistry of magmatic processes, especially with regard to a comprehensive understanding of REE partitioning processes in magmatic systems. It is generally accepted that REE partitioning is controlled by temperature, pressure, oxygen fugacity (in the case of polyvalent cations) and crystal chemistry. However, little is known about the influence of the melt composition itself. The aim of this work is to establish a relationship between the variation of REE partitioning with melt composition and the coordination chemistry of these REE in the melt.
To this end, melt compositions from Prowatke and Klemme (2005), which show a pronounced change in titanite/melt partition coefficients solely as a function of melt composition, as well as haplogranitic and haplobasaltic melt compositions as representatives of magmatic systems, were doped with La, Gd, Yb and Y and synthesized as glasses. The melts varied systematically in the aluminum saturation index (ASI), covering a range of 0.115 to 0.768 for the Prowatke and Klemme (2005) compositions, 0.935 to 1.785 for the haplogranitic compositions and 0.368 to 1.010 for the haplobasaltic compositions. In addition, the haplogranitic compositions were synthesized with 4% H2O to study the influence of water on the local environment of the REE. X-ray absorption spectroscopy was applied to obtain information on the local structure of Gd, Yb and Y. Analysis of the extended fine structure by EXAFS spectroscopy (extended X-ray absorption fine structure) provides quantitative information on the local environment, while RIXS (resonant inelastic X-ray scattering), together with the high-resolution near-edge structure (XANES, X-ray absorption near edge structure) extracted from it, yields qualitative information on possible coordination changes of La, Gd and Yb in the glasses. To investigate possible differences in the local structure above the glass transition temperature (TG) compared to room temperature, high-temperature Y-EXAFS measurements were performed on selected samples.
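The aluminum saturation index quoted here is, in the standard definition, the molar ratio Al2O3/(CaO + Na2O + K2O) (A/CNK). A small sketch of this conversion from oxide wt%; the example composition below is hypothetical, not one of the studied melts:

```python
# Molar masses (g/mol) of the oxides entering the A/CNK ratio
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def asi(wt_percent):
    """Aluminum saturation index: molar Al2O3 / (CaO + Na2O + K2O).
    `wt_percent` maps oxide names to wt%; missing oxides count as zero."""
    mol = {ox: wt_percent.get(ox, 0.0) / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    return mol["Al2O3"] / (mol["CaO"] + mol["Na2O"] + mol["K2O"])
```

ASI < 1 thus indicates an excess of network-modifying alkali and alkaline-earth cations over aluminum, which is the compositional lever varied systematically in this work.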
The EXAFS data were evaluated with a newly introduced histogram fit that can also describe asymmetric, non-Gaussian pair distribution functions, as they may occur at a high degree of polymerization or at high temperatures. With increasing ASI, the Y-EXAFS spectra of the Prowatke and Klemme (2005) compositions show an increase in the asymmetry and width of the Y-O pair distribution function, which manifests itself in a change of the coordination number from 6 to 8 and an increase of the Y-O distance by 0.13 Å. A similar trend is observed in the Gd and Yb EXAFS spectra. The high-resolution XANES spectra for La, Gd and Yb show that the structural differences can be determined at least semi-quantitatively, in particular changes in the mean distance to the oxygen atoms. In contrast to EXAFS spectroscopy, however, XANES provides no information on the shape and width of pair distribution functions. The high-temperature EXAFS measurements of Y indicate changes of the local structure above the glass transition temperature, which can primarily be attributed to a thermally induced increase of the mean Y-O distance. However, a comparison of the Y-O distances for compositions with an ASI of 0.115 and 0.755, determined at room temperature and at TG, shows that the structural difference observed in the glass along the compositional series can be even more pronounced in the melt than previously assumed for the glasses.
Direct correlation of the partitioning data of Prowatke and Klemme (2005) with the structural changes of the melts reveals a linear correlation for Y, whereas Yb and Gd show a non-linear relationship. Owing to its ionic radius and charge, the six-fold coordinated REE in the less polymerized melts is preferentially coordinated by non-bridging oxygen atoms to form stable configurations. In the more highly polymerized melts with ASI values close to 1, six-fold coordination is not possible, since almost exclusively bridging oxygen atoms are available. The overbonding of bridging oxygen atoms around the REE is compensated by an increase of the coordination number and of the mean REE-O distance. This implies an energetically more favorable configuration in the more depolymerized compositions, from which the observed variation of the partition coefficient results, although it differs strongly from element to element. For the haplogranitic and haplobasaltic compositions, an increase of the coordination number and of the average bond distance, accompanied by an increase of the skewness and asymmetry of the pair distribution function, was observed with increasing polymerization. This implies that the respective REE becomes more incompatible in these compositions as polymerization increases. Furthermore, the addition of water depolymerizes the melts, resulting in a more symmetric pair distribution function, whereby the compatibility increases again.
In summary, changes in melt composition result in a change of the degree of polymerization of the melts, which in turn has a significant influence on the local environment of the REE. The structural changes can be correlated directly with partitioning data, but the trends differ strongly between light, middle and heavy REE. This study was able to show the magnitude the changes must reach in order to have a significant influence on the partition coefficient. Furthermore, the influence of the melt composition on trace element partitioning increases with increasing polymerization and therefore must not be neglected.
In complement to the well-established zwitterionic monomers 3-((2-(methacryloyloxy)ethyl)dimethylammonio)propane-1-sulfonate ("SPE") and 3-((3-methacrylamidopropyl)dimethylammonio)propane-1-sulfonate ("SPP"), closely related sulfobetaine monomers were synthesized and polymerized by reversible addition-fragmentation chain transfer (RAFT) polymerization, using a fluorophore-labeled RAFT agent. The polyzwitterions of systematically varied molar mass were characterized with respect to their solubility in water, deuterated water, and aqueous salt solutions. These poly(sulfobetaine)s show thermoresponsive behavior in water, exhibiting upper critical solution temperatures (UCST). Phase transition temperatures depend notably on the molar mass and polymer concentration, and are much higher in D2O than in H2O. The phase transition temperatures are also effectively modulated by the addition of salts. The individual effects can in part be correlated with the Hofmeister series for the anions studied; still, they depend in a complex way on the concentration and nature of the added electrolytes, on the one hand, and on the detailed structure of the zwitterionic side chain, on the other. For polymers with the same zwitterionic side chain, methacrylamide-based poly(sulfobetaine)s exhibit higher UCST-type transition temperatures than their methacrylate analogs. Extending the distance between the polymerizable unit and the zwitterionic group from 2 to 3 methylene units decreases the UCST-type transition temperatures. Poly(sulfobetaine)s derived from aliphatic esters show higher UCST-type transition temperatures than their analogs featuring cyclic ammonium cations. The UCST-type transition temperatures increase markedly when the spacer separating the cationic and anionic moieties is extended from 3 to 4 methylene units.
Thus, apparently small variations of their chemical structure strongly affect the phase behavior of the polyzwitterions in specific aqueous environments.
Water-soluble block copolymers were prepared from the zwitterionic monomers and the non-ionic monomer N-isopropylmethacrylamide ("NIPMAM") by RAFT polymerization. Such block copolymers with two hydrophilic blocks exhibit twofold thermoresponsive behavior in water: the poly(sulfobetaine) block shows a UCST, whereas the poly(NIPMAM) block exhibits a lower critical solution temperature (LCST). This combination induces a structure inversion of the solvophobic aggregates, called a "schizophrenic micelle". Depending on the relative positions of the two phase transitions, the block copolymer passes through a molecularly dissolved or an insoluble intermediate regime, which can be modulated by the polymer concentration or by the addition of salt. Whereas at low temperature the poly(sulfobetaine) block forms polar aggregates that are kept in solution by the poly(NIPMAM) block, at high temperature the poly(NIPMAM) block forms hydrophobic aggregates that are kept in solution by the poly(sulfobetaine) block. Thus, aggregates can be prepared in water that reversibly switch their "inside" to the "outside", and vice versa.
In this thesis we use integral-field spectroscopy to detect and understand Lyman α (Lyα) emission from high-redshift galaxies.
Intrinsically, Lyα emission at λ = 1216 Å is the strongest recombination line from galaxies. It arises from the 2p → 1s transition in hydrogen. In star-forming galaxies the line is powered by ionisation of the interstellar gas by hot O and B stars. Galaxies with star-formation rates of 1 - 10 Msol/year are expected to have Lyα luminosities of 10⁴² - 10⁴³ erg/s, corresponding to fluxes of ~10⁻¹⁸ - 10⁻¹⁷ erg/s/cm² at redshifts z ~ 3, where Lyα is easily accessible with ground-based telescopes. However, star-forming galaxies do not show these expected Lyα fluxes. Primarily this is a consequence of the high absorption cross-section of neutral hydrogen for Lyα photons (σ ~ 10⁻¹⁴ cm²). In typical interstellar environments Lyα photons therefore have to undergo complex radiative transfer, and the exact conditions under which Lyα photons can escape a galaxy are poorly understood.
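The quoted flux range follows from F = L / (4π d_L²), with d_L the luminosity distance at z ≈ 3. A minimal sketch, assuming a flat ΛCDM cosmology with H0 = 70 km/s/Mpc and Ωm = 0.3 (these parameter values are my assumption, not stated in the text):

```python
import math

def luminosity_distance_cm(z, H0=70.0, Om=0.3, n=10000):
    """Luminosity distance in flat Lambda-CDM (H0 in km/s/Mpc), computed
    by trapezoidal integration of the comoving distance; returns cm."""
    c = 299792.458                      # speed of light, km/s
    E = lambda zz: math.sqrt(Om * (1.0 + zz) ** 3 + (1.0 - Om))
    dz = z / n
    integral = sum(0.5 * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz
                   for i in range(n))
    d_c_mpc = c / H0 * integral         # comoving distance in Mpc
    return (1.0 + z) * d_c_mpc * 3.0857e24   # 1 Mpc = 3.0857e24 cm

def lya_flux(log_L, z):
    """Observed flux (erg/s/cm^2) of a line with luminosity 10**log_L erg/s."""
    d_l = luminosity_distance_cm(z)
    return 10.0 ** log_L / (4.0 * math.pi * d_l ** 2)
```

For L = 10⁴² erg/s at z = 3 this yields a flux of order 10⁻¹⁷ erg/s/cm², consistent with the range quoted above.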
Here we present results from three observational projects. In Chapter 2, we show integral field spectroscopic observations of 14 nearby star-forming galaxies in Balmer α radiation (Hα, λ = 6562.8 Å). These observations were obtained with the Potsdam Multi Aperture Spectrophotometer at the Calar Alto 3.5m Telescope. Hα directly traces the intrinsic Lyα radiation field. We present Hα velocity fields and velocity dispersion maps spatially registered onto Hubble Space Telescope Lyα and Hα images. From our observations, we conjecture a causal connection between spatially resolved Hα kinematics and Lyα photometry for individual galaxies. Statistically, we find that dispersion-dominated galaxies are more likely to emit Lyα photons than galaxies where ordered gas-motions dominate. This result indicates that turbulence in actively star-forming systems favours an escape of Lyα radiation.
Not only massive stars can power Lyα radiation, but also non-thermal emission from an accreting super-massive black hole in the galaxy centre. If a galaxy harbours such an active galactic nucleus, the rate of hydrogen-ionising photons can be more than 1000 times higher than that of a typical star-forming galaxy. This radiation can potentially ionise large regions well outside the main stellar body of galaxies. Therefore, it is expected that the neutral hydrogen in these circum-galactic regions shines fluorescently in Lyα. Circum-galactic gas plays a crucial role in galaxy formation. It may act as a reservoir for fuelling star formation, and it is also subject to feedback processes that expel galactic material. If Lyα emission from this circum-galactic medium (CGM) were detected, these important processes could be studied in situ around high-z galaxies. In Chapter 3, we show observations of five radio-quiet quasars with PMAS to search for possible extended CGM emission in the Lyα line. However, in four of the five objects, we find no significant traces of this emission. In the fifth object, there is evidence for a weak and spatially quite compact Lyα excess several kpc outside the nucleus. The faintness of these structures is consistent with the idea that radio-quiet quasars typically reside in dark matter haloes of modest masses. While we were not able to detect Lyα CGM emission, our upper limits provide constraints for the new generation of IFS instruments at 8--10m class telescopes.
The Multi Unit Spectroscopic Explorer (MUSE) at ESO's Very Large Telescope is such a unique instrument. One of the main drivers of its construction was its use as a survey instrument for Lyα emitting galaxies at high z. Currently, we are conducting such a survey that will cover a total area of ~100 square arcminutes with 1 hour exposures for each 1 square arcminute MUSE pointing. As a first result from this survey, we present in Chapter 5 a catalogue of 831 emission-line selected galaxies from a 22.2 square arcminute region in the Chandra Deep Field South. In order to construct the catalogue, we developed and implemented a novel source detection algorithm -- LSDCat -- based on matched filtering for line emission in 3D spectroscopic datasets (Chapter 4). Our catalogue contains 237 Lyα emitting galaxies in the redshift range 3 ≲ z ≲ 6, only four of which previously had spectroscopic redshifts in the literature. We conclude this thesis with an outlook on the construction of a Lyα luminosity function based on this unique sample (Chapter 6).
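Matched filtering of this kind cross-correlates the datacube with a 3D template of the expected line signal and searches for peaks in the resulting signal-to-noise cube. A minimal sketch with a separable Gaussian template and homogeneous noise — a simplification for illustration, not a reproduction of the actual LSDCat algorithm or its parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def matched_filter_sn(cube, noise_sigma, spat_sigma=1.5, spec_sigma=2.0):
    """Cross-correlate a datacube (wavelength, y, x) with a separable 3D
    Gaussian template and return a signal-to-noise cube. Assumes
    homogeneous Gaussian noise with standard deviation `noise_sigma`."""
    sigmas = (spec_sigma, spat_sigma, spat_sigma)
    filtered = gaussian_filter(cube, sigma=sigmas)
    # Propagate the noise through the same linear filter: for a kernel k,
    # the filtered noise has std noise_sigma * sqrt(sum(k**2)).
    impulse = np.zeros_like(cube)
    impulse[tuple(s // 2 for s in cube.shape)] = 1.0
    kernel = gaussian_filter(impulse, sigma=sigmas)
    return filtered / (noise_sigma * np.sqrt((kernel ** 2).sum()))
```

Emission-line candidates are then the voxels whose signal-to-noise exceeds a detection threshold; faint line emitters that are invisible in any single layer of the cube become significant after this template smoothing.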
In order to evade detection by network-traffic analysis, a growing proportion of malware uses the encrypted HTTPS protocol. We explore the problem of detecting malware on client computers based on HTTPS traffic analysis. In this setting, malware has to be detected based on the host IP address, ports, timestamp, and data volume information of TCP/IP packets that are sent and received by all the applications on the client. We develop a scalable protocol that allows us to collect network flows of known malicious and benign applications as training data, and derive a malware-detection method based on neural networks and sequence classification. We study the method's ability to detect known as well as new, unknown malware in a large-scale empirical study.
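As an illustration of flow-based classification, the sketch below summarises each flow sequence into a fixed-length feature vector and trains a plain-numpy logistic regression on it. This is a deliberately simplified stand-in for the neural-network approach of the thesis; the feature choices and all data are hypothetical:

```python
import numpy as np

def featurize(flow):
    """Summarise a variable-length flow sequence of
    (gap_seconds, bytes_sent, bytes_received) tuples into a
    fixed-length vector: log-scaled per-column sums and means."""
    a = np.asarray(flow, dtype=float)
    return np.log1p(np.concatenate([a.sum(axis=0), a.mean(axis=0)]))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Logistic regression trained by batch gradient descent on log-loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        z = np.clip(X @ w + b, -30.0, 30.0)   # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))          # predicted probabilities
        g = p - y                             # gradient of log-loss w.r.t. z
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    """Label 1 (malicious) where the predicted probability exceeds 0.5."""
    return 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30.0, 30.0))) > 0.5
```

The point of the sketch is the pipeline shape — per-flow sequence, fixed-length summary, supervised classifier — which the thesis realises with learned sequence models instead of hand-crafted summaries.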
Meter and syntax have overlapping elements in the music and speech domains, and individual differences have been documented in both meter perception and syntactic comprehension paradigms. Previous evidence hinted at, but never fully explored, the relationship of metrical structure to syntactic comprehension, the comparability of these processes across the music and language domains, and the respective role of individual differences. This dissertation aimed to investigate neurocognitive entrainment to meter in music and language, the impact of neurocognitive entrainment on syntactic comprehension, and whether individual differences in musical expertise, temporal perception and working memory play a role in these processes.
A theoretical framework was developed, which linked neural entrainment, cognitive entrainment, and syntactic comprehension while detailing previously documented effects of individual differences on meter perception and syntactic comprehension. The framework was developed in both the music and language domains and was tested using behavioral and EEG methods across three studies (seven experiments). To permit empirical evaluation of the neurocognitive entrainment and syntactic aspects of the framework, original melodies and sentences were composed. Each item had four permutations: regular and irregular metricality, based on the hierarchical organization of strong and weak notes and syllables, and preferred and non-preferred syntax, based on structurally alternate endings. The framework predicted, for both music and language domains, greater neurocognitive entrainment in regular compared to irregular metricality conditions, and accordingly, better syntactic integration in regular compared to irregular metricality conditions. Individual differences among participants were expected for both entrainment and syntactic processes.
Altogether, the dissertation was able to support a holistic account of neurocognitive entrainment to musical meter and its subsequent influence on the syntactic integration of melodies, with musician participants. The theoretical predictions were not upheld in the language domain with musician participants, but initial behavioral evidence in combination with previous EEG evidence suggests that non-musician language EEG data might support the framework's predictions. The musicians' deviation from the hypothesized results in the language domain was suspected to reflect a heightened perception of acoustic features stemming from musical training, which caused the 'overly' regular stimuli used here to distract the cognitive system. The individual-differences approach was vindicated by the emergence of two factor scores, Verbal Working Memory and Time and Pitch Discrimination, which in turn correlated with multiple experimental measures across the three studies.
Over the last decades, the world's population has been growing at an accelerating rate, resulting in increased urbanisation, especially in developing countries. More than half of the global population currently lives in urbanised areas, with an increasing tendency. The growth of cities results in a significant loss of vegetation cover, soil compaction and sealing of the soil surface, which in turn lead to high surface runoff during high-intensity storms and cause accelerated soil water erosion on streets and building grounds. Accelerated soil water erosion is a serious environmental problem in cities, as it gives rise to the contamination of aquatic bodies, reduction of groundwater recharge and increased land degradation, and also damages urban infrastructure, including drainage systems, houses and roads. Understanding the problem of water erosion in urban settings is essential for the sustainable planning and management of cities prone to water erosion. However, despite the vast scientific literature on water erosion in rural regions, a concrete understanding of the underlying dynamics of urban erosion remains inadequate for dryland urban environments.
This study aimed at assessing water erosion and the associated socio-environmental determinants in a typical dryland urban area, using the city of Windhoek, Namibia, as a case study. The study used a multidisciplinary approach to assess the problem of water erosion: an in-depth literature review of current research approaches and challenges of urban erosion, a field survey for quantifying the spatial extent of urban erosion in the dryland city of Windhoek, and face-to-face interviews with semi-structured questionnaires to analyse stakeholders' perceptions of urban erosion.
The review revealed that around 64% of the reviewed studies were conducted in the developed world, and very few were carried out in regions with extreme climates, including drylands. Furthermore, the applied methods for erosion quantification and monitoring do not cover typical urban features and are not specific to urban areas. The reviewed literature also lacked work addressing climate change and policies regarding erosion in cities. In the field study, the spatial extent and severity of water erosion in Windhoek were quantified; the results show that nearly 56% of the city is affected by water erosion, with signs of accelerated erosion in the form of rills and gullies occurring mainly in the underdeveloped, informal and semi-formal areas of the city. Factors influencing the extent of erosion in Windhoek included vegetation cover and type, socio-urban factors and, to a lesser extent, slope estimates. A comparison of an interpolated field-survey erosion map with a conventional erosion assessment tool (the Universal Soil Loss Equation) showed a large deviation in spatial patterns, which underlines the inappropriateness of traditional non-urban erosion tools for urban settings and emphasises the need to develop new erosion assessment and management methods for urban environments. It was concluded that measures for controlling water erosion in the city need to be site-specific, as the extent of erosion varies greatly across the city.
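For reference, the Universal Soil Loss Equation mentioned above multiplies five empirical factors, A = R · K · LS · C · P, giving the mean annual soil loss A (t/ha/yr). A one-function sketch; the factor values in the usage example are hypothetical, not from the Windhoek study:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation, A = R*K*LS*C*P: mean annual soil
    loss A (t/ha/yr) from rainfall erosivity R, soil erodibility K,
    slope length/steepness factor LS, cover-management factor C and
    support practice factor P."""
    return R * K * LS * C * P
```

For example, `usle_soil_loss(300, 0.3, 1.2, 0.5, 1.0)` returns 54.0 t/ha/yr. Because C drops sharply with vegetation cover while sealed urban surfaces and disturbed building grounds are poorly represented in any of the five factors, a purely factor-based tool can misplace erosion hotspots in cities, which is the deviation the field-survey comparison exposed.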
The study also analysed stakeholders' perceptions and understanding of urban water erosion in Windhoek by interviewing 41 stakeholders using semi-structured questionnaires. The analysis addressed their understanding of water erosion dynamics, their perceptions of the causes and seriousness of erosion damages, and their attitudes towards responsibility for urban erosion. The results indicated that stakeholders are less aware of erosion as a process than of erosion damages and the factors contributing to those damages. About 69% of the stakeholders considered erosion damages to range from moderate to very serious, though there were notable disparities between the private householder and public authority groups. The study further found that the stakeholders have no clear understanding of their responsibilities for managing control measures and paying for damages: the private householders and local authority sectors each assigned the other the responsibility for paying for erosion damage and for putting up prevention measures. This reluctance to take responsibility could create a predicament for affected areas, specifically in the informal settlements, where land management is not carried out by the local authority and the land is not owned by its occupants.
The study concluded that, in order to combat urban erosion, it is crucial to understand the diverse dynamics by which urbanisation aggravates erosion, at different scales. Accordingly, the study suggests an urgent need for urban-specific approaches that aim at: (a) incorporating the diverse socio-economic-environmental aspects influencing erosion, (b) scientifically improving the natural cycles that govern water storage and plant nutrients in urbanised dryland areas in order to increase vegetation cover, (c) making use of high-resolution satellite images to improve the adopted methods for assessing urban erosion, (d) developing water erosion policies, and (e) continuously monitoring the impact of erosion and the influencing processes at local, national and international levels.
In the interest of producing functional catalysts from sustainable building blocks, 1,3-dicarboxylate imidazolium salts derived from amino acids were successfully modified to be suitable as N-heterocyclic carbene (NHC) ligands within metal complexes. Complexes of Ag(I), Pd(II), and Ir(I) were produced by known procedures using ligands derived from glycine, alanine, β-alanine and phenylalanine. The complexes were characterized in the solid state by X-ray crystallography, which allowed the steric and electronic comparison of these ligands with well-known NHC ligands in analogous metal complexes.
The palladium complexes were tested as catalysts for aqueous-phase Suzuki-Miyaura cross-coupling. Water solubility could be induced via ester hydrolysis of the N-bound groups in the presence of base. The mono-NHC–Pd complexes were highly active in the coupling of aryl bromides with phenylboronic acid; the active catalyst was determined to be mostly Pd(0) nanoparticles. Kinetic studies showed that the reaction proceeds quickly in the coupling of bromoacetophenone, for both pre-hydrolyzed and in-situ-hydrolyzed catalyst dissolution. The catalyst could also be recycled for an extra run by simply re-using the aqueous layer.
The imidazolium salts were also used to produce organosilica hybrid materials. This was attempted via two methods: post-grafting onto a commercial organosilica, and co-condensation of the corresponding organosilane. The co-condensation technique harbours potential for the production of solid-supported catalysts.
Since 1998, elite athletes' sport injuries have been monitored at single-sport events, which led to the development of the first comprehensive injury surveillance system at the multi-sport Olympic Games in 2008. However, injuries and illnesses occurring during training phases have not been systematically studied, owing to their multi-faceted, potentially interacting risk factors. The present thesis addresses the feasibility of establishing a validated measure of injury/illness, training environment and psychosocial risk factors by creating an evaluation tool, the risk of injury questionnaire (Risk-IQ), for elite athletes, based on the content of the preparticipation evaluation (PPE) and periodic health examination (PHE) recommended in the 2009 IOC consensus statement.
A total of 335 top-level athletes and 88 medical care providers (MCPs) from Germany and Taiwan participated in two "cross-sectional plus longitudinal" surveys, the Risk-IQ and the MCPQ, respectively. The Risk-IQ asked athletes four categories of injury/illness-related risk factor questions, while the MCPQ asked the MCP cohort injury-risk and psychology-related questions. Answers were quantified scale- and subscale-wise before being analyzed together with other factors. In addition, adapted variables such as sport format were introduced for different analysis tasks.
Validated by two-way translation and test-retest reliability, the Risk-IQ proved to be of good standard, which was further confirmed by the results of the official surveys in both Germany and Taiwan. The Risk-IQ results revealed that elite athletes' accumulated total injuries were, in general, multi-factor dependent; influencing factors included, but were not limited to, background experience, medical history, PPE and PHE medical resources, and stress from life events. Injuries to different body parts were sport-format and location specific. Additionally, medical support for PPE and PHE differed significantly between Germany and Taiwan.
The results of the present thesis confirmed that it is feasible to construct a comprehensive evaluation instrument for risk factor analysis of injuries/illnesses occurring during non-competition periods in heterogeneous elite athlete cohorts. On average, and with many moderators involved, German elite athletes had superior medical care support yet suffered more severe injuries than their Taiwanese counterparts. Opinions on injury-related psychological issues differed across the various MCP groups irrespective of nationality. In general, influencing factors and interactions existed among relevant factors in both studies, implying that further investigation with multiple regression analysis is needed for better understanding.
Ionothermal carbon materials
(2016)
Alternative concepts for energy storage and conversion have to be developed, optimized and employed to fulfill the dream of a fossil-independent energy economy. Porous carbon materials play a major role in many energy-related devices. Among different characteristics, distinct porosity features, e.g., specific surface area (SSA), total pore volume (TPV), and the pore size distribution (PSD), are important to maximize the performance in the final device. In order to approach the aim of synthesizing carbon materials with tailor-made porosity in a sustainable fashion, the present thesis focused on biomass-derived precursors, employing and developing ionothermal carbonization.
During the ionothermal carbonization, a salt melt simultaneously serves as solvent and porogen. Typically, eutectic mixtures containing zinc chloride are employed as the salt phase. The first topic of the present thesis addressed the possibility of precisely tailoring the porosity of ionothermal carbon materials by an experimentally simple variation of the molar composition of the binary salt mixture. The developed pore-tuning tool allowed the synthesis of glucose-derived carbon materials with predictable SSAs in the range of ~ 900 to ~ 2100 m2 g-1. Moreover, the nucleobase adenine was employed as a precursor, introducing nitrogen functionalities into the final material. Thereby, the chemical properties of the carbon materials are varied, opening up new fields of application. Nitrogen-doped carbons (NDCs) are able to catalyze the oxygen reduction reaction (ORR), which takes place at the cathode of a fuel cell. The porosity tailoring developed herein allowed the synthesis of adenine-derived NDCs with outstanding SSAs of up to 2900 m2 g-1 and a very large TPV of 5.19 cm3 g-1. Furthermore, the influence of the porosity on the ORR could be directly investigated, enabling the precise optimization of the porosity characteristics of NDCs for this application. The second topic addressed the development of a new method to investigate the not yet fully unraveled mechanism of the oxygen reduction reaction using a rotating disc electrode setup. The focus was put on noble-metal-free catalysts. The results showed that the reaction pathway of the investigated catalysts is pH-dependent, indicating different active species at different pH values. The third topic addressed the expansion of the salts used for the ionothermal approach towards hydrated calcium and magnesium chloride. It was shown that hydrated salt phases allowed the introduction of a secondary templating effect, which was connected to the coexistence of liquid and solid salt phases.
The method enabled the synthesis of fibrous NDCs with SSAs of up to 2780 m2 g-1 and very large TPV of 3.86 cm3 g-1. Moreover, the concept of active site implementation by a facile low-temperature metalation employing the obtained NDCs as solid ligands could be shown for the first time in the context of ORR.
Overall, the thesis may pave the way towards highly porous carbon materials with tailor-made porosity, prepared by an inexpensive and sustainable pathway, which can be applied in energy-related fields, thereby supporting the needed expansion of the renewable energy sector.
Can the statistical properties of single-electron transfer events be correctly predicted within a common equilibrium ensemble description? This fundamental question of ergodic behavior in the nanoworld is scrutinized within a very basic semi-classical curve-crossing problem. It is shown that in the limit of non-adiabatic electron transfer (weak tunneling), well described by the Marcus–Levich–Dogonadze (MLD) rate, the answer is yes. However, in the limit of so-called solvent-controlled adiabatic electron transfer, a profound breaking of ergodicity occurs. Namely, a common description based on the ensemble reduced density matrix with an initial equilibrium distribution of the reaction coordinate is not able to reproduce the statistics of single-trajectory events in this seemingly classical regime. For sufficiently large activation barriers, the ensemble survival probability in a state remains nearly exponential, with the inverse rate given by the sum of the adiabatic curve-crossing (Kramers) time and the inverse MLD rate. In contrast, near the adiabatic regime, the single-electron survival probability is clearly non-exponential, even though it possesses an exponential tail which agrees well with the ensemble description. Initially, it is well described by a Mittag-Leffler distribution with a fractional rate. Paradoxically, the mean transfer time in this regime, which is classical at the ensemble level, is well described by the inverse of the nonadiabatic quantum tunneling rate at the single-particle level. An analytical theory is developed which perfectly agrees with stochastic simulations and explains these findings.
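The two regimes contrasted above can be summarized compactly. The following is a sketch reconstructed from the statements in the abstract; the symbols $\tau_K$ (Kramers crossing time), $k_{\mathrm{MLD}}$ (nonadiabatic rate) and $\kappa_\alpha$ (fractional rate) are our labels, not necessarily the notation used in the work itself:

```latex
% Ensemble survival: nearly exponential for large barriers
\[
  P_{\mathrm{ens}}(t) \approx e^{-t/\tau},
  \qquad
  \tau = \tau_{K} + k_{\mathrm{MLD}}^{-1},
\]
% Single-trajectory survival near the adiabatic regime:
% initially Mittag-Leffler, crossing over to an exponential tail
\[
  P_{\mathrm{single}}(t) \approx
  E_{\alpha}\!\left[-\left(\kappa_{\alpha}\, t\right)^{\alpha}\right],
  \qquad 0 < \alpha < 1,
\]
```

with $E_\alpha$ the Mittag-Leffler function; for $\alpha = 1$ it reduces to the ordinary exponential, which is why the ensemble and single-trajectory descriptions agree in the nonadiabatic limit.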
A series of new sulfobetaine methacrylates, including nitrogen-containing saturated heterocycles, was synthesised by systematically varying the substituents of the zwitterionic group. Radical polymerisation via the RAFT (reversible addition–fragmentation chain transfer) method in trifluoroethanol proceeded smoothly and was well controlled, yielding polymers with predictable molar masses. Molar mass analysis and control of the end-group fidelity were facilitated by end-group labeling with a fluorescent dye. The polymers showed distinct thermo-responsive behaviour of the UCST (upper critical solution temperature) type in an aqueous solution, which could not be simply correlated to their molecular structure via an incremental analysis of the hydrophilic and hydrophobic elements incorporated within them. Increasing the spacer length separating the ammonium and the sulfonate groups of the zwitterion moiety from three to four carbons increased the phase transition temperatures markedly, whereas increasing the length of the spacer separating the ammonium group and the carboxylate ester group on the backbone from two to three carbons provoked the opposite effect. Moreover, the phase transition temperatures of the analogous polyzwitterions decreased in the order dimethylammonio > morpholinio > piperidinio alkanesulfonates. In addition to the basic effect of the polymers’ precise molecular structure, the concentration and the molar mass dependence of the phase transition temperatures were studied. Furthermore, we investigated the influence of added low molar mass salts on the aqueous-phase behaviour for sodium chloride and sodium bromide as well as sodium and ammonium sulfate. The strong effects evolved in a complex way with the salt concentration. The strength of these effects depended on the nature of the anion added, increasing in the order sulfate < chloride < bromide, thus following the empirical Hofmeister series. 
In contrast, no significant differences were observed when changing the cation, i.e. when adding sodium or ammonium sulfate.
This dissertation deals with the organization of humanitarian air transport in international disasters. Such flights take place whenever the relief capacity of the disaster-affected regions themselves is overwhelmed and assistance is requested from abroad. In each subsequent relief operation, aid organizations and the other actors involved in disaster relief face anew the challenge of building a logistics chain in the shortest possible time, so that goods arrive at the right time, in the right quantity, at the right place.
Humanitarian air transports are usually organized as charter flights and cover long routes to destinations that often lie away from high-frequency goods flows. The supply of such transport services on the market is not reliably available, and aid organizations may have to wait until capacity with suitable aircraft becomes available. In qualitative terms, too, the requirements aid organizations place on relief-goods transport are higher than in regular scheduled transport.
Within the dissertation, an alternative organizational model for the procurement, operation and financing of humanitarian air transport is developed. It considers the guaranteed availability of particularly flexible aircraft, with whose help the quality and especially the plannability of relief operations could be improved.
An ideal-typical model is developed here by coupling the theory of collective goods, which belongs to public finance, with contract theory as a component of New Institutional Economics.
Empirical contributions to contract theory point out that the procurement of transaction-specific capital goods, such as aircraft with special characteristics, leads to inefficient solutions between contracting parties because of risks and environmental uncertainties. The present dissertation shows one way in which risks and environmental uncertainties can be reduced ex ante, i.e. before the contract is concluded, by building up a shared information base. This is done through a temporal extension of an empirical model from regulatory economics for determining the organizational form for transaction-specific capital goods.
Beyond this, the thesis contributes to increasing efficiency in humanitarian logistics through a case-specific examination of horizontal cooperation and the professionalization of relief operations in the field of humanitarian aviation.
It is commonly recognized that soil moisture exhibits spatial heterogeneities occurring in a wide range of scales. These heterogeneities are caused by different factors ranging from soil structure at the plot scale to land use at the landscape scale. There is an urgent need for efficient approaches to deal with soil moisture heterogeneity at large scales, where management decisions are usually made. The aim of this dissertation was to test innovative approaches for making efficient use of standard soil hydrological data in order to assess seepage rates and main controls on observed hydrological behavior, including the role of soil heterogeneities.
As a first step, the applicability of a simplified Buckingham-Darcy method to estimate deep seepage fluxes from point information of soil moisture dynamics was assessed. This was done in a numerical experiment considering a broad range of soil textures and textural heterogeneities. The method performed well for most soil texture classes. However, in pure sand where seepage fluxes were dominated by heterogeneous flow fields it turned out to be not applicable, because it simply neglects the effect of water flow heterogeneity. In this study a need for new efficient approaches to handle heterogeneities in one-dimensional water flux models was identified.
As a further step, an approach to turn the problem of soil moisture heterogeneity into a solution was presented: Principal component analysis was applied to make use of the variability among soil moisture time series for analyzing apparently complex soil hydrological systems. It can be used for identifying the main controls on the hydrological behavior, quantifying their relevance, and describing their particular effects by functional averaged time series. The approach was firstly tested with soil moisture time series simulated for different texture classes in homogeneous and heterogeneous model domains. Afterwards, it was applied to 57 moisture time series measured in a multifactorial long term field experiment in Northeast Germany.
The dimensionality of both data sets was rather low, because more than 85 % of the total moisture variance could already be explained by the hydrological input signal and by signal transformation with soil depth. The perspective of signal transformation, i.e. analyzing how hydrological input signals (e.g., rainfall, snow melt) propagate through the vadose zone, turned out to be a valuable supplement to the common mass flux considerations. Neither different textures nor spatial heterogeneities affected the general kind of signal transformation showing that complex spatial structures do not necessarily evoke a complex hydrological behavior. In case of the field measured data another 3.6% of the total variance was unambiguously explained by different cropping systems. Additionally, it was shown that different soil tillage practices did not affect the soil moisture dynamics at all.
The presented approach does not require a priori assumptions about the nature of physical processes, and it is not restricted to specific scales. Thus, it opens various possibilities to incorporate the key information from monitoring data sets into the modeling exercise and thereby reduce model uncertainties.
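The PCA step described above can be illustrated with a small sketch. The data below are synthetic stand-ins (the study used 57 measured soil moisture series); the decomposition itself is plain SVD-based PCA:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the measured data: 57 soil-moisture time
# series (columns) at 1000 time steps (rows). Each series is a damped,
# smoothed copy of one shared input signal plus noise, mimicking how a
# rainfall signal propagates through the vadose zone with depth.
t = np.arange(1000)
input_signal = np.sin(2 * np.pi * t / 200) + 0.3 * rng.standard_normal(1000)
series = np.column_stack([
    np.convolve(input_signal, np.exp(-np.arange(50) / (5 + d)), mode="same")
    + 0.1 * rng.standard_normal(1000)
    for d in range(57)
])

# PCA via SVD of the centered data matrix.
centered = series - series.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The leading components play the role of the "functional averaged
# time series": PC1 ~ the common hydrological input signal, the next
# components ~ its transformation with soil depth.
pc_scores = U * s  # time series of the principal components

print(f"variance explained by PC1+PC2: {explained[:2].sum():.2%}")
```

In the study, the loadings of each measured series on the leading components are then used to quantify how strongly each control (input signal, signal transformation, cropping system) shapes a given location.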
We introduce azobenzene-functionalized polyelectrolyte multilayers as efficient, inexpensive optoacoustic transducers for hyper-sound strain waves in the GHz range. By picosecond transient reflectivity measurements we study the creation of nanoscale strain waves, their reflection from interfaces, damping by scattering from nanoparticles and propagation in soft and hard adjacent materials like polymer layers, quartz and mica. The amplitude of the generated strain ε ∼ 5 × 10−4 is calibrated by ultrafast X-ray diffraction.
Well-developed phonological awareness skills are a core prerequisite for early literacy development. Although effective phonological awareness training programs exist, children at risk often do not reach similar levels of phonological awareness after the intervention as children with normally developed skills. Based on theoretical considerations and first promising results, the present study explores the effects of an early musical training in combination with a conventional phonological training in children with weak phonological awareness skills. Using a quasi-experimental pretest-posttest control group design and measurements across a period of 2 years, we tested the effects of two interventions: a consecutive combination of a musical and a phonological training, and a phonological training alone. The design made it possible to disentangle the effects of the musical training alone as well as the effects of its combination with the phonological training. The outcome measures of these groups were compared with those of the control group using multivariate analyses, controlling for a number of background variables. The sample included N = 424 German-speaking children aged 4–5 years at the beginning of the study. We found a positive relationship between musical abilities and phonological awareness. Yet, whereas the well-established phonological training produced the expected effects, adding a musical training did not contribute significantly to phonological awareness development. Training effects were partly dependent on the initial level of phonological awareness. Possible reasons for the lack of training effects in the musical part of the combination condition as well as practical implications for early literacy education are discussed.
www.BrAnD2. Wille.
(2016)
In 2014 the Potsdam Latin Day ("Potsdamer Lateintag") took place for the tenth time. The anniversary was a fitting occasion to present our new project. The Robert Bosch Stiftung is again funding the collaboration between Classical Philology at the University of Potsdam and schools from Brandenburg for three years. The project's title is: www.BrAnD2. Wille. Würde. Wissen. (Will. Dignity. Knowledge.) Second Brandenburg Antiquity Think Tank ("Zweites Brandenburger Antike-Denkwerk"). More than 500 participants again attended the kick-off event on the topic of "will". The volume brings together a project report, the lectures by Prof. Dr. Christiane Kunst and Prof. Dr. Christoph Horn, and a selection of materials prepared by the supervising students.
Software Fault Injection (Software-Fehlerinjektion)
(2016)
Fault injection is an essential tool for experimentally evaluating the fault tolerance of complex software systems.
We report on the seminar on software fault injection held at the Operating Systems and Middleware group of the Hasso Plattner Institute at the University of Potsdam in the summer term of 2015.
The seminar was about applying various fault-injection approaches and tools and evaluating their applicability in different scenarios.
This report presents and compares the approaches studied.
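As a minimal illustration of the technique the seminar examined (a generic sketch, not one of the tools evaluated there): a software fault injector can wrap a target function so that calls fail with a configurable probability, letting the caller's fault-tolerance logic be exercised experimentally.

```python
import random

def inject_faults(func, failure_rate, exc=IOError, rng=None):
    """Wrap `func` so that calls raise `exc` with probability `failure_rate`."""
    rng = rng or random.Random()
    def faulty(*args, **kwargs):
        if rng.random() < failure_rate:
            raise exc("injected fault")
        return func(*args, **kwargs)
    return faulty

def read_config():
    # Stand-in for an operation that may fail transiently in the field.
    return {"retries": 3}

def robust_read_config(reader, attempts=5):
    """Code under test: tolerates transient faults by retrying."""
    for _ in range(attempts):
        try:
            return reader()
        except IOError:
            continue
    return None  # degraded default after exhausting retries

# Exercise the retry mechanism under a 50% injected fault rate.
flaky = inject_faults(read_config, 0.5, rng=random.Random(42))
results = [robust_read_config(flaky) for _ in range(100)]
success_rate = sum(r is not None for r in results) / len(results)
print(f"successful reads under injection: {success_rate:.0%}")
```

Real fault-injection tools work at lower levels (bit flips, syscall failures, library interposition), but the evaluation pattern is the same: inject controlled faults, observe whether the system's tolerance mechanisms mask them.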
Widespread landscape changes are presently observed in the Arctic and are most likely to
accelerate in the future, in particular in permafrost regions which are sensitive to climate warming. To assess current and future developments, it is crucial to understand past
environmental dynamics in these landscapes. Causes and interactions of environmental variability can hardly be resolved by instrumental records covering modern time scales. However, long-term
environmental variability is recorded in paleoenvironmental archives. Lake sediments are important archives that allow reconstruction of local limnogeological processes as well as past environmental changes driven directly or indirectly by climate dynamics. This study aims at
reconstructing Late Quaternary permafrost and thermokarst dynamics in central-eastern Beringia,
the terrestrial land mass connecting Eurasia and North America during glacial sea-level low stands. In order to investigate development, processes and influence of thermokarst dynamics, several sediment cores from extant lakes and drained lake basins were analyzed to answer the
following research questions:
1. When did permafrost degradation and thermokarst lake development take place and what were enhancing and inhibiting environmental factors?
2. What are the dominant processes during thermokarst lake development and how are
they reflected in proxy records?
3. How did, and still do, thermokarst dynamics contribute to the inventory and properties of organic matter in sediments and the carbon cycle?
Methods applied in this study are based upon a multi-proxy approach combining
sedimentological, geochemical, geochronological, and micropaleontological analyses, as well as
analyses of stable isotopes and hydrochemistry of pore-water and ice. Modern field observations of water quality and basin morphometrics complete the environmental investigations.
The investigated sediment cores reveal permafrost degradation and thermokarst dynamics on different time scales. The analysis of a sediment core from GG basin on the northern Seward
Peninsula (Alaska) shows prevalent terrestrial accumulation of yedoma throughout the Early to
Mid Wisconsin with intermediate wet conditions at around 44.5 to 41.5 ka BP. This first wetland
development was terminated by the accumulation of a 1-meter-thick airfall tephra, most likely originating from the South Killeak Maar eruption at 42 ka BP. A depositional hiatus between 22.5 and 0.23 ka BP may indicate thermokarst lake formation in the surroundings of the site, which forms a yedoma upland to this day. The thermokarst lake that formed GG basin initiated at 230 ± 30 cal a BP and drained in spring 2005 AD. Four years after drainage, the lake talik was still unfrozen below 268 cm depth.
A permafrost core from Mama Rhonda basin on the northern Seward Peninsula preserved a
full lacustrine record including several lake phases. The first lake generation developed at 11.8 cal ka BP during the Lateglacial-Early Holocene transition; its old basin (Grandma Rhonda) is still partially preserved at the southern margin of the study basin. Around 9.0 cal ka BP a shallow and more dynamic thermokarst lake developed with actively eroding shorelines and potentially intermediate shallow water or wetland phases (Mama Rhonda). Mama Rhonda lake drainage at 1.1 cal ka BP was followed by gradual accumulation of terrestrial peat and top-down refreezing of the lake talik. A significantly lower organic carbon content was measured in Grandma Rhonda deposits (mean TOC of 2.5 wt%) than in Mama Rhonda deposits (mean TOC of 7.9 wt%), highlighting the impact of thermokarst dynamics on biogeochemical cycling in different lake generations by thawing and mobilization of organic carbon into the lake system.
Proximal and distal sediment cores from Peatball Lake on the Arctic Coastal Plain of Alaska revealed young thermokarst dynamics over the past roughly 1,400 years along a depositional gradient, based on reconstructions from shoreline expansion rates and absolute dating results. After its initiation as a remnant pond of a previously drained lake basin, a rapidly deepening lake with increasing oxygenation of the water column is evident from laminated sediments and higher Fe/Ti and Fe/S ratios in the sediment. The sediment record archived characteristic shifts in depositional regimes and sediment sources, from upland deposits to re-deposited sediments from drained thaw lake basins, depending on the gradually changing shoreline configuration. These changes are evident from alternating organic inputs into the lake system, which highlights the potential of thermokarst lakes to recycle old carbon from degrading permafrost deposits in their catchments.
The lake sediment record from Herschel Island in the Yukon (Canada) covers the full Holocene period. After its initiation as a thermokarst lake at 11.7 cal ka BP and intense thermokarst activity until 10.0 cal ka BP, the steady sedimentation was interrupted by a depositional hiatus at 1.6 cal ka BP, which likely resulted from lake drainage or allochthonous slumping due to collapsing shorelines. The specific setting of the lake on a push moraine composed of marine deposits is reflected in the sedimentary record. Freshening of the maturing lake is indicated by decreasing electrical conductivity in pore-water. The alternation from marine to freshwater ostracods and foraminifera likewise confirms decreasing salinity, but also reflects episodic re-deposition of allochthonous marine sediments.
Based on permafrost and lacustrine sediment records, this thesis shows examples of the Late Quaternary evolution of typical Arctic permafrost landscapes in central-eastern Beringia and the complex interaction of local disturbance processes, regional environmental dynamics and global climate patterns. This study confirms that thermokarst lakes are important agents of organic matter recycling in complex and continuously changing landscapes.
The ever-increasing fat content of the Western diet, combined with decreased levels of physical activity, greatly enhances the incidence of metabolism-related diseases. Cancer cachexia (CC) and metabolic syndrome (MetS) are both multifactorial, highly complex metabolism-related syndromes whose etiology is not fully understood, as the mechanisms underlying their development are not completely unveiled. Nevertheless, despite being considered “opposite sides”, MetS and CC share several common features such as insulin resistance and low-grade inflammation. In these scenarios, tissue macrophages act as key players, due to their capacity to produce and release inflammatory mediators. One of the main features of MetS is hyperinsulinemia, which is generally associated with an attempt of the β-cell to compensate for diminished insulin sensitivity (insulin resistance). There is growing evidence that hyperinsulinemia per se may contribute to the development of insulin resistance through the establishment of low-grade inflammation in insulin-responsive tissues, especially in the liver (as insulin is secreted by the pancreas into the portal circulation). The hypothesis of the present study was that insulin may itself provoke an inflammatory response culminating in diminished hepatic insulin sensitivity. To address this premise, macrophages differentiated from the human cell line U937 were first exposed to insulin, LPS and PGE2. In these cells, insulin significantly augmented the gene expression of the pro-inflammatory mediators IL-1β, IL-8, CCL2, oncostatin M (OSM) and microsomal prostaglandin E2 synthase (mPGES1), and of the anti-inflammatory mediator IL-10. Moreover, the synergism between insulin and LPS enhanced the LPS-provoked induction of the IL-1β, IL-8, IL-6, CCL2 and TNF-α genes.
When combined with PGE2, insulin enhanced the PGE2-provoked induction of IL-1β, mPGES1 and COX2, and attenuated the PGE2-induced inhibition of CCL2 and TNF-α gene expression, contributing to an enhanced inflammatory response by both mechanisms. Supernatants of insulin-treated U937 macrophages reduced the insulin-dependent induction of glucokinase in hepatocytes by 50%. Cytokines contained in the supernatant of insulin-treated U937 macrophages also activated hepatocyte ERK1/2, resulting in inhibitory serine phosphorylation of the insulin receptor substrate. Additionally, the transcription factor STAT3 was activated by phosphorylation, resulting in the induction of SOCS3, which is capable of interrupting the insulin receptor signal chain. MicroRNAs, non-coding RNAs involved in the regulation of protein expression and nowadays recognized as active players in the development of several inflammatory disorders such as cancer and type II diabetes, are also of interest. Considering that cancer cachexia patients are highly affected by insulin resistance and inflammation, control, non-cachectic and cachectic cancer patients were selected, and their circulating levels of pro-inflammatory mediators and of microRNA-21-5p, a post-transcriptional regulator of STAT3 expression, were assessed and correlated. Circulating IL-6 and IL-8 levels of cachectic patients were significantly higher than those of non-cachectic patients and controls, and their expression of microRNA-21-5p was significantly lower. Additionally, the reduced microRNA-21-5p expression correlated negatively with IL-6 plasma levels. These results indicate that hyperinsulinemia per se might contribute to the low-grade inflammation prevailing in MetS patients and thereby promote the development of insulin resistance, particularly in the liver. Diminished microRNA-21-5p expression may enhance inflammation and STAT3 expression in cachectic patients, contributing to the development of insulin resistance.
BACKGROUND: The etiology of low back pain (LBP), one of the most prevalent and costly diseases of our time, is accepted to be multi-causal, placing functional factors in the focus of research. In this context, pain models suggest a centrally controlled strategy of trunk stiffening in LBP. However, supporting biomechanical evidence is mostly limited to static measurements during maximum voluntary contractions (MVC), which are probably influenced by psychological factors in LBP. Alternatively, repeated findings indicate that the neuromuscular efficiency (NME) of lower back muscles, characterized by the strength-to-activation relationship (SAR), is impaired in LBP. Therefore, a dynamic SAR protocol, consisting of normalized trunk muscle activation recordings during submaximal loads (SMVC), seems to be relevant. This thesis aimed to investigate the influence of LBP on the NME and activation patterns of trunk muscles during dynamic trunk extensions.
METHODS: The SAR protocol consisted of an initial MVC reference trial (MVC1), followed by SMVCs at 20, 40, 60 and 80% of MVC1 load. An isokinetic trunk dynamometer (Con-Trex TP, ROM: 45° flexion to 10° extension, velocity: 45°/s) and a trunk surface EMG setup (myon, up to 12 leads) was used. Extension torque output [Nm] and muscular activation [V] were assessed in all trials. Finally, another MVC trial was performed (MVC2) for reliability analysis. For SAR evaluation the SMVC trial values were normalized [%MVC1] and compared inter- and intra-individually.
The methodical validity of the approach was tested in an isometric SAR single-case pilot study (S1a: N = 2, female LBP patient vs. healthy male). In addition, the validity of the MVC reference method was verified by comparing different contraction modes (S1b: N = 17, healthy individuals). Next, the isokinetic protocol was validated in terms of content for its applicability to display known physiological differences between sexes in a cross-sectional study (S2: each n = 25 healthy males/females). Finally, the influence of acute pain on NME was investigated longitudinally by comparing N = 8 acute LBP patients with the retest after remission of pain (S3). The SAR analysis focused on normalized agonistic extensor activation and abdominal and synergistic extensor co-activation (t-tests, ANOVA, α = .05) as well as on reliability of MVC1/2 outcomes.
RESULTS: During the methodological validation of the protocol (S1a), the isometric SAR was found to be descriptively different between individuals. Whereas torque output was highest during eccentric MVC, no relevant difference in peak EMG activation was found between contraction modes (S1b). The isokinetic SAR sex comparison (S2), though showing no significant overall effects, revealed higher normalized extensor activation at moderate submaximal loads in females (13 ± 4%), primarily caused by pronounced thoracic activation. Similarly, co-activation analysis resulted in significantly higher antagonistic activation at moderate loads compared to males (33 ± 9%). During intra-individual analysis of the SAR in LBP patients (S3), a significant effect of pain status on the SAR was identified, manifesting as increased normalized EMG activation of the extensors during acute LBP (11 ± 8%), particularly at high load. Abdominal co-activation tended to be elevated (27 ± 11%), just as the thoracic extensor parts seemed to take over proportions of lumbar activation. Altogether, the M. erector spinae behaviour during the SAR protocol was rather linear, with a tendency to rise exponentially at high loads. For the level of normalized EMG activation during SMVCs, a clear increasing trend from healthy males to females over to non-acute and acute LBP patients was discovered. This was accompanied by elevated antagonistic activation and a shift of synergistic towards lumbar extensor activation. The MVC data revealed overall good reliability, with clearly higher variability during acute LBP.
DISCUSSION: The present thesis demonstrates that the NME of lower back muscles is impaired in LBP patients, especially during an acute pain episode. A new dynamic protocol has been developed that makes it possible to display the underlying SAR using normalized trunk muscle EMG during submaximal isokinetic loads. The protocol shows promise as a biomechanical tool for diagnostic analysis of NME in LBP patients and monitoring of rehabilitation progress. Furthermore, reliability not of maximum strength but rather of peak EMG of MVC measurements seems to be decreased in LBP patients. Meanwhile, the findings of this thesis largely substantiate the assumptions made by the recently presented ‘motor adaptation to pain’ model, suggesting a pain-related intra- and intermuscular activation redistribution affecting movement and stiffness of the trunk. Further research is needed to distinguish the grade of NME impairment between LBP subgroups.
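The normalization at the heart of the SAR protocol can be sketched numerically. All amplitudes below are invented for illustration; only the procedure (expressing submaximal peak EMG relative to the MVC1 reference) follows the protocol described above:

```python
import numpy as np

# Hypothetical peak EMG amplitudes [V] of one extensor lead, recorded
# during the MVC1 reference trial and during submaximal trials (SMVCs)
# performed at 20/40/60/80 % of the MVC1 torque load.
mvc1_peak_emg = 1.8e-4                    # reference trial (MVC1)
smvc_loads = np.array([20, 40, 60, 80])   # [% of MVC1 torque]
smvc_peak_emg = np.array([0.35, 0.72, 1.15, 1.62]) * 1e-4

# Normalization: submaximal activation relative to the MVC1 reference.
normalized = 100 * smvc_peak_emg / mvc1_peak_emg   # [% MVC1]

# The SAR is this curve of normalized activation over load; a curve
# lying above a reference curve means more activation is needed for
# the same torque output, i.e. lower neuromuscular efficiency (NME).
for load, act in zip(smvc_loads, normalized):
    print(f"{load:2d}% load -> {act:5.1f}% MVC1 activation")
```

Because both numerator and denominator come from the same subject and session, the %MVC1 values are comparable between individuals and test days, which is what enables the inter- and intra-individual SAR comparisons in the studies above.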
The German newspaper market is characterized by a broad range of national daily newspapers to which the public attributes different political orientations. The "Frankfurter Allgemeine Zeitung" (F.A.Z.) is considered a conservative paper, whereas "taz.die tageszeitung" (taz) is characterized by a left-alternative orientation. Starting from this difference, the thesis examines the linguistic design of the headlines, subheadings and crossheads of the F.A.Z. and the taz on the topics "Alternative für Deutschland" (AfD), "Nationalsozialistischer Untergrund" (NSU) and "Front National" (FN). The qualitative-quantitative corpus study focuses on lexical, syntactic and stylistic factors that allow a position to be taken on the research thesis that the linguistic formulation of the main headlines, subheadings and crossheads reveals the newspapers' political orientation and basic ideological stance. The analyses are based on a constructivist approach grounded in systems-theoretical assumptions. This shows, on the one hand, how the results of the linguistic analyses can be linked to the newspapers' different underlying constructions of reality; on the other hand, it becomes clear that the formulation of the headlines also affects the recipients' individual construction of reality. The comparative evaluations provide differently weighted indications of the communicators' attitudes and confirm that the respective perspectives on reality and basic ideological stances of the F.A.Z. and the taz are already evident in the linguistic design of their headline complexes.
In experiments investigating sentence processing, eye movement measures such as fixation durations and regression proportions while reading are commonly used to draw conclusions about processing difficulties. However, these measures are the result of an interaction of multiple cognitive levels and processing strategies and thus are only indirect indicators of processing difficulty. In order to properly interpret an eye movement response, one has to understand the underlying principles of adaptive processing such as trade-off mechanisms between reading speed and depth of comprehension that interact with task demands and individual differences. Therefore, it is necessary to establish explicit models of the respective mechanisms as well as their causal relationship with observable behavior. There are models of lexical processing and eye movement control on the one side and models on sentence parsing and memory processes on the other. However, no model so far combines both sides with explicitly defined linking assumptions.
In this thesis, a model is developed that integrates oculomotor control with a parsing mechanism and a theory of cue-based memory retrieval. On the basis of previous empirical findings and independently motivated principles, adaptive, resource-preserving mechanisms of underspecification are proposed both on the level of memory access and on the level of syntactic parsing. The thesis first investigates the model of cue-based retrieval in sentence comprehension of Lewis & Vasishth (2005) with a comprehensive literature review and computational modeling of retrieval interference in dependency processing. The results reveal a great variability in the data that is not explained by the theory. Therefore, two principles, 'distractor prominence' and 'cue confusion', are proposed as an extension to the theory, thus providing a more adequate description of systematic variance in empirical results as a consequence of experimental design, linguistic environment, and individual differences. In the remainder of the thesis, four interfaces between parsing and eye movement control are defined: Time Out, Reanalysis, Underspecification, and Subvocalization. By comparing computationally derived predictions with experimental results from the literature, it is investigated to what extent these four interfaces constitute an appropriate elementary set of assumptions for explaining specific eye movement patterns during sentence processing. Through simulations, it is shown how this system of in itself simple assumptions results in predictions of complex, adaptive behavior.
In conclusion, it is argued that, on all levels, the sentence comprehension mechanism seeks a balance between necessary processing effort and reading speed on the basis of experience, task demands, and resource limitations. Theories of linguistic processing therefore need to be explicitly defined and implemented, in particular with respect to linking assumptions between observable behavior and underlying cognitive processes. The comprehensive model developed here integrates multiple levels of sentence processing that hitherto have only been studied in isolation. The model is made publicly available as an expandable framework for future studies of the interactions between parsing, memory access, and eye movement control.
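For background, the cue-based retrieval theory of Lewis &amp; Vasishth (2005) that the model builds on is standardly formalized with an ACT-R style activation equation; the rendering below is the textbook form, reproduced here as context rather than quoted from this thesis:

```latex
A_i = B_i + \sum_{j} W_j \, S_{ji} + \epsilon
```

Here \(B_i\) is the base-level (decay-driven) activation of memory chunk \(i\), \(W_j\) the weight of retrieval cue \(j\), \(S_{ji}\) the associative strength between cue \(j\) and chunk \(i\), and \(\epsilon\) logistic noise; retrieval latency then falls exponentially with activation, \(T_i = F e^{-A_i}\), which is what links memory state to reading-time predictions.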
Since the mid-2000s, in view of the sharp decline in student numbers, scholarly and education-policy attention has again turned more strongly to questions of organizing vocational education in rural-peripheral areas. On the one hand, demographic development raises the expectation of an easing of the long-precarious apprenticeship situation in these areas. On the other hand, it remains open to what extent the adjustment processes entail the emergence of new spatial disparities, for instance through the closure of vocational schools. The thesis addresses the current situation and the handling of vocational schools under the following questions: How can this complex infrastructure be operated in regions with sparse settlement on the one hand and a difficult regional economic situation on the other? Which steering instruments come into play in the impending downsizing process, and what roles do demographic developments, structural factors, and actors' agency play? A particular focus lies on the theoretical and empirical linkage of space-specific questions with the complexity of vocational schools as differentiated institutions between the school system and the economy.
The study examines the development of Brandenburg's vocational school network from the 1990s onward, with an in-depth case study in the district of Uckermark. Contrary to the assumption of a severe collapse of infrastructure provision as a consequence of declining student numbers, it is shown that Brandenburg's vocational school landscape has been characterized by relative stability since the 2000s. However, the offering was thinned out in an occupation-specific way: in 2013, only 41% of all apprentices found a relatively comprehensive vocational school network for their particular training occupation. The subsidiarity principle, a shared professional self-understanding, and an orientation toward a certain degree of spatial equalization emerged as factors for successful steering processes (in rural-peripheral areas). Successful interventions against concentration were largely based on a pronounced professional self-confidence and ambition of educational organizations. By contrast, unspecific references to peripheralization could not develop into effective strategies for action. Partial developments in the school-based vocational system are embedded in a pronounced institutional change driven by the expansion of private vocational schools. In this segment, infrastructure development led to the emergence of a specific market that in part only loosely follows a classic supply-demand model and potentially leads to over-training.
Given the scarcity of resources, the sectoral fragmentation, and the lack of institutions for constituting regions as spaces of action in vocational training, the steering forms found are ambivalent. The demographic discourse has (so far) not led to steering forms that "overcome" the sectoral responsibilities dominating infrastructure planning. The discourse therefore in part functions only to a limited extent as a new orientation for developing "periphery-specific" infrastructure strategies and alternative steering models. It can generate new options beyond classic adjustment processes if it relates more strongly to the needs of the actors and addressees of vocational education in rural areas and is linked more closely to the specialist discourses.
Personal fabrication tools, such as 3D printers, are on the way to enabling a future in which non-technical users will be able to create custom objects. However, while the hardware is there, the current interaction model behind existing design tools is not suitable for non-technical users. Today, 3D printers are operated by fabricating the object in one go, which often takes all night due to the slow speed of 3D printing technology. Consequently, the current interaction model requires users to think carefully before printing, as every mistake may imply another overnight print. Planning every step ahead, however, is not feasible for non-technical users, as they lack the experience to reason about the consequences of their design decisions.
In this dissertation, we propose changing the interaction model around personal fabrication tools to better serve this user group. We draw inspiration from personal computing and argue that the evolution of personal fabrication may resemble the evolution of personal computing: Computing started with machines that executed a program in one go before returning the result to the user. By decreasing the interaction unit to single requests, turn-taking systems such as the command line evolved, which provided users with feedback after every input. Finally, with the introduction of direct-manipulation interfaces, users continuously interacted with a program, receiving feedback about every action in real time. In this dissertation, we explore whether these interaction concepts can be applied to personal fabrication as well.
We start with fabricating an object in one go and investigate how to tighten the feedback cycle at the object level: We contribute a method called low-fidelity fabrication, which saves up to 90% of fabrication time by creating objects as fast low-fidelity previews that are sufficient to evaluate key design aspects. Depending on what is currently being tested, we propose different conversions that enable users to focus on different parts: faBrickator allows for a modular design in the early stages of prototyping; when users move on, WirePrint allows quickly testing an object's shape, while Platener allows testing an object's technical function. We present an interactive editor for each technique and explain the underlying conversion algorithms.
By interacting on smaller units, such as a single element of an object, we explore what it means to transition from systems that fabricate objects in one go to turn-taking systems. We start with a 2D system called constructable: Users draw with a laser pointer onto the workpiece inside a laser cutter. The drawing is captured with an overhead camera. As soon as the user finishes drawing an element, such as a line, the constructable system beautifies the path and cuts it, resulting in physical output after every editing step. We extend constructable towards 3D editing by developing a novel laser-cutting technique for 3D objects called LaserOrigami that works by heating up the workpiece with the defocused laser until the material becomes compliant and bends down under gravity. While constructable and LaserOrigami allow for fast physical feedback, the interaction is still best described as turn-taking, since it consists of two discrete steps: users first create an input and afterwards the system provides physical output.
By decreasing the interaction unit even further to a single feature, we can achieve real-time physical feedback: Input by the user and output by the fabrication device are so tightly coupled that no visible lag exists. This allows us to explore what it means to transition from turn-taking interfaces, which only allow exploring one option at a time, to direct manipulation interfaces with real-time physical feedback, which allow users to explore the entire space of options continuously with a single interaction. We present a system called FormFab, which allows for such direct control. FormFab is based on the same principle as LaserOrigami: It uses a workpiece that when warmed up becomes compliant and can be reshaped. However, FormFab achieves the reshaping not based on gravity, but through a pneumatic system that users can control interactively. As users interact, they see the shape change in real-time.
We conclude this dissertation by extrapolating the current evolution into a future in which large numbers of people use the new technology to create objects. We see two additional challenges on the horizon: sustainability and intellectual property. We investigate sustainability by demonstrating how to print less and instead patch physical objects. We explore questions around intellectual property with a system called Scotty that transfers objects without creating duplicates, thereby preserving the designer's copyright.
The title compounds, [(1R,3R,4R,5R,6S)-4,5-bis(acetyloxy)-7-oxo-2-oxabicyclo[4.2.0]octan-3-yl]methyl acetate, C14H18O8, (I), (1S,4R,5S,6R)-5-acetyloxy-7-hydroxyimino-2-oxabicyclo[4.2.0]octan-4-yl acetate, C11H15NO6, (II), and [(3aR,5R,6R,7R,7aS)-6,7-bis(acetyloxy)-2-oxooctahydropyrano[3,2-b]pyrrol-5-yl]methyl acetate, C14H19NO8, (III), are stable bicyclic carbohydrate derivatives. They can easily be synthesized in a few steps from commercially available glycals. As a result of the ring strain from the four-membered rings in (I) and (II), the conformations of the carbohydrates deviate strongly from the ideal chair form. Compound (II) occurs in the boat form. In the five-membered lactam (III), on the other hand, the carbohydrate adopts an almost ideal chair conformation. As a result of the distortion of the sugar rings, the configurations of the three bicyclic carbohydrate derivatives could not be determined from their NMR coupling constants. From our three crystal structure determinations, we were able to establish for the first time the absolute configurations of all new stereocenters of the carbohydrate rings.
The population structure of the highly mobile marine mammal, the harbor porpoise (Phocoena phocoena), in the Atlantic shelf waters follows a pattern of significant isolation-by-distance. The population structure of harbor porpoises from the Baltic Sea, which is connected with the North Sea through a series of basins separated by shallow underwater ridges, however, is more complex. Here, we investigated the population differentiation of harbor porpoises in European Seas with a special focus on the Baltic Sea and adjacent waters, using a population genomics approach. We used 2872 single nucleotide polymorphisms (SNPs), derived from double digest restriction-site associated DNA sequencing (ddRAD-seq), as well as 13 microsatellite loci and mitochondrial haplotypes for the same set of individuals. Spatial principal components analysis (sPCA), and Bayesian clustering on a subset of SNPs suggest three main groupings at the level of all studied regions: the Black Sea, the North Atlantic, and the Baltic Sea. Furthermore, we observed a distinct separation of the North Sea harbor porpoises from the Baltic Sea populations, and identified splits between porpoise populations within the Baltic Sea. We observed a notable distinction between the Belt Sea and the Inner Baltic Sea sub-regions. Improved delineation of harbor porpoise population assignments for the Baltic based on genomic evidence is important for conservation management of this endangered cetacean in threatened habitats, particularly in the Baltic Sea proper. In addition, we show that SNPs outperform microsatellite markers and demonstrate the utility of RAD-tags from a relatively small, opportunistically sampled cetacean sample set for population diversity and divergence analysis.
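The spatial PCA and Bayesian clustering used above are specialized methods, but the core step they share, principal components of a centered genotype matrix, can be sketched with plain numpy. The genotype data below are invented purely for illustration (two toy "populations" with different allele frequencies), not the study's ddRAD-seq data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy genotype matrix: 12 individuals x 50 SNPs, coded 0/1/2 (alt-allele count).
# Two made-up groups drawn from different allele-frequency spectra.
freqs_a = rng.uniform(0.1, 0.4, 50)
freqs_b = rng.uniform(0.6, 0.9, 50)
pop_a = rng.binomial(2, freqs_a, size=(6, 50))
pop_b = rng.binomial(2, freqs_b, size=(6, 50))
G = np.vstack([pop_a, pop_b]).astype(float)

# Center each SNP, then obtain principal components via SVD.
Gc = G - G.mean(axis=0)
U, s, Vt = np.linalg.svd(Gc, full_matrices=False)
pcs = U * s  # individual coordinates on the principal components

# PC1 separates the two groups, mirroring how PCA reveals population structure.
print(pcs[:, 0])
```

Real analyses additionally scale each SNP by its expected binomial variance and test axes for significance; this sketch omits both.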
The human immunodeficiency virus (HIV) has resisted nearly three decades of efforts targeting a cure. Sustained suppression of the virus has remained a challenge, mainly due to the remarkable evolutionary adaptation that the virus exhibits by accumulating drug-resistant mutations in its genome. Current therapeutic strategies aim at achieving and maintaining a low viral burden and typically involve multiple drugs. The choice of optimal combinations of these drugs is crucial, particularly when treatment failure has previously occurred with certain other drugs. An understanding of the dynamics of viral mutant genotypes aids in assessing treatment failure with a given drug combination and in exploring potential salvage treatment regimens.
Mathematical models of viral dynamics have proved invaluable in understanding the viral life cycle and the impact of antiretroviral drugs. However, such models typically use simplified, coarse-grained mutation schemes, which curbs the extent to which they can be applied to drug-specific clinical mutation data in order to assess potential next-line therapies. Statistical models of mutation accumulation have served well in dissecting mechanisms of resistance evolution by reconstructing mutation pathways under different drug environments. While these models perform well in predicting treatment outcomes by statistical learning, they do not incorporate drug effects mechanistically. Additionally, due to an inherent lack of temporal features, such models are less informative on aspects such as predicting mutational abundance at treatment failure. This limits their application in analyzing the pharmacology of antiretroviral drugs, in particular time-dependent characteristics of HIV therapy such as pharmacokinetics and pharmacodynamics, and in understanding the impact of drug efficacy on mutation dynamics.
In this thesis, we develop an integrated model of in vivo viral dynamics incorporating drug-specific mutation schemes learned from clinical data. Our combined modelling approach enables us to study the dynamics of different mutant genotypes and to assess mutational abundance at virological failure. As an application of our model, we estimate in vivo fitness characteristics of viral mutants under different drug environments. Our approach also extends naturally to multiple-drug therapies. Further, we demonstrate the versatility of our model by showing how it can be modified to incorporate recently elucidated mechanisms of drug action, including molecules that target host factors.
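The thesis' integrated model is fitted to clinical mutation data, but the in vivo viral dynamics backbone it builds on is the standard target-cell/infected-cell/free-virus ODE system. The sketch below uses simple Euler integration and illustrative parameter values (not the thesis' fitted values) to show how drug efficacy enters such a model:

```python
# Basic viral dynamics model (Perelson-style):
#   dT/dt = lam - d*T - (1 - eps)*beta*T*V     uninfected target cells
#   dI/dt = (1 - eps)*beta*T*V - delta*I       productively infected cells
#   dV/dt = p*I - c*V                          free virus
# eps in [0, 1] is the drug efficacy; all parameter values are illustrative.

def simulate(eps, days=200.0, dt=0.01):
    lam, d, beta, delta, p, c = 1e4, 0.01, 2e-7, 0.7, 100.0, 13.0
    T, I, V = 1e6, 0.0, 1.0  # start from a small viral inoculum
    for _ in range(int(days / dt)):
        dT = lam - d * T - (1 - eps) * beta * T * V
        dI = (1 - eps) * beta * T * V - delta * I
        dV = p * I - c * V
        T += dT * dt
        I += dI * dt
        V += dV * dt
    return T, I, V

# Without treatment the infection establishes a set point;
# with a potent drug (eps = 0.9) the virus cannot sustain itself.
print(simulate(eps=0.0)[2])  # viral load settles near a high set point
print(simulate(eps=0.9)[2])  # viral load decays toward zero
```

Extending this toward the thesis' setting means replicating the infected-cell and virus compartments per mutant genotype and coupling them through drug-specific mutation rates.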
Additionally, we address another important aspect of the clinical management of HIV disease, namely drug pharmacokinetics. Time-dependent changes in in vivo drug concentration can affect the antiviral effect and also influence decisions on dosing intervals. We present a framework that provides an integrated understanding of key characteristics of multiple-dosing regimens, including drug accumulation ratios and half-lives, and then explore the impact of drug pharmacokinetics on viral suppression.
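For a drug given repeatedly every τ hours with first-order elimination, the accumulation ratio mentioned above has a simple closed form. A minimal sketch with made-up numbers (the function name and values are illustrative, not from the thesis):

```python
import math

def accumulation_ratio(t_half, tau):
    """Steady-state accumulation ratio R = 1 / (1 - exp(-k*tau)) for
    repeated dosing every tau hours, given elimination half-life t_half."""
    k = math.log(2) / t_half  # first-order elimination rate constant
    return 1.0 / (1.0 - math.exp(-k * tau))

# Dosing once per half-life accumulates to exactly twice the single-dose level:
print(accumulation_ratio(t_half=12.0, tau=12.0))  # 2.0
# Dosing far less often than the half-life barely accumulates:
print(accumulation_ratio(t_half=12.0, tau=48.0))  # ~1.07
```

The same ratio also gives the peak-to-trough span at steady state, which is why dosing intervals much longer than the half-life risk sub-suppressive troughs.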
Finally, parameter identifiability in such nonlinear models of viral dynamics is always a concern, and we investigate techniques that alleviate this issue in our setting.
Effektivität frühzeitiger Interventionen zur Prävention von Lese- und Rechtschreibschwierigkeiten
(2016)
The present study deals with fostering reading and writing competence in the initial phase of written language acquisition. The aim of the investigation is to trial and evaluate early, diagnosis-guided interventions for the prevention of reading and spelling difficulties. In contrast to many studies in this field, all measures are carried out under real school conditions within initial reading and writing instruction by the class teachers themselves, supported and accompanied by the author. Diagnostics for intervention and process monitoring as well as elements of diagnosis-guided support are derived from theory and the research literature and combined into an intervention set. The effectiveness of the evidence-based measures is tested through parallel-group comparisons.
A total of 25 school classes with 560 first-graders took part in the empirical study, divided into an experimental and a control group. The entry diagnosis at the start of school assessed prerequisites for written language acquisition, and the evaluation diagnosis at the end of the first grade tested developmentally appropriate written language competence at the word level. Internal and external influencing factors were also recorded, and their effects were taken into account in the statistical analysis. All data collection was carried out in both the experimental and control groups, while the evidence-based treatments took place only in the experimental group.
With significant results, the analysis confirms the close relationship between phonological awareness at the beginning of written language acquisition and reading and spelling ability at the end of the first grade, as well as between family literacy and reading fluency. Prior knowledge of written language shows a trend toward significance regarding its positive effect on basic reading fluency. Print script as the initial script shows a highly significant positive effect on basic reading fluency.
The results suggest that preschool preliterate skills outweigh support measures under real school conditions in their effect on reading and spelling skills at the end of the first grade. The positive effect of an unjoined initial script on reading acquisition underlines the importance of the choice of initial script: in early written language acquisition, print script should be used for reading and writing.
Widespread flooding in June 2013 caused damage costs of €6 to 8 billion in Germany, and awoke many memories of the floods in August 2002, which resulted in total damage of €11.6 billion and hence was the most expensive natural hazard event in Germany up to now. The event of 2002 does, however, also mark a reorientation toward an integrated flood risk management system in Germany. Therefore, the flood of 2013 offered the opportunity to review how the measures that politics, administration, and civil society have implemented since 2002 helped to cope with the flood and what still needs to be done to achieve effective and more integrated flood risk management. The review highlights considerable improvements on many levels, in particular (1) an increased consideration of flood hazards in spatial planning and urban development, (2) comprehensive property-level mitigation and preparedness measures, (3) more effective flood warnings and improved coordination of disaster response, and (4) a more targeted maintenance of flood defense systems. In 2013, this led to more effective flood management and to a reduction of damage. Nevertheless, important aspects remain unclear and need to be clarified. This particularly holds for balanced and coordinated strategies for reducing and overcoming the impacts of flooding in large catchments, cross-border and interdisciplinary cooperation, the role of the general public in the different phases of flood risk management, as well as a transparent risk transfer system. Recurring flood events reveal that flood risk management is a continuous task. Hence, risk drivers, such as climate change, land-use changes, economic developments, or demographic change and the resultant risks must be investigated at regular intervals, and risk reduction strategies and processes must be reassessed as well as adapted and implemented in a dialogue with all stakeholders.
Tarkovsky’s legacy
(2016)
„Vajanie iz vremeni“
(2016)
Die Museumsbesucher
(2016)
Блуждающие цитаты
(2016)
Erneuertes Gestern?
(2016)
Tarkovskijs Scham
(2016)
The Jewish cemetery in Potsdam is the only authentic site of memory testifying to the life cycle of the Jewish population in the former Prussian residence and garrison town. It also reflects the varying ways in which later generations have dealt with their cultural heritage. Moreover, this Jewish cemetery is currently the only one in Germany recognized by UNESCO as a World Heritage site.
Since Potsdam's Jewish history is still little known, a project funded by the foundation „Erinnerung, Verantwortung und Zukunft“ (Remembrance, Responsibility and Future) was launched, in which students of Potsdam's Humboldt-Gymnasium engaged intensively with the Jewish heritage of their city within a seminar course. In addition to approaching the topic through various subject areas concerning the cemetery, the young people studied individual Jewish Potsdamers, their family fates, and their life plans. Aspects of the religious understanding of death and mourning in Judaism were presented as a complement.
The results of all this work are brought together in the present teaching material and serve as an impetus for teachers and learners to address the Jewish history of their own hometowns.
Computer security deals with the detection and mitigation of threats to computer networks, data, and computing hardware. This thesis addresses two computer security problems: email spam campaign detection and malware detection.
Email spam campaigns can easily be generated using popular dissemination tools by specifying simple grammars that serve as message templates. A grammar is disseminated to the nodes of a botnet, and the nodes create messages by instantiating the grammar at random. Email spam campaigns can encompass huge data volumes and therefore pose a threat to the stability of the infrastructure of email service providers that have to store them. Malware (software that serves a malicious purpose) affects web servers, client computers via active content, and client computers through executable files. Without the help of malware detection systems, it would be easy for malware creators to collect sensitive information or to infiltrate computers.
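A campaign "grammar" of the kind described can be as simple as a template with interchangeable slots; the toy sketch below invents a template and vocabulary to show how a botnet node would instantiate such a grammar at random (none of this is taken from an actual spam kit):

```python
import random

# Toy campaign grammar: each slot lists interchangeable fragments.
grammar = {
    "<greet>": ["Dear customer", "Hello friend", "Attention"],
    "<claim>": ["you have won", "you are eligible for", "you qualify for"],
    "<prize>": ["a free phone", "$1000", "a gift card"],
}
template = "<greet>, <claim> <prize>!"

def instantiate(template, grammar, rng):
    """Produce one message by filling every slot with a random option."""
    msg = template
    for slot, options in grammar.items():
        msg = msg.replace(slot, rng.choice(options))
    return msg

rng = random.Random(42)
campaign = [instantiate(template, grammar, rng) for _ in range(5)]
for msg in campaign:
    print(msg)
```

Every message the grammar can emit matches one regular expression (the alternation of the slot options), which is exactly the syntactic regularity the detection approach in this thesis exploits.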
The detection of threats such as email spam messages, phishing messages, or malware is an adversarial and therefore intrinsically difficult problem. Threats vary greatly and evolve over time. The detection of threats based on manually designed rules is therefore difficult and requires a constant engineering effort. Machine learning is a research area that revolves around the analysis of data and the discovery of patterns that describe aspects of the data. Discriminative learning methods extract prediction models from data that are optimized to predict a target attribute as accurately as possible. Machine-learning methods hold the promise of automatically identifying patterns that robustly and accurately detect threats. This thesis focuses on the design and analysis of discriminative learning methods for the two computer security problems under investigation: email campaign detection and malware detection.
The first part of this thesis addresses email campaign detection. We focus on regular expressions as a syntactic framework, because regular expressions are intuitively comprehensible to security engineers and administrators, and they can be applied as a detection mechanism in an extremely efficient manner. In this setting, a prediction model is provided with exemplary messages from an email spam campaign. The prediction model has to generate a regular expression that reveals the syntactic pattern underlying the entire campaign, and that a security engineer finds comprehensible and feels confident enough to use to blacklist further messages at the email server. We model this problem as a two-stage learning problem with structured input and output spaces, which can be solved using standard cutting-plane methods. We therefore develop an appropriate loss function and derive a decoder for the resulting optimization problem.
The second part of this thesis deals with the problem of predicting whether a given JavaScript or PHP file is malicious or benign. Recent malware analysis techniques use static or dynamic features, or both. In fully dynamic analysis, the software or script is executed and observed for malicious behavior in a sandbox environment. By contrast, static analysis is based on features that can be extracted directly from the program file. In order to bypass static detection mechanisms, code obfuscation techniques are used to spread a malicious program file in many different syntactic variants. Deobfuscating the code before applying a static classifier makes the file amenable to mostly static code analysis and can overcome the problem of obfuscated malicious code, but on the other hand increases the computational costs of malware detection by an order of magnitude. In this thesis we present a cascaded architecture in which a classifier first performs a static analysis of the original code and, based on the outcome of this first classification step, the code may be deobfuscated and classified again. We explore several types of features including token $n$-grams, orthogonal sparse bigrams, subroutine-hashings, and syntax-tree features, and study the robustness of detection methods and feature types against the evolution of malware over time. The developed tool scans very large file collections quickly and accurately.
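Two of the feature types named above are easy to make concrete. Token n-grams are contiguous token sequences; orthogonal sparse bigrams (OSB) pair a token with each later token inside a small window, keeping the gap size as part of the feature. A minimal sketch (the whitespace tokenization is a simplification of real lexers):

```python
def token_ngrams(tokens, n):
    """Contiguous token n-grams."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def osb_features(tokens, window=4):
    """Orthogonal sparse bigrams: token pairs at distance 1..window-1,
    with the gap size retained so skipping obfuscation filler still
    produces overlapping features."""
    feats = []
    for i, left in enumerate(tokens):
        for gap in range(1, window):
            j = i + gap
            if j < len(tokens):
                feats.append((left, gap, tokens[j]))
    return feats

tokens = "eval ( base64_decode ( $payload ) )".split()
print(token_ngrams(tokens, 2))
print(osb_features(tokens, window=3))
```

The gap annotation is what makes OSB more robust than plain bigrams when obfuscators insert junk tokens between the meaningful ones.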
Each model is evaluated on real-world data and compared to reference methods. Our approach of inferring regular expressions to filter emails belonging to an email spam campaign leads to models with a high true-positive rate at a very low false-positive rate, an order of magnitude lower than that of a commercial content-based filter. Our presented system, REx-SVMshort, is being used by a commercial email service provider and complements content-based and IP-address-based filtering.
Our cascaded malware detection system is evaluated on a high-quality data set of almost 400,000 conspicuous PHP files and a collection of more than 1,00,000 JavaScript files. From our case study we can conclude that our system can quickly and accurately process large data collections at a low false-positive rate.
This term paper compares the frequency of the imperative on posters of the 2016 Berlin state election (Abgeordnetenhauswahl) with its frequency on posters of the Weimar Republic. It pursues the thesis that this frequency has decreased, and it can confirm it: in 2016 the imperative occurs eight times less often (5.7% versus 45.7%). In addition, the paper provides an overview of the imperative and of further means of articulating a request in German.
Two corpora were used for the study; the corpus with the slogans of the Berlin state election was compiled specifically for this paper and is attached to it. In this corpus, as in the Weimar Republic corpus, all slogans are included and the imperatives used are counted, offering an insight into the political advertising language of the two periods.
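The "eight times less often" claim follows directly from the two percentages reported in this abstract; a one-line check:

```python
# Imperative share among slogans, as reported in the abstract.
weimar = 45.7       # % of Weimar Republic poster slogans with an imperative
berlin_2016 = 5.7   # % of 2016 Berlin election poster slogans with an imperative

ratio = weimar / berlin_2016
print(f"Imperatives were {ratio:.1f}x more frequent on Weimar posters")
```

The ratio works out to roughly 8.0, consistent with the stated factor of eight.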
The goal of the presented work is to explore the interaction between gold nanorods (GNRs) and hyper-sound waves. For the generation of the hyper-sound I have used Azobenzene-containing polymer transducers. Multilayer polymer structures with well-defined thicknesses and smooth interfaces were built via layer-by-layer deposition. Anionic polyelectrolytes with Azobenzene side groups (PAzo) were alternated with the cationic polymer PAH to create transducer films. PSS/PAH multilayers, which do not absorb in the visible light range, were built as spacer layers. The properties of the PAzo/PAH film as a transducer are carefully characterized by static and transient optical spectroscopy. The optical and mechanical properties of the transducer are studied on the picosecond time scale. In particular, the relative change of the refractive index of the photo-excited and expanded PAzo/PAH is Δn/n = - 2.6*10‐4. The generated strain is calibrated by ultrafast X-ray diffraction on a mica substrate into which the hyper-sound is transduced. By simulating the X-ray data with a linear-chain model, the strain in the transducer under excitation is derived to be Δd/d ~ 5*10‐4.
In addition to investigating the properties of the transducer itself, I have performed a series of experiments to study the penetration of the generated strain into various adjacent materials. By depositing the PAzo/PAH film onto a PAH/PSS structure with gold nanorods incorporated in it, I have shown that nanoscale impurities can be detected via the scattering of hyper-sound.
Prior to the investigation of complex structures containing GNRs and the transducer, I have performed several sets of experiments on GNRs deposited on a thin buffer of PSS/PAH. The static and transient response of the GNRs is investigated for different fluences of the pump beam and for different dielectric environments (GNRs covered by PSS/PAH).
A systematic analysis of sample architectures is performed in order to construct a sample with the desired effect of GNRs responding to the hyper-sound strain wave. The observed shift of a feature related to the longitudinal plasmon resonance in the transient reflection spectra is interpreted as the event of the GNRs sensing the strain wave. We argue that the shift of the longitudinal plasmon resonance is caused by the viscoelastic deformation of the polymer around the nanoparticle. The deformation is induced by the out-of-plane difference in strain between the area directly under a particle and the area next to it. Simulations based on the linear-chain model support this assumption. Experimentally, this assumption is confirmed by investigating the same structure with GNRs embedded in a PSS/PAH polymer layer.
The response of GNRs to the hyper-sound wave is also observed for the sample structure with GNRs embedded in PAzo/PAH films. The response of GNRs in this case is explained to be driven by the change of the refractive index of PAzo during the strain propagation.
The global carbon cycle is closely linked to Earth’s climate. In the context of continuously unchecked anthropogenic CO₂ emissions, the importance of natural CO₂ uptake and carbon storage is increasing. An important biogenic mechanism of natural atmospheric CO₂ drawdown is photosynthetic carbon fixation in plants and the subsequent long-term deposition of plant detritus in sediments.
The main objective of this thesis is to identify factors that control mobilization and transport of plant organic matter (pOM) through rivers towards sedimentation basins. I investigated this aspect in the eastern Nepalese Arun Valley. The trans-Himalayan Arun River is characterized by a strong elevation gradient (205 − 8848 m asl) that is accompanied by strong changes in ecology and climate, ranging from wet tropical conditions in the Himalayan foreland to high alpine tundra on the Tibetan Plateau. The Arun is therefore an excellent natural laboratory, allowing the investigation of the effect of vegetation cover, climate, and topography on plant organic matter mobilization and export in tributaries along the gradient.
Based on hydrogen isotope measurements of plant waxes sampled along the Arun River and its tributaries, I first developed a model that allows for an indirect quantification of the pOM contributed to the mainstem by the Arun’s tributaries. In order to determine the role of climatic and topographic parameters of the sampled tributary catchments, I looked for significant statistical relations between the amount of tributary pOM export and tributary characteristics (e.g. catchment size, plant cover, annual precipitation or runoff, topographic measures). On the one hand, I demonstrated that pOM sourced from the Arun is not uniformly derived from its entire catchment area. On the other, I showed that dense vegetation is a necessary, but not sufficient, criterion for high tributary pOM export. Instead, I identified erosion, rainfall, and runoff as key factors controlling pOM sourcing in the Arun Valley. This finding is supported by terrestrial cosmogenic nuclide concentrations measured on river sands along the Arun and its tributaries in order to quantify catchment-wide denudation rates. The highest denudation rates corresponded well with maximum pOM mobilization and export, also suggesting a link between erosion and pOM sourcing.
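The indirect quantification described above rests, at its core, on an isotope mass balance between two endmembers. A minimal sketch of that balance with hypothetical δD values (not the thesis's measurements):

```python
def tributary_fraction(delta_upstream, delta_tributary, delta_downstream):
    """Two-endmember mixing: the fraction of downstream material
    contributed by the tributary, inferred from dD values (permil)."""
    return (delta_downstream - delta_upstream) / (delta_tributary - delta_upstream)

# Hypothetical values: upstream mainstem -120 permil, tributary -60 permil,
# downstream mix -105 permil -> the tributary supplies 25% of the mix.
f = tributary_fraction(-120.0, -60.0, -105.0)
print(f)  # 0.25
```

The real model additionally accounts for multiple tributaries and flux weighting; this fragment only illustrates the arithmetic of a single mixing step.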
The second part of this thesis focuses on the applicability of stable isotope records, such as plant wax n-alkanes in sediment archives, as qualitative and quantitative proxies for the variability of past Indian Summer Monsoon (ISM) strength. First, I determined how ISM strength affects the hydrogen and oxygen stable isotopic composition (reported as δD and δ18O values vs. Vienna Standard Mean Ocean Water) of precipitation in the Arun Valley, and whether this amount effect (Dansgaard, 1964) is strong enough to be recorded in potential paleo-ISM isotope proxies. Second, I investigated whether potential isotope records across the Arun catchment reflect ISM-strength-dependent precipitation δD values only, or whether the ISM isotope signal is superimposed by winter precipitation or glacial melt. Furthermore, I tested whether δD values of plant waxes in fluvial deposits reflect δD values of environmental waters in the respective catchments.
I showed that surface water δD values in the Arun Valley and precipitation δD values from south of the Himalaya both changed similarly during two consecutive years (2011 and 2012) with distinct ISM rainfall amounts (~20% less in 2012). In order to evaluate the effect of other water sources (Winter-Westerly precipitation, glacial melt) and evapotranspiration in the Arun Valley, I analysed satellite remote sensing data of rainfall distribution (TRMM 3B42V7), snow cover (MODIS MOD10C1), glacial coverage (GLIMS database, Global Land Ice Measurements from Space), and evapotranspiration (MODIS MOD16A2). In addition to the predominant ISM signal in the entire catchment, stable isotope analysis of surface waters indicated a considerable amount of glacial melt derived from high-altitude tributaries and the Tibetan Plateau. Remotely sensed snow cover data revealed that the upper portion of the Arun also receives considerable winter precipitation, but the effect of snow melt on the Arun Valley hydrology could not be evaluated, as it takes place in early summer, several months prior to our sampling campaigns. However, I infer that plant wax records and other potential stable isotope proxy archives below the snowline are well suited for qualitative, and potentially quantitative, reconstructions of past changes in ISM strength.
Background
Doping presents a potential health risk for young athletes. Prevention programs aim to deter doping by educating athletes about banned substances. However, such programs have their limitations in practice. This led Germany to introduce the National Doping Prevention Plan (NDPP), in hopes of ameliorating the situation among young elite athletes. Two studies examined 1) the degree to which the NDPP led to improved prevention efforts in elite sport schools, and 2) the extent to which newly developed prevention activities of the national anti-doping agency (NADA) based on the NDPP have improved knowledge among young athletes within elite sports schools.
Methods
The first objective was investigated in a longitudinal study (Study I: t0 = baseline, t1 = follow-up 4 years after NDPP introduction) with N = 22 teachers engaged in doping prevention in elite sports schools. The second objective was evaluated in a cross-sectional comparison study (Study II) in N = 213 elite sports school students (54.5 % male, 45.5 % female, age M = 16.7 ± 1.3 years). All students had received the improved NDPP measures in school; one student group had additionally received NADA anti-doping activities, while a control group had not. Descriptive statistics were calculated, followed by McNemar tests, Wilcoxon tests, and analysis of covariance (ANCOVA).
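For readers unfamiliar with the McNemar test mentioned above: it compares paired yes/no responses (here, the same teachers at t0 and t1) and depends only on the two discordant cell counts. A sketch with made-up counts, not the study's data, using the continuity-corrected chi-square form:

```python
import math

def mcnemar(b, c):
    """Continuity-corrected McNemar test for paired binary data.
    b, c are the discordant cell counts (yes->no and no->yes).
    Returns (chi2, p); p is the chi-square survival function, 1 df."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))  # P(X > chi2) for 1 df
    return chi2, p

# Hypothetical: 3 teachers switched from offering prevention to not,
# 12 switched the other way.
chi2, p = mcnemar(3, 12)
print(round(chi2, 3), round(p, 3))
```

With these illustrative counts the statistic is 64/15 ≈ 4.27, significant at the 5 % level; the actual study values are reported in the Results section.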
Results
Results indicate that 4 years after the introduction of the NDPP there have been only limited structural changes with regard to the frequency, type, and scope of doping prevention in elite sport schools. In contrast, in Study II, elite sport school students who received further NADA anti-doping activities performed better on an anti-doping knowledge test than students who did not take part (F(1, 207) = 33.99, p < 0.001), although this difference was small.
Conclusion
The integration of doping prevention in elite sport schools as part of the NDPP was only partially successful. The results of the evaluation indicate that the introduction of the NDPP has contributed more to a change in the content of doping prevention activities than to a structural transformation in anti-doping education in elite sport schools. Moreover, while students who received additional education in the form of the NDPP “booster sessions” had significantly more knowledge about doping than students who did not, this difference was small and may not translate into actual behavior.
Due to their multifunctionality, tablets offer tremendous advantages for research on handwriting dynamics or for interactive use of learning apps in schools. Further, the widespread use of tablet computers has had a great impact on handwriting in the current generation. But is it advisable to teach how to write, and to assess handwriting, in pre- and primary schoolchildren on tablets rather than on paper? Since handwriting is not automatized before the age of 10 years, children's handwriting movements require graphomotor and visual feedback as well as permanent control of movement execution during handwriting. Modifications in writing conditions, for instance the smoother writing surface of a tablet, might influence handwriting performance in general, and that of non-automatized beginning writers in particular. In order to investigate how handwriting performance is affected by a difference in friction of the writing surface, we recruited three groups with varying levels of handwriting automaticity: 25 preschoolers, 27 second graders, and 25 adults. We administered three tasks measuring graphomotor abilities, visuomotor abilities, and handwriting performance (only second graders and adults). We evaluated two aspects of handwriting performance: handwriting quality, with a visual score, and handwriting dynamics, using online handwriting measures [e.g., writing duration, writing velocity, strokes, and number of inversions in velocity (NIV)]. In particular, NIVs, which describe the number of velocity peaks during handwriting, are directly related to the level of handwriting automaticity. In general, we found differences between writing on paper and on the tablet. These differences were partly task-dependent. The comparison between tablet and paper revealed a faster writing velocity for all groups and all tasks on the tablet, which indicates that all participants, even the experienced writers, were influenced by the lower friction of the tablet surface.
Our results for the group-comparison show advancing levels in handwriting automaticity from preschoolers to second graders to adults, which confirms that our method depicts handwriting performance in groups with varying degrees of handwriting automaticity. We conclude that the smoother tablet surface requires additional control of handwriting movements and therefore might present an additional challenge for learners of handwriting.
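The NIV measure discussed above can be computed directly from a recorded pen trajectory: differentiate position to get speed, then count interior local maxima. A minimal sketch assuming a fixed sampling rate and noise-free coordinates (real tablet data would typically be smoothed first):

```python
import math

def niv(xs, ys):
    """Number of inversions in velocity: count strict local maxima
    of the pen speed, computed here by finite differences."""
    pts = list(zip(xs, ys))
    speed = [math.hypot(x1 - x0, y1 - y0)
             for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return sum(1 for i in range(1, len(speed) - 1)
               if speed[i - 1] < speed[i] > speed[i + 1])

# Synthetic stroke: x follows one sine arch, so the speed profile |cos|
# has exactly one interior peak -> one velocity inversion.
t = [i / 199 for i in range(200)]
xs = [math.sin(2 * math.pi * ti) for ti in t]
ys = [0.0] * len(t)
print(niv(xs, ys))  # 1
```

An automatized writer produces few, smooth velocity peaks per stroke, whereas a beginner's hesitant movements inflate this count, which is why NIV tracks automaticity.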
Loss to follow-up in a randomized controlled trial study for pediatric weight management (EPOC)
(2016)
Background
Attrition is a serious problem in intervention studies. The current study analyzed the attrition rate during follow-up in a randomized controlled pediatric weight management program (EPOC study) within a tertiary care setting.
Methods
Five hundred twenty-three parents and their 7–13-year-old children with obesity participated in the randomized controlled intervention trial. Follow-up data were assessed 6 and 12 months after the end of treatment. Attrition was defined as providing no objective weight data. Demographic and psychological baseline characteristics were used to predict attrition at 6- and 12-month follow-up using multivariate logistic regression analyses.
Results
Objective weight data were available for 49.6 (67.0) % of the children 6 (12) months after the end of treatment. Completers and non-completers at the 6- and 12-month follow-up differed in the amount of weight loss during their inpatient stay, their initial BMI-SDS, educational level of the parents, and child’s quality of life and well-being. Additionally, completers supported their child more than non-completers, and at the 12-month follow-up, families with a more structured eating environment were less likely to drop out. On a multivariate level, only educational background and structure of the eating environment remained significant.
Conclusions
The minor differences between the completers and the non-completers suggest that our retention strategies were successful. Further research should focus on prevention of attrition in families with a lower educational background.
The strong adhesion of sub-micron sized particles to surfaces is a nuisance, both for removing contaminating colloids from surfaces and for conscious manipulation of particles to create and test novel micro/nano-scale assemblies. The obvious idea of using detergents to ease these processes suffers from a lack of control: the action of any conventional surface-modifying agent is immediate and global. With photosensitive azobenzene containing surfactants we overcome these limitations. Such photo-soaps contain optical switches (azobenzene molecules), which upon illumination with light of appropriate wavelength undergo reversible trans-cis photo-isomerization resulting in a subsequent change of the physico-chemical molecular properties. In this work we show that when a spatial gradient in the composition of trans- and cis- isomers is created near a solid-liquid interface, a substantial hydrodynamic flow can be initiated, the spatial extent of which can be set, e.g., by the shape of a laser spot. We propose the concept of light induced diffusioosmosis driving the flow, which can remove, gather or pattern a particle assembly at a solid-liquid interface. In other words, in addition to providing a soap we implement selectivity: particles are mobilized and moved at the time of illumination, and only across the illuminated area.
Referential Choice
(2016)
We report a study of referential choice in discourse production, understood as the choice between various types of referential devices, such as pronouns and full noun phrases. Our goal is to predict referential choice, and to explore to what extent such prediction is possible. Our approach to referential choice includes a cognitively informed theoretical component, corpus analysis, machine learning methods and experimentation with human participants. Machine learning algorithms make use of 25 factors, including referent’s properties (such as animacy and protagonism), the distance between a referential expression and its antecedent, the antecedent’s syntactic role, and so on. Having found the predictions of our algorithm to coincide with the original almost 90% of the time, we hypothesized that fully accurate prediction is not possible because, in many situations, more than one referential option is available. This hypothesis was supported by an experimental study, in which participants answered questions about either the original text in the corpus, or about a text modified in accordance with the algorithm’s prediction. Proportions of correct answers to these questions, as well as participants’ rating of the questions’ difficulty, suggested that divergences between the algorithm’s prediction and the original referential device in the corpus occur overwhelmingly in situations where the referential choice is not categorical.
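A few of the factors named above (antecedent distance, protagonism, antecedent's syntactic role) can be folded into a toy rule-of-thumb predictor. This is purely illustrative: the study's actual model is a machine-learned classifier over 25 factors, and the thresholds and weights below are invented:

```python
def choose_referential_device(distance, is_protagonist, antecedent_is_subject):
    """Toy predictor for referential choice (pronoun vs full noun phrase).
    distance: clauses between the mention and its antecedent.
    Weights are hypothetical, for illustration only."""
    score = 0.0
    score += 2.0 if distance <= 1 else -1.0   # recent antecedents favour pronouns
    score += 1.0 if is_protagonist else 0.0   # protagonists stay pronominal longer
    score += 0.5 if antecedent_is_subject else 0.0
    return "pronoun" if score >= 2.0 else "full NP"

print(choose_referential_device(1, True, True))    # pronoun
print(choose_referential_device(5, False, False))  # full NP
```

The article's central point survives even in this caricature: for many feature combinations both outputs are defensible, so no deterministic rule (or classifier) can be fully accurate.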
We study the adsorption–desorption transition of polyelectrolyte chains onto planar, cylindrical and spherical surfaces with arbitrarily high surface charge densities by massive Monte Carlo computer simulations. We examine in detail how the well-known scaling relations for the threshold transition—demarcating the adsorbed and desorbed domains of a polyelectrolyte near weakly charged surfaces—are altered for highly charged interfaces. By virtue of high surface potentials and large surface charge densities, the Debye–Hückel approximation is often not feasible and the nonlinear Poisson–Boltzmann approach should be implemented. At low salt conditions, for instance, the electrostatic potential from the nonlinear Poisson–Boltzmann equation is smaller than the Debye–Hückel result, such that the required critical surface charge density for polyelectrolyte adsorption σc increases. The nonlinear relation between the surface charge density and electrostatic potential leads to a sharply increasing critical surface charge density with growing ionic strength, imposing an additional limit to the critical salt concentration above which no polyelectrolyte adsorption occurs at all. We contrast our simulation findings with the known scaling results for weak critical polyelectrolyte adsorption onto oppositely charged surfaces for the three standard geometries. Finally, we discuss some applications of our results for physical–chemical and biophysical systems.
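The gap between the Debye–Hückel and nonlinear Poisson–Boltzmann potentials invoked above can be made concrete for the planar case, where the nonlinear equation has the closed-form Gouy–Chapman solution for a 1:1 salt. A sketch in reduced units (potential in kT/e, distance in Debye lengths; the high surface potential is an arbitrary illustrative value):

```python
import math

def psi_debye_huckel(psi0, kx):
    """Linearized (Debye-Hueckel) decay of a planar surface potential."""
    return psi0 * math.exp(-kx)

def psi_gouy_chapman(psi0, kx):
    """Nonlinear Poisson-Boltzmann (Gouy-Chapman) solution for a planar
    surface in 1:1 salt; psi in units of kT/e, kx = kappa * x."""
    gamma = math.tanh(psi0 / 4)
    return 4 * math.atanh(gamma * math.exp(-kx))

# High surface potential (8 kT/e), one Debye length from the wall:
# the nonlinear potential lies far below the linearized estimate,
# consistent with the larger critical charge density sigma_c reported.
print(round(psi_debye_huckel(8.0, 1.0), 3))  # 2.943
print(round(psi_gouy_chapman(8.0, 1.0), 3))  # 1.483
```

For small psi0 the two expressions coincide (tanh and atanh linearize), which is exactly the weak-charging regime where the classical scaling relations hold.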
Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery. However, a photorealistic rendering does not automatically lead to high image quality, with respect to an effective information transfer, which requires important or prioritized information to be interactively highlighted in a context-dependent manner.
Approaches in non-photorealistic renderings particularly consider a user's task and camera perspective when attempting optimal expression, recognition, and communication of important or prioritized information. However, the design and implementation of non-photorealistic rendering techniques for 3D geospatial data pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. Hence, a promising technical foundation is established by the programmable and parallel computing architecture of graphics processing units.
This thesis proposes non-photorealistic rendering techniques that enable both the computation and selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using shader technologies of graphics processing units for real-time image synthesis. The techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics—unlike photorealistic rendering—to synthesize illustrative renditions of geospatial feature type entities such as water surfaces, buildings, and infrastructure networks. In addition, this thesis contributes a generic system that enables the integration of different graphic styles—photorealistic and non-photorealistic—and provides their seamless transition according to user tasks, camera view, and image resolution.
Evaluations of the proposed techniques have demonstrated their significance to the field of geospatial information visualization including topics such as spatial perception, cognition, and mapping. In addition, the applications in illustrative and focus+context visualization have reflected their potential impact on optimizing the information transfer regarding factors such as cognitive load, integration of non-realistic information, visualization of uncertainty, and visualization on small displays.
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for the understanding of the observed system’s intrinsic evolution and potential external influences. A precise detection of multiple changes is therefore of great importance for various research disciplines, such as environmental sciences, bioinformatics and economics. The primary purpose of the detection approach introduced in this thesis is the investigation of transitions underlying direct or indirect climate observations. In order to develop a diagnostic approach capable to capture such a variety of natural processes, the generic statistical features in terms of central tendency and dispersion are employed in the light of Bayesian inversion. In contrast to established Bayesian approaches to multiple changes, the generic approach proposed in this thesis is not formulated in the framework of specialized partition models of high dimensionality requiring prior specification, but as a robust kernel-based approach of low dimensionality employing least informative prior distributions.
First of all, a local Bayesian inversion approach is developed to robustly infer on the location and the generic patterns of a single transition. The analysis of synthetic time series comprising changes of different observational evidence, data loss and outliers validates the performance, consistency and sensitivity of the inference algorithm. To systematically investigate time series for multiple changes, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the weighted kernel inference results are composed into a proxy probability to a posterior distribution of multiple transitions. The detection approach is applied to environmental time series from the Nile river in Aswan and the weather station Tuscaloosa, Alabama comprising documented changes. The method’s performance confirms the approach as a powerful diagnostic tool to decipher multiple changes underlying direct climate observations.
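The core of the local inference step, a posterior over the location of a single transition, can be sketched in a few lines. This is only a schematic Gaussian mean-change model with a flat prior, made-up data, and an assumed noise scale, not the thesis's generic kernel-based algorithm:

```python
import math

def change_point_posterior(data, sigma=0.3):
    """Posterior over the index tau where the mean shifts, assuming a
    flat prior over tau and known noise scale sigma (illustrative)."""
    logp = []
    for tau in range(2, len(data) - 1):
        left, right = data[:tau], data[tau:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        sse = (sum((x - m1) ** 2 for x in left)
               + sum((x - m2) ** 2 for x in right))
        logp.append((-sse / (2 * sigma ** 2), tau))
    z = max(lp for lp, _ in logp)                      # for numerical stability
    weights = [(math.exp(lp - z), tau) for lp, tau in logp]
    total = sum(w for w, _ in weights)
    return [(w / total, tau) for w, tau in weights]

# Synthetic series: mean 0 for 40 samples, then mean 1, with small
# deterministic "noise"; posterior mass concentrates near tau = 40.
data = [0.2 * math.sin(i) + (1.0 if i >= 40 else 0.0) for i in range(80)]
post = change_point_posterior(data)
tau_hat = max(post)[1]
print(tau_hat)
```

The thesis's approach generalizes this idea to dispersion changes, robust likelihoods, and multiple transitions via kernel-weighted composition of such local posteriors.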
Finally, the kernel-based Bayesian inference approach is used to investigate a set of complex terrigenous dust records interpreted as climate indicators of the African region during the Plio-Pleistocene. A detailed inference unravels multiple transitions underlying the indirect climate observations, which are interpreted as conjoint changes. The identified conjoint changes coincide with established global climate events. In particular, the two-step transition associated with the establishment of the modern Walker Circulation contributes to the current discussion about the influence of paleoclimate changes on the environmental conditions in tropical and subtropical Africa around two million years ago.
The Elbe and its catchment are affected by climate change. Integrated environmental model systems can be used to analyse the causal chain from projected climate changes to the water balance and the resulting nutrient inputs and loads for large catchments such as the Elbe's. Case studies carried out ad hoc with these model systems represent the current state of model development and model uncertainty and are therefore static.
This thesis describes the first step towards dynamizing climate impact analyses in the Elbe basin. On the one hand, it comprises a plausibility check of impact calculations performed with scenarios from the statistical scenario generator STARS, by comparing them with the impacts of newer climate scenarios from the ISI-MIP project, which reflect the latest state of climate modelling. For this purpose an integrated model system with a "frozen" development state is used; the climate impact models remain unchanged. On the other hand, one component of the integrated model system, the ecohydrological model SWIM, is further developed into a "live" version, which is validated and improved by selective testing against long-term experimental series from a lysimeter site and against current discharge records.
The following research questions are addressed: (i) What effects do different climate scenarios have on the water balance of the Elbe basin, and is a reassessment of the impact of climate change on the water balance necessary? (ii) What are the impacts of climate change on nutrient inputs and loads in the Elbe basin, and on the effectiveness of measures to reduce nutrient inputs? (iii) Using the available daily weather data (even a very small number of stations), is a valid assessment of the current ecohydrological situation of the strongly heterogeneous Elbe catchment possible?
The current scenarios confirm the direction, but not the magnitude, of the climate impacts: for the STARS scenario, the mean annual total discharge and the monthly discharges at the gauges decline by about 30 % by mid-century, whereas the model studies based on the ISI-MIP scenario show declines of only about 10 %. The main causes of this divergence are the differences in the precipitation projections and in the seasonal distribution of warming. In the STARS scenario, precipitation decreases for methodological reasons and winters warm more strongly than summers; in the ISI-MIP scenario, precipitation remains nearly stable and summer and winter warming differ only slightly.
In general, nutrient inputs and loads decrease less than proportionally with discharge in both scenarios, with loads declining more strongly than inputs. The concrete effects of the discharge changes are small, in the single-digit percentage range, as are the differences between the scenarios. The effect of two selected measures to reduce nutrient inputs and loads likewise differs only slightly under different discharge conditions, represented by climate scenarios of varying wetness.
Answering the first two research questions shows that updating climate scenarios within an otherwise "frozen" ensemble of ecohydrological data and models is an important option for checking the plausibility of climate impact analyses. It provides the methodological basis for the conclusion that a reassessment of climate impacts is necessary for water quantity, but not for nutrient inputs and loads.
The validation studies carried out with SWIM-live to answer the third research question reveal discrepancies at the lysimeter site and in the discharges from the Saale and Spree sub-catchments. These can partly be explained by the necessary interpolation distance of the weather data and by the influence of water management measures. Overall, the validation results show that even the pilot version of SWIM-live can be used for an ecohydrological assessment of the water balance in the Elbe catchment. SWIM-live allows simulated data to be viewed and assessed immediately, directly exposing modelling uncertainties so that they can be reduced. On the one hand, densifying the meteorological input data by using about 700 instead of 19 climate and precipitation stations improved the results; on the other hand, SWIM-live was used exemplarily for a cycle of selective model improvement and area-wide verification of the simulation results.
Each of the individual studies contributes to dynamizing climate impact analyses in the Elbe basin. The immediate occasion was the flawed methodological basis of STARS, but the rationale for dynamization is not tied to this specific occasion: it rests on the fundamental insight that ad hoc scenario analyses always involve pragmatic simplifications that must be continuously re-examined.
Ein Blick zurück
(2016)
1 Introduction, 2 The emergence of organized gymnastics (Turnen) in Germany, 3 From gymnastics to sport, 4 Friedrich Ludwig Jahn and the development of the German gymnasts' language, 5 Sports language in the era of National Socialism, 6 Didactic suggestions, 7 Materials and discussion prompts, 8 References
Mit Korbmachern zum Sieg
(2016)
1 Introduction and objectives, 2 Intended outcomes of the development of lexical competence, interlinked with the development of reading and text comprehension competence, 3 Working on vocabulary and text comprehension: text analysis as the opening up of a field of possibilities, 4 Optimally setting the student's activity in motion: posing action-regulating tasks, 5 References
Reviewed work:
Anne-Katrin Henkel / Thomas Rahe (Hrsg.): Publizistik in jüdischen Displaced-Persons-Camps. Charakteristika, Medien und bibliothekarische Überlieferung, Zeitschrift für Bibliothekswesen und Bibliographie. Sonderbände, Bd. 112, Frankfurt am Main: Vittorio Klostermann Verlag 2014. 194 S.
Reviewed work:
Naphtali Herz Wessely: Worte des Friedens und der Wahrheit. Dokumente einer Kontroverse über Erziehung in der europäischen Spätaufklärung. Herausgegeben, eingeleitet und kommentiert von Ingrid Lohmann, mitherausgegeben von Rainer Wenzel / Uta Lohmann. Aus dem Hebräischen übersetzt und mit Anmerkungen versehen von Rainer Wenzel, Jüdische Bildungsgeschichte in Deutschland, Bd. 8, Münster: Waxmann 2014. 800 S.
Duldung und Diskriminierung
(2016)
The words "entjuden" and "Entjudung" ("de-Judaization") are the linguistic expression of mostly anti-Jewish attitudes and deeds in German history. This contribution traces the development of the term through its contexts of use. In the context of assimilation at the beginning of the 19th century, the term meant that one had to divest oneself of that Jewish "particularity" whose abandonment was generally accepted as a postulate. Within inner-Jewish discussion at the beginning of the 20th century, "Entjudung" became a diagnostic term for the loss of identity. As a political slogan of the National Socialists, it in turn became a synonym for the disenfranchisement and annihilation of Jewish people. Protestant theologians used the term in the debate over the renewal of Christianity, which was to be achieved by removing Jewish influences. Already formulated at the end of the 18th century, this demand found its programmatic implementation in the founding, in 1939, of the Institute for the Study and Eradication of Jewish Influence on German Church Life.
Archaeology can be understood as a tool used in the process of identity formation, contributing to a sense of belonging and unity within a diverse set of communities. Research was conducted with the intention of analyzing the wide range of perceptions regarding archaeological sites in the mixed city of Lod, Israel. I explored the impact of urban cultural heritage on shaping the identity of local Jewish and Arab children, who were chosen as the youngest active members of society living in the city, and who participated in the 2013 archaeological excavation season at the Khan al-Hilu. Israel is an ideal location for such research, due to its nature as simultaneously being the focus of extensive archaeological excavations as well as being the setting of an intractable conflict. Ancestral attachment to the land serves as a foundation for the collective identity of both Jews and Arabs. Yet each community and individual may relate differently to the surrounding archaeological sites, which is further shaped by their sense of societal hierarchy and cultural heritage.
After the mass immigration to Israel from 1948 to 1950, about 2000 Jews remained in Yemen. These Jews lived in small communities and continued to maintain their religious environment as it was. In the years that followed, many of them nevertheless moved from Yemen to Israel with the assistance of the Jewish Agency and the Joint Distribution Committee (JDC). The community's small size, and the fact that it was dispersed throughout predominantly Muslim areas, created a certain closeness between the two groups. About ten percent of the Jews chose to convert to Islam, many of them in groups. In about twenty cases the husbands chose to convert to Islam while their wives emigrated to preserve their Judaism. Some of the converts refused to grant their wives a divorce because, according to Muslim law, conversion alone suffices to sever the marital relationship. Such women are called ʿAgunot: women still bound in marriage to a husband with whom they no longer live, but whom the husband has not formally released from the marital union. The article follows the efforts undertaken to release the ʿAgunot, and shows that Jewish and Muslim scholars were able to find solutions to the ʿAgunot problem and, at times, managed to bridge the gap between the two religions.
In this article we will present a few examples of the theme of “calling for help and redemption” in Arabic and Hebrew poetry, with particular focus on eleventh and twelfth century Muslim Spain. More particularly, we will offer a glimpse into the life and oeuvre of two medieval poets (one Muslim, one Jewish); both were active in Muslim Spain in the same period and shared a similar fate of exile and wandering: on the one hand, the Sicilian Arabic poet Ibn Ḥamdīs (c. 1056–c. 1133) and on the other hand, the Spanish Jewish poet Moses ibn Ezra (1055–1138). We will take into account the impact of exile and wandering on the profusion of the theme of “calling for help and redemption” as well as the related theme of “yearning for one’s homeland” through an analysis and comparison of poetic fragments by the two aforementioned poets as well as additional Andalusian Jewish (Judah ha-Levi) and Muslim (Ibn Khafāja, al-Rundī and Ibn al-Abbār) poets.
The concept of three journeys as a way to denote spiritual development was introduced
by Dhu al-Nun, one of the founding fathers of Islamic mysticism. The use of this
concept was later refined by combining it with the Sufi technique of adding different
prepositions to a certain term, in order to differentiate between spiritual stages. By
using the words journey (Safar) and God (Allah) and inserting a preposition before the
word God, Sufi writers could map the different roads to God or the stations (Maqamat) on this road. Ibn al-'Arabi, in the beginning of the thirteenth century, speaks of three
different ways: from God, toward God, and in God. Tanchum ha-Yerushalmi, the Judeo-Arabic
biblical commentator from the end of that century, speaks of the three journeys
as three stations of one continuous way. A nearly identical description can be found in
the writings of the Muslim scholar Ibn Qayyim al-Jawziyya a generation later. Later,
in the fourteenth century, in the writings of the Sufi writer al-Qashani, the three journeys
become four, although the scheme of three prepositions is preserved. Near the end of
the fourteenth century, in the writings of R. David ha-Nagid, we find only two journeys:
to God and in God. All this shows that Judeo-Arabic literature can help us map
the historical development of Sufi ideas with greater precision.
This article offers a comparative overview of Jewish responsa literature and Muslim fatwa literature and identifies the questions that arise for further study. Both religions possess a normative frame of reference (halacha and fiqh) that extends to all areas of life. The classical position of both religions regards the observance of these norms as the most authentic way of drawing closer to God’s will. According to traditional understanding, religious people therefore require ongoing supervision by trustworthy scholars whom they can consult when needed. The large number of questions posed to scholars – via the internet, correspondence, or telephone – shows that the demand for expert guidance on religious norms remains unbroken to this day. The present article presents, in comparative perspective, the main features of this process of religious legal consultation in Judaism and Islam. Only the most significant aspects can be noted here and examined for commonalities and differences. The method employed is historical analysis, in which fatwa and responsa literature is presented in its classical form and in outline, as it appeared from the seventh to the nineteenth century.
This contribution examines several central reports and motifs from the early sources of Islam concerning the military conflicts of the Prophet Muhammad with the
Jews of Medina. The basis of the study is the biography of the Prophet by the scholar Muḥammad ibn Isḥāq (d. 150 AH), which remains authoritative to this day. Among other things, the contribution shows that within the genre of Sīra literature, to which Ibn Isḥāq’s work belongs, as well as in the early traditions of Islamic jurisprudence, in Qur’anic exegesis, and in the Qur’anic text itself, there are numerous indications of alternative accounts of these conflicts. As a result of the triumph of Ibn Isḥāq’s work, these accounts increasingly faded from view during the first centuries of Islam, yet they are of considerable interest for contemporary discourses on the relationship of Islam to non-Muslims. The aim of the study is to work out the normative significance of the different scenarios for fundamental questions, especially concerning the relationship between Muslims and Jews. A thematic focus of the contribution lies on the different approaches to the famous report of the destruction of the Jewish tribe of the Banū Qurayza following the Battle of the Trench.
The reception of the prophet Jonah in the Qur’an essentially presupposes his biblical narrative and interprets it above all where it seeks to correct his image as a prophet. The focus lies on the penance, repentance, and redemption of Yūnus and his people. Post-Qur’anic tales of the prophets (qisas al-anbiyā’) in turn fill the narrative gaps of the ‘Jonah suras’ with explanatory material, drawing also on the extensive stock of biblical and rabbinic traditions, which they creatively appropriate within the outer framework of the Qur’anic Yūnus tradition. The result is narrative compositions that can be read as a dialogical engagement with religious themes of shared relevance. The article specifically reflects on the development and relationship of the receptions of Jonah in the Qur’an and in the tales of the prophets by Ibn-Muhammad at-Ta‛labī and Muhammad ibn ‛Abd Allāh al-Kisā’i, in constant dialogue with the Jewish Jonah tradition.
The figure of Moses constitutes an important link between Jewish and Muslim traditions.
Muslims consider him to be one of the five elite prophets of God, and his story therefore
has a prominent place in the Qurʼan. While there are minor differences, the story
of Moses found in the Qurʼan confirms the account of the Torah; the life of Moses thus
is considered a model for all Muslims to follow. Though elements of his story are found throughout the Qurʼan, it is in chapter 7 that it is told in greatest detail. Chapter 7, the
focus of this article, recounts many events in Mosesʼ life that are important for both Muslims and Jews, and it reveals his great importance and godliness. It also demonstrates how truly similar Islamʼs Moses and Judaismʼs Moses are. Therefore,
through an examination of the various elements of the story of Moses as found in
the Qurʼan, this article will show how by following him, Jews and Muslims can come
together in friendship, harmony and peace. Moses is the common ground on which
Jews and Muslims can come together in order to open up a dialogue and further their
shared commitment to the worship of the One God.
PaRDeS, the journal of the Vereinigung für Jüdische Studien e.V., aims to document the fruitful and multifaceted culture of Judaism, as well as its points of contact with its environment, across diverse areas. The journal also serves as a forum for positioning the disciplines of Jewish Studies (Jüdische Studien and Judaistik) within scholarly discourse and for discussing their historical and social responsibility.
German-language instruction that takes students’ everyday and media culture seriously cannot leave texts about sport unconsidered. Sport, in all its facets, has become far too much a part of many students’ lives. The question is no longer whether German classes should respond to this; the question is rather how they should do so and which sport texts they can use.
Although the search for meaningful connections between sport and German instruction has long been pursued intensively, the multilayered cultural phenomenon of “sport” keeps revealing new and interesting aspects that are worth treating from the perspective of subject didactics.
The ten contributions in this volume are intended as teaching suggestions for competence-oriented German instruction. They approach sport from literary, linguistic, and media perspectives. The theoretical and conceptual aspects of each topic are treated only as far as is necessary for understanding. At the center of many contributions are teaching scenarios with annotated texts and tasks that can be used in lesson preparation or in the classroom itself.
For all organisms, maintaining their energy balance under fluctuating environmental conditions is essential for survival. In eukaryotes, evolutionarily conserved protein kinases, known in plants as SNF1-RELATED PROTEIN KINASE1 (SnRK1), control adaptation to environmental stress signals and to the limitation of nutrients and cellular energy. Activation of SnRK1 triggers extensive transcriptional reprogramming, which generally leads to the repression of energy-consuming processes such as cell division and protein biosynthesis and to the induction of energy-generating, catabolic metabolic pathways. How different signals lead to a general, and in part tissue- and stress-specific, SnRK1-mediated response has not yet been sufficiently clarified, in part because only a few components of SnRK1 signal transduction have been identified so far. In this work, a protein–protein interaction network was established around the Arabidopsis SnRK1α subunits AKIN10/AKIN11. Members of the plant-specific DUF581 protein family were thereby first identified as interaction partners of the SnRK1α subunits. These proteins are able to interact with AKIN10/AKIN11 via their conserved DUF581 domain, which contains a zinc-finger motif. In planta co-expression analyses showed that DUF581 proteins promote a shift of the nucleo-cytoplasmic localization of AKIN10 toward an almost exclusively nuclear localization, as well as the co-localization of AKIN10 and DUF581 proteins in the nucleus. Bimolecular fluorescence complementation analyses confirmed the nucleus-specific interaction of DUF581 proteins with SnRK1α subunits in planta. Outside the DUF581 domain, the proteins show little sequence similarity to one another.
Owing to their ability to interact with SnRK1, the absence of SnRK1 phosphorylation motifs, and their highly variable tissue-, development-, and stimulus-specific expression, a function as adaptors was postulated for DUF581 proteins, recruiting specific substrate proteins into the SnRK1 complex under particular physiological conditions. In this way, DUF581 proteins could modify the interaction of SnRK1 with its target proteins and enable fine-tuning of SnRK1 signaling. Further interaction studies identified DUF581-interacting proteins, including transcription factors, protein kinases, and regulatory proteins, some of which likewise showed interactions with SnRK1α subunits. In this work, one of these proteins, suspected of participating in SnRK1 signaling as a transcriptional regulator, was characterized in more detail. STKR1 (STOREKEEPER RELATED 1), a specific interaction partner of DUF581-18, belongs to a plant-specific leucine-zipper transcription factor family and interacts with SnRK1 in yeast and in planta. The nucleus-specific interaction of STKR1 and AKIN10 in plants supports the hypothesis of cooperative regulation of target genes. Furthermore, the presence of AKIN10 stabilized the protein levels of STKR1, which is probably degraded via the 26S proteasome. Since STKR1 is a phosphoprotein with an SnRK1 phosphorylation motif, it very likely represents an SnRK1 substrate. However, SnRK1-mediated phosphorylation of STKR1 could not be demonstrated in this work. The loss of one phosphorylation site affected the homo- and heterodimerization capacity of STKR1 in yeast interaction studies, which could enable increased specificity of target gene regulation.
In addition, Arabidopsis plants with altered STKR1 expression were characterized phenotypically, physiologically, and at the molecular level. While loss of STKR1 expression yielded plants that hardly differed from wild-type plants, constitutive overexpression of STKR1 caused strongly reduced plant growth as well as developmental delays in flowering induction and senescence, similar to those described for SnRK1α overexpression. Plants of these lines were unable to accumulate anthocyanins and contained lower levels of chlorophyll and carotenoids. In addition to increased starch turnover at night, the plants were characterized by lower sucrose contents compared with the wild type. A transcriptome analysis revealed that in the STKR1-overexpressing plants, a larger number of genes was differentially regulated relative to the wild type under energy-deprivation conditions, induced by an extended dark period, than during the light phase. This argues for an involvement of STKR1 in processes that are active during the extended dark period, such as SnRK1 signal transduction, which is activated under energetic stress. STKR1 overexpression also led to a stronger transcriptional induction of defense-associated genes as well as NAC and WRKY transcription factors after the extended dark period. The transcriptome data pointed to a stimulus-independent induction of defense processes and could provide an explanation for the phenotypic and physiological abnormalities of the STKR1 overexpressors.
From September 18 to 20, 2014, scholars working in cultural and film studies gathered at the University of Potsdam for a symposium dedicated to Andrej Tarkovskij, the first international one. The 25 participants came from nine countries. Since quite a few of them also have what is nowadays called a “migration biography,” the multiplicity of perspectives arising from their different origins was further amplified, though the mode of scholarship provides a clearly relativizing corrective to it. The present volume essentially contains the contributions presented there, together with those of specialists who were unable to come to Potsdam in person.