The dynamics of external contributions to the geomagnetic field is investigated by applying time-frequency methods to magnetic observatory data. Fractal models and multiscale analysis enable obtaining maximum quantitative information related to the short-term dynamics of the geomagnetic field activity. The stochastic properties of the horizontal component of the transient external field are determined by searching for scaling laws in the power spectra. The spectrum fits a power law with a scaling exponent β, a typical characteristic of self-affine time-series. Local variations in the power-law exponent are investigated by applying wavelet analysis to the same time-series. These analyses highlight the self-affine properties of geomagnetic perturbations and their persistence. Moreover, they show that the main phases of sudden storm disturbances are uniquely characterized by a scaling exponent varying between 1 and 3, possibly related to the energy contained in the external field. These new findings suggest the existence of a long-range dependence, the scaling exponent being an efficient indicator of geomagnetic activity and singularity detection. These results show that by using magnetogram regularity to reflect the magnetosphere activity, a theoretical analysis of the external geomagnetic field based on local power-law exponents is possible.
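The scaling analysis described above rests on fitting a power law to the spectrum of the observatory time series. The following is a minimal sketch, not the study's actual pipeline, of how a scaling exponent β can be estimated from a periodogram via a log-log linear fit; the function name, sampling step, and frequency band are illustrative assumptions.

```python
# Minimal sketch (not the study's actual pipeline): estimate a power-law scaling
# exponent beta from the power spectrum of a 1-D time series, assuming
# P(f) ~ f^(-beta) over the selected frequency range.
import numpy as np

def scaling_exponent(x, dt=60.0, fmin=None, fmax=None):
    """Fit log10 P(f) = -beta * log10 f + c and return beta."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                          # remove the mean (zero-frequency term)
    freqs = np.fft.rfftfreq(x.size, d=dt)     # frequencies for sampling step dt (seconds)
    power = np.abs(np.fft.rfft(x)) ** 2       # periodogram estimate of the spectrum
    mask = freqs > 0
    if fmin is not None:
        mask &= freqs >= fmin
    if fmax is not None:
        mask &= freqs <= fmax
    slope, intercept = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)
    return -slope                             # beta is the negative log-log slope

# Example with synthetic data: white noise gives beta ~ 0, a random walk ~ 2.
rng = np.random.default_rng(0)
print(scaling_exponent(np.cumsum(rng.standard_normal(4096))))
```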
This diploma thesis examines the effects of trade in manufactured goods on employment in India's manufacturing sector. To this end, the implications of the trade-theoretic models of neoclassical theory, New Trade Theory, and New New Trade Theory are derived and discussed. An empirical analysis follows, modelled on Jenkins and Sen (2006). First, the factor content and the structure of trade are analysed. To quantify the employment effects, employment growth is decomposed. The study also examines to what extent trade-induced intensification of competition has led to a more efficient use of labour. The results show that the trade-induced employment effects in the observation period were positive but comparatively small. At the same time, the development of the trade structure has an increasingly negative effect on potential employment growth, so that, on the basis of the findings obtained here, future trade flows cannot be expected to make a significant contribution to the creation of new employment opportunities.
As Albania accelerates its preparations towards European Union candidate status, numerous areas of public policy and practice are undergoing intensive development. Regional development policy is a very new area of public policy in Albania and needs research and development. This study focuses on the process of sustainable development in Albania by analyzing and comparing the regional development of the regions of Tirana, Shkodra and Kukes. The methodology consists of a literature/desk review, an analytical and comparative approach, qualitative interviews, quantitative data collection, and analysis. The research is organized in five chapters. The first chapter provides an overview of the study framework. The second outlines the theoretical and scientific framework for sustainable and regional development in relation to geography. The third chapter presents the picture of regional development in Albania, analyzing the disparities and regional development in the light of EU requirements and the NUTS division. Chapter 4 continues by analyzing and comparing the regional development of the regions: Tirana – driver for change, Shkodra – the North in development, and Kukes – the “shrinking” region. Chapter 5 presents the conclusions and recommendations. This research comes to the conclusion that if growth in Albania is to be increased and sustained, a regional development policy needs to be established.
Wohin nach der 10. Klasse?
(2013)
In the life course, occupational choice is a central developmental task. Because the life course is institutionalised in modern societies, this process is also accompanied institutionally. In cooperation with the Federal Employment Agency (Bundesagentur für Arbeit), schools organise career orientation programmes that are intended, among other things, to support the development of career choice readiness. Alongside parents, school and careers counselling thus become central intermediaries (gatekeepers) in the transition from school to vocational training. When analysing the career choice process, it is important to consider the interaction between “environment and person”: how do adolescents manage to cope with this developmental task by drawing on personal and social resources and within societal structures? This question is not fundamentally new, but it gains considerable importance under current social and economic transition conditions. In recent years, schools have increasingly begun to organise and develop their career orientation systematically. The abundance of newly developed concepts and programmes to improve career orientation, however, is out of proportion to the state of empirical research. The central aim of this study is therefore to extend the empirical evidence on the effects of school-based career orientation programmes. The study focuses on the question of how the school-based career orientation process can be optimised for pupils of all educational tracks so as to improve the transition into further education and vocational training systems. Of particular interest is whether and to what extent school-based programmes influence the development of pupils' career choice readiness, and which programmes must be judged as particularly supportive or less useful. These questions were addressed on the basis of written surveys of secondary school pupils in the federal state of Brandenburg conducted between 2008 and 2010. Cross-sectional and panel analyses are used to draw conclusions about the perception and influence of the various school-based programmes, both for individual grade levels and in comparison between grade levels.
The Israeli author and journalist Noah Klieger has so far received little attention in German-language research on Holocaust literature, even though this field offers theoretical concepts and interpretations of numerous authors of the genre (including Ruth Klüger and Primo Levi). This thesis focuses on his autobiography "Zwölf Brötchen zum Frühstück", published in 2010. The textual analysis pursues the question of what significance writing has for Klieger and to what extent his autobiography, conceived as a piece of reportage and reflecting the journalist's strongly fact-based and documentary style, guides the recipient's interpretation and creates authenticity. Starting from this question, interviews with Noah Klieger conducted for this thesis (oral history) are included, and the testimony "Ich habe den Todesengel überlebt" by Eva Mozes-Kor, who consistently preserves the form of the testimonial account with all its characteristics, is drawn on for comparison. The focus of the thesis is the analysis of Klieger's autobiography, addressing the genre of reportage, relevant stylistic devices, central concepts and publication contexts, as well as Maurice Halbwachs's theory of memory. Finally, the theme of forgiveness in Klieger and Mozes-Kor is discussed. The findings introduce the Israeli Holocaust survivor Noah Klieger as an author and make clear that the modes of representation chosen within the genre of Holocaust literature evoke different forms of authenticity.
Background: The use of psychoactive substances to neuroenhance cognitive performance is prevalent. Neuroenhancement (NE) in everyday life and doping in sport might rest on similar attitudinal representations, and both behaviors can be theoretically modeled by comparable means-to-end relations (substance-performance). A behavioral (not substance-based) definition of NE is proposed, with assumed functionality as its core component. It is empirically tested whether different NE variants (lifestyle drug, prescription drug, and illicit substance) can be regressed on school stressors.
Findings: Participants were 519 students (25.8 +/- 8.4 years old, 73.1% female). Logistic regressions indicate that a modified doping attitude scale can predict all three NE variants. Multiple NE substance abuse was frequent. Overwhelming demands in school were associated with lifestyle and prescription drug NE.
Conclusions: Researchers should be sensitive to probable structural similarities between enhancement in everyday life and in sport and should systematically explore where findings from one domain can be adapted for the other. Policy makers should be aware that students might misperceive NE as an acceptable means of coping with stress in school, and should help to build societal awareness of NE among young people in general.
Background: Neuroenhancement (NE), the use of psychoactive substances in order to enhance a healthy individual's cognitive functioning from a proficient to an even higher level, is prevalent in student populations. According to the strength model of self-control, people fail to self-regulate and fall back on their dominant behavioral response when finite self-control resources are depleted. An experiment was conducted to test the hypothesis that ego-depletion will prevent students who are unfamiliar with NE from trying it.
Findings: 130 undergraduates, who denied having tried NE before (43% female, mean age = 22.76 +/- 4.15 years old), were randomly assigned to either an ego-depletion or a control condition. The dependent variable was taking an "energy-stick" (a legal nutritional supplement, containing low doses of caffeine, taurine and vitamin B), offered as a potential means of enhancing performance on the bogus concentration task that followed. Logistic regression analysis showed that ego-depleted participants were three times less likely to take the substance, OR = 0.37, p = .01.
Conclusion: This experiment found that trying NE for the first time was more likely if an individual's cognitive capacities were not depleted. This means that mental exhaustion is not predictive of NE in students for whom NE is not the dominant response. Trying NE for the first time is therefore more likely to occur as a thoughtful attempt at self-regulation than as an automatic behavioral response in stressful situations. We therefore recommend targeting interventions at this inter-individual difference. Students without previous reinforcing NE experience should be provided with information about the possible negative health outcomes of NE. Reconfiguring structural aspects of the academic environment (e.g. lessening workloads) might help to deter current users.
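For readers unfamiliar with the statistic, the odds ratio reported in the Findings above comes from a standard logistic regression; the generic form (not the authors' exact model specification) is:

```latex
% Generic logistic regression behind the reported odds ratio (not the authors' exact
% model): x = 1 for the ego-depletion condition, 0 for control; Y = 1 if the
% participant takes the substance.
\[
  \log\frac{P(Y=1\mid x)}{1-P(Y=1\mid x)} = \beta_0 + \beta_1 x,
  \qquad
  \mathrm{OR} = e^{\beta_1}.
\]
% OR = 0.37 corresponds to beta_1 = ln(0.37) ~ -0.99, i.e. the odds of taking the
% substance under ego-depletion are roughly a third of the odds in the control condition.
```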
We present and discuss the results of crystallographic and electron paramagnetic resonance (EPR) spectroscopic analyses of five tetrachloridocuprate(II) complexes to supply a useful tool for the structural characterisation of the [CuCl4]2− moiety in the liquid state, for example in ionic liquids, or in solution. Bis(benzyltriethylammonium)-, bis(trimethylphenylammonium)-, bis(ethyltriphenylphosphonium)-, bis(benzyltriphenylphosphonium)-, and bis(tetraphenylarsonium)tetrachloridocuprate(II) were synthesised and characterised by elemental, IR, EPR and X-ray analyses. The results of the crystallographic analyses show a distorted tetrahedral coordination geometry of all [CuCl4]2− anions in the five complexes and prove that all investigated complexes are stabilised by hydrogen bonds of different intensities. Despite the use of sterically demanding ammonium, phosphonium and arsonium cations to separate the paramagnetic Cu(II) centres for EPR spectroscopy, no hyperfine structure was observed in the EPR spectra, but the principal values of the electron Zeeman tensor, g∥ and g⊥, could be determined. With these EPR data and the crystallographic parameters we were able to carry out a correlation study to anticipate the structural situation of tetrachloridocuprates in different physical states. This correlation is in good agreement with DFT calculations.
Deep into the second half of the twentieth century the traditionalist definition of India as a country of villages remained dominant in official political rhetoric as well as cultural production. In the past two decades or so, this ruralist paradigm has been effectively superseded by a metropolitan imaginary in which the modern, globalised megacity increasingly functions as representative of India as a whole. Has the village, then, entirely vanished from the cultural imaginary in contemporary India? Addressing economic practices from upper-class consumerism to working-class family support strategies, this paper attempts to trace how ‘the village’ resurfaces or survives as a cultural reference point in the midst of the urban.
Various 1,6- and 1,8-naphthalenophanes were synthesized by using the Photo-Dehydro-Diels-Alder (PDDA) reaction of bis-ynones. These compounds are easily accessible from omega-(3-iodophenyl)carboxylic acids in three steps. The obtained naphthalenophanes are axially chiral and the activation barrier for the atropisomerization could be determined in some cases by means of dynamic NMR (DNMR) and/or dynamic HPLC (DHPLC) experiments.
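As background, and not taken from the paper itself: activation barriers determined by dynamic NMR are conventionally obtained from the exchange rate constant via the Eyring equation, for example at the coalescence temperature T_c of two equally populated signals separated by Δν:

```latex
% Background relations commonly used to extract atropisomerization barriers from
% dynamic NMR (not taken from the paper): exchange rate at coalescence of two equally
% populated signals separated by Delta nu, and the Eyring equation (transmission
% coefficient assumed to be 1).
\[
  k_c \approx \frac{\pi\,\Delta\nu}{\sqrt{2}},
  \qquad
  \Delta G^{\ddagger} = R\,T_c\,\ln\!\frac{k_\mathrm{B}\,T_c}{h\,k_c}.
\]
```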
The concept of dynamic capabilities, which originates from research on the private sector, asks how firms, in order to make optimal use of their resources, develop capabilities that enable them to improve continuously. Since the question of a better use and deployment of available potential also arises in the public sector, the aim of this thesis is to apply the concept of dynamic capabilities to the public sector and, within it, to the museum as the object of investigation. By means of an explorative case study, dynamic capabilities and their parameters are examined and identified. To this end, the theoretical understanding of the concept underlying the thesis is first set out; building on this, the construct is applied to the object of investigation in the empirical part of the thesis on the basis of narrative interviews with staff of the Jewish Museum Berlin. The detailed insight gained in this way makes it possible to identify dynamic capabilities and the factors that affect them.
This essay reconstructs Daniel Heinsius's contempt for philology as it is expressed in his "Orationes". The thesis is that this contempt is grounded in Heinsius's Neoplatonic poetics with its sacralisation of poetry. In his commentary on Aristotle's "Poetics", the "Constitutio tragoediae", Heinsius subordinates the technical knowledge of the Aristotelian "Poetics" to the Neoplatonic theory of inspiration. In doing so, he transforms the traditional historical-philological commentary into a new form of technical handbook, as exemplified by the "Constitutio tragoediae".
Verfassungsgerichtsbarkeit in der Russischen Föderation und in der Bundesrepublik Deutschland
(2013)
The conference volume contains the papers and discussion contributions of the round-table discussion on constitutional jurisdiction held at the Kutafin Moscow State Law University on 9 and 10 October 2012. It deals with selected questions of legal history and legal policy as well as current legal problems of constitutional jurisdiction in the Russian Federation and the Federal Republic of Germany, from the perspective of both legal practice and scholarship: in particular the development of constitutional jurisdiction past and present; the status, legal nature and tasks of the constitutional courts in the subjects of the Federation and in the German Länder; and the relationship between constitutional courts and legislation. In addition, special questions of constitutional jurisdiction are discussed, for example the institution of the Plenipotentiary Representative of the President in the Constitutional Court in Russia, interim relief before the Federal Constitutional Court (BVerfG), and legal protection against excessively long proceedings before the BVerfG in Germany.
Multi-messenger constraints and pressure from dark matter annihilation into electron-positron pairs
(2013)
Despite striking evidence for the existence of dark matter from astrophysical observations, dark matter has escaped any direct or indirect detection until today. A proof of its existence and the revelation of its nature therefore remain among the most intriguing challenges of present-day cosmology and particle physics. The present work investigates the nature of dark matter through indirect signatures of dark matter annihilation into electron-positron pairs in two different ways: pressure from dark matter annihilation, and multi-messenger constraints on the dark matter annihilation cross-section. We focus on dark matter annihilation into electron-positron pairs and adopt a model-independent approach, where all the electrons and positrons are injected with the same initial energy E_0 ~ m_dm*c^2. The propagation of these particles is determined by solving the diffusion-loss equation, considering inverse Compton scattering, synchrotron radiation, Coulomb collisions, bremsstrahlung, and ionization. The first part of this work, focusing on pressure from dark matter annihilation, demonstrates that dark matter annihilation into electron-positron pairs may affect the observed rotation curve by a significant amount. The injection rate of this calculation is constrained by INTEGRAL, Fermi, and H.E.S.S. data. The pressure of the relativistic electron-positron gas is computed from the energy spectrum predicted by the diffusion-loss equation. For values of the gas density and magnetic field that are representative of the Milky Way, it is estimated that the pressure gradients are strong enough to balance gravity in the central parts if E_0 < 1 GeV. The exact value depends somewhat on the astrophysical parameters, and it changes dramatically with the slope of the dark matter density profile. For very steep slopes, as expected from adiabatic contraction, the rotation curves of spiral galaxies would be affected on kiloparsec scales for most values of E_0. By comparing the predicted rotation curves with observations of dwarf and low surface brightness galaxies, we show that the pressure from dark matter annihilation may improve the agreement between theory and observations in some cases, but it also imposes severe constraints on the model parameters (most notably the inner slope of the halo density profile, as well as the mass and the annihilation cross-section of dark matter particles into electron-positron pairs). In the second part, upper limits on the dark matter annihilation cross-section into electron-positron pairs are obtained by combining observed data at different wavelengths (from Haslam, WMAP, and Fermi all-sky intensity maps) with recent measurements of the electron and positron spectra in the solar neighbourhood by PAMELA, Fermi, and H.E.S.S. We consider synchrotron emission in the radio and microwave bands, as well as inverse Compton scattering and final-state radiation at gamma-ray energies. For most values of the model parameters, the tightest constraints are imposed by the local positron spectrum and by synchrotron emission from the central regions of the Galaxy. According to our results, the annihilation cross-section should not be higher than the canonical value for a thermal relic if the mass of the dark matter candidate is smaller than a few GeV.
In addition, we also derive a stringent upper limit on the inner logarithmic slope α of the density profile of the Milky Way dark matter halo (α < 1 if m_dm < 5 GeV, α < 1.3 if m_dm < 100 GeV and α < 1.5 if m_dm < 2 TeV) assuming a dark matter annihilation cross-section into electron-positron pairs (σv) = 3*10^−26 cm^3 s^−1, as predicted for thermal relics from the big bang.
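For orientation, the diffusion-loss equation referred to above has the following generic form (standard notation; the thesis's exact conventions may differ slightly). Here dn_e/dE is the electron/positron spectrum, K the diffusion coefficient, b the total energy-loss rate from the processes listed above, and Q the source term, which for monoenergetic injection is proportional to a delta function at E_0:

```latex
% Generic diffusion-loss equation for the electron/positron spectrum (standard form;
% the notation and conventions of the thesis may differ slightly).
\[
  \frac{\partial}{\partial t}\frac{\mathrm{d}n_e}{\mathrm{d}E}
  = \nabla\!\cdot\!\left[ K(E,\vec{x})\,\nabla\frac{\mathrm{d}n_e}{\mathrm{d}E} \right]
  + \frac{\partial}{\partial E}\!\left[ b(E,\vec{x})\,\frac{\mathrm{d}n_e}{\mathrm{d}E} \right]
  + Q(E,\vec{x}),
  \qquad
  Q \propto \frac{\langle\sigma v\rangle}{2}\,
            \left(\frac{\rho_{\mathrm{dm}}(\vec{x})}{m_{\mathrm{dm}}}\right)^{2}
            \delta(E-E_0).
\]
```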
In a recent paper with N. Tarkhanov, the Lefschetz number was introduced for endomorphisms (modulo trace class operators) of sequences with trace class curvature. We show that this is a well-defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
In times of budget cuts and demographic change, human resource management in public administration faces the challenge of meeting increased expectations of efficiency and effectiveness with an increasingly ageing workforce. In the scholarly debate, the quality of managers' leadership behaviour is regarded as a key lever for maintaining or increasing employees' work ability. This thesis focuses on the concept of age-specific leadership, which is oriented towards the individual, age-specific needs of each employee. Using a standardised survey of managers and their employees in a local office of the Federal Employment Agency (Bundesagentur für Arbeit), it examines whether the degree of age-specific leadership influences the quality of the dyadic working relationship between manager and employee (LMX quality). To this end, it is first examined how age-specifically the surveyed managers lead and which factors influence this. The results show a highly significant relationship between age-specific leadership and LMX quality. It also emerges that the surveyed managers predominantly exhibit age-specific leadership behaviour, although it must be taken into account that the results are also influenced by organisational requirements that limit the managers' scope of action. For the sample studied, it was also found that age and leadership experience influence the degree of age-specific leadership behaviour, whereas no relationship was found for gender or for an unprejudiced perception of older employees.
This thesis deals with the (self-)presentations of founders of non-governmental organisations (NGOs) working on children's and women's rights in Tamil Nadu, South India. In order to analyse these (self-)presentations appropriately, an analytical approach is first developed which assumes that existing sociological concepts, which emerged primarily in engagement with a specific (Western European) context, cannot be transferred to other contexts without question. This complicates the use of terms such as "civil society" and "development", as well as the seemingly clear dichotomy of modernity and tradition. Eisenstadt made this problem explicit in the debate on "Multiple Modernities" that he initiated. The present thesis takes up this discussion with action-theoretical arguments in order to be able to analyse actors' perspectives appropriately. After the theoretical framework and the methodological basis of the thesis have been set out, contextual knowledge is developed in order to embed the analysis of the interviews. Discourses on caste and the status of women as well as aspects of the current political situation in Tamil Nadu are considered. The (self-)presentations can then be broken down along the threefold division indicated in the title. First, the founders engage with their own role: they describe themselves as "social workers" and in their self-descriptions partly draw on populist elements of the political environment. Second, they describe their own position vis-à-vis their "target groups": here it becomes clear that the relations between NGO and "community" oscillate between participation and paternalism. Third, they formulate objectives in contrast to other (local) political actors: for example, they distance themselves from what they understand as a "Western" notion of development and formulate their "own" goals in opposition to it. They reflect on local cooperations, e.g. with political figures and caste associations, but also on demarcations or clashes that arise in the process. Overall, it becomes clear that the founders' (self-)presentations relate in tension-laden and ambivalent ways to different discourses, ideas and social practices. In particular, they cannot be subsumed under a perspective of "development" that is built on the dichotomy of modernity and tradition.
The development of self-adaptive software requires the engineering of an adaptation engine that controls and adapts the underlying adaptable software by means of feedback loops. The adaptation engine often describes the adaptation by using runtime models representing relevant aspects of the adaptable software and particular activities such as analysis and planning that operate on these runtime models. To systematically address the interplay between runtime models and adaptation activities in adaptation engines, runtime megamodels have been proposed for self-adaptive software. A runtime megamodel is a specific runtime model whose elements are runtime models and adaptation activities. Thus, a megamodel captures the interplay between multiple models and between models and activities as well as the activation of the activities. In this article, we go one step further and present a modeling language for ExecUtable RuntimE MegAmodels (EUREMA) that considerably eases the development of adaptation engines by following a model-driven engineering approach. We provide a domain-specific modeling language and a runtime interpreter for adaptation engines, in particular for feedback loops. Megamodels are kept explicit and alive at runtime and by interpreting them, they are directly executed to run feedback loops. Additionally, they can be dynamically adjusted to adapt feedback loops. Thus, EUREMA supports development by making feedback loops, their runtime models, and adaptation activities explicit at a higher level of abstraction. Moreover, it enables complex solutions where multiple feedback loops interact or even operate on top of each other. Finally, it leverages the co-existence of self-adaptation and off-line adaptation for evolution.
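EUREMA itself is a graphical, domain-specific language, so the following is not EUREMA syntax; it is only a minimal, hypothetical Python sketch of the underlying idea of a feedback loop whose analysis and planning activities operate on an explicit runtime model of the adaptable software. All class and attribute names are illustrative assumptions.

```python
# Hypothetical sketch of a monitor-analyze-plan-execute feedback loop operating on an
# explicit runtime model of the adaptable software. This is NOT EUREMA syntax; EUREMA
# describes such loops graphically as executable megamodels.
from dataclasses import dataclass

@dataclass
class RuntimeModel:                      # reflects relevant state of the adaptable software
    failures: int = 0
    replicas: int = 1

class FeedbackLoop:
    def __init__(self, model: RuntimeModel):
        self.model = model

    def monitor(self, observed_failures: int) -> None:
        self.model.failures = observed_failures      # update the runtime model

    def analyze(self) -> bool:
        return self.model.failures > 3               # is adaptation needed?

    def plan(self) -> dict:
        return {"replicas": self.model.replicas + 1} # simple scale-out plan

    def execute(self, plan: dict) -> None:
        self.model.replicas = plan["replicas"]       # would reconfigure the real system here

    def run_once(self, observed_failures: int) -> None:
        self.monitor(observed_failures)
        if self.analyze():
            self.execute(self.plan())

loop = FeedbackLoop(RuntimeModel())
loop.run_once(observed_failures=5)
print(loop.model)                                    # RuntimeModel(failures=5, replicas=2)
```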
Even though quite different in occurrence and consequences, from a modeling perspective many natural hazards share similar properties and challenges. Their complex nature as well as lacking knowledge about their driving forces and potential effects make their analysis demanding: uncertainty about the modeling framework, inaccurate or incomplete event observations and the intrinsic randomness of the natural phenomenon add up to different interacting layers of uncertainty, which require a careful handling. Nevertheless, deterministic approaches are still widely used in natural hazard assessments, holding the risk of underestimating the hazard with disastrous effects. The all-round probabilistic framework of Bayesian networks constitutes an attractive alternative. In contrast to deterministic proceedings, it treats response variables as well as explanatory variables as random variables, making no difference between input and output variables. Using a graphical representation, Bayesian networks encode the dependency relations between the variables in a directed acyclic graph: variables are represented as nodes and (in-)dependencies between variables as (missing) edges between the nodes. The joint distribution of all variables can thus be described by decomposing it, according to the depicted independences, into a product of local conditional probability distributions, which are defined by the parameters of the Bayesian network. In the framework of this thesis the Bayesian network approach is applied to different natural hazard domains (i.e. seismic hazard, flood damage and landslide assessments). Learning the network structure and parameters from data, Bayesian networks reveal relevant dependency relations between the included variables and help to gain knowledge about the underlying processes. The problem of Bayesian network learning is cast in a Bayesian framework, considering the network structure and parameters as random variables themselves and searching for the most likely combination of both, which corresponds to the maximum a posteriori (MAP) score of their joint distribution given the observed data. Although well studied in theory, the learning of Bayesian networks from real-world data is usually not straightforward and requires an adaptation of existing algorithms. Typically arising problems are the handling of continuous variables, incomplete observations and the interaction of both. Working with continuous distributions requires assumptions about the allowed families of distributions. To "let the data speak" and avoid wrong assumptions, continuous variables are instead discretized here, thus allowing for a completely data-driven and distribution-free learning. An extension of the MAP score, considering the discretization as a random variable as well, is developed for an automatic multivariate discretization that takes interactions between the variables into account. The discretization process is nested into the network learning and requires several iterations. When incomplete observations have to be handled on top of this, the computational burden can become substantial. Iterative proceedings for missing value estimation become quickly infeasible. A more efficient albeit approximate method is used instead, estimating the missing values based only on the observations of variables directly interacting with the missing variable. Moreover, natural hazard assessments often have a primary interest in a certain target variable.
The discretization learned for this variable does not always have the required resolution for a good prediction performance. Finer resolutions for (conditional) continuous distributions are achieved with continuous approximations subsequent to the Bayesian network learning, using kernel density estimations or mixtures of truncated exponential functions. All our procedures are completely data-driven. We thus avoid assumptions that require expert knowledge and instead provide domain-independent solutions that are applicable not only in other natural hazard assessments, but in a variety of domains struggling with uncertainties.
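For reference, the factorisation and the MAP learning criterion described above can be written compactly as follows (generic form; the thesis additionally treats the discretization as a further random variable inside the same score):

```latex
% Factorisation of the joint distribution encoded by a Bayesian network with directed
% acyclic graph G, and the MAP criterion for learning structure G and parameters theta
% from data D (generic form).
\[
  P(X_1,\dots,X_n) \;=\; \prod_{i=1}^{n} P\bigl(X_i \mid \mathrm{Pa}_G(X_i)\bigr),
  \qquad
  (\hat G,\hat\theta) \;=\; \arg\max_{G,\theta}\; P(G,\theta\mid D)
  \;=\; \arg\max_{G,\theta}\; \frac{P(D\mid G,\theta)\,P(\theta\mid G)\,P(G)}{P(D)}.
\]
```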
TRAPID
(2013)
Transcriptome analysis through next-generation sequencing technologies allows the generation of detailed gene catalogs for non-model species, at the cost of new challenges with regards to computational requirements and bioinformatics expertise. Here, we present TRAPID, an online tool for the fast and efficient processing of assembled RNA-Seq transcriptome data, developed to mitigate these challenges. TRAPID offers high-throughput open reading frame detection, frameshift correction and includes a functional, comparative and phylogenetic toolbox, making use of 175 reference proteomes. Benchmarking and comparison against state-of-the-art transcript analysis tools reveals the efficiency and unique features of the TRAPID system. TRAPID is freely available at http://bioinformatics.psb.ugent.be/webtools/trapid/.
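To illustrate one of the basic steps mentioned above, the following is a toy sketch of open reading frame (ORF) detection on the forward strand of a transcript. It is not TRAPID's algorithm, which relies on similarity searches against reference proteomes and performs frameshift correction; the function and example sequence are hypothetical.

```python
# Toy illustration of open reading frame (ORF) detection on the forward strand of a
# transcript; TRAPID's actual pipeline (similarity searches, frameshift correction,
# 175 reference proteomes) is far more involved.
STOP = {"TAA", "TAG", "TGA"}

def longest_orf(seq: str) -> str:
    """Return the longest ATG-to-stop ORF found in the three forward reading frames."""
    seq = seq.upper()
    best = ""
    for frame in range(3):                             # three forward reading frames
        codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
        i = 0
        while i < len(codons):
            if codons[i] == "ATG":                     # start codon
                for j in range(i + 1, len(codons)):
                    if codons[j] in STOP:              # first in-frame stop codon
                        orf = "".join(codons[i:j + 1])
                        if len(orf) > len(best):
                            best = orf
                        i = j                          # continue scanning after the stop
                        break
            i += 1
    return best

print(longest_orf("CCATGGCTGCTTAAGGGATGAAACCCTGA"))    # -> ATGGCTGCTTAA
```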
Large Central European flood events of the past have demonstrated that flooding can affect several river basins at the same time, leading to catastrophic economic and humanitarian losses that can stretch emergency resources beyond planned levels of service. For Germany, the spatial coherence of flooding, the contributing processes and the role of trans-basin floods for a national risk assessment are largely unknown, and analysis is limited by a lack of systematic data, information and knowledge on past events. This study investigates the frequency and intensity of trans-basin flood events in Germany. It evaluates the data and information basis on which knowledge about trans-basin floods can be generated in order to improve any future flood risk assessment. In particular, the study assesses whether flood documentations and related reports can provide a valuable data source for understanding trans-basin floods. An adaptive algorithm was developed that systematically captures trans-basin floods using series of mean daily discharge at a large number of sites of equal time series length (1952-2002). It identifies the simultaneous occurrence of flood peaks based on the exceedance of an initial threshold of a 10-year flood at one location and consecutively pools all causally related, spatially and temporally lagged peak recordings at the other locations. A weighted cumulative index was developed that accounts for the spatial extent and the individual flood magnitudes within an event and allows quantifying the overall event severity. The parameters of the method were tested in a sensitivity analysis. An intensive study on sources and ways of information dissemination of flood-relevant publications in Germany was conducted. Based on the method of systematic reviews, a strategic search approach was developed to identify relevant documentations for each of the 40 strongest trans-basin flood events. A novel framework for assessing the quality of event-specific flood reports from a user’s perspective was developed and validated by independent peers. The framework was designed to be generally applicable for any natural hazard type and assesses the quality of a document addressing accessibility as well as representational, contextual, and intrinsic dimensions of quality. The analysis of time series of mean daily discharge resulted in the identification of 80 trans-basin flood events within the period 1952-2002 in Germany. The set is dominated by events that were recorded in the hydrological winter (64%); 36% occurred during the summer months. The occurrence of floods is characterised by a distinct clustering in time. Dividing the study period into two sub-periods, we find an increase in the percentage of winter events from 58% in the first to 70.5% in the second sub-period. Accordingly, we find a significant increase in the number of extreme trans-basin floods in the second sub-period. A large body of 186 flood-relevant documentations was identified. For 87.5% of the 40 strongest trans-basin floods in Germany at least one report has been found, and for the most severe floods a substantial amount of documentation could be obtained. 80% of the material can be considered grey literature (i.e. literature not controlled by commercial publishers). The results of the quality assessment show that the majority of flood event specific reports are of a good quality, i.e.
they are well enough drafted, largely accurate and objective, and contain a substantial amount of information on the sources, pathways and receptors/consequences of the floods. The inclusion of this information in the process of knowledge building for flood risk assessment is recommended. Both the results as well as the data produced in this study are openly accessible and can be used for further research. The results of this study contribute to an improved spatial risk assessment in Germany. The identified set of trans-basin floods provides the basis for an assessment of the chance that flooding occurs simultaneously at a number of sites. The information obtained from flood event documentation can usefully supplement the analysis of the processes that govern flood risk.
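The following is a schematic sketch of the pooling idea described above (not the thesis's exact algorithm): start an event when any gauge exceeds its 10-year flood threshold, pool temporally lagged exceedances at other gauges within a fixed window, and summarise event severity as a weighted cumulative index of the individual peak magnitudes. The window length, weighting, and data layout are hypothetical choices for illustration.

```python
# Schematic sketch of trans-basin flood event pooling and a cumulative severity index
# (illustrative parameters; not the thesis's exact algorithm).
from collections import defaultdict

def pool_events(peaks, window_days=10):
    """peaks: list of (day, gauge_id, magnitude) with magnitude = peak / 10-year flood."""
    peaks = sorted(p for p in peaks if p[2] >= 1.0)   # keep only threshold exceedances
    events, current, last_day = [], [], None
    for day, gauge, mag in peaks:
        if last_day is not None and day - last_day > window_days:
            events.append(current)                     # gap too large: close the event
            current = []
        current.append((day, gauge, mag))
        last_day = day
    if current:
        events.append(current)
    return events

def severity(event):
    """Cumulative index: sum of magnitudes, counting each gauge once (its maximum peak)."""
    best = defaultdict(float)
    for _, gauge, mag in event:
        best[gauge] = max(best[gauge], mag)
    return sum(best.values())

demo = [(1, "A", 1.4), (2, "B", 1.1), (3, "C", 2.0), (40, "A", 1.2)]
print([severity(e) for e in pool_events(demo)])        # -> [4.5, 1.2]
```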
In recent decades, intensive research has led to a very detailed characterisation of the mammalian taste system. Nevertheless, important questions have remained unanswered with the methods used so far. One of these questions concerns the discrimination of bitter compounds. The number of substances that taste bitter to humans and elicit innate aversive behaviour in animals runs into the thousands. These substances differ greatly both in chemical structure and in their effect on the organism. While many bitter compounds are potent poisons, others are harmless in the amounts ingested with food or even have positive effects on the body. Being able to distinguish between these groups would be advantageous for an animal; however, no such mechanism is known in mammals. The aim of this work was to investigate the processing of taste information in the first relay station of the taste pathway in the mouse brain, the nucleus tractus solitarii (NTS), with particular attention to the question of the discrimination of different bitter compounds. For this purpose, a new method for investigating the taste system was established that avoids the disadvantages of available methods and combines their advantages. The Arc-catFISH method (cellular compartment analysis of temporal activity by fluorescent in situ hybridization), which allows the response of large groups of neurons to two stimuli to be characterised, was applied to study taste-processing cells in the NTS. In the course of this project, stimulus-induced Arc expression in the NTS was demonstrated for the first time. The first results revealed that Arc expression in the NTS occurs specifically after stimulation with bitter compounds and that the Arc-expressing neurons are located predominantly in the gustatory part of the NTS. This indicates that Arc expression is a marker for bitter-processing gustatory neurons in the NTS. After two successive stimulations with bitter substances, overlapping but distinct populations of neurons were observed that responded differently to the three bitter substances used: cycloheximide, quinine hydrochloride and cucurbitacin I. These neurons are presumably involved in controlling protective reflexes and could thus form the basis for divergent behaviour towards different bitter compounds.
Automated object identification is a modern tool in the geoinformation sciences (BLASCHKE et al., 2012). To obtain mutually comparable results in thematic mapping, object identification methods should, from the perspective of geoinformatics, be employed. Instead of field work, multispectral remote sensing data are therefore used as primary data in this thesis. Concrete natural objects are identified and characterised from the primary data in a GIS-supported, automated way over large areas and high object densities. Within this thesis, an automated processing chain for object identification is designed. New approaches and concepts for the object-based identification of natural, isolated terrestrial landforms are developed and implemented. The processing chain is based on a concept built on a generic approach to automated object identification. It can be adjusted and implemented using characteristic quantitative parameters, which makes the concept of object identification modular and scalable. The module-based architecture allows individual modules to be used on their own, in combination, and with possible extensions. The object identification methodology and the subsequent characterisation of the (geo)morphometric and morphological parameters are supported by statistical methods. These allow object parameters from different samples to be compared. Regression and variance analysis are used to examine relationships between object parameters. Functional dependencies of the parameters are analysed in order to describe the objects qualitatively. This makes it possible to capture automatically computed measures and indices of the objects as quantitative data and information and to apply them to different samples. In this thesis, thermokarst lakes form the basis for the developments and serve as the example and data basis for constructing the algorithm and for the analysis. Geovisualisation of the multivariate natural objects is used to develop a better understanding of the spatial relations of the objects. The core of the geovisualisation is the linking of visualisation methods with map-like representations.
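As a toy illustration of the kind of object-level (geo)morphometric characterisation and regression analysis described above (hypothetical values and index choice; not the thesis's actual processing chain), one could compute a compactness index per lake object and regress it against object size:

```python
# Toy example: a compactness index for segmented lake objects and a simple regression
# between object parameters (hypothetical data; not the thesis's processing chain).
import math
import numpy as np

def compactness(area, perimeter):
    """1.0 for a perfect circle, smaller for more irregular outlines."""
    return 4.0 * math.pi * area / perimeter ** 2

# Hypothetical lake objects: (area in m^2, perimeter in m)
lakes = [(12_000, 420), (55_000, 930), (3_200, 260), (140_000, 1_600)]

areas = np.array([a for a, _ in lakes], dtype=float)
comp = np.array([compactness(a, p) for a, p in lakes])

# Simple linear regression of compactness against log10(area)
slope, intercept = np.polyfit(np.log10(areas), comp, 1)
print(comp.round(3), slope.round(3), intercept.round(3))
```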
Hermetische Offenheit
(2013)
“A Book for All and None” reads the title page of Nietzsche's Also sprach Zarathustra. What seems like a paradox, however, points to the manifest structural content of a work that, for the most part, presents itself to scholarship only through its rich narrative content. With a view to clarifying the subtitle, Nietzsche's Also sprach Zarathustra is examined to determine to what extent the structure or form of the text can be understood as philosophical content. It is to be shown that the Zarathustra not only self-referentially carries the key to its own understanding within itself, but also, independently of its content, conveys information that opens up a philosophical horizon of meaning even under a supposed methodological exclusion of the possibility of understanding.
It is to be shown that the Zarathustra exhibits a dynamic of its own that unfolds its philosophical content in its own enactment. In the process it becomes apparent that the text dissolves, in an act of self-fulfilment, within a web of references between textual levels that this study works out. Beyond this, it is shown how this happens, why the philosophical content of this dynamic, which is to be worked out, places this structural demand on the text, and how the subtitle functions at once as a cipher, as an instruction for reading, and as a catalyst of the sketched dynamic that manipulates the text. In the course of this investigation, what is put forward here as a premise is substantiated: in the dynamic of the text's dissolution, and in the exclusion of communicable unified truths that results from the text's enactment, the supposed philosophical heavyweights of the Zarathustra, such as the topos of the Übermensch, the will to power, and the eternal recurrence, likewise dissolve, so that there can be no talk of doctrines or even philosophemes in Also sprach Zarathustra.
This study will show that the text is of philosophical significance less in its content than in the enactment of its structure, and that it can even be regarded as the consistent paradigm of a philosophy which appears to have critically overcome the dichotomy of the binary discourse of ‘true’ and ‘false’ of Western epistemology.
Interactive rendering techniques for focus+context visualization of 3D geovirtual environments
(2013)
This thesis introduces a collection of new real-time rendering techniques and applications for focus+context visualization of interactive 3D geovirtual environments such as virtual 3D city and landscape models. These environments are generally characterized by a large number of objects and are of high complexity with respect to geometry and textures. For these reasons, their interactive 3D rendering represents a major challenge. Their 3D depiction implies a number of weaknesses such as occlusions, cluttered image contents, and partial screen-space usage. To overcome these limitations and, thus, to facilitate the effective communication of geo-information, principles of focus+context visualization can be used for the design of real-time 3D rendering techniques for 3D geovirtual environments. In general, detailed views of a 3D geovirtual environment are combined seamlessly with abstracted views of the context within a single image. To perform the real-time image synthesis required for interactive visualization, dedicated parallel processors (GPUs) for rasterization of computer graphics primitives are used. For this purpose, the design and implementation of appropriate data structures and rendering pipelines are necessary. The contribution of this work comprises the following five real-time rendering methods:
• The rendering technique for 3D generalization lenses enables the combination of different 3D city geometries (e.g., generalized versions of a 3D city model) in a single image in real time. The method is based on a generalized and fragment-precise clipping approach, which uses a compressible, raster-based data structure. It enables the combination of detailed views in the focus area with the representation of abstracted variants in the context area.
• The rendering technique for the interactive visualization of dynamic raster data in 3D geovirtual environments facilitates the rendering of 2D surface lenses. It enables a flexible combination of different raster layers (e.g., aerial images or videos) using projective texturing for decoupling image and geometry data. Thus, various overlapping and nested 2D surface lenses of different contents can be visualized interactively.
• The interactive rendering technique for image-based deformation of 3D geovirtual environments enables the real-time image synthesis of non-planar projections, such as cylindrical and spherical projections, as well as multi-focal 3D fisheye lenses and the combination of planar and non-planar projections.
• The rendering technique for view-dependent multi-perspective views of 3D geovirtual environments, based on the application of global deformations to the 3D scene geometry, can be used for synthesizing interactive panorama maps to combine detailed views close to the camera (focus) with abstract views in the background (context). This approach reduces occlusions, increases the usage of the available screen space, and reduces the overload of image contents.
• The object-based and image-based rendering techniques for highlighting objects and focus areas inside and outside the view frustum facilitate preattentive perception.
The concepts and implementations of interactive image synthesis for focus+context visualization and their selected applications enable a more effective communication of spatial information, and provide building blocks for the design and development of new applications and systems in the field of 3D geovirtual environments.
Hydrothermal carbonisation
(2013)
The world’s appetite for energy is producing growing quantities of CO2, a pollutant that contributes to the warming of the planet and that currently cannot be removed or stored in any significant way. Other natural reserves are also being devoured at alarming rates, and current assessments suggest that we will need to identify alternative sources in the near future. With the aid of materials chemistry it should be possible to create a world in which energy use need not be limited and where usable energy can be produced and stored wherever it is needed, where we can minimize and remediate emissions as new consumer products are created, whilst healing the planet and preventing further disruptive and harmful depletion of valuable mineral assets. In achieving these aims, the creation of new and, very importantly, greener industries and new sustainable pathways is crucial. In all of the aforementioned applications, new materials based on carbon, ideally produced via inexpensive, low-energy-consumption methods, using renewable resources as precursors, with flexible morphologies, pore structures and functionalities, are increasingly viewed as ideal candidates to fulfill these goals. The resulting materials should be a feasible solution for the efficient storage of energy and gases. At the end of life, such materials ideally must act to improve soil quality and serve as potential CO2 storage sinks. This is exactly the subject of this habilitation thesis: an alternative technology to produce carbon materials from biomass in water using low carbonisation temperatures and self-generated pressures. This technology is called hydrothermal carbonisation. It has been developed during the past five years by a group of young and talented researchers working under the supervision of Dr. Titirici at the Max-Planck Institute of Colloids and Interfaces, and it is now a well-recognised methodology to produce carbon materials with important applications in our daily lives. These applications include electrodes for portable electronic devices, filters for water purification, catalysts for the production of important chemicals, as well as drug delivery systems and sensors.
Background: The linear noise approximation (LNA) is commonly used to predict how noise is regulated and exploited at the cellular level. These predictions are exact for reaction networks composed exclusively of first-order reactions or for networks involving bimolecular reactions and large numbers of molecules. It is, however, well known that gene regulation involves bimolecular interactions with molecule numbers as small as a single copy of a particular gene. It is therefore questionable how reliable the LNA predictions are for these systems.
Results: We implement in the software package intrinsic Noise Analyzer (iNA), a system size expansion based method which calculates the mean concentrations and the variances of the fluctuations to an order of accuracy higher than the LNA. We then use iNA to explore the parametric dependence of the Fano factors and of the coefficients of variation of the mRNA and protein fluctuations in models of genetic networks involving nonlinear protein degradation, post-transcriptional, post-translational and negative feedback regulation. We find that the LNA can significantly underestimate the amplitude and period of noise-induced oscillations in genetic oscillators. We also identify cases where the LNA predicts that noise levels can be optimized by tuning a bimolecular rate constant whereas our method shows that no such regulation is possible. All our results are confirmed by stochastic simulations.
Conclusion: The software iNA allows the investigation of parameter regimes where the LNA fares well and where it does not. We have shown that the parametric dependence of the coefficients of variation and Fano factors for common gene regulatory networks is better described by including terms of higher order than the LNA in the system size expansion. This analysis is considerably faster than stochastic simulations, which require extensive ensemble averaging to obtain statistically meaningful results. Hence iNA is well suited for performing computationally efficient and quantitative studies of intrinsic noise in gene regulatory networks.
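For reference, the two noise measures discussed above are defined as follows, where X denotes the molecule number (or concentration) of the mRNA or protein species of interest; the LNA estimates the mean and variance to leading order in the inverse system size, while iNA includes higher-order terms:

```latex
% Coefficient of variation and Fano factor of the fluctuations of a species X.
% F_X = 1 corresponds to Poissonian noise.
\[
  \mathrm{CV}_X = \frac{\sqrt{\operatorname{Var}(X)}}{\langle X\rangle},
  \qquad
  F_X = \frac{\operatorname{Var}(X)}{\langle X\rangle}.
\]
```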
Water management and environmental protection are vulnerable to extreme low flows during streamflow droughts. During the last decades, summer runoff and low flows have decreased in most rivers of Central Europe. Discharge projections agree that a future decrease in runoff is likely for catchments in Brandenburg, Germany. Depending on the first-order controls on low flows, different adaptation measures are expected to be appropriate. Small catchments were analyzed because they are expected to be more vulnerable to a changing climate than larger rivers. They are mainly headwater catchments with smaller groundwater storage. Local characteristics are more important at this scale and can increase vulnerability. This thesis mutually evaluates potential adaptation measures to sustain minimum runoff in small catchments of Brandenburg, Germany, and similarities of these catchments regarding low flows. The following guiding questions are addressed: (i) Which first-order controls on low flows and related time scales exist? (ii) Which are the differences between small catchments regarding low flow vulnerability? (iii) Which adaptation measures to sustain minimum runoff in small catchments of Brandenburg are appropriate considering regional low flow patterns? Potential adaptation measures to sustain minimum runoff during periods of low flows can be classified into three categories: (i) increase of groundwater recharge and subsequent baseflow by land use change, land management and artificial groundwater recharge, (ii) increase of water storage with regulated outflow by reservoirs, lakes and wetland water management, and (iii) consideration of regional low flow patterns during the planning of measures with multiple purposes (urban water management, waste water recycling and inter-basin water transfer). The question remained whether water management of areas with shallow groundwater tables can efficiently sustain minimum runoff. As an example, water management scenarios of a ditch-irrigated area were evaluated using the model Hydrus-2D. Increasing antecedent water levels and stopping ditch irrigation during periods of low flows increased fluxes from the pasture to the stream, but storage was depleted faster during the summer months due to higher evapotranspiration. Fluxes from this approx. 1 km long pasture with an area of approx. 13 ha ranged from 0.3 to 0.7 l/s depending on the scenario. This demonstrates that numerous such small decentralized measures are necessary to sustain minimum runoff in meso-scale catchments. Differences in the low flow risk of catchments and meteorological low flow predictors were analyzed. A principal component analysis was applied to daily discharge of 37 catchments between 1991 and 2006. Flows decreased more in southeast Brandenburg, in accordance with the meteorological forcing. Low flow risk was highest in a region east of Berlin because of the intersection of a more continental climate and the specific geohydrology. In these catchments, flows decreased faster during summer and the low flow period was prolonged. A non-linear support vector machine regression was applied to iteratively select meteorological predictors for annual 30-day minimum runoff in 16 catchments between 1965 and 2006. The potential evapotranspiration sum of the previous 48 months was the most important predictor (r²=0.28). The potential evapotranspiration of the previous 3 months and the precipitation of the previous 3 months and of the last year increased model performance (r²=0.49, including all four predictors).
Model performance was higher for catchments with low yield and more damped runoff. In catchments with high low flow risk, the explanatory power of long-term potential evapotranspiration was high. Catchments with a high low flow risk as well as catchments with a considerable decrease in flows in southeast Brandenburg have the highest demand for adaptation. Measures increasing groundwater recharge are to be preferred. Catchments with high low flow risk showed relatively deep and decreasing groundwater heads, allowing increased groundwater recharge at recharge areas at higher altitude away from the streams. Low flows are expected to stay low or decrease even further, because long-term potential evapotranspiration was the most important low flow predictor and is projected to increase under climate change. Differences in low flow risk and runoff dynamics between catchments have to be considered in the management and planning of measures that do not only serve to sustain minimum runoff.
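The following is a minimal sketch, in the spirit of the analysis described above, of fitting a non-linear support vector regression of annual 30-day minimum runoff on meteorological predictors. The data are synthetic and the hyperparameters are illustrative assumptions; the thesis's iterative predictor selection and exact settings are not reproduced.

```python
# Minimal sketch of a non-linear support vector regression of annual 30-day minimum
# runoff on meteorological predictors (synthetic data; illustrative settings only).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 40                                               # e.g. 40 years of data
X = np.column_stack([
    rng.normal(2400, 150, n),    # PET sum of previous 48 months (mm)
    rng.normal(180, 30, n),      # PET sum of previous 3 months (mm)
    rng.normal(140, 40, n),      # precipitation of previous 3 months (mm)
    rng.normal(560, 90, n),      # precipitation of previous year (mm)
])
# Synthetic target: low flows decrease with long-term PET, increase with precipitation
y = 5.0 - 0.002 * X[:, 0] + 0.004 * X[:, 2] + rng.normal(0, 0.2, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean().round(2))
```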
A polymer analogous reaction for the formation of imidazolium and NHC based porous polymer networks
(2013)
A polymer analogous reaction was carried out to generate a porous polymeric network with N-heterocyclic carbenes (NHC) in the polymer backbone. Using a stepwise approach, first a polyimine network is formed by polymerization of the tetrafunctional amine tetrakis(4-aminophenyl)methane. This polyimine network is converted in the second step into polyimidazolium chloride and finally to a polyNHC network. Furthermore a porous Cu(II)-coordinated polyNHC network can be generated. Supercritical drying generates polymer networks with high permanent surface areas and porosities which can be applied for different catalytic reactions. The catalytic properties were demonstrated for example in the activation of CO2 or in the deoxygenation of sulfoxides to the corresponding sulfides.
In the presence of a solid-liquid or liquid-air interface, bacteria can choose between a planktonic and a sessile lifestyle. Depending on environmental conditions, cells swimming in close proximity to the interface can irreversibly attach to the surface and grow into three-dimensional aggregates in which the majority of cells is sessile and embedded in an extracellular polymer matrix (biofilm). We used microfluidic tools and time-lapse microscopy to perform experiments with the polarly flagellated soil bacterium Pseudomonas putida (P. putida), a bacterial species that is able to form biofilms. We analyzed individual trajectories of swimming cells, both in the bulk fluid and in close proximity to a glass-liquid interface. Additionally, surface-related growth during the early phase of biofilm formation was investigated. In the bulk fluid, P. putida shows a typical bacterial swimming pattern of alternating periods of persistent displacement along a line (runs) and fast reorientation events (turns), and cells swim with an average speed of around 24 micrometers per second. We found that the distribution of turning angles is bimodal with a dominating peak around 180 degrees. In approximately six out of ten turning events, the cell reverses its swimming direction. In addition, our analysis revealed that upon a reversal, the cell systematically changes its swimming speed by a factor of two on average. Based on the experimentally observed values of mean run time and rotational diffusion, we presented a model that describes the spreading of a population of cells by a run-reverse random walker with alternating speeds. We successfully recover the mean square displacement and, by an extended version of the model, also the negative dip in the directional autocorrelation function observed in the experiments. The analytical solution of the model demonstrates that alternating speeds enhance a cell's ability to explore its environment as compared to a bacterium moving at a constant intermediate speed. Compared to the bulk fluid, for cells swimming near a solid boundary we observed an increase in swimming speed at distances below d = 5 micrometers and an increase in average angular velocity at distances below d = 4 micrometers. While the average speed was maximal, with an increase of around 15%, at a distance of d = 3 micrometers, the angular velocity was highest in closest proximity to the boundary at d = 1 micrometer, with an increase of around 90% compared to the bulk fluid. To investigate the swimming behavior in a confinement between two solid boundaries, we developed an experimental setup to acquire three-dimensional trajectories using a piezo-driven objective mount coupled to a high-speed camera. Results on speed and angular velocity were consistent with the motility statistics in the presence of a single boundary. Additionally, an analysis of the probability density revealed that a majority of cells accumulated near the upper and lower boundaries of the microchannel. The increase in angular velocity is consistent with previous studies, in which bacteria near a solid boundary were shown to swim on circular trajectories, an effect that can be attributed to a wall-induced torque. 
The increase in speed at a distance of several times the size of the cell body, however, cannot be explained by existing theories, which either consider the drag increase on cell body and flagellum near a boundary (resistive force theory) or model the swimming microorganism by a multipole expansion to account for the flow field interaction between cell and boundary. An accumulation of swimming bacteria near solid boundaries has been observed in similar experiments. Our results confirm that collisions with the surface play an important role and that hydrodynamic interactions alone cannot explain the steady-state accumulation of cells near the channel walls. Furthermore, we monitored the number growth of cells in the microchannel under nutrient-rich conditions. We observed that, after a lag time, initially isolated cells at the surface started to grow by division into colonies of increasing size, while coexisting with a comparably smaller number of swimming cells. After 5 hours and 50 minutes, we observed a sudden jump in the number of swimming cells, which was accompanied by a breakup of bigger clusters on the surface. After approximately 30 minutes, during which planktonic cells dominated in the microchannel, individual swimming cells reattached to the surface. We interpret this process as an emigration and recolonization event. A number of complementary experiments were performed to investigate the influence of collective effects or a depletion of the growth medium on the transition. Similar to earlier observations on another bacterium from the same family, we found that the release of cells into the swimming phase is most likely the result of an individual adaptation process in which the synthesis of proteins for flagellar motility is upregulated after a number of division cycles at the surface.
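The run-reverse random walker with alternating speeds described above lends itself to a small simulation sketch. The following is an illustrative toy version only; the parameter values (run time, speeds, rotational diffusion) are placeholders and not the fitted values of the study.

```python
# Minimal sketch (illustrative assumptions): a 2-D run-reverse random walker
# that flips its direction by ~180 degrees at reversal events and alternates
# between two speeds, plus a helper to compute the mean square displacement.
import numpy as np


def simulate_run_reverse(n_steps=10000, dt=0.01, tau_run=0.3,
                         v_fast=32.0, v_slow=16.0, d_rot=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps, 2))
    angle, speed = 0.0, v_fast
    for i in range(1, n_steps):
        # rotational diffusion during a run
        angle += np.sqrt(2.0 * d_rot * dt) * rng.standard_normal()
        # reversal events occur with rate 1/tau_run; the speed alternates
        if rng.random() < dt / tau_run:
            angle += np.pi
            speed = v_slow if speed == v_fast else v_fast
        pos[i] = pos[i - 1] + speed * dt * np.array([np.cos(angle), np.sin(angle)])
    return pos


def mean_square_displacement(pos, max_lag=500):
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean(np.sum((pos[l:] - pos[:-l]) ** 2, axis=1)) for l in lags])
    return lags, msd
```

Comparing the simulated MSD for alternating speeds against a walker with a constant intermediate speed reproduces qualitatively the enhanced spreading discussed in the abstract.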
Korrelation zwischen der genetischen und der funktionellen Diversität humaner Bitterrezeptoren
(2013)
Der Mensch besitzt ~25 funktionelle Bitterrezeptoren (TAS2R), die für die Wahrnehmung potenziell toxischer Substanzen in der Nahrung verantwortlich sind. Aufgrund der großen genetischen Variabilität der TAS2R-Gene könnte es eine Vielzahl funktionell unterschiedlicher TAS2R-Haplotypen geben, die zu Unterschieden in der Bitterwahrnehmung führen. Dies konnte bereits in funktionellen Analysen und sensorischen Studien für einzelne Bitterrezeptoren gezeigt werden. In dieser Arbeit wurden die häufigsten Haplotypen aller 25 Bitterrezeptoren verschiedener Ethnien funktionell charakterisiert. Das Ziel war, eine umfassende Aussage über die funktionelle Diversität der TAS2Rs, die die molekulare Grundlage für die individuelle Bitterwahrnehmung bildet, treffen zu können. Fehlende Varianten wurden aus genomischer DNA kloniert oder durch gezielte Mutagenese bereits vorhandener TAS2R-Konstrukte generiert. Die funktionelle Analyse erfolgte mittels Expression der TAS2R-Haplotypen in HEK293TG16gust44 Zellen und anschließenden Calcium-Imaging-Experimenten mit zwei bekannten Agonisten. Die Haplotypen der fünf orphanen TAS2Rs wurden mit über hundert Bitterstoffen stimuliert. Durch die gelungene Deorphanisierung des TAS2R41 in dieser Arbeit wurden für die 21 aktivierbaren TAS2Rs 36 funktionell unterschiedliche Haplotypen identifiziert. Die tatsächliche funktionelle Vielfalt blieb jedoch deutlich hinter der genetischen Variabilität der TAS2Rs zurück. Neun Bitterrezeptoren wiesen funktionell homogene Haplotypen auf oder besaßen nur eine weltweit vorherrschende Variante. Funktionell heterogene Haplotypen wurden für zwölf TAS2Rs identifiziert. Inaktive Varianten der Rezeptoren TAS2R9, TAS2R38 und TAS2R46 sollten die Wahrnehmung von Bitterstoffen wie Ofloxacin, Cnicin, Hydrocortison, Limonin, Parthenolid oder Strychnin beeinflussen. Unterschiedlich sensitive Varianten, besonders der Rezeptoren TAS2R47 und TAS2R49, sollten für Agonisten wie Absinthin, Amarogentin oder Cromolyn ebenfalls zu phänotypischen Unterschieden führen. Wie für den TAS2R16 bereits gezeigt, traten Haplotypen des funktionell heterogenen TAS2R7 und TAS2R41 ethnienspezifisch auf, was auf lokale Anpassung und verschiedene Phänotypen hinweisen könnte. Weiterführend muss nun eine Analyse der funktionell variablen TAS2Rs in sensorischen Tests erfolgen, um ihre phänotypische Relevanz zu prüfen. Die Analyse der funktionsmodulierenden Aminosäurepositionen, z. B. des TAS2R44, TAS2R47 oder TAS2R49, könnte darüber hinaus zum besseren Verständnis der Rezeptor-Ligand- und Rezeptor-G-Protein-Interaktion beitragen.
E-Learning Symposium 2012
(2013)
Dieser Tagungsband beinhaltet die auf dem E-Learning Symposium 2012 an der Universität Potsdam vorgestellten Beiträge zu aktuellen Anwendungen, innovativen Prozessen und neuesten Ergebnissen im Themenbereich E-Learning. Lehrende, E-Learning-Praktiker und -Entscheider tauschten ihr Wissen über etablierte und geplante Konzepte im Zusammenhang mit dem Student-Life-Cycle aus. Der Schwerpunkt lag hierbei auf der unmittelbaren Unterstützung von Lehr- und Lernprozessen sowie auf Präsentation, Aktivierung und Kooperation durch die Verwendung von neuen und etablierten Technologien.
Galaxy clusters are the largest known gravitationally bound objects; their study is important both for an intrinsic understanding of these systems and for an investigation of the large scale structure of the universe. The multi-component nature of galaxy clusters offers multiple observable signals across the electromagnetic spectrum. At X-ray wavelengths, galaxy clusters are simply identified as X-ray luminous, spatially extended, extragalactic sources. X-ray observations offer the most powerful technique for constructing cluster catalogues. The main advantages of X-ray cluster surveys are their excellent purity and completeness, and the fact that the X-ray observables are tightly correlated with mass, which is indeed the most fundamental parameter of clusters. In my thesis I have conducted the 2XMMi/SDSS galaxy cluster survey, which is a serendipitous search for galaxy clusters based on the X-ray extended sources in the XMM-Newton Serendipitous Source Catalogue (2XMMi-DR3). The main aims of the survey are to identify new X-ray galaxy clusters, investigate their X-ray scaling relations, identify distant cluster candidates, and study the correlation of the X-ray and optical properties. The survey is constrained to those extended sources that lie in the footprint of the Sloan Digital Sky Survey (SDSS) in order to identify the optical counterparts and to measure their redshifts, which are mandatory for measuring their physical properties. The overlap area between the XMM-Newton fields and the SDSS-DR7 imaging, the latest SDSS data release at the start of the survey, is 210 deg^2. The survey comprises 1180 X-ray cluster candidates with at least 80 background-subtracted photon counts, which passed the quality control process. To measure the optical redshifts of the X-ray cluster candidates, I used three procedures: (i) cross-matching these candidates with the most recent and largest optically selected cluster catalogues in the literature, which yielded photometric redshifts for about a quarter of the X-ray cluster candidates; (ii) I developed a finding algorithm to search for overdensities of galaxies at the positions of the X-ray cluster candidates in photometric redshift space and to measure their redshifts from the SDSS-DR8 data, which provided the photometric redshifts of 530 groups/clusters; (iii) I developed an algorithm to identify the cluster candidates associated with spectroscopically targeted Luminous Red Galaxies (LRGs) in the SDSS-DR9 and to measure the cluster spectroscopic redshift, which provided 324 groups and clusters with spectroscopic confirmation based on the spectroscopic redshift of at least one LRG. In total, the optically confirmed cluster sample comprises 574 groups and clusters with redshifts (0.03 ≤ z ≤ 0.77), which is the largest X-ray selected cluster catalogue to date based on observations from the current X-ray observatories (XMM-Newton, Chandra, Suzaku, and Swift/XRT). Among the cluster sample, about 75 percent are newly X-ray discovered groups/clusters and 40 percent are systems new to the literature. To determine the X-ray properties of the optically confirmed cluster sample, I reduced and analysed their X-ray data in an automated way following the standard pipelines for processing XMM-Newton data. In this analysis, I extracted the cluster spectra from EPIC (PN, MOS1, MOS2) images within an optimal aperture chosen to maximise the signal-to-noise ratio. 
The spectral fitting procedure provided the X-ray temperatures kT (0.5 - 7.5 keV) for 345 systems that have good-quality X-ray data. For the entire optically confirmed cluster sample, I measured the physical properties L500 (0.5 x 10^42 - 1.2 x 10^45 erg s^-1) and M500 (1.1 x 10^13 - 4.9 x 10^14 M⊙) from an iterative procedure using published scaling relations. The presently detected X-ray groups and clusters are in the low and intermediate luminosity regimes, apart from a few luminous systems, thanks to the XMM-Newton sensitivity and the available XMM-Newton deep fields. The optically confirmed cluster sample with measurements of redshift and X-ray properties can be used for various astrophysical applications. As a first application, I investigated the LX - T relation for the first time based on a large cluster sample of 345 systems with X-ray spectroscopic parameters drawn from a single survey. The current sample includes groups and clusters with wide ranges of redshifts, temperatures, and luminosities. The slope of the relation is consistent with the published ones of nearby clusters with higher temperatures and luminosities. The derived relation is still much steeper than that predicted by self-similar evolution. I also investigated the evolution of the slope and the scatter of the LX - T relation with the cluster redshift. After excluding the low luminosity groups, I found no significant changes of the slope and the intrinsic scatter of the relation with redshift when dividing the sample into three redshift bins. When including the low luminosity groups in the low redshift subsample, I found that its LX - T relation deviates from the relation of the intermediate and high redshift subsamples. As a second application of the optically confirmed cluster sample from our ongoing survey, I investigated the correlation between the cluster X-ray and optical parameters, which have been determined in a homogeneous way. Firstly, I investigated the correlations between the BCG properties (absolute magnitude and optical luminosity) and the cluster global properties (redshift and mass). Secondly, I computed the richness and the optical luminosity within R500 of a nearby subsample (z ≤ 0.42, with a complete membership detection from the SDSS data) with measured X-ray temperatures from our survey. The relation between the estimated optical luminosity and richness is also presented. Finally, the correlations between the cluster optical properties (richness and luminosity) and the cluster global properties (X-ray luminosity, temperature, mass) are investigated.
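The LX - T relation discussed above is a power law that is conventionally fitted in logarithmic space. The following is a minimal sketch of such a fit under assumed pivot values; it is not the survey pipeline, and the input arrays, pivots, and the use of an unweighted least-squares fit are illustrative assumptions.

```python
# Minimal sketch (not the survey code): fit a power-law LX-T relation,
# log10(LX / L0) = a + b * log10(T / T0), with ordinary least squares.
# L0 and T0 are hypothetical pivot values; real analyses typically use
# error-weighted or orthogonal regression.
import numpy as np


def fit_lx_t(lx_erg_s, t_kev, l0=1e44, t0=4.0):
    """Return the normalisation a and slope b of the LX-T power law."""
    x = np.log10(np.asarray(t_kev) / t0)
    y = np.log10(np.asarray(lx_erg_s) / l0)
    b, a = np.polyfit(x, y, 1)  # np.polyfit returns slope first, then intercept
    return a, b
```

A slope b of roughly 3, as typically found for observed samples, can then be compared with the self-similar expectation of 2, illustrating the "much steeper than self-similar" statement in the abstract.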
Portal Wissen = Grenzen
(2013)
Die neue Ausgabe des Potsdamer Forschungsmagazins widmet sich dem Thema „Grenzen“ aus unterschiedlichsten Perspektiven.
Als Sprachwissenschaftlerin denke ich bei diesem Stichwort an sprachliche Grenzen und die Wirkungen, die sich aus dem Kontakt von zwei Sprachen an einer Sprachgrenze ergeben können. Vielfältig sind die Belege für das sogenannte Code-Switching, den Wechsel von einer Sprache in die andere in einer bestimmten Äußerungssituation. Die Motive für einen solchen Sprachwechsel können ganz unterschiedlicher Natur sein: So lässt sich Code-Switching einerseits auf eine eingeschränkte sprachliche Kompetenz zurückführen, wenn beispielsweise einer Sprecherin ein bestimmtes Wort in der Zweitsprache fehlt, andererseits kann das Code-Switching prestigebedingt sein, wenn ein Sprecher durch einen sprachlichen Wechsel seine Zugehörigkeit zu einer bestimmten gesellschaftlichen Gruppe demonstrieren möchte.
Wenn Code-Switching nicht nur punktuell stattfindet, sondern ganze Sprachgemeinschaften über einen längeren Zeitraum erfasst, kann es zu weitreichenden Veränderungen der betroffenen Sprachen kommen. Welche Sprache „gibt“ und welche Sprache „nimmt“, hängt von sprachexternen Faktoren ab. So ist es ein Leichtes vorherzusagen, dass die deutschen Varietäten der Sprachinseln in Süd- und Osteuropa wie auch in Nord- und Südamerika wohl zunehmend Material und Muster aus den sie umgebenden Sprachen aufnehmen und letztendlich in ihnen aufgehen werden, wenn sie nicht durch politischen Willen konserviert werden. Das Ausmaß von Sprachkontakt ist mit der zunehmenden räumlichen Mobilität moderner Gesellschaften stark angestiegen und lässt sich sicher nicht auf den aktuell immer wieder thematisierten Sprachkontakt des Deutschen mit dem Englischen reduzieren. Historisch gesehen ist das Deutsche vor allem durch die romanischen Sprachen stark geprägt worden – in Potsdam denkt man hier unwillkürlich an den starken Einfluss des Französischen im 18. Jahrhundert.
Die Überwindung sprachlicher Grenzen zeigt sich auch im Alltag einer international ausgerichteten Forschungsuniversität: So hat im März dieses Jahres in Potsdam die Jahrestagung der deutschen Gesellschaft für Sprachwissenschaft mit über 500 Teilnehmern stattgefunden. Lingua Franca der Tagung war Englisch, was den Anteil der internationalen Teilnehmer gegenüber früheren Jahrestagungen nochmals vergrößert hat.
Zahlreiche andere Zugänge zum Thema Grenzen bieten die Beiträger dieser Ausgabe des Forschungsmagazins: Auf den Spuren von „Grenzvermessungen“ bewegen sich die Texte zum Australienforscher Ludwig Leichhardt oder zur Energiebilanz im Spitzensport. „Grenzgänger“ stehen im Fokus der Beiträge über eine Forschergruppe zur Literatur der kolonialen Karibik oder die tief in die Erde reichenden Forschungen eines italienischen Geologen. Auf der Suche nach dem „Grenzenlosen“ folgen die Autoren den Wissenschaftlern etwa zur Frage „Why love hurts?“ oder hinein in eine Geschichte des Musikhörens. Den umgekehrten Weg, nämlich „Grenzziehungen“, beobachtet „Portal Wissen“ in der Arbeit des Potsdamer MenschenRechtsZentrums oder den Auswertungen des Nationalen Dopingpräventionsplans. Belege für erfolgreiche „Grenzüberschreitungen“ liefern schließlich Blicke ins „Taschentuchlabor“ oder die digitale Edition mittelalterlicher Prosaepen, um nur einige Beiträge aus diesem Heft herauszugreifen.
Ich wünsche Ihnen bei der Lektüre anregende Grenzerfahrungen mit vielen Impulsen für eigene fachliche Grenzüberschreitungen.
Prof. Dr. Ulrike Demske
Professorin für Geschichte und Variation der Deutschen Sprache
Vizepräsidentin für Internationales, Alumni und Fundraising
Portal Wissen = Schichten
(2013)
Die neue Ausgabe unseres Potsdamer Forschungsmagazins widmet sich ganz und gar und auf sehr unterschiedliche Weise dem Thema „Schichten“.
Als Geowissenschaftler begegnen mir Schichten häufig: Boden-, Sediment- oder Gesteinsschichten – sie sind das Zeugnis lang anhaltender und immer wiederkehrender Erosions- und Ablagerungsprozesse, wie sie schon in der frühen Erdgeschichte stattfanden. Gebirge werden beispielsweise durch Wasser, Eis und Wind erodiert. Die Erosionsprodukte bilden vielleicht irgendwann auf dem Meeresgrund als Ablagerungshorizont eine neue Schicht. Umgekehrt führen Deformationsprozesse als Folge von tektonischen Plattenbewegungen dazu, dass Gebirge entstehen und der Mensch versteinerte Meeresbewohner in verfalteten Sedimentschichten im Hochgebirge findet – Beziehungen, wie sie bereits von Ibn Sina und später von Charles Darwin bei seiner Andenüberquerung beschrieben wurden. Aber auch die Landschaft, die wir bei einem Blick aus dem Fenster wahrnehmen, ist nichts anderes als das Produkt verschiedener Überlagerungen von Prozessen in der Vergangenheit und heute. Langsam ablaufende Prozesse oder seltener stattfindende Extremereignisse wie Fluten, Erdbeben oder Bergstürze – einzelne Merkmale werden dabei ausgelöscht, andere treten zutage. Ähnlich einem Palimpsest – einem Stück Pergament, das die Mönche im Mittelalter immer wieder abgeschabt und neu überschrieben haben. Die Analyse von Gesteins- und Bodenschichten gleicht der Arbeit eines Detektivs. Geophysikalische Tiefensondierungen mit Schall- und Radarwellen, die genaue Vermessung von Erdbebenherden oder Tiefbohrungen bringen uns verdeckte Erdschichten näher. Fossilienfunde und radiometrische Datierungen verraten das Alter einer Schicht. Mithilfe dünner Ascheschichten können wir nachweisen, wann verheerende Vulkanausbrüche Umweltbedingungen beeinflusst haben.
Böden, die Epidermis unseres Planeten, spiegeln die Eigenschaften der darunterliegenden Gesteinsschichten, der Vegetationsbedeckung oder den Einfluss des Klimas wider. Die Form, Sortierung und Oberflächenbeschaffenheit von Sandkörnern lassen uns erkennen, ob Wind oder Wasser für ihren Transport gesorgt haben. So wissen wir, dass Norddeutschland vor über 260 Millionen Jahren eine Wüstenlandschaft war, in der der Wind mächtige Dünen wandern ließ. Die mineralogische Untersuchung damit verbundener Schichten verrät, ob das Klima trocken oder feucht war. So dechiffrieren wir Hinweise auf vergangene Prozesse, die unter der Erdoberfläche versteckt sind oder – wie etwa in Gebirgen – offen zutage treten.
Auf den kommenden Seiten laden wir Sie ein, Potsdamer Wissenschaftler an die Orte ihrer Forschung zu begleiten: Im Tien Shan-Gebirge spüren sie längst vergangene Erdbeben auf, in Tiefseesedimenten entdecken sie uralte Lebensformen und im Weltall erforschen sie gar Schichten, die uns etwas über die Entstehung von Planeten verraten. Die Wissenschaftler der Universität Potsdam beschränken sich allerdings nicht auf die Schichtabfolgen der festen Erde. „Portal Wissen“ blickt auch jenen Wissenschaftlern über die Schulter, die sich mit „Bildungsschichten“ oder „Gesellschaftsschichten“ befassen. So erklären Forscher, wie der gesellschaftliche Auftrag der Inklusion in der Lehre umgesetzt wird oder wie Kreuzberger Schüler zusammen mit Potsdamer Studierenden Sprache im urbanen Raum erforschen.
So unterschiedlich sie sind, eines ist allen diesen „Schichten“ gemeinsam: Ihre Struktur und Form sind Zeugnis sich immer wieder verändernder Rahmenbedingungen. Auch die Gegenwart wird Spuren und Schichten hinterlassen, die zukünftige Erdwissenschaftler vermessen und untersuchen werden. Schon jetzt spricht man vom Anthropozän, einem vom Menschen dominierten geologischen Zeitabschnitt, charakterisiert durch tiefgreifende Änderungen in den Erosions- und Sedimentationsraten und der Verdrängung natürlicher Lebensräume.
Ich wünsche Ihnen, dass Sie in diesem Heft spannende und anregende Geschichten entdecken. Denn es lohnt sich, einen Blick unter die Oberfläche zu werfen.
Prof. Manfred Strecker, PhD.
Professor für Allgemeine Geologie
Portal Wissen = Layers
(2013)
The latest edition of our Potsdam Research Magazine “Portal Wissen” addresses the topic “Layers” in many different ways. Geoscientists often deal with layers: layers of soil, sediment, or rock are the evidence of repeated and long-lasting processes of erosion and sedimentation that took place in the early history of the earth. For instance, mountains are eroded by water, ice and wind. The sand that results from that erosion might eventually form a new layer on the ocean floor known as a sediment horizon. After tens of millions of years, tectonic plate movements can deform the ocean floor, pushing it upwards as mountains are created, bringing the layers of sand from former mountain chains together with fossilized sea dwellers into the realm of climbers and mountaineers – a fundamental cycle within the Earth system that was succinctly described by Ibn Sina nearly 1000 years ago, and later by Charles Darwin when he was crossing the Andes.
The landscape around us overlays the products of recent processes with those from the past. Slow processes or extreme events that happen very rarely – like floods, earthquakes or rockslides – wipe out certain characteristics, while others remain on the surface. In this sense, the landscape is like a palimpsest – a piece of parchment that monks in the Middle Ages scraped clean again and again to write something new. Analysing rock layers and soil is similar to the work of a detective. Geophysical deep sounding with sound and radar waves, precise measurements of motions related to earthquakes, and deep boreholes each provide a glimpse of the characteristics of what lies beneath us, giving us a better understanding of the spatial distribution of the various layers. Fossils can tell us the age of a layer of sediment, while radiometric isotopes in minerals reveal how quickly a rock moved from deep within the Earth up to the surface, perhaps during the process of mountain building. Thin layers of ash tell us when there was a devastating volcanic eruption that influenced environmental conditions. The shape, gradation, and surface conditions of sand grains reflect whether wind or water was responsible for their transport. We know, for instance, that northern Germany was a desert landscape more than 260 million years ago. At that time, the wind made huge dunes migrate across the region. Over time, climate and vegetation slowly alter the physical and chemical characteristics of sand and rock at the surface, turning them into soil, the epidermis of our planet. Mineralogical analyses of soil layers tell us whether the climate was dry or wet. These kinds of observations allow us to reconstruct links between our climate system and processes that have taken place on the Earth’s surface, as well as those processes that originate at much deeper levels. The clues we use might be hidden under the surface of the earth or clearly visible on the surface, like in the mountains, or even in freshly cut rock alongside roads.
On the following pages, we invite you to accompany scientists from Potsdam into their world of research. They track hidden traces of long-gone earthquakes in the Tien Shan Mountains; they discover ancient forms of life in deep-sea sediments. They even examine layers in outer space that can tell us something about the formation of planets. “Portal Wissen” not only presents scientists of the University of Potsdam who deal with the sequence of layers formed by solid rock, but also those scientists who deal with levels of education or social strata. Research scientists explain how to implement the social mission of inclusion in teaching, and how pupils from the Berlin district Kreuzberg examine language in urban neighbourhoods together with students from the University of Potsdam.
Although these types of “layers” are very different, they all have something in common. Their structure and profile are evidence of continuously changing conditions. The present will leave traces and layers that future geoscientists will measure and examine. We already speak of the Anthropocene, a geological era dominated by humans, which is characterized by far-reaching changes in erosion and sedimentation rates, and the displacement of natural habitats. I hope that you will discover exciting and inspiring stories in this edition. And remember – it is always worth having a look beneath the surface.
Prof. Manfred Strecker, PhD
Professor of Geology
Soziale Netzwerkanalyse für HumangeographInnen : Einführung in UCINET und NetDraw in fünf Schritten
(2013)
In den Sozialwissenschaften profiliert sich derzeit eine disziplinübergreifende Netzwerkforschung, die sich durch eine relationale Theorieperspektive auszeichnet. Die empirische Forschung greift dabei vermehrt auf das Methodenrepertoire der Social Network Analysis (SNA) zurück. Für die Humangeographie kann der soziologische Blick der Netzwerkforschung auf unterschiedliche Weise um „Raumbezüge“ ergänzt und in verschiedenen Forschungsfeldern zur Anwendung gebracht werden. Trotz ihres vielfältigen Potenzials nimmt die SNA in der geographischen Methodenausbildung in Deutschland bisher einen eher untergeordneten Stellenwert ein. Dieses Buch richtet sich an Studentinnen und Studenten, die sich für (humangeographische) empirische Netzwerkforschung interessieren und einen einfachen Einstieg suchen. Es führt verständlich in zentrale Fachbegriffe der SNA und die grundlegenden Funktionen der Analyse-Software UCINet ein. Von der Dateneingabe und -aufbereitung, über die Visualisierung bis hin zu netzwerkanalytischen Berechnungen werden die ersten Schritte vorgestellt und nachvollziehbar erläutert.
Introduction: Intestinal bacteria influence gut morphology by affecting epithelial cell proliferation, development of the lamina propria, villus length and crypt depth [1]. Gut microbiota-derived factors have been proposed to also play a role in the development of a 30 % longer intestine, which is characteristic of PRM/Alf mice compared to other mouse strains [2, 3]. Polyamines and SCFAs produced by gut bacteria are important growth factors, which possibly influence mucosal morphology, in particular villus length and crypt depth, and play a role in gut lengthening in the PRM/Alf mouse. However, experimental evidence is lacking. Aim: The objective of this work was to clarify the role of bacterially produced polyamines on crypt depth, mucosa thickness and epithelial cell proliferation. For this purpose, C3H mice associated with a simplified human microbiota (SIHUMI) were compared with mice colonized with SIHUMI complemented by the polyamine-producing Fusobacterium varium (SIHUMI + Fv). In addition, the microbial impact on gut lengthening in PRM/Alf mice was characterized and the contribution of SCFAs and polyamines to this phenotype was examined. Results: SIHUMI + Fv mice exhibited an up to 1.7-fold higher intestinal polyamine concentration compared to SIHUMI mice, which was mainly due to increased putrescine concentrations. However, no differences were observed in crypt depth, mucosa thickness and epithelial proliferation. In PRM/Alf mice, the intestine of conventional mice was 8.5 % longer compared to germfree mice. In contrast, intestinal lengths of C3H mice were similar, independent of the colonization status. The comparison of PRM/Alf and C3H mice, both associated with SIHUMI + Fv, demonstrated that PRM/Alf mice had a 35.9 % longer intestine than C3H mice. However, intestinal SCFA and polyamine concentrations of PRM/Alf mice were similar or even lower, except for N-acetylcadaverine, which was 3.1-fold higher in PRM/Alf mice. When germfree PRM/Alf mice were associated with a complex PRM/Alf microbiota, the intestine was one quarter longer compared to PRM/Alf mice colonized with a C3H microbiota. This gut elongation correlated with levels of the polyamine N-acetylspermine. Conclusion: The intestinal microbiota is able to influence intestinal length depending on microbial composition and on the mouse genotype. Although SCFAs do not contribute to gut elongation, an influence of the polyamines N-acetylcadaverine and N-acetylspermine is conceivable. In addition, the study clearly demonstrated that bacterial putrescine does not influence gut morphology in C3H mice.
In diesem Band „Essays zur Didaktik der Geographie“ werden in 8 Beiträgen ausgewählte Methoden für den Geographieunterricht behandelt. Praxiserfahrene Geographiedidaktiker(innen) und Geographielehrer(innen) haben die Themen Croquis/Chorèmes und Schemata, Einstiege, Planspiele, Rollenspiele, Scaffolding, Schulbücher, Spiele und Umweltbildung in pointierter Form aufbereitet. Die Inhalte dieses Buches richten sich an Studierende, Referendarinnen und Referendare sowie Lehrer(innen), die sich kompakt über einzelne Themen oder Stichworte informieren wollen. Die acht Beiträge ergänzen die 65 Beiträge im „Metzler Handbuch 2.0 Geographieunterricht“. Sie sind durch wechselseitige Querverweise systematisch miteinander verbunden.
Kultur gibt den Menschen eine Orientierung. Sie machen darin ganz spezifische Erfahrungen. Hieraus entwickeln sich auch motivationale Orientierungen. Dadurch werden andere Erfahrungen gemacht, und die Sportler können eine andere Motivation und Volition entwickeln. Dabei sind stärker kollektivistische Kulturen eher vermeidungsmotiviert und stärker individualistische Kulturen eher erfolgsorientiert. Beim Kollektivismus erscheint die Leistungsmotivation eher unter einem sozialen Aspekt, nämlich als Auseinandersetzung mit einem Gütemaßstab, der eher von außen vorgegeben wird und weniger ein ausschließlich eigener Maßstab ist. Ägypten erweist sich im Vergleich zu Deutschland als eine eher kollektivistisch geprägte Kultur. Daraus ergeben sich folgende Unterschiede: Einen signifikanten Unterschied zwischen deutschen und ägyptischen Ringern gibt es in der Wettkampforientierung und bei der Sieg- bzw. Gewinn-Orientierung. Die ägyptischen Ringer haben eine höhere Ausprägung als die deutschen. Sie weisen auch eine etwas höhere Zielorientierung auf als die deutschen Ringer. Entgegen den Erwartungen zeigte sich, dass es in der Variable Sieg- bzw. Gewinn-Orientierung keine signifikanten Unterschiede zwischen den ägyptischen und deutschen Ringern gibt. Die Furcht vor Misserfolg sowie auch die Hoffnung auf Erfolg liegen bei den Ägyptern höher als bei den Deutschen. Bezogen auf die Modi der Handlungskontrolle verfügen die deutschen Ringer über eine höhere Ausprägung bei allen drei Komponenten: Sie haben eine höhere Handlungsorientierung nach Misserfolg, eine höhere Handlungsplanung sowie eine höhere Handlungstätigkeitsausführung. Diese kulturell kontrastive Studie über die psychologischen Aspekte im Bereich der Leistungsmotivation und der Handlungskontrolle kann für die Sportart Ringen sehr nützlich sein, da sie wichtig ist, um sportliche Überlegenheits- und Schwächemerkmale zu erkennen. Sie spiegelt auch die Hochstimmung in den entwickelten Staaten oder die Misere in den anderen Staaten wider. Aus den interkulturellen Unterschieden in der Motivation und Volition können somit verschiedene Maßnahmen zu sportpsychologischen Interventionen entwickelt werden. Es sollte unbedingt Wert darauf gelegt werden, dass die kulturell bedingten Unterschiede im Trainingsalltag beachtet werden, insbesondere bei Teams, deren Mitglieder aus unterschiedlichen Kulturkreisen stammen.
The surface heat flow (qs) is paramount for modeling the thermal structure of the lithosphere. Changes in qs over a distinct lithospheric unit normally reflect directly changes in the crustal composition and thus in the radiogenic heat budget (e.g., Rudnick et al., 1998; Förster and Förster, 2000; Mareschal and Jaupart, 2004; Perry et al., 2006; Hasterok and Chapman, 2011, and references therein) or, less commonly, changes in the mantle heat flow (e.g., Pollack and Chapman, 1977). Knowledge of this physical property is therefore of great interest for both academic research and the energy industry. The present study focuses on the qs of central and southern Israel as part of the Sinai Microplate (SM). Having formed during Oligocene to Miocene rifting and break-up of the African and Arabian plates, the SM is characterized by a young and complex tectonic history. Because thermal diffusion needs on the order of several tens of millions of years to pass through the lithosphere (e.g., Fowler, 1990), qs values of the area reflect conditions of pre-Oligocene times. The thermal structure of the lithosphere beneath the SM in general, and south-central Israel in particular, has remained poorly understood. To address this problem, the two parameters needed for the qs determination were investigated. Temperature measurements were made at ten pre-existing oil and water exploration wells, and the thermal conductivity of 240 drill core and outcrop samples was measured in the lab. The thermal conductivity is the sensitive parameter in this determination. Lab measurements were performed on both dry and water-saturated samples, which is labor- and time-consuming. Another possibility is the measurement of thermal conductivity in the dry state and its conversion to a saturated value by using mean-model approaches. The availability of a voluminous and diverse dataset of thermal conductivity values in this study allowed (1), in connection with the temperature gradient, the calculation of new reliable qs values and their use in modeling the thermal pattern of the crust in south-central Israel prior to the young tectonic events, and (2), in connection with comparable datasets, an assessment of the quality of different mean-model approaches for the indirect determination of the bulk thermal conductivity (BTC) of rocks. The reliability of numerically derived BTC values appears to vary between different mean models and is also strongly dependent upon sample lithology. Yet, correction algorithms may significantly reduce the mismatch between measured and calculated conductivity values based on the different mean models. Furthermore, the dataset allowed the derivation of lithotype-specific conversion equations to calculate the water-saturated BTC directly from data of dry-measured BTC and porosity (e.g., well-log-derived porosity) without the use of any mean model, thus providing a suitable tool for the fast analysis of large datasets. The results of the study indicate that the qs in the study area is significantly higher than previously assumed. The newly presented qs values range between 50 and 62 mW m⁻². A weak trend of decreasing heat flow can be identified from east to west (55-50 mW m⁻²), and an increase from the Dead Sea Basin to the south (55-62 mW m⁻²). The observed range can be explained by variations in the composition (heat production) of the upper crust, accompanied by more systematic spatial changes in its thickness. 
The new qs data can then be used, in conjunction with petrophysical data and information on the structure and composition of the lithosphere, to adjust a model of the pre-Oligocene thermal state of the crust in south-central Israel. The 2-D steady-state temperature model was calculated along an E-W traverse based on the DESIRE seismic profile (Mechie et al., 2009). The model comprises the entire lithosphere down to the lithosphere-asthenosphere boundary (LAB), incorporating the most recent knowledge of the lithosphere in pre-Oligocene time, i.e., prior to the onset of rifting and plume-related lithospheric thermal perturbations. The adjustment of modeled to measured qs allows conclusions about the pre-Oligocene LAB depth. After the best fit, the most likely depth is 150 km, which is consistent with estimates made in comparable regions of the Arabian Shield. It therefore constitutes the first modelled pre-Oligocene LAB depth and provides important clues on the thermal state of the lithosphere before rifting. This, in turn, is vital for a better understanding of the (thermo-)dynamic processes associated with lithosphere extension and continental break-up.
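The two quantities combined in the qs determination, thermal conductivity and temperature gradient, are related by Fourier's law, and the dry-to-saturated conductivity conversion is commonly done with a mean model. The sketch below illustrates both steps; it is a minimal example with placeholder values, not the study's workflow, and the geometric mean model is only one of the mean-model approaches the thesis compares.

```python
# Minimal sketch (illustrative): surface heat flow from Fourier's law,
# q_s = lambda * dT/dz, and a geometric-mean estimate of the water-saturated
# bulk thermal conductivity. Numerical values are placeholders, not study data.
def surface_heat_flow(thermal_conductivity_w_mk, temp_gradient_k_per_km):
    """Return q_s in mW/m^2 for conductivity in W/(m K) and gradient in K/km."""
    # 1 K/km = 1e-3 K/m, and 1 W/m^2 = 1000 mW/m^2, so the factors cancel.
    return thermal_conductivity_w_mk * temp_gradient_k_per_km


def saturated_conductivity_geometric(matrix_conductivity, porosity,
                                     water_conductivity=0.6):
    """Geometric mean model: lambda_sat = lambda_matrix^(1-phi) * lambda_water^phi."""
    return matrix_conductivity ** (1.0 - porosity) * water_conductivity ** porosity


# Example: lambda = 2.5 W/(m K) and a gradient of 22 K/km give q_s = 55 mW/m^2,
# which falls within the 50-62 mW/m^2 range reported above.
print(surface_heat_flow(2.5, 22.0))
```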
With the present theoretical study of the photochemical switching of E-methylfurylfulgide we contribute an important step towards the understanding of the photochemical processes in furylfulgide-related molecules. We have carried out large-scale, full-dimensional direct semiempirical configuration-interaction surface-hopping dynamics of the photoinduced ring-closure reaction. Simulated static and dynamical UV/Vis-spectra show good agreement with experimental data of the same molecule. By a careful investigation of our dynamical data, we were able to identify marked differences to the dynamics of the previously studied E-isopropylfurylfulgide. With our simulations we can not only reproduce the experimentally observed quantum yield differences qualitatively but we can also pinpoint two reasons for them: kinematics and pre-orientation. With our analysis, we thus offer straightforward molecular explanations for the high sensitivity of the photodynamics towards seemingly minor changes in molecular constitution. Beyond the realm of furylfulgides, these insights provide additional guidance to the rational design of photochemically switchable molecules.
Die Modelle der räumlichen Preistheorie sind über einen langen Zeitraum entwickelt worden und mit bekannten Namen wie Wilhelm Launhardt und August Lösch verbunden. Diese Ansätze versuchen, der räumlichen Dimension des Preisbildungsprozesses auf Märkten in partialanalytischen Modellen Rechnung zu tragen. Im Buch werden Monopole, monopolistische Konkurrenz und internationaler Handel diskutiert. Dabei hat der Leser die Möglichkeit, sich über die Standardmodelle hinaus mit komplexeren Strukturen vertraut zu machen.
Background
Natural accessions of Arabidopsis thaliana are a well-known system to measure levels of intraspecific genetic variation. Leaf starch content correlates negatively with biomass. Starch is synthesized by the coordinated action of many (iso)enzymes. Quantitatively dominant is the repetitive transfer of glucosyl residues to the non-reducing ends of α-glucans as mediated by starch synthases. In the genome of A. thaliana, there are five classes of starch synthases, designated as soluble starch synthases (SSI, SSII, SSIII, and SSIV) and granule-bound starch synthase (GBSS). Each class is represented by a single gene. The five genes are homologous in functional domains due to their common origin, but have evolved individual features as well. Here, we analyze the extent of genetic variation in these fundamental protein classes as well as possible functional implications on transcript and protein levels.
Findings
Intraspecific sequence variation of the five starch synthases was determined by sequencing the entire loci, including promoter regions, from 30 worldwide distributed accessions of A. thaliana. In all genes, a considerable number of nucleotide polymorphisms was observed, both in non-coding and coding regions, and several amino acid substitutions were identified in functional domains. Furthermore, the promoters possess numerous polymorphisms in potentially regulatory cis-acting regions. By real-time experiments performed with selected accessions, we demonstrate that DNA sequence divergence correlates with significant differences in transcript levels.
Conclusions
Except for AtSSII, all starch synthase classes clustered into two or three groups of haplotypes. The significant difference in transcript levels among haplotype clusters of AtSSIV provides evidence for cis-regulation. By contrast, no such correlation was found for AtSSI, AtSSII, AtSSIII, and AtGBSS, suggesting trans-regulation. The expression data presented here point to a regulation by common trans-regulatory transcription factors, which ensures a coordinated action of the products of these four genes during starch granule biosynthesis. The apparent cis-regulation of AtSSIV might be related to its role in the initiation of de novo biosynthesis of granules.
Cost models are an essential part of database systems, as they are the basis of query performance optimization. Based on predictions made by cost models, the fastest query execution plan can be chosen and executed, or algorithms can be tuned and optimised. In-memory databases shift the focus from disk to main memory accesses and CPU costs, compared to disk-based systems, where input and output costs dominate the overall costs and other processing costs are often neglected. However, modelling memory accesses is fundamentally different, and common models no longer apply. This work presents a detailed parameter evaluation for the plan operators scan with equality selection, scan with range selection, positional lookup and insert in in-memory column stores. Based on this evaluation, a cost model based on cache misses for estimating the runtime of the considered plan operators using different data structures is developed. Considered are uncompressed columns, bit-compressed and dictionary-encoded columns with sorted and unsorted dictionaries. Furthermore, tree indices on the columns and dictionaries are discussed. Finally, partitioned columns consisting of one partition with a sorted and one with an unsorted dictionary are investigated. New values are inserted into the unsorted dictionary partition and moved periodically by a merge process to the sorted partition. An efficient attribute merge algorithm is described, supporting the update performance required to run enterprise applications on read-optimised databases. Further, a memory-traffic-based cost model for the merge process is provided.
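A cache-miss-based cost model of the kind described above can be illustrated with a tiny sketch. This is an assumed, simplified formulation for a sequential scan with equality selection, not the model developed in the work; cache-line size and miss latency are placeholder constants, and prefetching, TLB misses and CPU costs are deliberately ignored.

```python
# Minimal sketch (illustrative assumption, not the thesis model): cost of a
# full-column scan with equality selection estimated from cache misses,
#   cost ≈ misses * miss_latency,  misses ≈ ceil(column_bytes / cache_line).
import math

CACHE_LINE_BYTES = 64   # typical x86 cache line size
MISS_LATENCY_NS = 100   # assumed main-memory access latency per miss


def scan_equality_cost_ns(num_rows, bits_per_value):
    """Sequential scan over a bit-packed column, cost driven by cache misses."""
    column_bytes = num_rows * bits_per_value / 8
    cache_misses = math.ceil(column_bytes / CACHE_LINE_BYTES)
    return cache_misses * MISS_LATENCY_NS


# Example: 10 million rows with 12-bit dictionary-encoded values.
print(scan_equality_cost_ns(10_000_000, 12) / 1e6, "ms (rough upper bound)")
```

Because hardware prefetchers hide much of the latency of strictly sequential access, such a back-of-the-envelope number is best read as an upper bound; the appeal of the approach is that the same miss-counting logic extends to range scans, positional lookups and inserts on the different column layouts listed above.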
Under standard conditions, the cross metathesis of allyl alcohols and methyl acrylate is accompanied by the formation of ketones resulting from uncontrolled and undesired double bond isomerization. By conducting the CM in the presence of phenol, the catalyst loading and the reaction time required for quantitative conversion can be reduced, and isomerization can be suppressed. On the other hand, consecutive isomerization can be deliberately promoted by evaporating excess methyl acrylate after completion of the cross metathesis and by adding a base or silane as a chemical trigger.
4-Phenol diazonium salts undergo Pd-catalyzed Heck reactions with various styrenes to 4’-hydroxy stilbenes. In almost all cases higher yields and fewer side products were observed, compared to the analogous 4-methoxy benzene diazonium salts. In contrast, the reaction fails completely with 2- and 3-phenol diazonium salts. For these substitution patterns the methoxy-substituted derivatives are superior.
The main intention of the PhD project was to create a varve chronology for the 'Suigetsu Varves 2006' (SG06) composite profile from Lake Suigetsu (Japan) by thin section microscopy. The chronology was not only to provide an age-scale for the various palaeo-environmental proxies analysed within the SG06 project, but first and foremost to contribute, in combination with the SG06 14C chronology, to the international atmospheric radiocarbon calibration curve (IntCal). The SG06 14C data are based on terrestrial leaf fossils and therefore record atmospheric 14C values directly, avoiding the corrections necessary for the reservoir ages of the marine datasets, which are currently used beyond the tree-ring limit in the IntCal09 dataset (Reimer et al., 2009). The SG06 project is a follow-up to the SG93 project (Kitagawa & van der Plicht, 2000), which aimed to produce an atmospheric calibration dataset, too, but suffered from incomplete core recovery and varve count uncertainties. For the SG06 project the complete Lake Suigetsu sediment sequence was recovered continuously, leaving the task of producing an improved varve count. Varve counting was carried out using a dual-method approach utilizing thin section microscopy and micro X-ray fluorescence (µXRF). The latter was carried out by Dr. Michael Marshall in cooperation with the PhD candidate. The varve count covers 19 m of composite core, which corresponds to the time frame from ≈10 to ≈40 kyr BP. The count result showed that seasonal layers did not form in every year. Hence, the varve counts from either method were incomplete. This rather common problem in varve counting is usually solved by manual varve interpolation. But manual interpolation often suffers from subjectivity. Furthermore, sedimentation rate estimates (which are the basis for interpolation) are generally derived from neighbouring, well varved intervals. This assumes that the sedimentation rates in neighbouring intervals are identical to those in the incompletely varved section, which is not necessarily true. To overcome these problems, a novel interpolation method was devised. It is computer-based and automated (i.e. it avoids subjectivity and ensures reproducibility) and derives the sedimentation rate estimate directly from the incompletely varved interval by statistically analysing the distances between successive seasonal layers. Therefore, the interpolation approach is also suitable for sediments which do not contain well varved intervals. Another benefit of the novel method is that it provides objective interpolation error estimates. Interpolation results from the two counting methods were combined and the resulting chronology compared to the 14C chronology from Lake Suigetsu, calibrated with the tree-ring-derived section of IntCal09 (which is considered accurate). The varve and 14C chronologies showed a high degree of similarity, demonstrating that the novel interpolation method produces reliable results. In order to constrain the uncertainties of the varve chronology, especially the cumulative error estimates, U-Th-dated speleothem data were used by linking the low-frequency 14C signal of Lake Suigetsu and the speleothems, increasing the accuracy and precision of the Suigetsu calibration dataset. The resulting chronology also represents the age-scale for the various palaeo-environmental proxies analysed in the SG06 project. One proxy analysed within the PhD project was the distribution of event layers, which are often representatives of past floods or earthquakes. 
A detailed microfacies analysis revealed three different types of event layers, two of which are described here for the first time for the Suigetsu sediment. The types are: matrix-supported layers produced as a result of subaqueous slope failures, turbidites produced as a result of landslides, and turbidites produced as a result of flood events. The former two are likely to have been triggered by earthquakes. The vast majority of event layers was related to floods (362 out of 369), which allowed the construction of a respective chronology for the last 40 kyr. Flood frequencies were highly variable, reaching their greatest values during the global sea level low-stand of the Glacial and their lowest values during Heinrich Event 1. Typhoons affecting the region represent the most likely control on the flood frequency, especially during the Glacial. However, the data also suggest local, non-climatic controls. In summary, the work presented here expands and revises knowledge on the Lake Suigetsu sediment and enables the construction of a far more precise varve chronology. The 14C calibration dataset is the first such dataset derived from lacustrine sediments to be included in the (next) IntCal dataset. References: Kitagawa & van der Plicht, 2000, Radiocarbon, Vol. 42(3), 370-381; Reimer et al., 2009, Radiocarbon, Vol. 51(4), 1111-1150.
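The automated interpolation idea described above, deriving the sedimentation rate from the spacing statistics of detected seasonal layers within the incompletely varved interval itself, can be sketched as follows. This is one possible, heavily simplified reading of the approach, not the thesis algorithm; the median-spacing estimator and the crude uncertainty measure are assumptions made purely for illustration.

```python
# Minimal sketch (assumed simplification, not the SG06 algorithm): estimate the
# local varve thickness from the distances between successive detected seasonal
# layers, infer how many varves are missing in each gap, and attach a crude
# counting-error estimate based on the spread of the spacing distribution.
import numpy as np


def interpolate_missing_varves(layer_depths_mm):
    """layer_depths_mm: sorted depths (mm) of detected seasonal layers."""
    spacing = np.diff(np.asarray(layer_depths_mm, dtype=float))
    typical = np.median(spacing)                    # local varve thickness estimate
    missing = np.round(spacing / typical).astype(int) - 1
    missing[missing < 0] = 0
    counted = len(layer_depths_mm) - 1              # directly counted varves
    total = counted + int(missing.sum())
    # crude uncertainty: each interpolated varve is uncertain by the relative
    # spread of the observed spacings
    rel_spread = spacing.std() / typical if typical > 0 else 0.0
    error = int(np.ceil(missing.sum() * rel_spread))
    return total, error
```

In practice such an estimate would be computed in moving windows so that the "typical" spacing tracks local changes in sedimentation rate, which is the key difference to interpolating with rates borrowed from neighbouring well-varved intervals.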
In Germany, active bat rabies surveillance was conducted between 1993 and 2012. A total of 4546 oropharyngeal swab samples from 18 bat species were screened for the presence of EBLV-1-, EBLV-2- and BBLV-specific RNA. Overall, 0.15% of oropharyngeal swab samples tested EBLV-1 positive, with the majority originating from Eptesicus serotinus. Interestingly, out of seven RT-PCR-positive oropharyngeal swabs subjected to virus isolation, viable virus was isolated from a single serotine bat (E. serotinus). Additionally, about 1226 blood samples were tested serologically, and varying virus-neutralizing antibody titres were found in at least eight different bat species. The detection of viral RNA and seroconversion in repeatedly sampled serotine bats indicates long-term circulation of the virus in a particular bat colony. The limitations of random-based active bat rabies surveillance compared with passive bat rabies surveillance, and the possible application of targeted approaches for future research activities on bat lyssavirus dynamics and maintenance, are discussed.
Low molecular weight organic acids (LMWOAs) are important nutrients for microbes. However, most LMWOAs do not exist freely in the environment but are bound to macromolecular organic matter, e.g. kerogen, lignite and coal. During burial and geological maturation of sedimentary macromolecular organic matter, biological and abiological processes promote the liberation of LMWOAs into the surrounding sediment. Through this process, microbes in sedimentary subsurface environments are supplied with essential nutrients. To estimate the feedstock potential of buried macromolecular organic matter for many environments, it is important to determine the amount of LMWOAs that are bound to such a matrix. However, high pressure and high temperature are key features of deep subsurface environments, and these physical parameters have a profound influence on chemical reaction kinetics. For the estimation of the feedstock potential it is therefore essential to apply high pressure and high temperature during the liberation of LMWOAs in order to recreate true in-situ conditions. This work presents a newly developed, inexpensive incubation system for biological and geological samples. It allows the application of high pressure and high temperature as well as subsampling of the liquid phase without loss of pressure, thereby not disturbing the ongoing processes. When simulating the liberation of LMWOAs from sedimentary organic matter, the newly developed incubation system produces more realistic results than other extraction systems like Soxhlet. The extraction products remain in the extraction medium throughout the extraction, influencing the chemical conditions of the extraction medium. Sub-bituminous coal samples from New Zealand as well as lignite samples from Germany were extracted at elevated temperature (90 °C) and pressure (5 MPa). The main LMWOAs released from these low rank coals were formate, acetate and oxalate. Extraction efficiency was increased by two to four times for formate, acetate and oxalate in comparison to existing extraction methods without pressurisation and with demineralised water. This shows the importance of pressure for the simulation of true in-situ conditions and suggests that the amount of bioavailable LMWOAs is higher than previously thought. With the increase in carbon capture and storage (CCS) and the enhanced recovery of oil and gas (EOR/EGR), more and more CO2 is being injected into the underground. However, the effects of elevated concentrations of carbon dioxide on sedimentary organic matter have rarely been investigated. As the incubation system allows the manipulation of the composition and partial pressure of dissolved gases, the effect of highly gas-enriched (CO2, CO2/SO2, CO2/NO2; to simulate flue gas conditions) waters on the extraction yield of LMWOAs from macromolecular organic matter was evaluated. For sub-bituminous coal the concentrations of all LMWOAs decreased upon the addition of gas, irrespective of its composition, whereas for lignite formate always increased and acetate mostly increased, while oxalate decreased. This suggests a positive effect on the nutrient supply for the subsurface microbiota of lignite layers, as formate and acetate are the LMWOAs most commonly used for microbial metabolism. In terrestrial mud volcanoes (TMVs), sedimentary material ascends rapidly from great depth to the surface. 
Therefore, LMWOAs that were produced from buried macromolecular organic matter at depth are also brought up to the surface and fuel heterotrophic microbial ecosystems there. TMVs represent geochemically and microbiologically diverse habitats, which are supplied with organic substrates and electron acceptors from deep-seated hydrocarbon-generating systems and intersected shallow aquifers, respectively. The main electron acceptor in TMVs in Azerbaijan is sulphate, and microbial sulphate reduction leads to the production of a wide range of reduced sulphur species that are key players in several biological processes. In our study we estimated the effect of LMWOAs on the sulphur-metabolising activity of microorganisms in TMVs from Azerbaijan. The addition of a mixture of volatile fatty acids containing acetate and other LMWOAs led to a significant positive response of the sulphate reduction rate (SRR) in samples from several mud volcanoes. Further investigations of the temperature dependency of the SRR and the characterisation of thermophilic sulphate-reducing bacteria (SRB) showed a connection between the deep hot subsurface and the surface.
In this work, thermosensitive hydrogels having tunable thermo-mechanical properties were synthesized. Generally, the thermal transition of thermosensitive hydrogels is based on either a lower critical solution temperature (LCST) or a critical micelle concentration/temperature (CMC/CMT). The temperature-dependent transition from sol to gel with a large volume change may be seen in the former type of thermosensitive hydrogels and is negligible in CMC/CMT-dependent systems. The change in volume leads to the exclusion of water molecules, resulting in shrinking and stiffening of the system above the transition temperature. The volume change can be undesired when cells are to be incorporated in the system. The gelation in the latter case is mainly driven by micelle formation above the transition temperature and further colloidal packing of micelles around the gelation temperature. As the gelation mainly depends on the concentration of the polymer, such a system can undergo fast dissolution upon addition of solvent. Here, it was envisioned to realize a thermosensitive gel based on two components: one responsible for a change in mechanical properties by formation of reversible netpoints upon heating without volume change, and a second component conferring degradability on demand. As the first component, an ABA triblock copolymer (here: poly(ethylene glycol)-b-poly(propylene glycol)-b-poly(ethylene glycol), PEPE) with thermosensitive properties, whose sol-gel transition on the molecular level is based on micellization and colloidal jamming of the formed micelles, was chosen, while biopolymers were employed as the additional macromolecular component crosslinking the formed micelles. The synthesis of the hydrogels was performed in two ways, either by physical mixing of compounds showing electrostatic interactions, or by covalent coupling of the components. Biopolymers (here: the polysaccharides hyaluronic acid, chondroitin sulphate, or pectin, as well as the protein gelatin) were employed as additional macromolecular crosslinkers to simultaneously incorporate an enzyme responsiveness into the systems. In order to have strong ionic/electrostatic interactions between PEPE and the polysaccharides, PEPE was aminated to yield predominantly mono- or di-substituted PEPEs. The systems based on aminated PEPE physically mixed with HA showed an enhancement in the mechanical properties, such as the elastic modulus (G′) and viscous modulus (G′′), and a decrease of the gelation temperature (Tgel) compared to PEPE at the same concentration. Furthermore, by varying the amount of aminated PEPE in the composition, the Tgel of the system could be tailored to 27-36 °C. The physical mixtures of HA with di-amino PEPE (HA·di-PEPE) showed higher elastic moduli G′ and higher stability towards dissolution compared to the physical mixtures of HA with mono-amino PEPE (HA·mono-PEPE). This indicates a strong influence of the electrostatic interaction between the –COOH groups of HA and the –NH2 groups of PEPE. The physical properties of HA with di-amino PEPE (HA·di-PEPE) compare beneficially with the physical properties of the human vitreous body; the systems are highly transparent and have a comparable refractive index and viscosity. Therefore, this material was tested for a potential biological application and was shown to be non-cytotoxic in eluate and direct contact tests. The materials will be investigated in further studies as vitreous body substitutes. 
In addition, enzymatic degradation of these hydrogels was performed using hyaluronidase to specifically degrade the HA. During the degradation of these hydrogels, an increase in the Tgel was observed along with a decrease in the mechanical properties. The aminated PEPEs were further utilised in the covalent coupling to pectin and chondroitin sulphate using EDC as a coupling agent. Here, it was possible to adjust the Tgel (28-33 °C) by varying the grafting density of PEPE on the biopolymer. The grafting of PEPE to pectin enhanced the thermal stability of the hydrogel. The Pec-g-PEPE hydrogels were degradable by enzymes, with a slight increase in Tgel and a decrease in G′ during the degradation time. The covalent coupling of aminated PEPE to HA was performed with DMTMM as a coupling agent. This method of coupling was observed to be more efficient than EDC-mediated coupling. Moreover, the purification of the final product was performed by ultrafiltration, which efficiently removed the unreacted PEPE from the final product, something that was not sufficiently achieved by dialysis. Interestingly, the final products of these reactions were in a gel state and showed an enhancement in mechanical properties at very low concentrations (2.5 wt%) near body temperature. In these hydrogels the resulting increase in mechanical properties was due to the combined effect of micelle packing (physical interactions) by PEPE and covalent netpoints between PEPE and HA. PEPE alone or physical mixtures of the same components were not able to show thermosensitive behavior at concentrations below 16 wt%. These thermosensitive hydrogels also showed on-demand solubilisation by enzymatic degradation. The concept of thermosensitivity was introduced to 3D architectured porous hydrogels by covalently grafting the PEPE to gelatin and crosslinking with LDI. Here, the grafted PEPE resulted in a decrease in helix formation in the gelatin chains, and after fixing the gelatin chains by crosslinking, the system showed an enhancement in the mechanical properties upon heating (34-42 °C) which was reversible upon cooling. A possible explanation of the reversible changes in mechanical properties is the strong physical interactions between micelles formed by PEPE being covalently linked to gelatin. Above the transition temperature, the local properties were evaluated by AFM indentation of pore walls, in which an increase in the elastic modulus (E) at higher temperature (37 °C) was observed. The water uptake of these thermosensitive architectured porous hydrogels was also influenced by PEPE and temperature (25 °C and 37 °C), showing lower water uptake at higher temperature and vice versa. In addition, due to the lower water uptake at high temperature, the rate of hydrolytic degradation of these systems was found to be decreased when compared to pure gelatin architectured porous hydrogels. Such temperature-sensitive architectured porous hydrogels could be important, e.g., for stem cell culturing, cell differentiation and guided cell migration. Altogether, it was possible to demonstrate that the crosslinking of micelles by a macromolecular crosslinker increased the shear moduli, viscosity, and stability towards dissolution of CMC-based gels. This effect could likewise be realized by covalent or non-covalent mechanisms such as micelle interactions, physical interactions of gelatin chains and physical interactions between gelatin chains and micelles.
Moreover, the covalent grafting of PEPE creates additional net-points which also influence the mechanical properties of thermosensitive architectured porous hydrogels. Overall, the chemical crosslinks and reversible physical interactions in such thermosensitive architectured porous hydrogels gave control over the mechanical properties of this complex system. Hydrogels showing a change of mechanical properties without a sol-gel transition or volume change are especially interesting for further studies of cell proliferation and differentiation.
Famously, Einstein read off the geometry of spacetime from Maxwell's equations. Today, we take this geometry so seriously that our fundamental theory of matter, the standard model of particle physics, is based on it. However, there seems to be a gap in our understanding when it comes to the physics outside of the solar system. Independent surveys show that we need concepts like dark matter and dark energy to make our models fit the observations, but these concepts do not fit into the standard model of particle physics. To overcome this problem we have to be open, at the very least, to matter fields with kinematics and dynamics beyond the standard model. Such matter fields might then very well correspond to different spacetime geometries. This is the basis of this thesis: it studies the underlying spacetime geometries and ventures into the quantization of those matter fields independently of any background geometry. In the first part of this thesis, conditions are identified that a general tensorial geometry must fulfill to serve as a viable spacetime structure. Kinematics of massless and massive point particles on such geometries are introduced and the physical implications are investigated. Additionally, field equations for massive matter fields are constructed, such as a modified Dirac equation. In the second part, a background-independent formulation of quantum field theory, the general boundary formulation, is reviewed. The general boundary formulation is then applied to the Unruh effect as a testing ground, and first attempts are made to quantize massive matter fields on tensorial spacetimes.
In soils and sediments there is a strong coupling between local biogeochemical processes and the distribution of water, electron acceptors, acids and nutrients. Both sides are closely related and affect each other from the small scale to larger scales. Soil structures such as aggregates, roots, layers or macropores enhance the patchiness of these distributions. At the same time it is difficult to access the spatial distribution and temporal dynamics of these parameters. Non-invasive imaging techniques with high spatial and temporal resolution overcome these limitations, and new techniques of this kind are needed to study the dynamic interaction of plant roots with the surrounding soil, as well as the complex physical and chemical processes in structured soils. In this study we developed an efficient non-destructive in-situ method to determine biogeochemical parameters relevant to plant roots growing in soil: a quantitative fluorescence imaging method suitable for visualizing the spatial and temporal pH changes around roots. We adapted the fluorescence imaging set-up and coupled it with neutron radiography to study simultaneously root growth, oxygen depletion by respiration activity and root water uptake. The combined set-up was subsequently applied to a structured soil system to map the patchy structure of oxic and anoxic zones induced by a chemical oxygen consumption reaction for spatially varying water contents. Moreover, results from a similar fluorescence imaging technique for nitrate detection were complemented by a numerical modeling study in which we used the imaging data to simulate biodegradation under anaerobic, nitrate-reducing conditions.
Diet is a major force influencing the intestinal microbiota. This is obvious from drastic changes in microbiota composition after a dietary alteration. Due to the complexity of the commensal microbiota and the high inter-individual variability, little is known about the bacterial response at the cellular level. The objective of this work was to identify mechanisms that enable gut bacteria to adapt to dietary factors. For this purpose, germ-free mice monoassociated with the commensal Escherichia coli K-12 strain MG1655 were fed three different diets over three weeks: a diet rich in starch, a diet rich in non-digestible lactose and a diet rich in casein. Two-dimensional gel electrophoresis and electrospray tandem mass spectrometry were applied to identify differentially expressed proteins of E. coli recovered from small intestine and caecum of mice fed the lactose or casein diets in comparison with those of mice fed the starch diet. Selected differentially expressed bacterial proteins were characterised in vitro for their possible roles in bacterial adaptation to the various diets. Proteins belonging to the oxidative stress regulon oxyR such as alkyl hydroperoxide reductase subunit F (AhpF), DNA protection during starvation protein (Dps) and ferric uptake regulatory protein (Fur), which are required for E. coli’s oxidative stress response, were upregulated in E. coli of mice fed the lactose-rich diet. Reporter gene analysis revealed that not only oxidative stress but also carbohydrate-induced osmotic stress led to the OxyR-dependent expression of ahpCF and dps. Moreover, the growth of E. coli mutants lacking the ahpCF or oxyR genes was impaired in the presence of non-digestible sucrose. This indicates that some OxyR-dependent proteins are crucial for the adaptation of E. coli to osmotic stress conditions. In addition, the function of two so far poorly characterised E. coli proteins was analysed: 2-deoxy-D-gluconate 3-dehydrogenase (KduD) was upregulated in intestinal E. coli of mice fed the lactose-rich diet, and this enzyme and 5-keto-4-deoxyuronate isomerase (KduI) were downregulated on the casein-rich diet. Reporter gene analysis identified galacturonate and glucuronate as inducers of the kduD and kduI gene expression. Moreover, KduI was shown to facilitate the breakdown of these hexuronates, which are normally degraded by uronate isomerase (UxaC), altronate oxidoreductase (UxaB), altronate dehydratase (UxaA), mannonate oxidoreductase (UxuB) and mannonate dehydratase (UxuA), whose expression was repressed by osmotic stress. The growth of kduID-deficient E. coli on galacturonate or glucuronate was impaired in the presence of osmotic stress, suggesting that KduI and KduD compensate for the function of the regular hexuronate-degrading enzymes under such conditions. This indicates a novel function of KduI and KduD in E. coli’s hexuronate metabolism. Promotion of the intracellular formation of hexuronates by lactose connects these in vitro observations with the induction of KduD on the lactose-rich diet. Taken together, this study demonstrates the crucial influence of osmotic stress on the gene expression of E. coli enzymes involved in stress response and metabolic processes. Therefore, the adaptation to diet-induced osmotic stress is a possible key factor for bacterial colonisation of the intestinal environment.
Rhythm is a temporal and systematic organization of acoustic events in terms of prominence, timing and grouping, helping to structure our most basic experiences, such as body movement, music and speech. In speech, rhythm groups auditory events, e.g., sounds and pauses, together into words, making their boundaries acoustically prominent and aiding word segmentation and recognition by the hearer. After word recognition, the hearer is able to retrieve word meaning from his mental lexicon, integrating it with information from other linguistic domains, such as semantics, syntax and pragmatics, until comprehension is achieved. The importance of speech rhythm, however, is not restricted to word segmentation and recognition. Beyond the word level, rhythm continues to operate as an organization device, interacting with different linguistic domains, such as syntax and semantics, and grouping words into larger prosodic constituents, organized in a prosodic hierarchy. This dissertation investigates the function of speech rhythm as a sentence segmentation device during syntactic ambiguity processing, possible limitations on its use, i.e., in the context of second language processing, and its transferability as a cognitive skill to the music domain.
The European Values Education (EVE) project is a large-scale, cross-national, and longitudinal survey research programme on basic human values. The main topic of its second stage was family values in Europe. Student teachers of several universities in Europe worked together in multicultural exchange groups. Their results are presented in this issue.
Companies strive to improve their business processes in order to remain competitive. Process mining aims to infer meaningful insights from process-related data and has attracted the attention of practitioners, tool vendors, and researchers in recent years. Traditionally, event logs are assumed to describe the as-is situation. But this is not necessarily the case in environments where logging may be compromised because events are recorded manually. For example, hospital staff may need to manually enter information regarding the patient’s treatment. As a result, events or timestamps may be missing or incorrect. In this paper, we make use of process knowledge captured in process models and provide a method to repair missing events in the logs. This way, we facilitate analysis of incomplete logs. We realize the repair by combining stochastic Petri nets, alignments, and Bayesian networks. We evaluate the results using both synthetic data and real event data from a Dutch hospital.
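As a minimal, hedged sketch of the repair idea summarized above (the actual method combines stochastic Petri nets, alignments, and Bayesian networks; the function and activity names below are hypothetical and only illustrate inserting missing events with interpolated timestamps):

    from datetime import datetime

    # Hypothetical illustration of log repair: align an observed trace against
    # the activity sequence a process model prescribes and re-insert missing
    # events with timestamps interpolated between their observed neighbours.
    def repair_trace(model_path, trace):
        """model_path: list of activity names prescribed by the model.
        trace: list of (activity, datetime) tuples, possibly incomplete.
        Returns a list of (activity, datetime, was_repaired) tuples."""
        observed = {act: ts for act, ts in trace}
        repaired = []
        for i, act in enumerate(model_path):
            if act in observed:
                repaired.append((act, observed[act], False))
            else:
                # nearest observed neighbours along the model path
                prev_ts = next((observed[a] for a in reversed(model_path[:i]) if a in observed), None)
                next_ts = next((observed[a] for a in model_path[i + 1:] if a in observed), None)
                if prev_ts and next_ts:
                    ts = prev_ts + (next_ts - prev_ts) / 2  # simple midpoint interpolation
                else:
                    ts = prev_ts or next_ts
                repaired.append((act, ts, True))
        return repaired

    # Example: the manually logged "triage" event is missing from the trace.
    path = ["register", "triage", "treat", "discharge"]
    trace = [("register", datetime(2013, 5, 1, 8, 0)),
             ("treat", datetime(2013, 5, 1, 9, 0)),
             ("discharge", datetime(2013, 5, 1, 11, 30))]
    print(repair_trace(path, trace))

In the paper's setting, the stochastic Petri net additionally provides duration distributions, so the inserted timestamps would be inferred probabilistically rather than by the simple midpoint used in this sketch.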
Inhibition, attentional control, and causes of forgetting in working memory: a formal approach
(2013)
In many cognitive activities, the temporary maintenance and manipulation of mental objects is a necessary step towards reaching a cognitive goal. Working memory has been regarded as the process responsible for those cognitive activities. This thesis addresses the question: what limits working-memory capacity (WMC)? This question remains controversial (Barrouillet & Camos, 2009; Lewandowsky, Oberauer, & Brown, 2009). The thesis attempts to answer it by proposing that the dynamics between the causes of forgetting and the processes supporting the maintenance and manipulation of the memoranda are the key to understanding the limits of WMC.
Chapter 1 introduced key constructs and the strategy to examine the dynamics between inhibition, attentional control, and the causes of forgetting in working memory.
The study in Chapter 2 tested the performance of children, young adults, and old adults in a working-memory updating task with two conditions: one condition included only go steps and the other condition included both go and no-go steps. The interference model (IM; Oberauer & Kliegl, 2006), a model proposing interference-related mechanisms as the main cause of forgetting, was used to simultaneously fit the data of these age groups. In addition to the interference-related parameters reflecting interference by feature overwriting and interference by confusion, and in addition to the parameters reflecting the speed of processing, the study included a new parameter that captured the time for switching between go steps and no-go steps. The study indicated that children and young adults were less susceptible than old adults to interference by feature overwriting; children were the most susceptible to interference by confusion, followed by old adults and then by young adults; young adults showed the highest rate of processing, followed by children and then by old adults; and young adults were the fastest group in switching from go steps to no-go steps.
Chapter 3 examined the dynamics between causes of forgetting and the inhibition of a prepotent response in the context of three formal models of the limits of WMC: a resources model, a decay-based model, and three versions of the IM. The resources model was built on the assumption that a limited and shared source of activation for the maintenance and manipulation of the objects underlies the limits of WMC. The decay model assumes that memory traces of the working-memory objects decay over time if they are not reactivated via different maintenance mechanisms. The IM, already described, proposes that interference-related mechanisms explain the limits of WMC. In two experiments and in a reanalysis of the data of the second experiment, one version of the IM received the most statistical support from the data. This version of the IM proposes that interference by feature overwriting and interference by confusion are the main factors underlying the limits of WMC. In addition, the model suggests that experimental conditions involving the inhibition of a prepotent response reduce the speed of processing and promote the involuntary activation of irrelevant information in working memory.
Chapter 4 summarized Chapters 2 and 3, discussed their findings, and showed how this thesis provides evidence for interference-related mechanisms as the main cause of forgetting and attempts to clarify the role of inhibition and attentional control in working memory. With the implementation of formal models and experimental manipulations in the framework of nonlinear mixed models, the data offered explanations of the causes of forgetting and of the role of inhibition in WMC at different levels: developmental effects, aging effects, effects related to experimental manipulations, and individual differences in these effects. Thus, the present approach afforded a comprehensive view of a large number of factors limiting WMC.
Various synthetic approaches were explored for the preparation of poly(N-substituted glycine) homo- and co-polymers (a.k.a. polypeptoids). In particular, monomers that would facilitate the preparation of bio-relevant polymers via either chain- or step-growth polymerization were targeted. A 3-step synthetic approach towards N-substituted glycine N-carboxyanhydrides (NNCAs) was developed and optimized, allowing for an efficient gram-scale preparation of the aforementioned monomer (chain-growth). After exploring several solvents and various conditions, a reproducible and efficient ring-opening polymerization (ROP) of NNCAs was developed in benzonitrile (PhCN). However, achieving molecular weights greater than 7 kDa required longer reaction times (>4 weeks) and subsequently allowed undesirable competing side reactions to occur (e.g., zwitterionic monomer mechanisms). A bulk-polymerization strategy provided molecular weights up to 11 kDa within 24 hours but suffered from low monomer conversions (ca. 25%). Likewise, a preliminary study of alcohol-promoted ROP of NNCAs suffered from impurities and a suspected alternative activated monomer mechanism (AAMM), resulting in poor inclusion of the initiator and leading to multimodally dispersed polymeric systems. The post-modification of poly(N-allyl glycine) via thiol-ene photo-addition was observed to be quantitative when photo-initiators were used, and enabled the first glycopeptoid prepared under environmentally benign conditions. Furthermore, poly(N-allyl glycine) demonstrated thermo-responsive behavior and could be prepared as a semi-crystalline bio-relevant polymer from solution (i.e., annealing). Initial efforts to prepare these polymers via standard poly-condensation protocols were insufficient (step-growth). However, a thermally induced side-product, diallyl diketopiperazine (DKP), afforded the opportunity to explore photo-induced thiol-ene and acyclic diene metathesis (ADMET) polymerizations. Thiol-ene polymerization readily led to low-molecular-weight polymers (<2.5 kDa) that were insoluble in most solvents except heated amide solvents (i.e., DMF), whereas ADMET polymerization with diallyl DKP was unsuccessful due to a suspected six-membered complexation/deactivation state of the catalyst. This understanding prompted the preparation of elongated DKPs, most notably dibutenyl DKP. SEC data support this interpretation, but further optimization studies are required, both in the preparation of the DKP monomers and in the subsequent ADMET polymerization. This work was supported by NMR, GC-MS, FT-IR, SEC-IR, and MALDI-ToF MS characterization. Polymer properties were measured by UV-Vis, TGA, and DSC.
This cumulative dissertation explored the detection of the natural background of fast neutrons, the so-called cosmic-ray neutron sensing (CRS) approach, to measure field-scale soil moisture in cropped fields. Primary cosmic rays penetrate the upper atmosphere and interact with atmospheric particles. This interaction results in a cascade of high-energy neutrons, which continue traveling through the atmospheric column. Finally, neutrons penetrate the soil surface and a second cascade of so-called secondary cosmic-ray neutrons (fast neutrons) is produced. Fast neutrons are partly absorbed by hydrogen (soil moisture). The remaining neutrons scatter back to the atmosphere, where their flux is inversely correlated with the soil moisture content, thereby allowing a non-invasive, indirect measurement of soil moisture. The CRS methodology is mainly evaluated based on a field study carried out on a farmland in Potsdam (Brandenburg, Germany) along three crop seasons with corn, sunflower and winter rye; a bare soil period; and two winter periods. In addition, field monitoring was carried out in the Schaefertal catchment (Harz, Germany) for long-term testing of CRS against ancillary data. At the first experimental site, the CRS method was calibrated and validated using different approaches to soil moisture measurement. In the period with corn, soil moisture was measured at the local scale near the surface only, and in subsequent periods (sunflower and winter rye) sensors were placed at three depths (5 cm, 20 cm and 40 cm). The direct transfer of CRS calibration parameters between two vegetation periods led to a large overestimation of soil moisture by the CRS. Part of this overestimation was attributed to an underestimation of the CRS observation depth during the corn period (5-10 cm), which was later recalculated to values between 20-40 cm in the other crop periods (sunflower and winter rye). According to the results from these monitoring periods with different crops, vegetation played an important role in the CRS measurements. Water contained in crop biomass, above and below ground, also produces considerable neutron moderation. This effect was accounted for by a simple model for neutron corrections due to vegetation, which followed crop development and reduced the overall CRS soil moisture error for the sunflower and winter rye periods. At the Potsdam farmland, soil hydraulic parameters were also inversely estimated at the field scale, using CRS soil moisture from the sunflower period. A modelling framework coupling HYDRUS-1D and PEST was applied. Subsequently, field-scale soil hydraulic properties were compared against local-scale soil properties (modelling and measurements). Successful results were obtained despite the large difference in support volume. This simple modelling framework points to future research directions for using CRS soil moisture to parameterize field-scale models. In the Schaefertal catchment, CRS measurements were verified using precipitation and evapotranspiration data. At monthly resolution, CRS soil water storage was well correlated with these two weather variables. Clearly, however, the water balance could not be closed due to missing information from other compartments such as groundwater and catchment discharge. In the catchment, the influence of snow on natural neutrons was also evaluated. As also observed at the Potsdam farmland, the CRS signal was strongly influenced by snowfall and snow accumulation. A simple strategy to measure snow was presented for the Schaefertal case.
The concluding remarks of this dissertation show that (a) cosmic-ray neutron sensing (CRS) has a strong potential to provide feasible measurements of mean soil moisture at the field scale in cropped fields; (b) CRS soil moisture is strongly influenced by other environmental water pools such as vegetation and snow, which should therefore be considered in the analysis; (c) CRS water storage can be used in soil hydrological modelling to determine soil hydraulic parameters; and (d) the CRS approach has strong potential for long-term monitoring of soil moisture and for addressing water balance studies.
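As a hedged illustration of how corrected neutron count rates are commonly converted to field-scale soil moisture in the CRS literature, the sketch below uses the standard shape-defining calibration function of Desilets et al. (2010) with its published default parameters; these values, the bulk density, and the example numbers are generic assumptions, not necessarily the calibration used in this dissertation:

    def crs_soil_moisture(N, N0, bulk_density=1.4,
                          a0=0.0808, a1=0.372, a2=0.115):
        """Volumetric soil moisture (m3/m3) from a corrected neutron count rate N.
        N0 is the count rate over dry soil at the same site (site calibration
        constant); a0, a1, a2 are the standard shape parameters of Desilets et
        al. (2010). The gravimetric water content is converted to a volumetric
        value by multiplying with the dry bulk density (water density = 1)."""
        gravimetric = a0 / (N / N0 - a1) - a2   # g of water per g of dry soil
        return gravimetric * bulk_density

    # Hypothetical example: a count rate 30% below the dry-soil reference rate.
    print(crs_soil_moisture(N=700.0, N0=1000.0))

The inverse relation between the neutron flux and soil moisture described in the abstract is visible directly in this function: lower count rates N map to higher water contents.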
In recent years children's way of life, nutrition and recreation have changed, and as a consequence body composition has shifted as well. It is established that overweight is a global problem. In addition, German children exhibit a less robust skeleton than ten years ago. These developments may elevate the risk of cardiovascular diseases and skeletal modifications. Heredity and environmental factors such as nutrition, socioeconomic status, physical activity and inactivity influence fat accumulation and the skeletal system. Against the background of these negative developments, associations between type of body shape, skeletal measures and physical activity; relations between external skeletal robustness, physical activity and inactivity, BMI and body fat; and the development of body composition, especially external skeletal robustness, in Russian compared with German children were investigated. In a cross-sectional study 691 German boys and girls aged 6 to 10 years were examined. Anthropometric measurements were taken and questionnaires about physical activity and inactivity were answered by parents. Additionally, pedometers were worn to determine the children's physical activity. To compare body composition in Russian and German children, data from the years 2000 and 2010 were used. The study has shown that pyknomorphic individuals exhibit the highest external skeletal robustness and leptomorphic ones the lowest. Leptomorphic children may thus have a higher risk for bone diseases in adulthood. Pyknomorphic boys tend to be more physically active. This is assessed as positive because pyknomorphic types display the highest BMI and body fat. Results showed that physical activity may reduce BMI and body fat. In contrast, physical inactivity may lead to an increase of BMI and body fat and may rise with increasing age. Physical activity additionally encourages a robust skeleton. Furthermore, external skeletal robustness is associated with BMI, so that BMI as a measure of overweight should be considered critically. The international 10-year comparison has shown an increase of BMI in Russian children and German boys. Currently, Russian children exhibit a higher external skeletal robustness than the German children. However, in Russian boys the skeleton is less robust than ten years ago. This trend should be observed in the future, also in other countries. All in all, several measures should be used to describe the health situation in children and adults. Furthermore, in children it is essential to support physical activity in order to reduce the risk of obesity and to maintain a robust skeleton. In this way diseases can be prevented in adulthood.
This article presents the results of a study on the interpretation and acceptance of adjectival resultatives by German children between 6 and 9 years of age and by adults. These results brought to light significant age-related differences in the interpretation and acceptance of these resultatives, that is to say, sentences with an adjective in the final position. The youngest participants were prone to accept ungrammatical sentences by assigning them a resultative meaning. The ungrammaticality of the sentences in question was not due to semantic inconsistencies but to violations of the selectional properties of verbs, as for instance in *die Kinder erschrecken die Katze ängstlich ‘the children frighten the cat scared’. In contrast, the adults rejected or amended those sentences. The conclusion is (a) that the children seemed to rely on the sentence structure as a primary cue to compute the meaning of an utterance and (b) that, in contrast with adults, the youngest children in particular had not yet learned the relevant semantic properties of verbs that determine the selectional restrictions and thus the syntactic options of verbs. This means that differences in interpretation and acceptance of sentences are due to differences in knowledge of semantic verb properties between adults and children. The relevant semantic knowledge increases gradually during language acquisition.
Die Arbeit thematisiert die Veränderungen im deutschen Wissenschafts- und Hochschulsystem. Im Mittelpunkt steht die "unternehmerische Mission" von Universitäten. Der Blick wird auf das Aufgabenfeld Wissens- und Technologietransfer (WTT) gerichtet. Anhand dessen werden die Veränderungen, die innerhalb des deutschen Universitätssystems in den vergangenen Jahren erfolgten, nachgezeichnet. Die Erwartungshaltungen an Universitäten haben sich verändert. Ökonomische Sichtweisen nehmen einen immer größeren Stellenwert ein. Die Arbeit baut auf den Prämissen der neoinstitutionalistischen Organisationstheorie auf. Anhand dieser wird gezeigt, wie Erwartungen externer Stakeholder Eingang in Hochschulen finden und sich auf ihre organisatorische Ausgestaltung auswirken. Der Arbeit liegt ein exploratives, qualitatives Untersuchungsdesign zugrunde. In einer Fallstudie werden zwei Universitäten als Fallbeispiele untersucht. Die Untersuchung liefert Antworten auf die Fragen, wie der WTT als Aufgabenbereich an deutschen Universitäten umgesetzt wird, welche Strukturen sich herausgebildet haben und inwieweit eine Institutionalisierung des WTTs an Universitäten erfolgt ist. In der Arbeit werden verschiedene Erhebungsinstrumente im Rahmen einer Triangulation genutzt. Experteninterviews bilden das Hauptanalyseinstrument. Ziel der Untersuchung ist neben der Beantwortung der Forschungsfragen, Hypothesen zu bilden, die für weiterführende Untersuchungen genutzt werden können. Darüber hinaus werden Handlungsempfehlungen für die Umsetzung des WTTs an deutschen Hochschulen gegeben. Die Arbeit richtet sich sowohl an Wissenschaftler als auch Praktiker aus dem Bereich Wissens- und Technologietransfer.
Der Einfluss von Bildung gewinnt gesellschaftlich und politisch an Bedeutung. Auch im wissenschaftlichen Bereich zeigt sich dies über eine vielseitige Diskussion zum Einfluss von Bildung auf das Einkommen. In dieser Arbeit werden nationale und regionale Disparitäten in der monetären Wertschätzung von allgemeinem Humankapital aufgedeckt und diskutiert. Dafür werden verschiedene Verfahren diskutiert und basierend darauf Intervalle für die mittleren Bildungsrenditen bestimmt. Im ersten Abschnitt wird die Thematik theoretisch über zwei verschiedene Modellansätze fundiert und kritisch diskutiert. Anschließend folgt die Darstellung des aktuellen empirischen Forschungsbestands. Der Hauptteil der Arbeit beginnt mit der Darstellung des verwendeten Datensatzes und seiner kritischen Repräsentativitätsprüfung. Eine nähere Variablenbeschreibung mit deskriptiver Analyse dient zur Erklärung der verwendeten Größen. Darauffolgend werden bestehende Verfahren zur Schätzung von Bildungsrenditen diskutiert. Unter ausschließlicher Berücksichtigung der Erwerbstätigen zeigt das 3SLS-Verfahren die besten Eigenschaften. Bezieht man jedoch alle Erwerbspersonen in die Analyse mit ein, so erweist sich das Heckman-Verfahren als sehr geeignet. Die Analyse - zunächst auf nationaler Ebene - bestätigt weitestgehend die bestehenden Erkenntnisse der Literatur. Eine Separierung des Datensatzes auf verschiedene Alterscluster, Voll- und Teilerwerbstätige sowie Erwerbstätige in der Privatwirtschaft und im öffentlichen Dienst zeigen keine signifikanten Unterschiede in der Höhe der gezahlten durchschnittlichen Bildungsrenditen. Anders verhält es sich bei der regionalen Analyse. Zunächst werden Ost- und Westdeutschland separat betrachtet. Für diese erste Analyse lassen sich über 95 %-Konfidenzintervalle deutliche Unterschiede in der Höhe der Bildungsrenditen ermitteln. Aufbauend auf diese Ergebnisse wird die Analyse vertieft. Eine Separierung auf Bundesländerebene und ein weiterer Vergleich der Konfidenzintervalle folgen. Zur besseren statistischen Vergleichbarkeit der Ergebnisse wird neben dem 3SLS-Verfahren, angewendet auf die separierten Datensätze, auch ein Modell ohne die Notwendigkeit der Separierung gewählt. Hierbei ist die Variation der Regionen über Interaktionsterme berücksichtigt. Dieses Regressionsmodell wird auf das OLS- und das Heckman-Verfahren angewendet. Der Vorteil hierbei ist, dass die Koeffizienten auf Gleichheit getestet werden können. Dabei kristallisieren sich deutlich unterschiedliche Bildungsrenditen für Mecklenburg-Vorpommern, aber auch für Sachsen-Anhalt und Thüringen im Vergleich zu den restlichen Bundesländern Deutschlands heraus. Diese Länder zeichnen sich durch eine besonders hohe jährliche Verzinsung von allgemeinem Humankapital aus. Es folgt eine Diskussion über mögliche Ursachen für die regional verschiedenen Bildungsrenditen. Dabei zeigt sich, dass in den Bundesländern mit hoher Rendite das mittlere Einkommensniveau und auch das durchschnittliche Preisniveau tendenziell geringer sind. Weiterhin wird deutlich, dass bei höheren relativen Abweichungen der durchschnittlichen Einkommen höhere Renditen zu verzeichnen sind. Auch die Wanderungsbewegungen je nach Qualifikation unterscheiden sich. Unter zusätzlicher Berücksichtigung der Arbeitslosenquoten zeigt sich in den Ländern mit hoher Rendite eine tendenziell höhere Arbeitslosigkeit. Im zusammenfassenden Fazit der Arbeit werden abschließend die Erkenntnisse gewürdigt. 
Dabei ist zu bemerken, dass der Beitrag einen Start in die bundesländerweite Analyse liefert, die eine Fortführung auf beispielsweise eine mehrperiodische Betrachtung anregt.
Was veranlasste die an „Belcantare Brandenburg“ beteiligten Bildungsinstitutionen, dieses Projekt zu verwirklichen? Sind ländliche und städtische Singprojekte gleichermaßen zu planen? Wie wirksam war und ist „Belcantare Brandenburg“? Diesen u. a. Fragen widmet sich die repräsentative Dokumentation, die im Rahmen der wissenschaftlichen Begleitung des Projektes durch den Lehrstuhl Musikpädagogik und Musikdidaktik der Universität Potsdam entstanden ist. Vorderstes Anliegen war es hierbei, die erarbeiteten Fragestellungen aus unterschiedlichen Forschungsperspektiven zu beantworten. An dieser Forschungsarbeit wirkten Studierende mit und reflektierten in ihren wissenschaftlichen Qualifikationsarbeiten im Projekt erschlossene theoretische und praktische Erfahrungen zum Singen in der Grundschule. „Belcantare Brandenburg“ ist ein Singprojekt zur Fortbildung für Grundschullehrerinnen und Grundschullehrer, das mit freundlicher Unterstützung der Ostdeutschen Sparkassenstiftung und der Sparkasse Uckermark über einen Zeitraum von zwei Jahren die Qualität der Singarbeit von Lehrkräften aus der Uckermark zielgerichtet weiterentwickelte.
Inhalt:
Günther Unser: UN-Profile kleiner und mittlerer Staaten am Beispiel der Schweiz, Österreichs und Liechtensteins
Johannes Varwick: Deutschland und die UNO: eine abgekühlte Freundschaft?
Dominik Steiger: Deutschland im System kollektiver Sicherheit
Lilly Sucharipa-Behrmann: „UN Women“ – die United Nations Entity for Gender Equality and the Empowerment of Women: eine erste Bilanz
Theodor Rathgeber: Der UN-Menschenrechtsrat: Was kann er leisten, was nicht?
Der vorliegende 3. Band der „Schriften zum deutschen und russischen Strafrecht“ enthält die Vorträge, die an dem „Internationalen rechtsvergleichenden Runden Tisch zu aktuellen Themen des deutschen und russischen Strafrechts“, der am 18. Dezember 2012 an der Juristischen Fakultät der Universität Potsdam stattgefunden hat, gehalten wurden. Die Beiträge der Wissenschaftler und Praktiker aus Russland und Deutschland decken ein breites Spektrum an Themen ab, die in beiden Ländern von Interesse sind. In den Band aufgenommen wurden zudem der Festvortrag von Prof. Dr. Rarog zum Tag der Juristischen Fakultät der Universität Potsdam am 20. Juni 2012, ein Beitrag von Prof. Dr. Matskevich sowie ein weiterer Aufsatz von Prof. Dr. Rarog und Dr. Nagaeva. Die Publikation der Beiträge in deutscher und russischer Sprache ermöglicht den Lesern in beiden Ländern die Lektüre in der jeweiligen Muttersprache.
Die Automatisierung von Geschäftsprozessen unterstützt Unternehmen, die Ausführung ihrer Prozesse effizienter zu gestalten. In existierenden Business-Process-Management-Systemen werden die Instanzen eines Prozesses völlig unabhängig voneinander ausgeführt. Jedoch kann das Synchronisieren von Instanzen mit ähnlichen Charakteristiken, wie z.B. den gleichen Daten, zu reduzierten Ausführungskosten führen. Wenn ein Onlinehändler beispielsweise zwei Bestellungen vom selben Kunden mit der gleichen Lieferanschrift erhält, können diese zusammen verpackt und versendet werden, um Versandkosten zu sparen. In diesem Beitrag verwenden wir Konzepte aus dem Datenbankbereich und führen Datensichten für Geschäftsprozesse ein, um Instanzen zu identifizieren, welche synchronisiert werden können. Auf Grundlage der Datensichten führen wir das Konzept der Batch-Regionen ein. Eine Batch-Region ermöglicht eine kontextbewusste Instanzen-Synchronisierung über mehrere verbundene Aktivitäten. Das eingeführte Konzept wird mit einer Fallstudie evaluiert, bei der ein Kostenvergleich zwischen der normalen Prozessausführung und der Batchverarbeitung durchgeführt wird.
Despite recent growth of research on the effects of prosocial media, processes underlying these effects are not well understood. Two studies explored theoretically relevant mediators and moderators of the effects of prosocial media on helping. Study 1 examined associations among prosocial- and violent-media use, empathy, and helping in samples from seven countries. Prosocial-media use was positively associated with helping. This effect was mediated by empathy and was similar across cultures. Study 2 explored longitudinal relations among prosocial-video-game use, violent-video-game use, empathy, and helping in a large sample of Singaporean children and adolescents measured three times across 2 years. Path analyses showed significant longitudinal effects of prosocial- and violent-video-game use on prosocial behavior through empathy. Latent-growth-curve modeling for the 2-year period revealed that change in video-game use significantly affected change in helping, and that this relationship was mediated by change in empathy.
Nur langsam scheinen jene Schockwellen abzuebben, die ausgelöst durch die Ergebnisse der PISA-Erhebungen seit mehr als einem Jahrzehnt die Bildungsrepublik Deutschland durchqueren und weite Teile der Gesellschaft in den Zustand regelrechter Bildungspanik versetzten. An der Schwelle zum 21. Jahrhundert belegte eine Reihe von Studien für das wiedervereinte Deutschland eine im OECD-Vergleich besonders ausgeprägte Abhängigkeit des Bildungserfolges von der sozialen Herkunft. Als eine Konsequenz ist der Zugang zu tertiärer Bildung bis dato deutlich durch soziale Ungleichheit gekennzeichnet. Vor diesem Hintergrund leistet die vorliegende Dissertationsschrift einen wesentlichen Beitrag zur ursächlichen Erklärung von Mustern sozialer Selektivität, die an den Gelenkstellen zwischen sekundären und postsekundären Bildungsangeboten sichtbar werden. Auf innovative Weise verbindet die Arbeit ein zeitgemäßes handlungstheoretisches Modell mit einer komplexen Lebensstilanalyse. Die Analyse stützt sich auf Erhebungsdaten, die zwischen Januar und April 2010 an mehr als 30 weiterführenden Schulen des Bundeslandes Brandenburg erhoben wurden. Im Mittelpunkt des Forschungsinteresses steht einerseits die Identifikation von sozial-kognitiven Determinanten, die das Niveau und die Richtung postsekundärer Bildungsaspirationen maßgeblich vorstrukturieren sowie andererseits deren Verortung im Kontext jugendlicher Lebensstile. Das komplexe Analysedesign erweist sich als empirisch fruchtbar: So erbringt die Arbeit den empirischen Nachweis, dass die spezifischen Konfigurationen der bestätigten psychosozialen Prädiktoren nicht nur statistisch bedeutsam zwischen jugendlichen Stilmustern variieren, sondern sich diesbezüglich erfolgreiche von weniger erfolgreichen Typen unterscheiden lassen.
Portal alumni
(2013)
Begonnen hat es um 1900 mit Qualitätskontrolle, später folgte Qualitätsprüfung und seit Mitte der 90er Jahre hat das Thema Qualitätsmanagement Einzug gehalten in alle Bereiche von Politik und Gesellschaft. Im Gesundheitswesen, der Justiz und auch an vielen Hochschulen wurden spezielle Stellen eingerichtet, die sich gezielt mit der Umsetzung von Qualitätsentwicklung oder Qualitätsmanagement befassen. Das übergeordnete Ziel bei Einführung eines Qualitätsmanagements ist es, die Wettbewerbsfähigkeit der Organisation sowie auch die Zufriedenheit der Mitglieder der Institutionen und weiterer Stakeholder zu steigern, indem die Qualität der Tätigkeiten und der jeweiligen Rahmenbedingungen erhalten und optimiert wird. Mit der bewussten Entscheidung zur Qualitätssicherung und -entwicklung beginnt ein fortlaufender Prozess, der stetig intensiv begleitet werden muss. Qualitätsmanagement wirkt nachhaltig, indem durch die Schaffung regelmäßiger und systematischer Strukturen und Prozesse auch zukünftig den beteiligten Personen ein optimales Handeln entsprechend der dann geltenden Bedingungen ermöglicht wird. Portal alumni widmet sich in seinem zehnten Heft diesem Thema und hat Absolventen der Universität Potsdam nach ihrem Tätigkeitsfeld im Qualitätsmanagement und den entsprechenden Erfolgen befragt. Dabei zeigt sich, dass Ehemalige Einsatzbereiche in Wirtschaft und Unternehmen, aber auch in Sport, Bildung oder Hochschulen gefunden haben. Daneben berichten wir in diesem Heft von einem Projekt des Career Service, dem Schnupperjobben, und auch die Berichte über die Geschehnisse an Ihrer Alma mater kommen nicht zu kurz.
Organic semiconductors combine the benefits of organic materials, i.e., low-cost production, mechanical flexibility, light weight, and robustness, with the fundamental semiconductor properties of light absorption, light emission, and electrical conductivity. This class of materials has several advantages over conventional inorganic semiconductors that have led, for instance, to the commercialization of organic light-emitting diodes, which can nowadays be found in the displays of TVs and smartphones. Moreover, organic semiconductors may lead to new electronic applications which rely on the unique mechanical and electrical properties of these materials. In order to push the development and the success of organic semiconductors forward, it is essential to understand the fundamental processes in these materials. This thesis concentrates on understanding how the charge transport in thiophene-based semiconductor layers depends on the layer morphology and how the charge transport properties can be intentionally modified by doping these layers with a strong electron acceptor. By means of optical spectroscopy, the layer morphologies of poly(3-hexylthiophene), P3HT, P3HT-fullerene bulk heterojunction blends, and oligomeric polyquaterthiophene, oligo-PQT-12, are studied as a function of temperature, molecular weight, and processing conditions. The analyses rely on the decomposition of the absorption contributions from the ordered and the disordered parts of the layers. The ordered-phase spectra are analyzed using Spano’s model. It is found that the fraction of aggregated chains and the interconnectivity of these domains are fundamental to a high charge carrier mobility. In P3HT layers, such structures can be grown with long, high-molecular-weight P3HT chains. Low- and medium-molecular-weight P3HT layers also contain a significant amount of chain aggregates with high intragrain mobility; however, intergranular connectivity and, therefore, efficient macroscopic charge transport are absent. In P3HT-fullerene blend layers, a highly crystalline morphology that favors hole transport and solar cell efficiency can be induced by annealing procedures and the choice of a high-boiling-point processing solvent. Based on scanning near-field and polarization optical microscopy, the morphology of oligo-PQT-12 layers is found to be highly crystalline, which explains the rather high field-effect mobility in this material compared to low-molecular-weight polythiophene fractions. On the other hand, crystalline dislocations and grain boundaries are identified which clearly limit the charge carrier mobility in oligo-PQT-12 layers. The charge transport properties of organic semiconductors can be widely tuned by molecular doping. Indeed, molecular doping is a key to highly efficient organic light-emitting diodes and solar cells. Despite this vital role, it is still not understood how mobile charge carriers are induced into the bulk semiconductor by the doping process. This thesis contains a detailed study of the doping mechanism and the electrical properties of P3HT layers p-doped with the strong molecular acceptor tetrafluorotetracyanoquinodimethane, F4TCNQ. The density of doping-induced mobile holes, their mobility, and the electrical conductivity are characterized over a broad range of acceptor concentrations.
A long-standing debate on the nature of the charge transfer between P3HT and F4TCNQ is resolved by showing that almost every F4TCNQ acceptor undergoes a full-electron charge transfer with a P3HT site. However, only 5% of these charge transfer pairs can dissociate and contribute a mobile hole to P3HT that takes part in electrical conduction. Moreover, it is shown that the left-behind F4TCNQ ions broaden the density-of-states distribution for the doping-induced mobile holes, which is due to the long-range Coulomb attraction in low-permittivity organic semiconductors.
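For orientation, the three quantities characterized here (hole density, hole mobility, conductivity) are linked by the elementary transport relation, a textbook expression rather than a result specific to this thesis:

    \sigma = e \, p \, \mu_p

where p is the density of doping-induced mobile holes (only the dissociated fraction of the charge-transfer pairs, roughly 5% of the acceptors), \mu_p their mobility, and e the elementary charge; the non-dissociated pairs and the F4TCNQ anions do not add carriers but instead broaden the density of states and thereby lower \mu_p.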
Numerical simulations of galaxy formation and observational Galactic astronomy are two fields of research that study the same objects from different perspectives. Simulations try to understand galaxies like our Milky Way from an evolutionary point of view, while observers try to disentangle the current structure and the building blocks of our Galaxy. Due to great advances in computational power as well as in massive stellar surveys, we are now able to compare resolved stellar populations in simulations and in observations. In this thesis we use a number of approaches to relate the results of the two fields to each other. The major observational data set we refer to in this work comes from the Radial Velocity Experiment (RAVE), a massive spectroscopic stellar survey that observed almost half a million stars in the Galaxy. In a first study we use three different models of the Galaxy to generate synthetic stellar surveys that can be directly compared to the RAVE data. To do this we evaluate the RAVE selection function in great detail. Among the Galaxy models is the widely used Besancon model, which performs well when individual parameter distributions are considered but fails when we study chemodynamic correlations. The other two models are based on distributions of mass particles instead of analytical distribution functions. This is the first time that such models are converted to the space of observables and compared to a stellar survey. We show that these models can be competitive and in some aspects superior to analytic models because of their self-consistent dynamical history. In the case of a full cosmological simulation of disk galaxy formation, we can recover features in the synthetic survey that relate to the known issues of the model and hence prove that our technique is sensitive to the global structure of the model. We argue that the next generation of cosmological galaxy formation simulations will deliver valuable models for our Galaxy. Testing these models with our approach will provide a direct connection between stellar Galactic astronomy and physical cosmology. In the second part of the thesis we use a sample of high-velocity halo stars from the RAVE data to estimate the Galactic escape speed and the virial mass of the Milky Way. In the course of this study, cosmological simulations of galaxy formation also play a crucial role. Here we use them to calibrate and extensively test our analysis technique. We find the local Galactic escape speed to be 533 (+54/-41) km/s (90% confidence). With this result in combination with a simple mass model of the Galaxy, we then construct an estimate of the virial mass of the Galaxy. For the mass profile of the dark matter halo we use two extreme models, a pure Navarro, Frenk & White (NFW) profile and an adiabatically contracted NFW profile. When we use statistics on the concentration parameter of these profiles taken from large dissipationless cosmological simulations, we obtain an estimate of the virial mass that is almost independent of the choice of the halo profile. For the mass M_340 enclosed within R_340 = 180 kpc we find 1.3 (+0.4/-0.3) x 10^12 M_sun. This value is in very good agreement with a number of other mass estimates in the literature that are based on independent data sets and analysis techniques. In the last part of this thesis we investigate a new possible channel to generate a population of hypervelocity stars (HVSs) that is observed in the stellar halo.
Commonly, it is assumed that the velocities of these stars originate from an interaction with the super-massive black hole in the Galactic center. It was suggested recently that stars stripped off a disrupted satellite galaxy could reach similar velocities and leave the Galaxy. Here we study in detail the kinematics of tidal debris stars to investigate the probability that the observed sample of HVSs could partly originate from such a galaxy collision. We use a suite of N-body simulations following the encounter of a satellite galaxy with its Milky Way-type host galaxy. We quantify the typical pattern formed by the debris stars in angular and phase space and develop a simple model that predicts the kinematics of stripped-off stars. We show that the distribution of orbital energies in the tidal debris has a typical form that can be described quite accurately by a simple function. The main parameter determining the maximum energy kick a tidal debris star can get is the initial mass of the satellite and, only to a lesser extent, its orbit. The main contributors to an unbound stellar population created in this way are massive satellites (M_sat > 10^9 M_sun). The probability that the observed HVS population is significantly contaminated by tidal debris stars appears small in the light of our results.
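As a hedged reminder of why the escape-speed measurement quoted above constrains the Galactic mass at all, the connection runs through the textbook relation between escape speed and gravitational potential (with the convention that the potential vanishes at large radii), not through any formula specific to this thesis:

    v_{\mathrm{esc}}^2(r) = 2\,\lvert \Phi(r) \rvert

For a simple point-mass (Keplerian) potential this would give M = v_{\mathrm{esc}}^2 \, r / (2G); in the work summarized here the measured local value v_esc ≈ 533 km/s is instead combined with an extended NFW-type halo profile, whose concentration is constrained by cosmological simulations, to arrive at the quoted M_340.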
Introduction: We examined patterns of genetic divergence in 26 Mediterranean populations of the semi-terrestrial beachflea Orchestia montagui using mitochondrial (cytochrome oxidase subunit I), microsatellite (eight loci) and allozymic data. The species typically forms large populations within heaps of dead seagrass leaves stranded on beaches at the waterfront. We adopted a hierarchical geographic sampling to unravel population structure in a species living at the sea-land transition and, hence, likely subjected to dramatically contrasting forces.
Results: Mitochondrial DNA showed historical phylogeographic breaks among the Adriatic, the Ionian and the remaining basins (Tyrrhenian, Western and Eastern Mediterranean Sea), likely caused by the geological and climatic changes of the Pleistocene. Microsatellites (and to a lesser extent allozymes) detected a further subdivision between and within the Western Mediterranean and the Tyrrhenian Sea due to present-day processes. A pattern of isolation by distance was not detected in any of the analyzed data sets.
Conclusions: We conclude that the population structure of O. montagui is the result of the interplay of two contrasting forces acting on the species' population genetic structure. On the one hand, the species' semi-terrestrial lifestyle tends to promote the onset of local differences. On the other hand, these differences are partially counter-balanced by passive movements of migrants rafting on heaps of dead seagrass leaves carried across sites by sea surface currents. Approximate Bayesian Computations support dispersal at sea as prevalent over terrestrial regionalism.
Gegenstand dieser Arbeit sind sog. nicht-kanonische bzw. unintegrierte Nebensätze. Diese Nebensätze zeichnen sich dadurch aus, dass sie sich mittels gängiger Kriterien (Satzgliedstatus, Verbletztstellung) nicht klar als koordiniert oder subordiniert beschreiben lassen. Das Phänomen nicht-kanonischer Nebensätze ist ein Thema, welches in der Sprachwissenschaft generell seit den späten Siebzigern (Davison 1979) diskutiert wird und spätestens mit Fabricius-Hansen (1992) auch innerhalb der germanistischen Linguistik angekommen ist. Ein viel beachteter Komplex ist hierbei – neben der reinen Identifizierung nicht-kanonischer Satzgefüge – meist auch die Erstellung einer Klassifikation zur Erfassung zumindest einiger nicht-kanonischer Gefüge, wie dies etwa bei Fabricius-Hansen (1992) und Reis (1997) zu sehen ist. Das Ziel dieser Studie ist es, eine exhaustive Klassifikation der angesprochenen Nebensatztypen vorzunehmen. Dazu werden zunächst – unter Zuhilfenahme von Korpusdaten – alle potentiellen Subordinationsmerkmale genauer untersucht, da die meisten bisherigen Studien zu diesem Thema die stets gleichen Merkmale als gegeben voraussetzen. Dabei wird sich herausstellen, dass nur eine kleine Anzahl von Merkmalen sich wirklich zweifelsfrei dazu eignet, Aufschluss über die Satzverknüpfungsqualität zu geben. Die anschließend aufgestellte Taxonomie deutscher Nebensätze wird schließlich einzig mit der Postulierung einer nicht-kanonischen Nebensatzklasse auskommen. Sie ist darüber hinaus auch in der Lage, die zahlreich vorkommenden Ausnahmefälle zu erfassen. Dies heißt konkret, dass auch etwaige Nebensätze, die sich aufgrund bestimmter Eigenschaften teilweise idiosynkratisch verhalten, einfach in die vorgeschlagene Klassifikation übernommen werden können. In diesem Zuge werde ich weiterhin zeigen, wie eine Nebensatzklassifikation auch sog. sekundären Subordinationsmerkmalen gerecht werden kann, obwohl diese sich hinsichtlich der einzelnen Nebensatzklassen nicht einheitlich verhalten. Schließlich werde ich eine theoretische Modellierung der zuvor postulierten Taxonomie vornehmen, die auf Basis der HPSG mittels Merkmalsvererbung alle möglichen Nebensatztypen zu erfassen imstande ist.
Information flows in EU policy-making are heavily dependent on personal networks, both within the Brussels sphere and reaching outside the narrow limits of the Belgian capital. These networks develop, for example, in the course of formal and informal meetings or on the sidelines of such meetings. A plethora of committees at European, transnational and regional levels provides the basis for the establishment of pan-European networks. By studying affiliation to those committees, basic network structures can be uncovered. These affiliation network structures can then be used to predict EU information flows, assuming that certain positions within the network are advantageous for tapping into streams of information while others are too remote and peripheral to provide access to information early enough. This study has tested those assumptions for the case of the reform of the Common Fisheries Policy for the time after 2012. Through the analysis of an affiliation network based on participation in 10 different fisheries policy committees over two years (2009 and 2010), network data for an EU-wide network of about 1300 fisheries interest group representatives and more than 200 events were collected. The structure of this network showed a number of interesting patterns, such as – not surprisingly – a rather central role of Brussels-based committees, but also close relations of very specific interests to the Brussels cluster and stronger relations between geographically closer maritime regions. The analysis of information flows then focused on access to draft EU Commission documents containing the upcoming proposal for a new basic regulation of the Common Fisheries Policy. It was first documented that it would have been impossible to officially obtain this document and that personal networks were thus the most likely sources for fisheries policy actors to obtain access to these “leaks” in early 2011. A survey of a sample of 65 actors from the initial network supported these findings: only a very small group had accessed the draft directly from the Commission. Most respondents who obtained access to the draft had received it from other actors, highlighting the networked flow of informal information in EU politics. Furthermore, the testing of the hypotheses connecting network positions and the level of informedness indicated that presence in or connections to the Brussels sphere had advantages both for overall access to the draft document and with regard to timing. Methodologically, challenges of both the network analysis and the analysis of information flows, but also their relevance for the study of EU politics, have been documented. In summary, this study has laid the foundation for a different way to study EU policy-making by connecting topical and methodological elements – such as affiliation network analysis and EU committee governance – which so far have not been considered together, thereby contributing in various ways to political science and EU studies.
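A minimal sketch of the kind of two-mode (actor by committee) affiliation analysis described above, using the standard bipartite projection available in networkx; the actor and committee names are invented for illustration and this is not the study's actual dataset or code:

    import networkx as nx
    from networkx.algorithms import bipartite

    # Hypothetical affiliation data: (actor, committee) participation records.
    affiliations = [
        ("actor_A", "advisory_committee_Brussels"),
        ("actor_B", "advisory_committee_Brussels"),
        ("actor_B", "RAC_NorthSea"),
        ("actor_C", "RAC_NorthSea"),
        ("actor_C", "RAC_Baltic"),
    ]

    # Build the two-mode (bipartite) affiliation network.
    B = nx.Graph()
    actors = {a for a, _ in affiliations}
    committees = {c for _, c in affiliations}
    B.add_nodes_from(actors, bipartite=0)
    B.add_nodes_from(committees, bipartite=1)
    B.add_edges_from(affiliations)

    # One-mode projection onto actors: edge weights count shared committees.
    G = bipartite.weighted_projected_graph(B, actors)

    # Degree centrality as a crude proxy for positions that favour early
    # access to information flowing through shared committee memberships.
    print(nx.degree_centrality(G))

In the study itself the resulting positional measures are then related to survey evidence on who obtained the leaked draft and when, rather than interpreted on their own.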
The life of microorganisms is characterized by two main tasks: rapid growth under conditions permitting growth, and survival under stressful conditions. The environments in which microorganisms dwell vary in space and time, and microorganisms have evolved diverse strategies to adapt readily to these fluctuating environments. Phenotypic heterogeneity is one such strategy, where an isogenic population splits into subpopulations that respond differently under identical conditions. Bacterial persistence is a prime example of such phenotypic heterogeneity, whereby a population survives an antibiotic attack by keeping a fraction of the population in a drug-tolerant state, the persister state. Specifically, persister cells grow more slowly than normal cells under growth conditions, but survive longer under stress conditions such as antibiotic administration. Bacterial persistence is identified experimentally by examining the population's survival upon an antibiotic treatment and its resuscitation in a growth medium. The underlying population dynamics is explained with a two-state model of reversible phenotype switching in a cell within the population. We study this existing model with a new theoretical approach and present analytical expressions for the time scales observed in population growth and resuscitation, which can easily be used to extract the underlying model parameters of bacterial persistence. In addition, we recapitulate previously known results on the evolution of such a structured population under a periodically fluctuating environment using our simple approximation method. Using our analysis, we determine model parameters for a Staphylococcus aureus population under several antibiotics and interpret the outcome of cross-drug treatment. Next, we consider the expansion of a population exhibiting phenotype switching in a spatially structured environment consisting of two growth-permitting patches separated by an antibiotic patch. The dynamic interplay of growth, death and migration of cells in different patches leads to distinct regimes in population propagation speed as a function of migration rate. We map out the region in the parameter space of phenotype switching and migration rates in which persistence is beneficial. Furthermore, we present an extended model that allows mutation from the two phenotypic states to a resistant state. We find that the presence of persister cells may enhance the probability of resistance mutations arising in a population. Using this model, we explain experimental results showing the emergence of antibiotic resistance in a Staphylococcus aureus population upon tobramycin treatment. In summary, we identify several roles of bacterial persistence, such as aiding spatial expansion, developing multidrug tolerance and promoting the emergence of antibiotic resistance. Our study provides a theoretical perspective on the dynamics of bacterial persistence under different environmental conditions. These results can be utilized to design further experiments and to develop novel strategies to eradicate persistent infections.
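A minimal numerical sketch of the kind of two-state model referred to above is shown below, assuming a standard formulation with normal and persister subpopulations, killing under antibiotics, and reversible switching; all parameter values are invented and the thesis's exact parameterization may differ.

```python
# Minimal sketch of a two-state persistence model: normal cells (n) and
# persister cells (p) with net growth/killing rates and reversible switching.
# Parameter values are invented; the thesis's exact formulation may differ.
import numpy as np
from scipy.linalg import expm

mu_n, mu_p = -2.0, -0.05   # net growth rates under antibiotic (killing)
a, b = 1e-3, 1e-4          # switching rates n -> p and p -> n

A = np.array([[mu_n - a, b],
              [a,        mu_p - b]])

# Eigenvalues give the two time scales visible in kill curves:
# a fast initial decay (normal cells) and a slow tail (persisters).
print("decay rates:", np.sort(np.linalg.eigvals(A)))

# Population trajectory from an initial state via the matrix exponential.
x0 = np.array([1e8, 1e5])          # initial normal and persister counts
for t in (0.0, 1.0, 3.0, 6.0):
    n_t, p_t = expm(A * t) @ x0
    print(f"t={t:4.1f} h  total={n_t + p_t:.3e}")
```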
There are two common approaches to implementing a virtual machine (VM) for a dynamic object-oriented language. On the one hand, it can be implemented in a C-like language for best performance and maximum control over the resulting executable. On the other hand, it can be implemented in a language such as Java that allows for higher-level abstractions. These abstractions, such as proper object-oriented modularization, automatic memory management, or interfaces, are missing in C-like languages, but they can simplify the implementation of prevalent but complex concepts in VMs, such as garbage collectors (GCs) or just-in-time compilers (JITs). Yet, the implementation of a dynamic object-oriented language in Java eventually results in two VMs on top of each other (a double stack), which impedes performance. For statically typed languages, the Maxine VM solves this problem: it is written in Java but can be executed without a Java virtual machine (JVM). However, it is currently not possible to execute dynamic object-oriented languages in Maxine. This work presents an approach to bringing the object models and execution models of dynamic object-oriented languages to the Maxine VM, and the application of this approach to Squeak/Smalltalk. The representation of objects in, and the execution of, dynamic object-oriented languages pose challenges for the Maxine VM, which lacks the variation points necessary for an effortless and straightforward implementation of such languages' execution models. The implementation of Squeak/Smalltalk in Maxine serves as a feasibility study to uncover these missing variation points.
Previous research has shown that high phonotactic frequencies facilitate the production of regularly inflected verbs in English-learning children with specific language impairment (SLI) but not with typical development (TD). We asked whether this finding can be replicated for German, a language with a much more complex inflectional verb paradigm than English. Using an elicitation task, the production of inflected nonce verb forms (3rd person singular with -t suffix) with either high- or low-frequency subsyllables was tested in sixteen German-learning children with SLI (ages 4;1–5;1), sixteen TD-children matched for chronological age (CA) and fourteen TD-children matched for verbal age (VA) (ages 3;0–3;11). The findings revealed that children with SLI, but not CA- or VA-children, showed differential performance between the two types of verbs, producing more inflectional errors when the verb forms resulted in low-frequency subsyllables than when they resulted in high-frequency subsyllables, replicating the results from English-learning children.
Small eye movements during fixation: the case of postsaccadic fixation and preparatory influences
(2013)
Describing human eye movement behavior as an alternating sequence of saccades and fixations turns out to be an oversimplification because the eyes continue to move during fixation. Small-amplitude saccades (e.g., microsaccades) are typically observed 1-2 times per second during fixation. Research on microsaccades came in two waves. Early studies were dominated by the question whether microsaccades affect visual perception, and by studies on the role of microsaccades in the process of fixation control. The lack of evidence for a unique role of microsaccades led to a very critical view of their importance. Over recent years, microsaccades have moved into focus again, revealing many interactions with perception, oculomotor control and cognition, as well as intriguing new insights into the neurophysiological implementation of microsaccades. In contrast to the early studies, recent findings have been accompanied by the development of models of microsaccade generation. While the exact generating mechanisms vary between the models, they share the assumption that microsaccades are generated in a topographically organized saccade motor map that includes a representation of small-amplitude saccades in the center of the map (with its neurophysiological implementation in the rostral pole of the superior colliculus). In the present thesis I argue that it is a limitation that models of microsaccade generation are based exclusively on results obtained during prolonged presaccadic fixation, and that microsaccades should also be studied in a more natural situation, namely the fixation following large saccadic eye movements. Studying postsaccadic fixation offers a new window to falsify models that aim to account for the generation of small eye movements. I demonstrate that error signals (visual and extra-retinal), as well as non-error signals like target eccentricity, influence the characteristics of small-amplitude eye movements. These findings require a modification of the model introduced by Rolfs, Kliegl and Engbert (2008) in order to account for the generation of small-amplitude saccades during postsaccadic fixation. Moreover, I present a promising type of survival analysis that allowed me to examine time-dependent influences on postsaccadic eye movements. In addition, I examined the interplay of postsaccadic eye movements and postsaccadic location judgments, highlighting the need to include postsaccadic eye movements as a covariate in the analysis of location judgments in the presented paradigm. As a second goal, I tested model predictions concerning preparatory influences on microsaccade generation during presaccadic fixation. The observation that the preparatory set significantly influenced microsaccade rate supports the critical model assumption that increased fixation-related activity results in a larger number of microsaccades. In sum, this thesis identifies important influences on the generation of small-amplitude saccades during fixation. These eye movements constitute a rich oculomotor behavior which still poses many research questions. Certainly, small-amplitude saccades represent an interesting source of information and will continue to influence future studies on perception and cognition.
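For background on how microsaccades are typically extracted from fixation data, the following is a minimal sketch of the widely used velocity-threshold detection approach (Engbert & Kliegl, 2003); the simplifications, the parameter values, and the monocular handling are assumptions, and this is not necessarily the detection procedure used in the thesis.

```python
# Minimal sketch of velocity-threshold microsaccade detection in the spirit of
# Engbert & Kliegl (2003). Simplified: monocular, no amplitude checks; lambda
# and the minimum duration are illustrative assumptions.
import numpy as np

def detect_microsaccades(x, y, dt, lam=6.0, min_samples=3):
    # velocity estimate over a 5-sample moving window
    vx = (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) / (6.0 * dt)
    vy = (y[4:] + y[3:-1] - y[1:-3] - y[:-4]) / (6.0 * dt)
    # median-based velocity thresholds (robust against drift and noise)
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    outside = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
    # keep only runs of at least min_samples consecutive supra-threshold samples
    events, start = [], None
    for i, flag in enumerate(outside):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(outside) - start >= min_samples:
        events.append((start, len(outside) - 1))
    return events
```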
As design experts, automobile designers have the task of translating the identity, and thus the values, of a brand into forms that appeal to a large number of customers (Giannini & Monti, 2003; Karjalainen, 2002). For this translation process it is expedient to know customers' aesthetic needs, since the quality of a design solution also depends on the extent to which the designer has correctly grasped customer needs and thus the design problem (Ulrich, 2006). One basis for this is a successful designer-user interaction and the development of shared contextual knowledge (Lee, Popovich, Blackler & Lee, 2009). However, there is often no direct exchange between designers and customers (Zeisel, 2006). Moreover, findings from research on art and product aesthetics show that the acquisition of design knowledge, and thus the development of aesthetic expertise, is accompanied by changes in the cognitive processing of aesthetic objects that manifest themselves in perception, evaluation and behavior. It is therefore to be expected that the preference judgments of designers and customers do not always converge in the aesthetic evaluation of design. The aim of the present work was therefore the systematic investigation of these expertise-related differences in perception and evaluation between design-trained and untrained individuals viewing automobile design. The intention was to make the perception, processing and evaluation of automobile design by design-untrained individuals more transparent and to compare it with the processing of design-trained individuals, in order to contribute to a shared knowledge base and thus to a successful designer-user interaction. The theoretical framing of the work was based on the model of aesthetic experience and aesthetic judgment by Leder, Belke, Oeberst and Augustin (2004), which offers concrete assumptions about differences between experts and laypersons in the processing of aesthetic objects that, however, had not yet been comprehensively tested. The first focus of this work was the investigation of differences between designers and design-untrained recipients in the description and evaluation of vehicle designs available on the market. It was also to be examined whether a lexical link can be established between the descriptive attributes used by vehicle recipients and the brand values postulated by automobile brands. This first research question was addressed in two studies. Study I served to elicit descriptive attributes by means of triadic comparisons following Kelly (1955). It was tested whether design-trained participants verbalize more productively, generate proportionally more symbol-related than form-related attributes, and use the same attributes more frequently within their group than design-untrained participants. For this purpose, 20 design-trained and 20 design-untrained participants described the differences between four presented vehicles using adjectives of their own choosing. Contrary to the assumptions, the groups used very similar attributes and thus did not differ in their use of symbol-related and form-related attributes. Using a prototype approach (Amelang & Zielinski, 2002), the generated attributes were assigned to the previously identified and subsequently categorized brand values of 10 automobile manufacturers, resulting in six scales for measuring the aesthetic impression of vehicles.
In Study II, a questionnaire comprising these six scales was tested for scale consistency in an online survey with a sample of 83 designers and design students as well as 98 participants without design training. In addition, first assumptions derived from the model of Leder et al. (2004) were tested by comparing the two participant groups with respect to their evaluation of the four presented vehicle models on the scales with good internal consistency (attractiveness, dynamics, progressiveness, quality), as well as an overall aesthetic judgment, the time needed for the evaluation, and automobile affinity. Design students, and especially trained designers, gave more radical ratings than design laypersons, needed more time for their ratings, and had a higher affinity for automobiles than the untrained survey participants. The second focus of the work was a conceptual integration of the assumptions of the model of Leder et al. (2004) and the postulates on the effect of object properties on aesthetic judgments (Berlyne, 1971; Martindale, 1988; Silvia, 2005b). Specifically, it was to be tested what influence market-relevant object properties, such as the degree of innovativeness, have on the expertise-moderated evaluation of design. To this end, in Studies III and IV, line models of vehicles systematically graded with respect to innovativeness and balance were presented. In Study III, the models were rated for attractiveness, innovativeness and balance in an online survey by 18 design students and 20 students of automotive engineering. In line with the assumptions, it was shown that very novel design is rated as less attractive by design-untrained participants than by viewers enrolled in a design degree program. In Study IV, in addition to the aesthetic ratings, the gaze behavior and the affective state of the participants were recorded in a repeated-measures design with an intermediate phase of elaborated design evaluation, in which the questionnaire tested in Study II was used. Eleven designers, eleven engineers and eleven humanities scholars took part in the laboratory study. Again, innovative design was rated as less attractive by the design-untrained groups; this difference, however, decreased after repeated evaluation of the models. Expertise-related gaze behavior could not be observed, nor could the more positive mood or higher satisfaction in the expert group that had been expected to accompany an assumed better coping. Together with the findings from Studies II and III, it became clear that design training and, even more pronouncedly, design expertise lead not only to higher attractiveness ratings of innovative design but also to a more differentiated judgment of innovativeness. This was interpreted as an extension of the mental schema for vehicles through engagement with a wide range of model variants already during design education. Indications of a style-related, more elaborate processing of vehicle design by design-trained viewers were observed, as well as an autonomy of aesthetic judgments accompanying expertise, as an expression of a high stage of aesthetic development (Parsons, 1987).
These stable, expertise-related evaluation differences, observed across different samples, provide a well-founded basis for the called-for sensitization to customers' aesthetic needs in the design process. The questionnaire developed in this work can be used for an elaborated measurement of vehicle design preferences, for comparing the aesthetic impression with the intended brand values, and for discussing user impressions. The results of the present work thus contribute to extending and refining the theoretical understanding of aesthetic evaluations and can at the same time be transferred to the practice of design education and the design process.
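Since Study II evaluates the internal consistency of the questionnaire scales, a minimal sketch of how Cronbach's alpha can be computed from item ratings is given below; the rating matrix is invented, and this is a generic illustration rather than the analysis actually reported.

```python
# Minimal sketch: Cronbach's alpha for one rating scale.
# The rating matrix is invented (rows = respondents, columns = scale items).
import numpy as np

ratings = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
], dtype=float)

k = ratings.shape[1]
item_variances = ratings.var(axis=0, ddof=1)
total_variance = ratings.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```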
Functional metabolism of storage carbohydrates is vital to plants and animals. The water-soluble glycogen in animal cells and the amylopectin, which is the major component of the water-insoluble starch granules residing in plant plastids, are chemically similar, as they consist of α-1,6-branched α-1,4 glucan chains. Synthesis and degradation of transitory starch and of glycogen are accomplished by a set of enzymatic activities that to some extent are also similar in plants and animals. Chain elongation, branching, and debranching are achieved by synthases, branching enzymes, and debranching enzymes, respectively. Similarly, both types of polyglucans contain low amounts of phosphate esters whose abundance varies depending on species and organ. Starch is selectively phosphorylated by at least two dikinases (GWD and PWD) at the glucosyl carbons C6 and C3 and dephosphorylated by the phosphatase SEX4 and SEX4-like enzymes. In Arabidopsis, insufficient starch phosphorylation or dephosphorylation results in largely impaired starch turnover, starch accumulation, and often retardation of growth. In humans, the progressive neurodegenerative epilepsy Lafora disease is the result of a defective enzyme (laforin) that is functionally equivalent to the starch phosphatase SEX4 and capable of glycogen dephosphorylation. Patients lacking laforin progressively accumulate unphysiologically structured, insoluble glycogen-derived particles (Lafora bodies) in many tissues, including the brain. Previous results concerning the carbon position of glycogen phosphate are contradictory. Currently it is believed that glycogen is esterified exclusively at the carbon positions C2 and C3 and that the monophosphate esters, being incorporated via a side reaction of glycogen synthase (GS), lack any specific function and are rather an enzymatic error that needs to be corrected. In this study, a versatile and highly sensitive enzymatic cycling assay was established that enables the quantification of very small G6P amounts in the presence of high concentrations of non-target compounds, as present in hydrolysates of polysaccharides such as starch, glycogen, or cytosolic heteroglycans in plants. Following validation of the G6P determination by analyzing previously characterized starches, G6P was quantified in hydrolysates of various glycogen samples and in plant heteroglycans. Interestingly, glucosyl C6 phosphate is present in all glycogen preparations examined, its abundance varying between glycogens of different sources. Additionally, it was shown that carbon C6 is severely hyperphosphorylated in glycogen of a Lafora disease mouse model and that laforin is capable of removing C6 phosphate from glycogen. After enrichment of phosphoglucans from amylolytically degraded glycogen, several two-dimensional NMR techniques were applied that independently proved the existence of 6-phosphoglucosyl residues in glycogen and confirmed the recently described phosphorylation sites C2 and C3. C6 phosphate is neither Lafora disease-, species-, nor organ-specific, as it was demonstrated in liver glycogen from laforin-deficient mice and in glycogen from wild-type rabbit skeletal muscle. The distribution of 6-phosphoglucosyl residues within glycogen molecules was analyzed and found to be uneven. Gradual degradation experiments revealed that C6 phosphate is more abundant in central parts of the glycogen molecules and in molecules possessing longer glucan chains.
Glycogen of Lafora disease mice consistently contains a higher proportion of longer chains, while most short chains are reduced compared to the wild type. Together with recently published results (Nitschke et al., 2013), the findings of this work refute the hypothesis of GS-mediated phosphate incorporation, since the respective reaction mechanism excludes phosphorylation of this glucosyl carbon and since it is difficult to explain an uneven distribution of C6 phosphate by a stochastic event. Indeed, the results rather point to a specific function of 6-phosphoglucosyl residues in the metabolism of polysaccharides, as they are present in starch, glycogen, and, as described in this study, in heteroglycans of Arabidopsis. In the latter, the function of phosphate remains unclear, but this study provides evidence that in starch and glycogen it is related to branching. Moreover, a role of C6 phosphate in the early stages of glycogen synthesis is suggested. By rejecting the current view of glycogen phosphate as a stochastic biochemical error, the results permit a wider view of the putative roles of glycogen phosphate and of alternative biochemical routes of glycogen phosphorylation, which for many reasons are likely to be mediated by distinct phosphorylating enzymes, as is realized in the starch metabolism of plants. A better understanding of the enzymology underlying glycogen phosphorylation opens new possibilities for the treatment of Lafora disease.
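As an illustration of the quantification step behind an assay such as the enzymatic cycling assay described above, the following minimal sketch fits a standard curve of known G6P concentrations against measured signals and then estimates an unknown sample; all values are invented and this shows generic calibration practice, not the specific cycling assay of the study.

```python
# Minimal sketch: quantify G6P from a standard curve by linear regression.
# Signal and concentration values are invented; the real cycling assay
# amplifies the signal before this calibration step.
import numpy as np

standards_uM = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # known G6P concentrations
signals      = np.array([0.02, 0.25, 0.49, 0.98, 1.95])  # measured readout

slope, intercept = np.polyfit(standards_uM, signals, 1)

sample_signal = 0.61
sample_uM = (sample_signal - intercept) / slope
print(f"estimated G6P: {sample_uM:.2f} uM")
```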
Landslides are one of the biggest natural hazards in Georgia, a mountainous country in the Caucasus. So far, no systematic monitoring and analysis of landslide dynamics in Georgia has been carried out. Since landslides are triggered by extrinsic processes, analyzing them together with precipitation and earthquake data is particularly challenging. In this thesis I describe the advantages and limits of remote sensing for detecting and better understanding the nature of landslides in Georgia. The thesis is written in cumulative form, comprising a general introduction, three manuscripts, and a summary and outlook chapter. In the present work, I measure the surface displacement due to active landslides with different interferometric synthetic aperture radar (InSAR) methods. Slow landslides (several cm per year) are well detectable with two-pass interferometry, whereas extremely slow landslides (several mm per year) can be detected only with time-series InSAR techniques. I exemplify the success of InSAR techniques by showing hitherto unknown landslides located in the central part of Georgia; both the landslide extent and the displacement rate are quantified. Further, to determine the possible depth and position of potential sliding planes, inverse models were developed. Inverse modeling searches for source parameters that can reproduce the observed displacement distribution. I also empirically estimate the volume of the investigated landslide using the displacement distributions derived from InSAR combined with morphology from aerial photography. I adapted a volume formula for this case and also combined available seismicity and precipitation data to analyze potential triggering factors. A governing question was: what causes the landslide acceleration observed in the InSAR data? The investigated area (central Georgia) is seismically highly active. As an additional product of the InSAR data analysis, a deformation area associated with the 7th September Mw=6.0 earthquake was found. Evidence of surface ruptures directly associated with the earthquake could not be found in the field; however, during and after the earthquake new landslides were observed. The thesis highlights that deformation measured by InSAR may help to map areas prone to earthquake-triggered landslides, potentially providing a technique that is relevant for country-wide landslide monitoring, especially as new satellite sensors will emerge in the coming years.
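As a point of reference for the interferometric measurements mentioned above, the following minimal sketch converts unwrapped interferometric phase to line-of-sight displacement; the wavelength corresponds to a generic C-band sensor, the phase values are invented, and the sign convention depends on the processing chain.

```python
# Minimal sketch: convert unwrapped interferometric phase to line-of-sight
# displacement. Wavelength is for a C-band sensor (~5.6 cm); phase values
# are invented and the sign convention depends on processing.
import numpy as np

wavelength = 0.0562                                  # metres (C-band)
unwrapped_phase = np.array([0.0, 1.2, 3.1, 6.3])     # radians, invented

los_displacement = -(wavelength / (4.0 * np.pi)) * unwrapped_phase
print(los_displacement * 1000, "mm")   # one 2*pi fringe corresponds to ~28 mm
```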
Scalable compatibility for embedded real-time components via language progressive timed automata
(2013)
The correct composition of individually developed components of embedded real-time systems is challenging because, in addition to functional properties, non-functional properties have to be taken into account. One example is the compatibility of real-time properties, which play a decisive role in embedded systems. Today, the compatibility of such properties is checked in costly integration and configuration tests at the end of the development process, and in the worst case these tests fail. For this reason, a number of formal methods have been developed that allow an early analysis of the real-time properties of components, so that incompatibilities of real-time properties can be ruled out in later phases. Existing approaches, however, either require a set of conditions that real systems can satisfy only with difficulty, or their analysis techniques do not scale to larger systems. This thesis presents an approach based on the formal model of timed automata that does not impose conditions that are hard for real systems to fulfill. The approach includes a framework that enables a modular analysis in which only pairs of communicating components need to be checked. This makes possible a scalable analysis of real-time properties that does not require conditions real systems can satisfy only to a limited extent.
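To illustrate the modular, pairwise style of analysis described above, here is a minimal Python sketch that iterates over pairs of communicating components and delegates to a compatibility check; the component model and the check_compatibility function are hypothetical placeholders, not the timed-automata algorithm of the thesis.

```python
# Minimal sketch of pairwise compatibility checking over communicating
# components. The data model and check_compatibility are hypothetical
# placeholders; the actual analysis in the thesis operates on timed automata.
from itertools import combinations

components = {
    "sensor":     {"talks_to": {"controller"}},
    "controller": {"talks_to": {"sensor", "actuator"}},
    "actuator":   {"talks_to": {"controller"}},
}

def check_compatibility(a, b):
    # Placeholder: a real implementation would compare the timed behaviour
    # (e.g. timed automata) of both components on their shared interface.
    return True

incompatible = []
for a, b in combinations(components, 2):
    if b in components[a]["talks_to"] or a in components[b]["talks_to"]:
        if not check_compatibility(a, b):
            incompatible.append((a, b))

print("incompatible pairs:", incompatible)
```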
The mapping of planetary bodies is an essential tool of spacecraft-based exploration of celestial bodies. Currently, geographic information systems (GIS) are used to produce planetary maps. The aim of this thesis is to design a GIS-oriented processing chain (Planetary Mapping System, PMS), with the focus on being able to produce geological and geomorphological maps of planetary surfaces in a uniform manner and to make them accessible in a sustainable way.