Linked Open Data (LOD) comprises numerous, often very large public datasets and knowledge bases. These datasets are mostly represented in the RDF triple structure of subject, predicate, and object, where each triple represents a statement or fact. Unfortunately, the heterogeneity of available open data requires significant integration steps before it can be used in applications. Meta information, such as ontological definitions and exact range definitions of predicates, is desirable and ideally provided by an ontology. However, in the context of LOD, ontologies are often incomplete or simply unavailable. Thus, it is useful to automatically generate meta information, such as ontological dependencies, range definitions, and topical classifications. Association rule mining, which was originally applied to sales analysis on transactional databases, is a promising and novel technique for exploring such data. We designed an adaptation of this technique for mining RDF data and introduce the concept of "mining configurations", which allows us to mine RDF datasets in various ways. Different configurations enable us to identify schema and value dependencies that in combination result in interesting use cases. To this end, we present rule-based approaches for auto-completion, data enrichment, ontology improvement, and query relaxation. Auto-completion remedies the problem of inconsistent ontology usage by providing an editing user with a sorted list of commonly used predicates. A combination of different configurations extends this approach to create completely new facts for a knowledge base. We present two approaches to fact generation: a user-based approach, where a user selects the entity to be amended with new facts, and a data-driven approach, where an algorithm discovers entities that need to be amended with missing facts. As knowledge bases constantly grow and evolve, another way to improve the usage of RDF data is to improve existing ontologies.
Here, we present an association-rule-based approach to reconcile ontology and data. By interlacing different mining configurations, we derive an algorithm to discover synonymously used predicates. Such predicates can be used to expand query results and to support users during query formulation. We provide a wide range of experiments on real-world datasets for each use case. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our mining configuration methodology.
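The abstract does not spell out the individual mining configurations, but one plausible configuration can be sketched: treat each subject as a transaction and its predicates as the items, then mine association rules over predicate co-occurrence. The following minimal sketch is illustrative only; all function names are hypothetical, not from the thesis:

```python
from collections import defaultdict
from itertools import combinations

def predicate_transactions(triples):
    """Group RDF triples by subject: each subject's predicate set is one transaction."""
    tx = defaultdict(set)
    for s, p, o in triples:
        tx[s].add(p)
    return list(tx.values())

def association_rules(transactions, min_support=0.5, min_confidence=0.8):
    """Mine single-antecedent rules p -> q over predicate co-occurrence."""
    n = len(transactions)
    count = defaultdict(int)
    for t in transactions:
        for p in t:
            count[(p,)] += 1                      # single-item support counts
        for pair in combinations(sorted(t), 2):
            count[pair] += 1                      # pair support counts
    rules = []
    for key, c in count.items():
        if len(key) != 2 or c / n < min_support:
            continue
        a, b = key
        for ante, cons in ((a, b), (b, a)):
            confidence = c / count[(ante,)]
            if confidence >= min_confidence:
                rules.append((ante, cons, c / n, confidence))
    return rules
```

A rule such as population → type with high confidence is exactly the kind of schema-level dependency the auto-completion use case exploits: given the predicates a subject already has, frequently co-occurring predicates can be suggested to the editing user.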
This study investigates whether number dissimilarities on subject and object DPs facilitate the comprehension of subject- and object-extracted centre-embedded relative clauses in children with Grammatical Specific Language Impairment (G-SLI). We compared the performance of a group of English-speaking children with G-SLI (mean age: 12;11) with that of two groups of younger typically developing (TD) children, matched on grammar and receptive vocabulary, respectively. All groups were more accurate on subject-extracted relative clauses than object-extracted ones and, crucially, they all showed greater accuracy for sentences with dissimilar number features (i.e., one singular, one plural) on the head noun and the embedded DP. These findings are interpreted in the light of current psycholinguistic models of sentence comprehension in TD children and provide further insight into the linguistic nature of G-SLI.
Die Tränen des Henri Quatre
(2014)
This second volume of texts from the estate of Dieter Adelmann (1936–2008) makes one of the last works its author released, on the occasion of his 70th birthday, accessible to a broad audience. The essay concludes Adelmann's first, broadly philosophical publications, written after his doctoral dissertation (vol. 1 of the posthumous publications). Starting from an observation in Heinrich Mann's epochal two-part fictional biography of the Navarrese-French Protestant king Henri IV, Adelmann examines the imagery of tears and death in this novel cycle from the 1930s. From there, a bridge is built to the literary aesthetics and epistemology of Walter Benjamin, with whom Adelmann had already engaged in his work on the political art of Klaus Staeck. In doing so, the author uncovers misinterpretations of Benjamin's theories by Benjamin's friend and literary executor Theodor W. Adorno.
A dramatic efficiency improvement of bulk heterojunction solar cells based on electron-donating conjugated polymers in combination with soluble fullerene derivatives has been achieved over the past years. Certified and reported power conversion efficiencies now reach over 9% for single junctions and exceed the 10% benchmark for tandem solar cells. This trend brightens the prospect of organic photovoltaics becoming competitive with inorganic solar cells, including the realization of low-cost, large-area devices. For the best-performing organic material systems, the yield of charge generation can be very efficient. However, a detailed understanding of the free charge carrier generation mechanisms at the donor–acceptor interface, and of the energy loss associated with them, still needs to be established. Moreover, organic solar cells are limited by the competition between charge extraction and free charge recombination, accounting for further efficiency losses. A conclusive picture and precise methodologies for investigating the fundamental processes in organic solar cells are crucial for future material design, efficiency optimization, and the implementation of organic solar cells in commercial products.
In order to advance the development of organic photovoltaics, my thesis focuses on a comprehensive understanding of charge generation, recombination, and extraction in organic bulk heterojunction solar cells, summarized in six chapters on the cumulative basis of seven individual publications.
The general motivation guiding this work was the realization of an efficient hybrid inorganic/organic tandem solar cell with sub-cells made from amorphous hydrogenated silicon and organic bulk heterojunctions. To realize this project aim, the focus was directed to the low band-gap copolymer PCPDTBT and its derivatives, resulting in the examination of the charge carrier dynamics in PCPDTBT:PC70BM blends in relation to the blend morphology. The phase separation in this blend can be controlled by the processing additive diiodooctane, which enhances domain purity and size. The quantitative investigation of free charge formation was realized by utilizing and improving the time-delayed collection field technique. Interestingly, a pronounced field dependence of free carrier generation is found for all blends, with the field dependence being stronger without the additive. Also, the bimolecular recombination coefficient for both blends is rather high and increases with decreasing internal field, which we suggest is caused by a negative field dependence of the mobility. The additive speeds up charge extraction, which is rationalized by a threefold increase in mobility.
By attaching fluorine within the electron-deficient subunit of PCPDTBT, a new polymer, F-PCPDTBT, is designed. This new material is characterized by a stronger tendency to aggregate compared to non-fluorinated PCPDTBT. Our measurements show that for F-PCPDTBT:PCBM blends the charge carrier generation becomes more efficient and the field dependence of free charge carrier generation is weakened. The stronger tendency to aggregate induced by the fluorination also leads to larger polymer-rich domains, accompanied by a threefold reduction in the non-geminate recombination coefficient at open-circuit conditions. The size of the polymer domains correlates well with the field dependence of charge generation and the Langevin reduction factor, which highlights the importance of domain size and domain purity for efficient charge carrier generation. In total, fluorination of PCPDTBT increases the PCE from 3.6% to 6.1% due to an enhanced fill factor, short-circuit current, and open-circuit voltage. Further optimization of the blend ratio, active layer thickness, and polymer molecular weight resulted in 6.6% efficiency for F-PCPDTBT:PC70BM solar cells.
Interestingly, the doubly fluorinated version 2F-PCPDTBT exhibited a poorer fill factor despite a further reduction of geminate and non-geminate recombination losses. To analyze this finding further, a new technique is developed that measures the effective extraction mobility at charge carrier densities and electrical fields comparable to solar cell operating conditions. This method builds on the bias-enhanced charge extraction technique. With knowledge of the carrier density under different electrical field and illumination conditions, a conclusive picture of the changes in charge carrier dynamics leading to differences in the fill factor upon fluorination of PCPDTBT is attained. The more efficient charge generation and reduced recombination with fluorination are counterbalanced by a decreased extraction mobility. Thus, the highest fill factor of 60% and efficiency of 6.6% are reached for F-PCPDTBT blends, while 2F-PCPDTBT blends have only moderate fill factors of 54%, caused by the lower effective extraction mobility, limiting the efficiency to 6.5%.
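The quoted fill factors and efficiencies are connected through the standard definition of power conversion efficiency, PCE = (Jsc · Voc · FF) / Pin. A minimal helper, with illustrative input values chosen only to land in the quoted magnitude range (they are not the thesis's measured Jsc and Voc):

```python
def power_conversion_efficiency(j_sc, v_oc, ff, p_in=100.0):
    """PCE in percent from the short-circuit current density (mA/cm^2),
    open-circuit voltage (V), fill factor (0..1), and incident power
    (mW/cm^2; 100 corresponds to standard AM1.5G illumination)."""
    return j_sc * v_oc * ff / p_in * 100.0
```

With illustrative values Jsc = 15 mA/cm², Voc = 0.74 V, and the quoted fill factor of 60%, this yields a PCE of about 6.7%, i.e. the magnitude reported for the F-PCPDTBT blends.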
To understand the details of the charge generation mechanism and the related losses, we evaluated the yield and field-dependence of free charge generation using time delayed collection field in combination with sensitive measurements of the external quantum efficiency and absorption coefficients for a variety of blends. Importantly, both the yield and field-dependence of free charge generation is found to be unaffected by excitation energy, including direct charge transfer excitation below the optical band gap. To access the non-detectable absorption at energies of the relaxed charge transfer emission, the absorption was reconstructed from the CT emission, induced via the recombination of thermalized charges in electroluminescence. For a variety of blends, the quantum yield at energies of charge transfer emission was identical to excitations with energies well above the optical band-gap. Thus, the generation proceeds via the split-up of the thermalized charge transfer states in working solar cells. Further measurements were conducted on blends with fine-tuned energy levels and similar blend morphologies by using different fullerene derivatives. A direct correlation between the efficiency of free carrier generation and the energy difference of the relaxed charge transfer state relative to the energy of the charge separated state is found. These findings open up new guidelines for future material design as new high efficiency materials require a minimum energetic offset between charge transfer and the charge separated state while keeping the HOMO level (and LUMO level) difference between donor and acceptor as small as possible.
Mergers are a central building block of industrial organization. This book examines the influence that the spatial dimension exerts on a merger. A basic model is developed and, beyond it, a multitude of extensions is presented. The reader thus gains the opportunity to acquire a deep understanding of mergers under spatial competition.
The "International Conference for the 10th Anniversary of the Institute of Comparative Law" took place on 24 May 2013 in Szeged. In the course of the quadrilingual conference, more than thirty participants presented their research results. The essay by Zoltán Péteri views the discipline from the perspective of the history of science. In their paper, Katalin Kelemen and Balázs Fekete trace the path taken by attempts to classify the legal systems of Eastern Europe in the late phase of the upheavals of the 1980s and 1990s. The historical perspective, with reference to legal history and comparative law, is also reflected in other essays, above all in the papers by Szilvia Bató, Magdolna Gedeon and Béla Szabó P., as well as in those by Péter Mezei and Tünde Szűcs. Attila Badó analyses comparative law from the viewpoints of law, sociology and political science on the basis of studies of the sanctioning system for judges in the USA. This political-science dimension is also emphasized in the papers on current questions of European integration by Carine Guemar and Laureline Congnard. A further series of papers treats conventional normative comparative law in the fields of constitutional law (Jordane Arlettaz and Péter Kruzslicz), company law (Kitti Bakos-Kovács), copyright law (Dóra Hajdú) and tax law (Judit Jacsó). Alongside these, the papers by János Bóka and Erzsébet Csatlós form a further group, examining the use of the comparative method in judicial practice. Comparative law is a dynamically developing discipline. The conference and this volume serve not only to honour the work of the Institute of Comparative Law to date but also to point out new goals.
The most important principles, however, remain firmly anchored even in a constantly changing legal and intellectual environment. The Institute's motto is "instruere et docere omnes qui edoceri desiderant" – "to teach all who want to learn." In the decades to come, we will continue to be guided by the will to learn and to teach, by the freedom of research, and by the transmission and further development of Hungarian as well as global legal culture.
This reference paper describes the sampling and contents of the IZA Evaluation Dataset Survey and outlines its vast potential for research in labor economics. The data have been part of a unique IZA project to connect administrative data from the German Federal Employment Agency with innovative survey data to study the mobility of unemployed individuals into work. This study makes the survey available to the research community as a Scientific Use File by explaining the development and structure of, and access to, the data. It also summarizes previous findings based on the survey data.
Adopting a minimalist framework, the dissertation provides an analysis of the syntactic structure of comparatives, with special attention paid to the derivation of the subclause. The proposed account explains how the comparative subclause is connected to the matrix clause, how the subclause is formed in the syntax, and what additional processes contribute to its final structure. In addition, it casts light upon these problems in cross-linguistic terms and provides a model that allows for synchronic and diachronic differences. This also enables one to give a more adequate explanation for the phenomena found in English comparatives, since the properties of English structures can then be linked to general settings of the language and hence need no longer be considered idiosyncratic features of the grammar of English. First, the dissertation provides a unified analysis of degree expressions, relating the structure of comparatives to that of other degree expressions. It is shown that gradable adjectives are located within a degree phrase (DegP), which in turn projects a quantifier phrase (QP), and that these two functional layers are always present, irrespective of whether there is a phonologically visible element in them. Second, the dissertation presents a novel analysis of Comparative Deletion by reducing it to an overtness constraint holding on operators: in this way, it is reduced to morphological differences, and cross-linguistic variation is not conditioned by postulating an arbitrary parameter. Cross-linguistic differences are ultimately dependent on whether a language has overt operators equipped with the relevant – [+compr] and [+rel] – features. Third, the dissertation provides an adequate explanation for the phenomenon of Attributive Comparative Deletion, as attested in English, by relating it to the regular mechanism of Comparative Deletion.
I assume that Attributive Comparative Deletion is not a universal phenomenon and that its presence in English is conditioned by independent, more general rules, while the absence of such restrictions leads to its absence in other languages. Fourth, the dissertation accounts for certain phenomena related to diachronic changes, examining how changes in the status of comparative operators led to changes in whether Comparative Deletion is attested in a given language: I argue that only operators without a lexical XP can be grammaticalised. The underlying mechanisms are essentially general economy principles, and hence the processes are not language-specific or exceptional. Fifth, the dissertation accounts for optional ellipsis processes that play a crucial role in the derivation of typical comparative subclauses. These processes are not directly related to the structure of degree expressions, and hence not to the elimination of the quantified expression from the subclause; nevertheless, they are shown to interact with the mechanisms underlying Comparative Deletion or the absence thereof.
Biosensors for the detection of benzaldehyde and γ-aminobutyric acid (GABA) are reported using the aldehyde oxidoreductase PaoABC from Escherichia coli immobilized in a polymer containing bound low-potential osmium redox complexes. The electrically connected enzyme electrooxidizes benzaldehyde already at potentials below −0.15 V (vs. Ag|AgCl, 1 M KCl). The pH dependence of benzaldehyde oxidation can be strongly influenced by the ionic strength. The effect is similar with the soluble osmium redox complex and therefore indicates a clear electrostatic effect on the bioelectrocatalytic efficiency of PaoABC in the osmium-containing redox polymer. At low ionic strength the pH optimum is high, and it can be switched to low pH values at high ionic strength. This enables biosensing at both high and low pH values. A "reagentless" biosensor has been formed with the enzyme wired onto a screen-printed electrode in a flow-cell device. The response time to the addition of benzaldehyde is 30 s, the measuring range is 10–150 µM, and the detection limit is 5 µM benzaldehyde (signal-to-noise ratio 3:1). The relative standard deviation in a series (n = 13) for 200 µM benzaldehyde is 1.9%. The biosensor also responds to succinic semialdehyde. Based on this response and the ability to work at high pH, a biosensor for GABA is proposed by co-immobilizing GABA aminotransferase (GABA-T) and PaoABC in the osmium-containing redox polymer.
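The quoted detection limit follows from the usual k·σ criterion (here a 3:1 signal-to-noise ratio) applied to a linear amperometric calibration. A minimal sketch with hypothetical calibration data (the paper's raw calibration points are not given in the abstract):

```python
def linear_calibration(concs, signals):
    """Least-squares slope and intercept of a linear calibration curve."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(signals) / n
    sxx = sum((x - mx) ** 2 for x in concs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concs, signals))
    slope = sxy / sxx
    return slope, my - slope * mx

def detection_limit(slope, sd_blank, k=3.0):
    """Smallest concentration whose signal exceeds k times the blank noise
    (k = 3 corresponds to the 3:1 signal-to-noise criterion)."""
    return k * sd_blank / slope
```

Given a calibration slope in nA/µM and the standard deviation of the blank current in nA, the detection limit comes out directly in µM.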
A fruitful teaching cooperation has existed for several years between the law faculties of the University of Szeged and the University of Potsdam. Out of it, a scholarly collaboration is gradually developing; joint conferences and publications attest to this. The present volume is the result of this cooperation. The book's title signals the commitment of the Hungarian and German jurists as well as the shared values that underlie European legal development in the 21st century and link the doctrine of the various fields of law. The individual contributions bear witness to the full breadth of the interests of the Hungarian and German jurists.
DNA origami nanostructures allow for the arrangement of different functionalities such as proteins, specific DNA structures, nanoparticles, and various chemical modifications with unprecedented precision. The arranged functional entities can be visualized by atomic force microscopy (AFM), which enables the study of molecular processes at the single-molecule level. Examples comprise the investigation of chemical reactions, electron-induced bond breaking, enzymatic binding and cleavage events, and conformational transitions in DNA. In this paper, we provide an overview of the advances achieved in the field of single-molecule investigations by applying atomic force microscopy to functionalized DNA origami substrates.
On the role of fluoro-substituted nucleosides in DNA radiosensitization for tumor radiation therapy
(2014)
Gemcitabine (2′,2′-difluorocytidine) is a well-known radiosensitizer routinely applied in concomitant chemoradiotherapy. During irradiation of biological media with high-energy radiation, secondary low-energy (<10 eV) electrons are produced that can directly induce chemical bond breakage in DNA by dissociative electron attachment (DEA). Here, we investigate and compare DEA to the three molecules 2′-deoxycytidine, 2′-deoxy-5-fluorocytidine, and gemcitabine. Fluorination at specific molecular sites, i.e., the nucleobase or the sugar moiety, is found to control electron attachment and the subsequent dissociation pathways. The presence of two fluorine atoms at the sugar ring results in more efficient electron attachment to the sugar moiety and subsequent bond cleavage. For the formation of the dehydrogenated nucleobase anion, we obtain an enhancement factor of 2.8 upon fluorination of the sugar, whereas the enhancement factor is 5.5 when the nucleobase is fluorinated. The observed fragmentation reactions suggest enhanced DNA strand breakage induced by secondary electrons when gemcitabine is incorporated into DNA.
The H.E.S.S. array is a third-generation Imaging Atmospheric Cherenkov Telescope (IACT) array. It is located in the Khomas Highland in Namibia and measures very high energy (VHE) gamma rays. In Phase I, the array started data taking in 2004 with its four identical 13 m telescopes. Since then, H.E.S.S. has emerged as the most successful IACT experiment to date. Among the almost 150 sources of VHE gamma-ray radiation found so far, even the oldest detection, the Crab Nebula, keeps surprising the scientific community with unexplained phenomena such as the recently discovered very energetic flares of high-energy gamma-ray radiation. During its most recent flare, which was detected by the Fermi satellite in March 2013, the Crab Nebula was simultaneously observed with the H.E.S.S. array for six nights. The results of these observations are discussed in detail in the course of this work. During the nights of the flare, the new 24 m × 32 m H.E.S.S. II telescope was still being commissioned, but it participated in the data taking for one night. To be able to reconstruct and analyze the data of the H.E.S.S. Phase II array, the algorithms and software used by the H.E.S.S. Phase I array had to be adapted. The most prominent advanced shower reconstruction technique, developed by de Naurois and Rolland, the template-based model analysis, compares real shower images taken by the Cherenkov telescope cameras with shower templates obtained from a semi-analytical model. To find the best-fitting image, and therefore the relevant parameters that describe the air shower best, a pixel-wise log-likelihood fit is performed. The adaptation of this advanced shower reconstruction technique to the heterogeneous H.E.S.S. Phase II array for stereo events (i.e. air showers seen by at least two telescopes of any kind), its performance on Monte Carlo simulations, as well as its application to real data are described.
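The pixel-wise log-likelihood fit at the heart of the template-based model analysis can be illustrated with a toy version: for each candidate template, sum a per-pixel log-likelihood over the camera image and keep the template that maximizes it. The actual analysis uses a more elaborate per-pixel likelihood (including pedestal noise and single-photoelectron resolution) and a continuous fit over shower parameters; the sketch below, with hypothetical images and a pure Poisson term, shows only the principle:

```python
import math

def pixel_loglike(observed, predicted):
    """Poisson log-likelihood of an observed camera image given a template of
    predicted per-pixel expectation values (the constant log(n!) term is dropped)."""
    ll = 0.0
    for n, mu in zip(observed, predicted):
        mu = max(mu, 1e-12)          # guard against log(0)
        ll += n * math.log(mu) - mu  # log P(n | mu) up to a constant
    return ll

def best_template(observed, templates):
    """Index of the template (candidate shower image) maximizing the likelihood."""
    return max(range(len(templates)),
               key=lambda i: pixel_loglike(observed, templates[i]))
```

In the real analysis the discrete template lookup is replaced by a continuous minimization of the negative log-likelihood over the air-shower parameters (direction, impact point, energy, depth of first interaction).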
Surface displacement at volcanic edifices is related to subsurface processes associated with magma movements, fluid transfer within the volcanic edifice, and gravity-driven deformation processes. Understanding the associated ground displacements is important for the assessment of volcanic hazards. For example, volcanic unrest is often preceded by surface uplift, caused by magma intrusion, and followed by subsidence after the withdrawal of magma. Continuous monitoring of surface displacement at volcanoes might therefore allow upcoming eruptions to be forecast to some extent. In geophysics, the measured surface displacements allow the parameters of possible deformation sources to be estimated through analytical or numerical modeling, which is one way to improve the understanding of subsurface processes acting at volcanoes. Although the monitoring of volcanoes has improved significantly in recent decades (in terms of technical advancements and the number of monitored volcanoes), the forecasting of volcanic eruptions remains puzzling. In this work I contribute to the understanding of subsurface processes at volcanoes and thus to the improvement of volcano eruption forecasting. I investigated the displacement fields of Llaima volcano in Chile and of Tendürek volcano in eastern Turkey using synthetic aperture radar interferometry (InSAR). Through modeling of the deformation sources with the extracted displacement data, it was possible to gain insights into potential subsurface processes at these two volcanoes, which had barely been studied before. The two volcanoes, although very different in origin, composition, and geometry, both show a complexity of interacting deformation sources. At Llaima volcano, the InSAR technique was difficult to apply due to the large decorrelation of the radar signal between image acquisitions.
I developed a model-based unwrapping scheme that allows the production of reliable displacement maps at the volcano, which I used for deformation source modeling. The modeling results show significant differences between the pre- and post-eruptive magmatic deformation source parameters. I therefore conjecture that two magma chambers exist below Llaima volcano: a deep post-eruptive one and a shallow one, possibly due to the pre-eruptive ascent of magma. Similar reservoir depths at Llaima have been confirmed by independent petrologic studies. These reservoirs are interpreted to be temporally coupled. At Tendürek volcano I found long-term subsidence of the volcanic edifice, which can be described by a large, magmatic, sill-like source subject to cooling contraction. The displacement data in conjunction with high-resolution optical images, however, reveal arcuate fractures on the eastern and western flanks of the volcano. These are most likely the surface expressions of concentric ring-faults around the volcanic edifice that accumulate small magnitudes of slip over a long time. This might be an alternative mechanism for the development of large caldera structures, which have so far been assumed to be generated during large catastrophic collapse events. To investigate the potential subsurface geometry and the relation of the two proposed interacting sources at Tendürek, the sill-like magmatic source and the ring-faults, I performed a more sophisticated numerical modeling approach. The optimum source geometries show that the size of the sill-like source was overestimated in the simple models and that it is difficult to determine the dip angle of the ring-faults from surface displacement data alone. However, considering physical and geological criteria, a combination of outward-dipping reverse faults in the west and inward-dipping normal faults in the east seems most likely.
Consequently, the underground structure at Tendürek volcano consists of a small, sill-like, contracting magmatic source below the western summit crater that causes trapdoor-like faulting along the ring-faults around the volcanic edifice. The magmatic source and the ring-faults are therefore also interpreted to be temporally coupled. In addition, a method for data reduction has been improved. The modeling of subsurface deformation sources requires only a relatively small number of well-distributed InSAR observations at the Earth's surface. Satellite radar images, however, consist of several million such observations. The large amount of data therefore needs to be reduced by several orders of magnitude for source modeling, to save computation time and increase model flexibility. I introduce a model-based subsampling approach designed in particular for heterogeneously distributed observations. It allows a fast calculation of the data-error variance-covariance matrix, also supports the modeling of time-dependent displacement data, and is therefore an alternative to existing methods.
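As a minimal illustration of the analytical source modeling mentioned above, the classic Mogi point source predicts the vertical surface displacement produced by a volume change ΔV at depth d in an elastic half-space with Poisson's ratio ν. The thesis itself uses sill-like and ring-fault sources, so this is only the simplest member of that model family, with illustrative parameter values:

```python
import math

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source: volume change dV
    at the given depth below an elastic half-space with Poisson's ratio nu.
    r is the horizontal distance from the point directly above the source."""
    return (1.0 - nu) * dV / math.pi * depth / (depth**2 + r**2) ** 1.5
```

For ΔV = 10⁶ m³ at 4 km depth this predicts roughly 1.5 cm of uplift directly above the source, decaying with horizontal distance; fitting such displacement curves to InSAR observations is the essence of analytical deformation-source modeling.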
To some, cooperatives seem a dusty relic of the past. This tarnished image is surprising, for cooperatives have repeatedly proven particularly crisis-resistant and have long since emerged as a successful model for the future. The steady growth in new foundations, rising membership numbers, and the constant expansion of fields of activity confirm their high attractiveness. This corresponds to the enormous breadth of application of the cooperative idea, ranging from agricultural cooperatives through producer cooperatives in trade, crafts, and industry to very modern areas such as the new information and communication technologies. In all these and many other segments, varied cooperative design options can be found, following maxims such as self-help, solidarity, civic engagement, participation, and orientation toward members and the common good. Meanwhile, the great appeal of the cooperative idea is also attracting municipalities. Spurred by legislative impulses, cooperatives at the municipal level are currently experiencing a veritable upswing throughout Germany. This revaluation adds an important design option to considerations of how to guarantee and optimize municipal service provision, but it does not relieve municipalities of the choice itself. As with all organizational decisions, before resorting to cooperative organizational forms, a sober, task-, subject-, and situation-specific comparative analysis is required in each individual case, demanding specific knowledge and detailed expertise from decision-makers. The 19th symposium of the KWI discusses the legal framework and normative directives, practical experience, fields of application, conditions for success, and pitfalls in practice.
Diffusion of finite-size particles in two-dimensional channels with random wall configurations
(2014)
Diffusion of chemicals or tracer molecules through complex systems containing irregularly shaped channels is important in many applications. Most theoretical studies based on the famed Fick–Jacobs equation focus on the idealised case of infinitely small particles and reflecting boundaries. In this study we use numerical simulations to consider the transport of finite-size particles through asymmetrical two-dimensional channels. Additionally, we examine transient binding of the molecules to the channel walls by applying sticky boundary conditions. We consider an ensemble of particles diffusing in independent channels, which are characterised by common structural parameters. We compare our results for the long-time effective diffusion coefficient with a recent theoretical formula obtained by Dagdug and Pineda [J. Chem. Phys., 2012, 137, 024107].
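The abstract does not reproduce how an effective diffusion coefficient is extracted from simulations; as a generic sketch (not the study's actual channel geometry, which has random asymmetric walls), the following Brownian dynamics run in a straight 2D channel with reflecting walls recovers the free diffusivity along the channel axis from the long-time MSD. For corrugated channels, Fick-Jacobs-type theory predicts an effective coefficient below the free value. All parameter values here are illustrative:

```python
import numpy as np

# Brownian dynamics of a point tracer in a straight 2D channel with
# reflecting walls at y = ±half_width; a deliberately simplified
# stand-in for the paper's randomly corrugated channels.
rng = np.random.default_rng(1)
D0, dt, half_width = 1.0, 1e-3, 0.5
n_part, n_steps = 500, 2000
sigma = np.sqrt(2 * D0 * dt)

x = np.zeros(n_part)
y = rng.uniform(-half_width, half_width, n_part)
x0 = x.copy()
for _ in range(n_steps):
    x += sigma * rng.standard_normal(n_part)
    y += sigma * rng.standard_normal(n_part)
    # reflect trajectories that cross the channel walls
    y = np.where(y > half_width, 2 * half_width - y, y)
    y = np.where(y < -half_width, -2 * half_width - y, y)

t = n_steps * dt
D_eff = np.mean((x - x0) ** 2) / (2 * t)  # 1D MSD along the channel axis
print(f"D_eff = {D_eff:.2f} (input D0 = {D0})")
```

In a straight channel the walls do not impede axial transport, so D_eff equals D0 within statistical error; wall corrugation, finite particle size, and sticky boundaries as in the study would each lower it.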
Catalytic bio–chemo and bio–bio tandem oxidation reactions for amide and carboxylic acid synthesis
(2014)
A catalytic toolbox for three different water-based one-pot cascades converting aryl alcohols to amides and acids, and cyclic amines to lactams, is presented, involving combinations of oxidative enzymes (monoamine oxidase, xanthine dehydrogenase, galactose oxidase and laccase) and chemical oxidants (TBHP or CuI(cat)/H2O2) at mild temperatures. Mutually compatible conditions were found that afford the products in good to excellent yields.
Although to this day the Union treaties contain no provision on member state liability for decisions of national courts, the Court of Justice of the European Union (CJEU) has developed and refined such liability in a series of judgments. This thesis analyzes that case law in depth, together with the multifaceted legal questions arising from it. The first chapter traces the historical development of state liability under Union law in general, starting from the well-known Francovich judgment of 1991. The second chapter then presents the decisions in Köbler and Traghetti, which are fundamental to liability for judicial wrongs. The third chapter examines the legal character of state liability under Union law, including the question of whether the Union-law claim is subsidiary to existing national state liability claims. The fourth chapter addresses whether state liability under Union law for judicial wrongs should be recognized in principle, with the main arguments for and against such liability discussed and assessed in detail. The fifth chapter discusses in detail the problems connected with the Union-law conditions of liability for decisions of courts of last instance. It also considers whether liability for erroneous decisions of lower courts should be endorsed. The sixth chapter deals with how the member states implement Union-law state liability for decisions of courts of last instance, including the question of whether the German liability privileges for judicial wrongs apply to the Union-law state liability claim.
The final chapter examines whether the CJEU had the competence at all to create state liability for decisions of courts of last instance. Finally, the main findings of the thesis are presented, together with an outlook on further possible effects and developments of Union-law state liability for judicial wrongs.
This thesis investigates the influence of ionic liquids both on the recombination of photolytically generated lophyl radicals and on photoinduced polymerization. The focus was on pyrrolidinium-based ionic liquids and on polymerizable imidazolium-based ionic liquids. Using UV-Vis spectroscopy, the recombination kinetics of the lophyl radicals generated photolytically from o-Cl-HABI were followed at different temperatures in the ionic liquids and, for comparison, in selected organic solvents, and the rate constants of radical recombination were determined. The recombination process was characterized in particular via the activation parameters obtained from the Eyring equation. It could be shown that, in contrast to the organic solvents, a large fraction of the lophyl radical recombination in the ionic liquids takes place within the solvent cage. Furthermore, with a view to using o-Cl-HABI as a radical generator in photoinduced polymerizations, several possible co-initiators were examined by photocalorimetric measurements. A new aspect of chain transfer from the lophyl radical to the heterocyclic co-initiator was also presented. In addition, photoinduced polymerizations were studied in the ionic liquids using an initiator system consisting of o-Cl-HABI as radical generator and a heterocyclic co-initiator. These studies comprise, on the one hand, photocalorimetric measurements of the photoinduced polymerization of polymerizable imidazolium-based ionic liquids and, on the other hand, studies of the photoinduced polymerization of methyl methacrylate in pyrrolidinium-based ionic liquids.
The influence of parameters such as time, temperature, viscosity, the solvent-cage effect and the alkyl chain length on the cation of the ionic liquids on the yields, molar masses and molar mass distributions of the polymers was examined.
The use of musical terminology is required by most curricula for music instruction at lower secondary level. However, not only the curricula but also the literature on music didactics lacks any substantive elaboration of this requirement; there is therefore no clarity about the content, scope and aim of the musical terminology to be used in school. Empirical studies on the linguistic content of music lessons are likewise lacking, and in many other school subjects the state of research on linguistic content is equally limited. Yet the use of language involves not only processes of communication but, at the same time, learning processes within language, from vocabulary expansion to the establishment of thematic connections. These learning processes are influenced by the word choices of learners and teachers, and the learners' word choices in turn allow conclusions about the state of their knowledge and how it is interconnected. On this basis, the linguistic content of music lessons is the subject of the present thesis. The aim of the study was to find out to what extent the manner and extent of the use of technical terminology in music lessons can make learning processes more effective and successful and align them better with learners' present and future needs.
The Babylonian Talmud (BT) attributes the idea of committing a transgression for the sake of God to R. Nahman b. Isaac (RNBI). RNBI's statement appears in two parallel sugyot in the BT (Nazir 23a; Horayot 10a). Each sugya has four textual witnesses. By comparing these textual witnesses, this paper attempts to reconstruct the sugya's earlier (or, as some might term it, original) dialectical form, from which the two familiar versions of the text in Nazir and Horayot evolved. This article reveals the specific ways in which value-laden conceptualizations have had a major impact on the Talmud's formulation as we know it today.
עבירה לשמה
(2014)
A Transgression for the Sake of God -‘Averah li-shmah: A Tale of a Radical Idea in Talmudic Literature
All cultures, religions, and ethical or legal systems struggle with the role intention plays in evaluating actions. The Talmud compellingly elaborates on the notion of intention through the radical concept that “A sin committed for the sake of God [averah li-shmah] is greater than a commandment fulfilled not for the sake of God [mi-mizvah she-lo li-shmah].” The Babylonian Talmud attributes this concept—which challenges one of rabbinic Judaism’s most fundamental dogmas, the obligation to fulfill the commandments and avoid sin—to R. Nahman b. Isaac (RNBI), a renowned 4th-century Amora. Considering the normative character of the rabbinic culture in which Halakhah (Jewish religious law) plays such a central role, this concept seems almost like a foreign body in the Talmudic corpus.
The study focuses on the linguistic stratum of RNBI’s statement. By tracking the development of the meanings and uses of the word ‘li-shmah’, the research locates RNBI’s statement within the broader Talmudic discourse evaluating two levels of performing religious actions, ‘li-shmah/she-lo li-shmah’. Since we wish to explain the word ‘li-shmah’ consistently both times it appears in the statement, the best translation is ‘for the sake of God’. This translation is based on the linguistic connection between the word ‘li-shmah’ and the term ‘le-shem shamayim’ (for the sake of God) that appears in several rabbinic sources. This linguistic connection is also the key to identifying the possible root of RNBI’s concept. RNBI bolsters his idea by quoting a verse about Jael, thus implying that Jael sinned for the sake of God. The research describes at least five statements in the Sages’ literature that attribute sins for the sake of God to other biblical figures, all the while using the term ‘le-shem shamayim’. We may therefore presume that RNBI’s concept evolved from the exegetical notion of attributing sin for the sake of God to biblical figures.
To understand the way RNBI’s statement was accepted in Talmudic culture, we must explore the textual witnesses to the literary frame of RNBI’s statement: the Talmudic sugya (Nazir 23a; Horayot 10b). We possess five versions of the sugya’s dialectical structure. Comparison of these versions allows us to reconstruct the earlier dialectical structure from which the familiar versions developed. The radical potential of RNBI’s statement led to cultural activity in the transmission of the sugya in an effort to mitigate it. This activity is reflected in late additions to the sugya identified by our research, which should be viewed as a process of self-censorship for ideological reasons.
This research explores a fundamental issue in the rabbinic world: the immanent contradiction between law and intention. It depicts in detail the movement of a radical idea from the margins of a culture to the mainstream, in this case into the Babylonian Talmud. The findings of this research therefore provide substantial insight into our understanding of the interpretive process and of conceptual adaptation in rabbinic culture.
Two of a kind?
(2014)
School attacks are attracting increasing attention in aggression research. Recent systematic analyses provided new insights into offense and offender characteristics. Less is known about attacks in institutes of higher education (e.g., universities). It is therefore questionable whether the term “school attack” should be limited to institutions of general education or could be extended to institutions of higher education. Scientific literature is divided in distinguishing or unifying these two groups and reports similarities as well as differences. We researched 232 school attacks and 45 attacks in institutes of higher education throughout the world and conducted systematic comparisons between the two groups. The analyses yielded differences in offender (e.g., age, migration background) and offense characteristics (e.g., weapons, suicide rates), and some similarities (e.g., gender). Most differences can apparently be accounted for by offenders’ age and situational influences. We discuss the implications of our findings for future research and the development of preventative measures.
Leaking comprises observable behavior or statements that signal intentions of committing a violent offense and is considered an important warning sign for school shootings. School staff who are confronted with leaking have to assess its seriousness and react appropriately, a difficult task, because knowledge about leaking is sparse. The present study therefore examined how frequently leaking occurs in schools and how teachers identify leaking and respond to it. To achieve this aim, we informed teachers from eight schools in Germany about the definition of leaking and other warning signs and risk factors for school shootings in a one-hour information session. Teachers were then asked to report cases of leaking over a six- to nine-month period and to answer a questionnaire on leaking and its treatment immediately after the information session and again six to nine months later. Our results suggest that leaking is a relevant problem in German schools. Teachers mostly rated the information session positively and benefited in several respects (e.g., they reported more perceived courses of action and improved knowledge about leaking), but also expressed a constant need for support. Our findings highlight teachers' needs for further support and training and may be used in the planning of prevention measures for school shootings.
Background: Doping attitude is a key variable in predicting athletes' intention to use forbidden performance-enhancing drugs. Indirect reaction-time-based attitude tests, such as the implicit association test, conceal the ultimate goal of measurement from the participant better than questionnaires do. Indirect tests are especially useful when socially sensitive constructs such as attitudes towards doping need to be described. The present study describes the development and validation of a novel picture-based brief implicit association test (BIAT) for testing athletes' attitudes towards doping in sport. It is intended to provide the basis for a transnationally compatible research instrument able to harmonize anti-doping research efforts.
Method: Following a known-group differences validation strategy, the doping attitudes of 43 athletes from bodybuilding (representative of a highly doping-prone sport) and handball (as a contrast group) were compared using the picture-based doping-BIAT. The Performance Enhancement Attitude Scale (PEAS) was employed as a corresponding direct measure in order to additionally validate the results.
Results: As expected, in the group of bodybuilders, indirectly measured doping attitudes as tested with the picture-based doping-BIAT were significantly less negative (η² = .11). The doping-BIAT and PEAS scores correlated significantly at r = .50 for bodybuilders, and not significantly at r = .36 for handball players. There was a low error rate (7%) and a satisfactory internal consistency (r_tt = .66) for the picture-based doping-BIAT.
Conclusions: The picture-based doping-BIAT constitutes a psychometrically tested method, ready to be adopted by the international research community. The test can be administered via the internet. All test material is available "open source". The test might be implemented, for example, as a new effect-measure in the evaluation of prevention programs.
Background: Knowing and, if necessary, altering competitive athletes' real attitudes towards the use of banned performance-enhancing substances is an important goal of worldwide doping prevention efforts. However, athletes will not always be willing to report their real opinions. Reaction-time-based attitude tests help conceal the ultimate goal of measurement from the participant and impede strategic answering. This study investigated how well a reaction-time-based attitude test discriminated between athletes who were doping and those who were not. We investigated whether athletes whose urine samples were positive for at least one banned substance (dopers) evaluated doping more favorably than clean athletes (non-dopers).
Methods: We approached a group of 61 male competitive bodybuilders and collected urine samples for biochemical testing. The pictorial doping Brief Implicit Association Test (BIAT) was used for attitude measurement. This test quantifies the difference in response latencies (in milliseconds) to stimuli representing related concepts (i.e. doping-dislike/like-[health food]).
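The abstract describes the BIAT score as a difference in response latencies between pairings, but does not spell out the scoring algorithm (trial filtering and error penalties are omitted here). As a sketch of the generic IAT-style D-score on purely illustrative latencies:

```python
import numpy as np

def biat_d_score(rt_compatible, rt_incompatible):
    """Generic D-score: difference of mean response latencies (ms)
    between the two block types, divided by the pooled standard
    deviation of all latencies. The sign convention and any trial
    filtering in the actual doping-BIAT may differ."""
    rt_c = np.asarray(rt_compatible, float)
    rt_i = np.asarray(rt_incompatible, float)
    pooled_sd = np.concatenate([rt_c, rt_i]).std(ddof=1)
    return (rt_i.mean() - rt_c.mean()) / pooled_sd

# illustrative latencies (ms), not real data
rng = np.random.default_rng(2)
compatible = rng.normal(650, 100, 60)    # e.g., doping paired with "dislike"
incompatible = rng.normal(780, 120, 60)  # e.g., doping paired with "like"
print(round(biat_d_score(compatible, incompatible), 2))
```

Dividing by the pooled standard deviation makes scores comparable across participants with different overall response speeds, which is the point of the D-score metric.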
Results: Prohibited substances were found in 43% of all tested urine samples. Dopers had more lenient attitudes to doping than non-dopers (Hedges's g = -0.76). D-scores greater than -0.57 (95% CI: -0.72 to -0.46) might be indicative of a rather lenient attitude to doping. The urine samples frequently showed evidence of combined administration of several substances, complementary administration of substances to treat side effects, and use of stimulants to promote loss of body fat.
Conclusion: This study demonstrates that athletes' attitudes to doping can be assessed indirectly with a reaction time-based test, and that their attitudes are related to their behavior. Although bodybuilders may be more willing to reveal their attitude to doping than other athletes, these results still provide evidence that the pictorial doping BIAT may be useful in athletes from other sports, perhaps as a complementary measure in evaluations of the effectiveness of doping prevention interventions.
Previous studies on the acquisition of verb inflection in normally developing children have revealed an astonishing pattern: children use correctly inflected verbs in their own speech but fail to make use of verb inflections when comprehending sentences uttered by others. Thus, a three-year-old might well be able to say something like ‘The cat sleeps on the bed’, but fails to understand that the same sentence, when uttered by another person, refers to only one sleeping cat and not more than one. The previous studies that examined children's comprehension of verb inflections employed a variant of a picture selection task in which the child was asked to explicitly indicate (via pointing) what semantic meaning she had inferred from the test sentence. Recent research on other linguistic structures, such as pronouns or focus particles, has indicated that earlier comprehension abilities can be found when methods are used that do not require an explicit reaction, such as preferential looking tasks. This dissertation aimed to examine whether children are truly unable to understand the connection between the verb form and the meaning of the sentence subject until the age of five, or whether earlier comprehension can be found when a different measure, preferential looking, is used. Additionally, children's processing of subject-verb agreement violations was examined. The three experiments of this thesis that examined children's comprehension of verb inflections revealed the following: German-speaking three- to four-year-old children looked more at a picture showing one actor when hearing a sentence with a singular inflected verb, but only when their eye gaze was tracked and they did not have to perform a picture selection task. When they were asked to point to the matching picture, they performed at chance level. This pattern indicates asymmetries in children's language performance even within the receptive modality.
The fourth experiment examined sensitivity to subject-verb agreement violations and did not reveal evidence for such sensitivity in three- and four-year-old children; only at the age of five were children's looking patterns influenced by the grammatical violations. The results from these experiments are discussed in relation to the existence of a production-comprehension asymmetry in the use of verb inflections and to children's underlying grammatical knowledge.
E-Learning Symposium 2014
(2014)
The proceedings of the E-Learning Symposium 2014 at the University of Potsdam illuminate the diverse target groups and fields of application currently addressed in e-learning research. Whereas the previous symposium in 2012 focused discussion on lecturers and the various possibilities for activating students and designing teaching, this year a large share of the contributions places the students at the center of attention. That not only the content of a learning medium matters for learning success, but also its entertainment value and the enjoyment learners experience while acquiring knowledge, is vividly demonstrated by Linda Breitlauch's keynote on „Faites vos Jeux“ (place your bets). The contribution by Zoerner et al. links the idea of game-based learning with the still topical theme of mobile learning; in this research area, too, the focus on the learner plays an increasingly prominent role. The invited talk by Christoph Rensing goes a step further toward individualization, dealing with the adaptivity of mobile learning applications: available context information is to be used to support individual learning processes in a targeted way. All contributions concerned with mobile applications and with games also address the interpersonal component of learning: besides mobility, the exchange of learning objects between learners (cf. the contribution by Zoerner et al.) and cooperation between learners (see the contribution by Kallookaran and Robra-Bissantz) are discussed.
Interpersonal contact also plays a role in the contributions without a game or app focus. Tutors, for example, are employed to moderate learning processes, and learning groups are formed to put problem-oriented learning more firmly at the center (see the contribution by Mach and Dirwelis) or to work closer to students' needs (as described in the invited talk by Tatiana N. Noskova and in the contribution by Mach and Dirwelis). In evaluation, too, the step away from anonymous, aggregated statistical analyses toward individualized user profiles is examined in the area of learning analytics (cf. the contribution by Ifenthaler). Besides the emphasis on learners and their mobility, the topic of transmediality is moving more into the center of research. While the keynote with its focus on games already touches on this, further contributions are concerned with mapping processes from the analog world into the digital world as faithfully as possible. Learning content that was previously made accessible to teachers and learners through images and texts is now enriched with further media, especially videos, to improve comprehension. This is suitable, for example, for learning movement sequences in sport (cf. the contribution by Owassapian and Hensinger) or practical music exercises such as body percussion (described in the contribution by Buschmann and Glasemann). Learner focus, personal exchange, mobility and transmediality are thus some of the core themes awaiting you in this volume. The frequent interlinking of these core themes also shows that none of them is marginal; rather, their sum is bundled in e-learning, allowing a new quality to be achieved for teaching, study and research.
Portal alumni
(2014)
The popularity of media professions remains unbroken. This is evident, among other things, from the number of prospective students: this year alone, more than 1,500 young people applied for one of the 44 places in the media studies program at the University of Potsdam. After successfully graduating, however, they compete on the job market with thousands of graduates of film, media and communication programs from other universities, about 1,500 per year in the Berlin-Brandenburg region alone. Yet after decades of boom in the media industry, the job market has changed drastically over the past decade. The economic crisis, falling share prices and declining advertising investment weakened the media considerably, leading to poor profits, cutbacks and staff reductions, especially in print media; the insolvency of the Frankfurter Rundschau and the discontinuation of the Financial Times Deutschland are just two striking examples. On the other hand, the dynamic online market is booming owing to the changed usage behavior especially of the younger generation, who increasingly obtain their information from the internet, apps and social networks. The career prospects for all those who want to study "something with media" have thus become more difficult, but they remain varied: good journalism is still needed, public relations professionals are in demand, graduates in communication studies find doors open in media planning and in market and opinion research, and experts are sought in the online sector. This year, Portal alumni looked into the career paths that graduates of the University of Potsdam have taken in media professions. It turns out that here, too, the paths are rarely linear, and professional success by no means came easily.
Research in rodents has shown that dietary vitamin A reduces body fat by enhancing fat mobilisation and energy utilisation; however, its effects in growing dogs remain unclear. In the present study, we evaluated the development of body weight and body composition and compared observed energy intake with predicted energy intake in forty-nine puppies from two breeds (twenty-four Labrador Retriever (LAB) and twenty-five Miniature Schnauzer (MS)). A total of four different diets with increasing vitamin A content between 5.24 and 104.80 μmol retinol (5000-100 000 IU vitamin A)/4184 kJ (1000 kcal) metabolisable energy were fed from the age of 8 weeks up to 52 (MS) and 78 weeks (LAB). The daily energy intake was recorded throughout the experimental period. The body condition score was evaluated weekly using a seven-category system, and food allowances were adjusted to maintain optimal body condition. Body composition was assessed at the age of 26 and 52 weeks for both breeds, and at the age of 78 weeks for the LAB breed only, using dual-energy X-ray absorptiometry. The growth curves of the dogs followed a breed-specific pattern. However, the data on energy intake showed considerable variability between the two breeds as well as when compared with predicted energy intake. In conclusion, the data show that energy intakes of puppies, particularly during early growth, are highly variable; however, the growth pattern and body composition of the LAB and MS breeds are not affected by the intake of vitamin A at levels up to 104.80 μmol retinol (100 000 IU vitamin A)/4184 kJ (1000 kcal).
Previous studies suggest that there are special timing relations in syllable onsets. The consonants are assumed to be timed, on the one hand, with the vocalic nucleus and, on the other hand, with each other. These competing timing relations result in the C-center effect. However, the C-center effect has not consistently been found in languages with complex onsets. Moreover, it has occasionally been found in languages disallowing complex onsets. The present study investigates onset timing in German while discussing alternative explanations (not related to bonding) for the timing patterns observed. Six German speakers were recorded via Electromagnetic Articulography. The corpus contained items with four clusters (/sk/, /kv/, /gl/, and /pl/). The clusters occur in word-initial position, word-medial position, and across a word boundary preceding different vowels. The results suggest that segmental properties (i.e., oral-laryngeal coordination, coarticulatory resistance) determine the observed timing patterns, and specifically the absence or presence of the C-center effect.
The economic impact analysis contained in this book shows how susceptible irrigation farming is to certain water management policies in the Australian Murray-Darling Basin, one of the world's largest river basins and Australia's most fertile region. By comparing different pricing and non-pricing water management policies with the help of the Water Integrated Market Model, it is found that the impact of policies that reduce water demand is most severe on crops that need to be intensively irrigated and are at the same time less water-productive. A combination of increasingly frequent and severe droughts and the application of policies that decrease agricultural water demand in the same region will create a situation in which the highly water-dependent crops rice and cotton cannot be cultivated at all.
The project "Medienbildung in der LehrerInnenbildung" (media education in teacher training) aims to promote the sustainable use of digital media in the teacher-training programs of the University of Potsdam. Using music teacher training (Chair of Music Pedagogy and Music Didactics) as an example, a concept was developed for the use of video podcasts in school practical phases to support students in lesson planning. The subject-specific implementation of this e-learning approach and the associated opportunities and challenges are presented, underscoring the importance of cooperation between subject didactics and media didactics in order to find a needs-oriented solution that is practically feasible.
The Runge-Kutta type regularization method was recently proposed as a potent tool for the iterative solution of nonlinear ill-posed problems. In this paper we analyze the applicability of this regularization method for solving inverse problems arising in atmospheric remote sensing, particularly for the retrieval of spheroidal particle distribution. Our numerical simulations reveal that the Runge-Kutta type regularization method is able to retrieve two-dimensional particle distributions using optical backscatter and extinction coefficient profiles, as well as depolarization information.
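The abstract does not spell out the iteration itself. For orientation, Runge-Kutta type regularization is commonly derived from asymptotic (Showalter) regularization, in which the nonlinear ill-posed equation F(u) = y^δ is embedded in an initial value problem; the following is a generic sketch of that construction, and the specific scheme, coefficients, and stopping rule used in the paper may differ:

```latex
% Asymptotic (Showalter) regularization for F(u) = y^{\delta}:
\dot{u}(t) = F'\bigl(u(t)\bigr)^{*}\,\bigl(y^{\delta} - F(u(t))\bigr),
\qquad u(0) = u_0 .
% An s-stage Runge-Kutta discretization with step size \tau yields
u_{k+1} = u_k + \tau \sum_{i=1}^{s} b_i\, k_i,
\qquad
k_i = F'\bigl(v_i\bigr)^{*}\bigl(y^{\delta} - F(v_i)\bigr),
\qquad
v_i = u_k + \tau \sum_{j=1}^{s} a_{ij}\, k_j ,
% with the iteration stopped by the discrepancy principle
\bigl\| F(u_{k_*}) - y^{\delta} \bigr\| \le \eta\, \delta .
```

The step size and the stopping index act as regularization parameters: integrating the flow too far fits the noise in y^δ, while stopping by the discrepancy principle stabilizes the retrieval.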
This study investigates the spatial and temporal distributions of 14 key arboreal taxa and their driving forces during the last 22,000 calendar years before ad 1950 (kyr BP) using a taxonomically harmonized and temporally standardized fossil pollen dataset with a 500-year resolution from the eastern part of continental Asia. Logistic regression was used to estimate pollen abundance thresholds for vegetation occurrence (presence or dominance), based on modern pollen data and present ranges of 14 taxa in China. Our investigation reveals marked changes in spatial and temporal distributions of the major arboreal taxa. The thermophilous (Castanea, Castanopsis, Cyclobalanopsis, Fagus, Pterocarya) and eurythermal (Juglans, Quercus, Tilia, Ulmus) broadleaved tree taxa were restricted to the current tropical or subtropical areas of China during the Last Glacial Maximum (LGM) and spread northward since c. 14.5 kyr BP. Betula and conifer taxa (Abies, Picea, Pinus), in contrast, retained a wider distribution during the LGM and showed no distinct expansion direction during the Late Glacial. Since the late mid-Holocene, the abundance but not the spatial extent of most trees decreased. The changes in spatial and temporal distributions for the 14 taxa are a reflection of climate changes, in particular monsoonal moisture, and, in the late Holocene, human impact. The post-LGM expansion patterns in eastern continental China seem to be different from those reported for Europe and North America, for example, the westward spread for eurythermal broadleaved taxa.
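The study's logistic-regression step can be illustrated with a minimal sketch: regress taxon presence (0/1) on pollen abundance and read off the abundance at which the predicted presence probability crosses 0.5. The data below are synthetic stand-ins for the modern pollen calibration set, and the fitting details (covariates, thresholding probability) in the actual study may differ:

```python
import numpy as np

# Synthetic calibration data: presence becomes likely above ~5% abundance.
rng = np.random.default_rng(3)
abundance = rng.uniform(0, 20, 400)                      # pollen %
true_threshold = 5.0
p_true = 1 / (1 + np.exp(-1.5 * (abundance - true_threshold)))
present = (rng.uniform(size=400) < p_true).astype(float)

# Fit logistic regression by Newton's method (IRLS).
X = np.column_stack([np.ones_like(abundance), abundance])
beta = np.zeros(2)
for _ in range(25):
    p_hat = 1 / (1 + np.exp(-X @ beta))
    W = p_hat * (1 - p_hat)                              # IRLS weights
    grad = X.T @ (present - p_hat)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

threshold = -beta[0] / beta[1]   # abundance where P(presence) = 0.5
print(f"estimated presence threshold: {threshold:.1f}% pollen abundance")
```

Once calibrated on modern pollen-vegetation pairs, such a threshold can be applied to fossil pollen percentages to classify past presence or dominance of a taxon.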
We study the thermal Markovian diffusion of tracer particles in a 2D medium with spatially varying diffusivity D(r), mimicking recently measured, heterogeneous maps of the apparent diffusion coefficient in biological cells. For this heterogeneous diffusion process (HDP) we analyse the mean squared displacement (MSD) of the tracer particles, the time averaged MSD, the spatial probability density function, and the first passage time dynamics from the cell boundary to the nucleus. Moreover we examine the non-ergodic properties of this process which are important for the correct physical interpretation of time averages of observables obtained from single particle tracking experiments. From extensive computer simulations of the 2D stochastic Langevin equation we present an in-depth study of this HDP. In particular, we find that the MSDs along the radial and azimuthal directions in a circular domain obey anomalous and Brownian scaling, respectively. We demonstrate that the time averaged MSD stays linear as a function of the lag time and the system thus reveals a weak ergodicity breaking. Our results will enable one to rationalise the diffusive motion of larger tracer particles such as viruses or submicron beads in biological cells.
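The simulation setup can be illustrated with a short Euler-scheme sketch of overdamped 2D Langevin dynamics with a radially varying diffusivity. The map D(r) = D0(1 + r) below is only one convenient choice, not necessarily the one used in the study, and the subtleties of interpreting the multiplicative noise (Ito vs. Stratonovich) are deliberately ignored here.

```python
import math
import random

def simulate_hdp(n_steps=2000, dt=1e-3, n_part=200, d0=1.0, seed=2):
    """Ensemble-averaged MSD for 2D Brownian motion with diffusivity
    D(r) = d0 * (1 + r), integrated with a naive Euler scheme."""
    random.seed(seed)
    msd = [0.0] * (n_steps + 1)
    for _ in range(n_part):
        x = y = 0.1                    # start slightly off-centre
        x0, y0 = x, y
        for k in range(1, n_steps + 1):
            r = math.hypot(x, y)
            sigma = math.sqrt(2.0 * d0 * (1.0 + r) * dt)
            x += sigma * random.gauss(0.0, 1.0)
            y += sigma * random.gauss(0.0, 1.0)
            msd[k] += ((x - x0) ** 2 + (y - y0) ** 2) / n_part
    return msd

msd = simulate_hdp()
```

Plotting msd against time on log-log axes already shows the departure from plain Brownian scaling; separating radial and azimuthal displacements, as done in the paper, requires only polar-coordinate bookkeeping on top of this sketch.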
Two-photon polymerization of hydrogels – versatile solutions to fabricate well-defined 3D structures
(2014)
Hydrogels are cross-linked water-containing polymer networks that are formed by physical, ionic or covalent interactions. In recent years, they have attracted significant attention because of their unique physical properties, which make them promising materials for numerous applications in food and cosmetic processing, as well as in drug delivery and tissue engineering. Hydrogels are highly water-swellable materials, which can considerably increase in volume without losing cohesion, are biocompatible and possess excellent tissue-like physical properties, which can mimic in vivo conditions. When combined with highly precise manufacturing technologies, such as two-photon polymerization (2PP), well-defined three-dimensional structures can be obtained. These structures can become scaffolds for selective cell-entrapping, cell/drug delivery, sensing and prosthetic implants in regenerative medicine. 2PP has been distinguished from other rapid prototyping methods because it is a non-invasive and efficient approach for hydrogel cross-linking. This review discusses the 2PP-based fabrication of 3D hydrogel structures and their potential applications in biotechnology. A brief overview regarding the 2PP methodology and hydrogel properties relevant to biomedical applications is given together with a review of the most important recent achievements in the field.
Regular and irregular inflection in children's production has been examined in many previous studies. Yet, little is known about the processes involved in children's recognition of inflected words. To gain insight into how children process inflected words, the current study examines regular -t and irregular -n participles of German using the cross-modal priming technique, testing 108 monolingual German-speaking children in two age groups (group I, mean age: 8;4; group II, mean age: 9;9) and a control group of adults. Although both age groups of children showed the same full priming effect as adults for -t forms, only children of age group II showed an adult-like (partial) priming effect for -n participles. We argue that children (within the age range tested) employ the same mechanisms for regular inflection as adults but that the lexical retrieval processes required for irregular forms become more efficient as children get older.
Surface modification with thermoresponsive polymer brushes for a switchable electrochemical sensor
(2014)
Elaboration of switchable surfaces represents an interesting route for the development of a new generation of electrochemical sensors. In this paper, a method for growing thermoresponsive polymer brushes from a gold surface pre-modified with polyethyleneimine (PEI), subsequent layer-by-layer polyelectrolyte assembly and adsorption of a charged macroinitiator is described. We propose an easy method for monitoring the coil-to-globule phase transition of the polymer brush using an electrochemical quartz crystal microbalance with dissipation (E-QCM-D). The surface of these polymer-modified electrodes shows reversible switching from the swollen to the collapsed state with temperature. As demonstrated by E-QCM-D measurements using an original signal processing method, the switch operates in three reversible steps related to different interfacial viscosities. Moreover, it is shown that the one-electron oxidation of ferrocene carboxylic acid is dramatically affected by the change from the swollen to the collapsed state of the polymer brush, with a spectacular 86% decrease of the charge transfer resistance between the two states.
Processes having the same bridges as a given reference Markov process constitute its reciprocal class. In this paper we study the reciprocal class of compound Poisson processes whose jumps belong to a finite set A in R^d. We propose a characterization of the reciprocal class as the unique set of probability measures on which a family of time and space transformations induces the same density, expressed in terms of the reciprocal invariants. The geometry of A plays a crucial role in the design of the transformations, and we use tools from discrete geometry to obtain an optimal characterization. We deduce explicit conditions for two Markov jump processes to belong to the same class. Finally, we provide a natural interpretation of the invariants as short-time asymptotics for the probability that the reference process makes a cycle around its current state.
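A compound Poisson process with jumps in a finite set A in R^d can be sampled directly by superposing independent Poisson streams, one per jump value. The sketch below is purely illustrative (the set A and the intensities are invented); note that the three chosen jump vectors sum to zero, so the sampled process can actually close cycles around its current state, which is the situation the invariants describe.

```python
import random

def sample_compound_poisson(T, jumps, rates, seed=0):
    """Sample a compound Poisson path on [0, T]: each jump value jumps[i]
    arrives in its own Poisson stream with intensity rates[i]."""
    rng = random.Random(seed)
    events = []
    for v, lam in zip(jumps, rates):
        t = rng.expovariate(lam)
        while t < T:
            events.append((t, v))
            t += rng.expovariate(lam)
    events.sort()                       # merge the streams in time order
    path = [(0.0, (0.0, 0.0))]          # (time, position) pairs
    x, y = 0.0, 0.0
    for t, (vx, vy) in events:
        x += vx
        y += vy
        path.append((t, (x, y)))
    return path

# Jump set A in R^2 whose elements sum to zero, so cycles are possible.
A = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]
path = sample_compound_poisson(T=10.0, jumps=A, rates=[1.0, 1.0, 0.5])
```

Superposition is valid because the sum of independent Poisson streams is again Poisson; conditioning such paths on their endpoints is what produces the bridges that the reciprocal class collects.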
It is generally agreed that stars typically form in open clusters and stellar associations, but little is known about the structure of the open cluster system. Do open clusters and stellar associations form in isolation, or do they prefer to form in groups and complexes? The existence of open cluster groups and complexes would imply that star-forming regions are larger than expected, which would explain the chemical homogeneity over large areas in the Galactic disk. They would also define an additional level in the hierarchy of star formation and could be used as tracers for the scales of fragmentation in giant molecular clouds. Furthermore, open cluster groups and complexes could affect Galactic dynamics and should be considered in investigations and simulations of dynamical processes, such as radial migration, disc heating, differential rotation, kinematic resonances, and spiral structure.
In the past decade there have been a few studies on open cluster pairs (de La Fuente Marcos & de La Fuente Marcos 2009a,b,c) and on open cluster groups and complexes (Piskunov et al. 2006). The former only considered spatial proximity for the identification of the pairs, while the latter also required the tangential velocities of the members to be similar. In this work I used the full set of 6D phase-space information to draw a more detailed picture of these structures. For this purpose I utilised the most homogeneous cluster catalogue available, namely the Catalogue of Open Cluster Data (COCD; Kharchenko et al. 2005a,b), which contains parameters for 650 open clusters and compact associations, as well as for their uniformly selected members. Additional radial velocity (RV) and metallicity ([M/H]) information on the members was obtained from the RAdial Velocity Experiment (RAVE; Steinmetz et al. 2006; Kordopatis et al. 2013) for 110 and 81 clusters, respectively. The RAVE sample was cleaned using the quality parameters and flags provided by RAVE (Matijevič et al. 2012; Kordopatis et al. 2013). To ensure that only real members entered the mean values, the cluster membership as provided by Kharchenko et al. (2005a,b) was also considered for the stars cross-matched in RAVE.
6D phase-space information could be derived for 432 of the 650 COCD objects, and I used an adaptation of the Friends-of-Friends algorithm, as used in cosmology, to identify potential groupings. The vast majority of the 19 identified groupings were pairs, but I also found four groups of 4-5 members and one complex with 15 members. To verify the identified structures, I compared the results to a randomly selected subsample of the catalogue of the Milky Way global survey of Star Clusters (MWSC; Kharchenko et al. 2013), which had recently become available and served as a reference sample. Furthermore, I implemented Monte-Carlo simulations with randomised samples created from two distinct input distributions for the spatial and velocity parameters: on the one hand, a uniform distribution in the Galactic disc; on the other hand, the COCD data distributions, taken to be representative of the whole open cluster population.
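The grouping step can be sketched with a toy Friends-of-Friends implementation: two objects are "friends" if their separation is below a linking length, and groupings are the connected components of the resulting friendship graph. The version below works in plain Euclidean coordinates with a single linking length, whereas the actual analysis operates on 6D phase-space data; the points and the linking length here are purely illustrative.

```python
import math

def friends_of_friends(points, linking_length):
    """Connected components of the graph linking points closer than
    `linking_length`, via a small union-find structure."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= linking_length:
                parent[find(i)] = find(j)  # merge the two components

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two tight pairs and one isolated object in a mock 3D slice of phase space
pts = [(0, 0, 0), (0.4, 0, 0), (5, 5, 0), (5.3, 5, 0), (20, 0, 0)]
groups = friends_of_friends(pts, linking_length=1.0)
```

The O(n^2) pair loop is perfectly adequate for a catalogue of a few hundred objects; much larger samples would replace it with a spatial index.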
The results suggested that the majority of the identified pairs are chance alignments, but the groups and the complex seemed to be genuine. A comparison of my results with the pairs, groups and complexes proposed in the literature yielded a partial overlap, most likely because of selection effects and the different parameters considered. This provides further evidence for the existence of such structures.
The characteristics of the identified groupings suggest that the members of an open cluster grouping originate from a common giant molecular cloud and formed in a single, though possibly sequential, star formation event. Moreover, the fact that the young open cluster population showed smaller spatial separations between nearest neighbours than the old cluster population indicates that the lifetime of open cluster groupings is most likely comparable to that of the Galactic open cluster population itself. Still, even among the old open clusters I could identify groupings, which suggests that the detected structures may in some cases be longer lived than one might think.
In this thesis I could only present a pilot study on structures in the Galactic open cluster population, since the data sample used was highly incomplete. For further investigations a far more complete sample would be required. One step in this direction would be to use data from large current surveys, like SDSS, RAVE, Gaia-ESO and VVV, as well as including results from studies on individual clusters. Later the sample can be completed by data from upcoming missions, like Gaia and 4MOST. Future studies using this more complete open cluster sample will reveal the effect of open cluster groupings on star formation theory and their significance for the kinematics, dynamics and evolution of the Milky Way, and thereby of spiral galaxies.
A new functional luminescent lanthanide complex (LLC) has been synthesized with terbium as the central lanthanide ion and biotin as a functional moiety. Unlike in typical lanthanide complexes assembled via carboxylic moieties, in the presented complex four phosphate groups chelate the central lanthanide ion. This special chemical assembly enhances the complex stability in the phosphate buffers conventionally used in biochemistry. The complex synthesis strategy and photophysical properties are described, as well as the performance in time-resolved Förster Resonance Energy Transfer (FRET) assays. In those assays, this biotin-LLC transferred energy either to acceptor organic dyes (Cy5 or AF680) conjugated to streptavidin or to quantum dots (QD655 or QD705) surface-functionalised with streptavidins. The permanent spatial donor–acceptor proximity is assured through the strong and stable biotin–streptavidin binding. The energy transfer is evidenced by the quenching of the donor emission and by the shortening of the donor luminescence decay, both associated with a simultaneous increase in acceptor intensity and decay time. The dye-based assays are realised in TRIS and in PBS, whereas the QD-based systems are studied in borate buffer. The delayed emission analysis allows for quantifying the recognition process and for auto-fluorescence-free detection, which is particularly relevant for applications in bioanalysis. In accordance with Förster theory, Förster radii (R0) were found to be around 60 Å for organic dyes and around 105 Å for QDs. The FRET efficiency (η) reached 80% and 25% for dye and QD acceptors, respectively. Physical donor–acceptor distances (r) were determined in the range 45–60 Å for organic dye acceptors, and between 120 Å and 145 Å for QD acceptors.
This newly synthesised biotin-LLC extends the class of highly sensitive analytical tools to be applied in the bioanalytical methods such as time-resolved fluoroimmunoassays (TR-FIA), luminescent imaging and biosensing.
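The quoted numbers are mutually consistent under the standard Förster relation η = R0^6 / (R0^6 + r^6), which inverts to r = R0 (1/η - 1)^(1/6). A quick check using only the values quoted above:

```python
def fret_distance(efficiency, r0):
    """Donor-acceptor distance from FRET efficiency and Foerster radius,
    via eta = r0**6 / (r0**6 + r**6)."""
    return r0 * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)

# Abstract values: eta ~ 0.8, R0 ~ 60 A for dyes; eta ~ 0.25, R0 ~ 105 A for QDs
r_dye = fret_distance(0.8, 60.0)    # about 47.6 A, inside the reported 45-60 A
r_qd = fret_distance(0.25, 105.0)   # about 126 A, inside the reported 120-145 A
```

The sixth-power dependence is why FRET is so sensitive to distance near R0: a 20% change in r swings the efficiency dramatically.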
Polyadenylation is a decisive 3’ end processing step during the maturation of pre-mRNAs. The length of the poly(A) tail has an impact on mRNA stability, localization and translatability. Accordingly, many eukaryotic organisms encode several copies of canonical poly(A) polymerases (cPAPs). The disruption of cPAPs in mammals results in lethality. In plants, reduced cPAP activity is non-lethal. Arabidopsis encodes three nuclear cPAPs, PAPS1, PAPS2 and PAPS4, which are constitutively expressed throughout the plant. Recently, the detailed analysis of Arabidopsis paps1 mutants revealed a subset of genes that is preferentially polyadenylated by the cPAP isoform PAPS1 (Vi et al. 2013). Thus, the specialization of cPAPs might allow the regulation of different sets of genes in order to optimally face developmental or environmental challenges.
To gain insights into cPAP-based gene regulation in plants, the phenotypes of Arabidopsis cPAP mutants under different conditions are characterized in detail in the following work. An involvement of all three cPAPs in the regulation of flowering time and stress responses is shown. While paps1 knockdown mutants flower early, paps4 and paps2 paps4 knockout mutants exhibit a moderate late-flowering phenotype. PAPS1 promotes the expression of the major flowering inhibitor FLC, presumably by specific polyadenylation of an FLC activator. PAPS2 and PAPS4 exhibit partially overlapping functions and ensure timely flowering by repressing FLC and at least one other, as yet unidentified, flowering inhibitor. The latter two cPAPs act in a novel regulatory pathway downstream of the autonomous pathway component FCA and independently of the polyadenylation factors and flowering time regulators CstF64 and FY. Moreover, PAPS1 and PAPS2/PAPS4 are implicated in different stress response pathways in Arabidopsis. Reduced activity of the poly(A) polymerase PAPS1 results in enhanced resistance to osmotic and oxidative stress. At the same time, paps1 mutants are cold-sensitive. In contrast, PAPS2/PAPS4 are not involved in the regulation of osmotic or cold stress responses, but paps2 paps4 loss-of-function mutants exhibit enhanced sensitivity to oxidative stress provoked in the chloroplast. Thus, both PAPS1 and PAPS2/PAPS4 are required to maintain a balanced redox state in plants. PAPS1 seems to fulfil this function in concert with CPSF30, a polyadenylation factor that regulates alternative polyadenylation and tolerance to oxidative stress.
The individual paps mutant phenotypes and the cPAP-specific genetic interactions support the model of cPAP-dependent polyadenylation of selected mRNAs. The high similarity of the polyadenylation machineries in yeast, mammals and plants suggests that similar regulatory mechanisms might be present in other organism groups. The cPAP-dependent developmental and physiological pathways identified in this work allow the design of targeted experiments to better understand the ecological and molecular context underlying cPAP-specialization.
With his proposal of a somaesthetics, fundamentally articulated into analytic, pragmatic and practical branches, Richard Shusterman intends first of all to provide and create a methodological framework, a unifying orientation capable of tracing, reconstructing and bringing to light, within heterogeneous theoretical reflections and somatic practices, the shared need to restore the bodily dimension as the primary mode of being in the world. Recovering Baumgarten's sense of Aesthetica as a lower gnoseology, the art of the analogue of reason, the science of sensible knowledge, somaesthetics seeks to give new impulse to the deepest root of aesthetics and philosophy, one which grasps life in its process of continuous metamorphosis and regeneration, in that vital breath which, however conscious it may become, is never fully graspable by discursive reason, being situated rather in that primordial space in which consciousness and body belong to each other, in which the subject is not yet individualizable because it is fused with its environment, and not entirely privatizable because it is intrinsically shaped by the social fabric to which it itself dynamically gives form. Starting, then, from a revaluation of the concept of Aisthesis, the discipline of somaesthetics aims at an intensification of sensoriality, perception, emotion and affect, locating precisely in the soma the source of those "lower" faculties, irreducible to the purely intellectual ones, which give access to the qualitative dimensions of experience and allow the human being to come to manifestation and mature as an indivisible being, one that cannot be encountered by a mode of thought which denies its unity in the name of fictitious and lacerating dichotomous distinctions.
Indeed, sociocultural rules, conventions, norms and values take silent root in the body, determining and at times limiting the configuration and expression of the sensations, perceptions, cognitions, thoughts, actions, volitions and dispositions of a subject who has always been embedded in a Mitwelt (shared world). It is therefore precisely to the body that one must turn in order to reconfigure more authentic modes of expression of a subject who creates dynamic equilibria in order to maintain a coherent relation with the wider social, cultural and environmental context. Its openness to engagement with heterogeneous philosophical positions and its intrinsic multidisciplinarity explain the centrality of somaesthetics in the contemporary international debate in aesthetics: addressed both to theoretical formulation and to concrete practical application, it seeks to revalue the soma as intelligent, sentient, intentional and active corporeality, not reducible to the sinful sense of caro (mere physical flesh devoid of life and sensation). Through reflection on, and the practice of, techniques of somatic consciousness, the ways are brought to the fore in which an ever more aware relation to one's own corporeality, as mediately experienced and immediately lived and felt, offers authentic opportunities for the progressive realization of oneself, first of all as a person capable of self-cultivation, of conscious reflection on one's own embodied habits, of creative self-restructuring, and of intensified sensory perception and appreciation, both in concrete everyday action and in the more properly aesthetic dimension of artistic reception, enjoyment and creation.
The essentially pragmatist orientation of Shusterman's reflection thus traces a fundamentally relational conception of aesthetics, one situated precisely in the continuously becoming movement and relation of genuine transformation and passage between the physical, bodily, psychic and spiritual dimensions of the subject, whose interaction, and whose reciprocal flowing into one another, can be profoundly enriched through a progressive and ever-growing awareness of the richness of the bodily dimension as intentional, perceptive, sentient and volitional, as much as vulnerable, limiting, transient and pathic. The present work sets out to retrace and examine in depth some of Shusterman's principal points of reference, focusing mainly on the pragmatist roots of his proposal and on its engagement with the German-language debate between aesthetics, philosophical anthropology, neophenomenology and medical anthropology, in order to regain a notion of the soma which, precisely out of the contrast, the irreducible impact with the annihilating power of limit situations and of crisis, can acquire a more complex and richer harmonizing value for the intrinsic and multiple dimensions that constitute the fabric of embodied subjectivity. In particular, the first chapter (1. Somaesthetics) clarifies the essentially pragmatist roots of Shusterman's proposal and shows how deeply rooted modes of experience can be destructured and thus reconfigured, bringing to consciousness habits and ways of living that become fixed at the somatic level largely unnoticed.
The engagement with the notion of habitus, whose invisible and socially determined somatic matrix Pierre Bourdieu brilliantly brought to light, reveals how every human manifestation is sustained by the incorporation of norms, beliefs and values that determine and at times limit the expression, the development, even the predispositions and inclinations of individuals. And it is precisely by intervening at this level that freedom can be restored to our choices, opening the way to the essentially qualitative dimensions of experience which, in Dewey's sense, form a unitary and cohesive holistic whole serving as the background of organism-environment relations, an inextricable interweaving of theory and praxis, particular and universal, psyche and soma, reason and emotion, the perceptual and the conceptual: in short, that immediate bodily knowledge which structures the background against which consciousness manifests itself.
World market governance
(2014)
Democratic capitalism, or liberal democracy, as the successful marriage of convenience between market liberalism and democracy is sometimes called, is in trouble. The market economy system has become global and there is a growing mismatch with the territoriality of the nation-states. The functional global networks and the inter-governmental order can no longer keep pace with the rapid development of the global market economy, and regulatory capture is all too common. Concepts like de-globalization, self-regulation, and global government are floated in the debate. These alternatives are analysed and found to be improper, inadequate or plainly impossible. The proposed route is instead to accept that the global market economy has developed into an independent fundamental societal system that needs its own governance. The suggestion is World Market Governance based on the Rule of Law, in order to shape the fitness environment for the global market economy and to strengthen the nation-states so that they can regain the sovereignty to decide upon the social and cultural conditions in each country. Elements in the proposed Rule of Law are international legislation decided by an Assembly supported by a Council, and an independent Judiciary. Existing international organisations would function as executors. The need for broad, sustained demand for regulations in the common interest is identified.
Bats are important components of tropical mammal assemblages. Unravelling the mechanisms allowing multiple syntopic bat species to coexist can provide insights into community ecology. However, dietary information on the component species of these assemblages is often difficult to obtain. Here we measured stable carbon and nitrogen isotopes in hair samples clipped from the backs of 94 specimens to indirectly examine whether trophic niche differentiation and microhabitat segregation explain the coexistence of 16 bat species at Ankarana, northern Madagascar. The assemblage ranged over 4.4‰ in delta N-15 and was structured into two trophic levels, with phytophagous Pteropodidae as primary consumers (c. 3‰ enriched over plants) and different insectivorous bats as secondary consumers (c. 4‰ enriched over primary consumers). Bat species utilizing different microhabitats formed distinct isotopic clusters (metric analyses of delta C-13-delta N-15 bi-plots), but taxa foraging in the same microhabitat did not show more pronounced trophic differentiation than those occupying different microhabitats. As revealed by multivariate analyses, no discernible feeding competition was found in the local assemblage amongst congeneric species as compared with non-congeners. In contrast to ecological niche theory, but in accordance with studies on New and Old World bat assemblages, competitive interactions appear to be relaxed at Ankarana and not a prevailing structuring force.
Based on extensive Monte Carlo simulations and analytical considerations, we study the electrostatically driven adsorption of flexible polyelectrolyte chains onto charged Janus nanospheres. These net-neutral colloids are composed of two equally but oppositely charged hemispheres. The critical binding conditions for polyelectrolyte chains are analysed as a function of the radius of the Janus particle and its surface charge density, as well as the salt concentration in the ambient solution. Specifically for the adsorption of finite-length polyelectrolyte chains onto Janus nanoparticles, we demonstrate that the critical adsorption conditions drastically differ when the size of the Janus particle or the screening length of the electrolyte is varied. We compare the scaling laws obtained for the adsorption–desorption threshold to the known results for uniformly charged spherical particles, observing significant disparities. We also contrast the changes in the polyelectrolyte chain conformations close to the surface of Janus nanoparticles with those for simple spherical particles. Finally, we discuss experimentally relevant physico-chemical systems for which our simulation results may become important. In particular, we observe similar trends for polyelectrolyte complexation with oppositely but heterogeneously charged proteins.
Civil society is considered either a motor of democratization or a stabilizer of authoritarian rule. This dichotomy is partly due to the dominance of domains-based definitions of the concept that reduce civil society to a small range of formally organized, independent and democratically oriented NGOs. Additionally, research often treats civil society as a ‘black box’ without differentiating between potential variations in the impact of different types of civil society actors on existing regime structures. In this thesis, I present an alternative conceptualization of civil society based on the interactions of societal actors to arrive at a more inclusive understanding of the term, one better suited to analysis in non-democratic settings. The operationalization of the action-based approach I develop allows for an empirical assessment of a large range of societal activities, which can accordingly be categorized from little to very civil society-like depending on their specific modes of interaction within four dimensions. I employ this operationalization in a qualitative case study including different actors in the authoritarian monarchy of Jordan, which suggests that Jordanian societal actors mostly exhibit tolerant and democratically oriented modes of interaction and do not reproduce authoritarian patterns. However, even democratically oriented actors do not necessarily take on an oppositional position vis-à-vis the authoritarian regime. Thus, Jordanian civil society may not have a high potential to challenge existing power structures in the country.
We establish in this paper the existence of weak solutions of infinite-dimensional shift-invariant stochastic differential equations driven by a Brownian term. The drift function is very general, in the sense that it is assumed to be neither small nor continuous, nor Markov. On the initial law we only assume that it admits a finite specific entropy. Our result strongly improves on previous ones obtained for free dynamics with a small perturbative drift. The originality of our method lies in the use of the specific entropy as a tightness tool and in a description of such stochastic differential equations as solutions of a variational problem on the path space.
In this thesis we consider diverse aspects of the existence and correctness of asymptotic solutions to elliptic differential and pseudodifferential equations. We begin our studies with the case of a general elliptic boundary value problem in partial derivatives. A small parameter enters the coefficients of the main equation as well as the boundary conditions. Such equations have already been investigated satisfactorily, but certain theoretical deficiencies still remain. Our aim is to present a general theory of elliptic problems with a small parameter. For this purpose we examine in detail the case of a bounded domain with a smooth boundary. First of all, we construct formal solutions as power series in the small parameter. Then we examine their asymptotic properties. It suffices to establish sharp two-sided a priori estimates for the operators of the boundary value problems which are uniform in the small parameter. Such estimates fail to hold in the function spaces used in classical elliptic theory. To circumvent this limitation we exploit norms, for functions defined on a bounded domain, that depend on the small parameter. Similar norms are widely used in the literature, but their properties have not been investigated extensively. Our theoretical investigation shows that the usual elliptic technique can be correctly carried out in these norms. The obtained results also allow one to extend the norms to compact manifolds with boundary. We complete our investigation by formulating algebraic conditions on the operators and showing their equivalence to the existence of a priori estimates. In the second step, we extend the concept of ellipticity with a small parameter to more general classes of operators. First, we want to compare the asymptotic patterns of the obtained series with expansions for similar differential problems. We therefore investigate the heat equation in a bounded domain with a small parameter near the time derivative.
In this case the characteristics touch the boundary at a finite number of points. It is known that, in general, the solutions are not regular in a neighbourhood of such points. We suppose moreover that the boundary at such points may be non-smooth and exhibit cuspidal singularities. We find a formal asymptotic expansion and show that when a set of parameters passes through a threshold value, the expansions fail to be asymptotic. The last part of the work is devoted to a general concept of ellipticity with a small parameter. Several theoretical extensions to pseudodifferential operators have already been suggested in previous studies. As a new contribution we involve the analysis on manifolds with edge singularities, which allows us to consider wider classes of perturbed elliptic operators. We show that the introduced classes admit a priori estimates of elliptic type. As a further application we demonstrate how the developed tools can be used to reduce singularly perturbed problems to regular ones.
We develop a new approach to the analysis of pseudodifferential operators with a small parameter 'epsilon' in (0,1] on a compact smooth manifold X. The standard approach assumes that the operators act in Sobolev spaces whose norms depend on 'epsilon'. Instead we consider the cylinder [0,1] x X over X and study pseudodifferential operators on the cylinder which act, by their very nature, on functions depending on 'epsilon' as well. The action in 'epsilon' reduces to multiplication by functions of this variable and does not include any differentiation. As but one result we mention the asymptotics of solutions to singular perturbation problems for small values of 'epsilon'.
Luhmann in the Contact Zone
(2014)
Postcolonial piracy
(2014)
Across the global South, new media technologies have brought about new forms of cultural production, distribution and reception. The spread of cassette recorders in the 1970s; the introduction of analogue and digital video formats in the 80s and 90s; the pervasive availability of recycled computer hardware; the global dissemination of the internet and mobile phones in the new millennium: all these have revolutionised the access of previously marginalised populations to the cultural flows of global modernity.
Yet this access also engenders a pirate occupation of the modern: it ducks and deranges the globalised designs of property, capitalism and personhood set by the North. Positioning itself against Eurocentric critiques by corporate lobbies, libertarian readings or classical Marxist interventions, this volume offers a profound postcolonial revaluation of the social, epistemic and aesthetic workings of piracy. It projects how postcolonial piracy persistently negotiates different trajectories of property and self at the crossroads of the global and the local.
This thesis provides an overview of the historical development of strict (no-fault) liability under public law in Germany from the 18th century to the present and analyses its practical significance. It distinguishes between the various pieces of legislation, the case law and the theoretical approaches to public-law strict liability, and problematises the latter in particular. It further addresses the relationship to constitutional duties of protection, statutory social-risk provisions, the social-law claim to restitution (sozialrechtlicher Herstellungsanspruch) and riot damages (Tumultschäden).
Recently, C K-edge Near Edge X-ray Absorption Fine Structure (NEXAFS) spectra of graphite (HOPG) surfaces were measured for the pristine material and for HOPG treated with either bromine or krypton plasmas (Lippitz et al., Surf. Sci., 2013, 611, L1). Changes of the NEXAFS spectra characteristic of physical (krypton) and/or combined chemical and physical (bromine) modifications of the surface upon plasma treatment were observed; their molecular origin, however, remained elusive. In this work we study, by density functional theory, the effects of selected point and line defects as well as chemical modifications on the NEXAFS carbon K-edge spectra of single graphene layers. For Br-treated surfaces, Br 3d X-ray Photoelectron Spectra (XPS) are also simulated using a cluster approach to identify possible chemical modifications. We find that some of the defects related to plasma treatment lead to characteristic changes of the NEXAFS spectra similar to those observed in experiment. Theory thus provides possible microscopic origins for these changes.
Nested application conditions generalise the well-known negative application conditions and are important for several application domains. In this paper, we present Local Church-Rosser, Parallelism, Concurrency and Amalgamation Theorems for rules with nested application conditions in the framework of M-adhesive categories, where M-adhesive categories are slightly more general than weak adhesive high-level replacement categories. Most of the proofs are based on the corresponding statements for rules without application conditions and two shift lemmas stating that nested application conditions can be shifted over morphisms and rules.
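As a toy illustration of a negative application condition (far simpler than the categorical M-adhesive framework of the paper; the rule, names and set-based graph representation are invented for this sketch): a rule that adds the reverse of a matched edge is applicable only if the forbidden pattern, the reverse edge itself, is absent.

```python
def symmetrize_edge(edges, u, v):
    """Rewrite rule: for a matched edge (u, v), add the reverse edge (v, u).
    Negative application condition (NAC): the rule applies only if the
    forbidden pattern, an existing edge (v, u), is NOT present."""
    if (u, v) not in edges:           # match of the left-hand side fails
        return edges, False
    if (v, u) in edges:               # NAC violated: forbidden pattern found
        return edges, False
    return edges | {(v, u)}, True     # apply the rule (add the reverse edge)

g0 = {("A", "B")}
g1, applied1 = symmetrize_edge(g0, "A", "B")   # applies: ("B", "A") is added
g2, applied2 = symmetrize_edge(g1, "A", "B")   # NAC now blocks reapplication
```

The second call shows the typical use of such conditions: the same match is no longer applicable once the forbidden pattern exists, which keeps the rewriting terminating on this rule.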
Wood is used for many applications because of its excellent mechanical properties, its relative abundance and because it is a renewable resource. However, its wider utilization as an engineering material is limited because it swells and shrinks upon moisture changes and is susceptible to degradation by microorganisms and/or insects. Chemical modifications of wood have been shown to improve dimensional stability, water repellence and/or durability, thus increasing the potential service-life of wood materials. Current treatments are limited, however, because it is difficult to introduce and fix such modifications deep inside the tissue and the cell wall. Within the scope of this thesis, novel chemical modification methods for wood cell walls were developed to improve both the dimensional stability and the water repellence of wood. These methods were partly inspired by heartwood formation in living trees, a process that for some species results in the insertion of hydrophobic chemical substances into the cell walls of already dead wood cells. In the first part of this thesis a cell wall modification chemistry inspired by this natural process of heartwood formation was used. Commercially available hydrophobic flavonoid molecules were effectively inserted into the cell walls of spruce, a softwood species with low natural durability, after a tosylation treatment, to obtain “artificial heartwood”. Flavonoid-inserted cell walls show reduced moisture absorption, resulting in better dimensional stability, water repellency and increased hardness. This approach differs markedly from established modifications, which mainly address the hydroxyl groups of cell wall polymers with hydrophilic substances. In the second part of the work, in-situ styrene polymerization inside the tosylated cell walls was studied, since hydrophobic polymers are known to adhere only weakly to hydrophilic cell wall components.
The hydrophobic styrene monomers were inserted into the tosylated wood cell walls and polymerized there to form polystyrene, which increased the dimensional stability of the bulk wood material and considerably reduced water uptake by the cell walls compared to controls. In the third part of the work, the grafting of another hydrophobic, and also biodegradable, polymer, poly(ε-caprolactone), in the wood cell walls by ring-opening polymerization of ε-caprolactone was studied at mild temperatures. The results indicated that polycaprolactone attached inside the cell walls caused permanent swelling of the cell walls of up to 5%. The dimensional stability of the bulk wood material increased by 40% and water absorption was reduced by more than 35%. A fully biodegradable and hydrophobized wood material was thus obtained, which eases the disposal problem of modified wood materials and has improved properties that extend the material's service-life. Starting from a bio-inspired approach that showed great promise as an alternative to standard cell wall modifications, we demonstrated the possibility of inserting hydrophobic molecules into the cell walls and supported this with in-situ styrene and ε-caprolactone polymerization in the cell walls. This thesis shows that, despite the extensive knowledge and long history of using wood as a material, there is still room for novel chemical modifications that could have a high impact on improving wood properties.
As an engineering material derived from renewable resources, wood possesses excellent mechanical properties in view of its light weight, but it also has some disadvantages, such as low dimensional stability upon moisture changes and low durability against biological attack. Polymerization of hydrophobic monomers in the cell wall is one potential approach to improve the dimensional stability of wood. A major challenge is to insert hydrophobic monomers into the hydrophilic environment of the cell walls without increasing the bulk density of the material through lumen filling. Here, we report on an innovative and simple method to insert styrene monomers into tosylated cell walls (i.e. cell walls in which the –OH groups of the natural wood polymers have been reacted with tosyl chloride) and to carry out free radical polymerization under relatively mild conditions, generating low wood weight gains. In-depth SEM and confocal Raman microscopy analyses are applied to reveal the distribution of the polystyrene in the cell walls and the lumina. The embedding of polystyrene in wood results in reduced water uptake by the wood cell walls, a significant increase in dimensional stability, and slightly improved mechanical properties as measured by nanoindentation.
Materials derived from renewable resources are highly desirable in view of more sustainable manufacturing. Among the available natural materials, wood is a key candidate because of its excellent mechanical properties. However, wood and wood-based materials in engineering applications suffer from various restraints, such as dimensional instability upon humidity changes. Several wood modification treatments increase water repellence, but the insertion of hydrophobic polymers can result in a composite material that can no longer be considered renewable. In this study, we report on the grafting of fully biodegradable poly(ε-caprolactone) (PCL) inside the wood cell walls by Sn(Oct)2-catalysed ring-opening polymerization (ROP). The presence of polyester chains within the wood cell wall structure is monitored by confocal Raman imaging and spectroscopy as well as scanning electron microscopy. Physical tests reveal that the modified wood is more hydrophobic owing to the bulking of the cell wall structure with the polyester chains, resulting in a novel, fully biodegradable wood material with improved dimensional stability.
Contents:
Alexander von Humboldt-Forschungsstelle: Ingo Schwarz zum 65. Geburtstag
Ottmar Ette: Findung und Erfindung einer Leserschaft. Neuere Editionsprojekte zu Alexander von Humboldt als Grundlage und Herausforderung künftigen Forschens
Eberhard Knobloch: Alexandre de Humboldt et le Marquis de Laplace
Oliver Schwarz: Alexander von Humboldt als astronomischer Arbeiter, Diskussionspartner und Ideengeber
Petra Werner: Innenwelten und bleiche Gärten. Alexander von Humboldt untertage und in der Caripe-Höhle
Christian Suckow: Alexander von Humboldt in Ust’-Kamenogorsk
Anne Jobst: Neue Briefe Christian Gottfried Ehrenbergs an Alexander von Humboldt
Thomas Schmuck: Humboldt, Baer und die Evolution
Manfred Ringmacher: Zwei Briefe auf Guaraní in Alexander von Humboldts Handschrift
Ute Tintemann: Julius Klaproths Mithridates-Projekt, Alexander von Humboldt und das Verlagshaus Cotta
Ulrike Leitner: „Ja! Wenn Berlin Bonn wäre!“ Friedrich Rückerts Berufung nach Berlin
Frank Holl: „Zur Freiheit bestimmt“ – Alexander von Humboldts Blick auf die Kulturen der Welt
Sebastian Panwitz: Das Humboldt-Mendelssohn-Haus Jägerstraße 22. Ein Quellenfund
Laura Péaud: Du Mexique à l'Oural : l'expertise humboldtienne au service du politique
Bärbel Holtz: „Cicerone“ des Königs? Alexander von Humboldt und Friedrich Wilhelm III.
Menso Folkerts: Ein unerwartetes Zusammentreffen in Sanssouci. Alexander von Humboldt und Karl Ludwig Hencke an der Tafel Friedrich Wilhelms IV.
Ulrich Päßler: Preußens Mann in Washington. Fünf Briefe Friedrich von Gerolts an Alexander von Humboldt (1858/1859)
Bill Roba: German-Iowan Strategies in Celebrating the Centennial of Alexander von Humboldt’s Birth
Regina Mikosch: Ingo Schwarz' Veröffentlichungen zu Alexander von Humboldt
About the authors
Biological materials have always been used by humans because of their remarkable properties, which is all the more striking given that these materials are formed under physiological conditions and from commonplace constituents. Nature thus not only provides us with inspiration for designing new materials but also teaches us how to use soft molecules to tune interparticle and external forces, and to structure and assemble simple building blocks into functional entities. Magnetotactic bacteria and their chains of magnetosomes are a striking example of such an accomplishment, where a very simple living organism controls the properties of inorganics via organics at the nanometer scale to form a single magnetic dipole that orients the cell along the Earth's magnetic field lines. My group has developed biological and bio-inspired research based on these bacteria. My research, at the interface between chemistry, materials science, physics and biology, focuses on how biological systems synthesize, organize and use minerals. We apply the design principles to sustainably form hierarchical materials with controlled properties that can be used e.g. as magnetically directed nanodevices for applications in sensing, actuation and transport. In this thesis, I first present how magnetotactic bacteria intracellularly form magnetosomes and assemble them into chains. I developed an assay in which cells can be switched from magnetic to non-magnetic states, which made it possible to study the dynamics of magnetosome and magnetosome chain formation. We found that the magnetosomes nucleate within minutes whereas the chains assemble within hours. Magnetosome formation requires iron uptake as ferrous or ferric ions. Transport of the ions within the cell leads to the formation of a ferritin-like intermediate, which is subsequently transported into the magnetosome organelle and transformed into a ferrihydrite-like precursor. Finally, magnetite crystals nucleate and grow toward their mature dimensions.
In addition, I show that the magnetosome assembly displays hierarchically ordered nano- and microstructures over several levels, enabling the coordinated alignment and motility of entire populations of cells. The magnetosomes are indeed composed of structurally pure magnetite. The organelles are partly composed of proteins, whose role is crucial for the properties of the magnetosomes. As an example, we showed how the protein MmsF is involved in the control of magnetosome size and morphology. We further showed by 2D X-ray diffraction that the magnetosome particles are aligned along the same direction within the magnetosome chain. We then show how the magnetic properties of the nascent magnetosome influence the alignment of the particles, and how the proteins MamJ and MamK coordinate this assembly. We propose a theoretical approach which suggests that biological forces are more important than physical ones for chain formation. All these studies thus show how magnetosome formation and organization are under strict biological control, which is associated with unprecedented material properties. Finally, we show that the magnetosome chain enables the cells to find their preferred oxygen conditions when a magnetic field is present. The synthetic part of this work shows how understanding the design principles of magnetosome formation enabled me to perform biomimetic synthesis of magnetite particles within the highly desired size range of 25 to 100 nm. Nucleation and growth of such particles proceed by aggregation of iron colloids termed primary particles, as imaged by cryo high-resolution TEM. I show how additives influence magnetite formation and properties. In particular, MamP, a so-called magnetochrome protein involved in magnetosome formation in vivo, enables the in vitro formation of magnetite nanoparticles exclusively from ferrous iron by controlling the redox state of the process.
Negatively charged additives, such as MamJ, retard magnetite nucleation in vitro, probably by interacting with the iron ions. Other additives, such as polyarginine, can be used to control the colloidal stability of stable single-domain-sized nanoparticles. Finally, I show how we can “glue” magnetic nanoparticles together to form propellers that can be actuated and made to swim with the help of external magnetic fields. We propose a simple theory to explain the observed movement and use this theoretical framework to design experimental conditions for sorting the propellers by size, a prediction we confirm experimentally. In this way, we could image propellers as small as 290 nm in their longest dimension, much smaller than previously achieved.
One of the most significant current discussions in astrophysics concerns the origin of high-energy cosmic rays. According to our current knowledge, the abundance distribution of the elements in cosmic rays at their point of origin indicates, within plausible error limits, that they were initially formed by nuclear processes in the interiors of stars. It is also believed that their energy distribution up to 10^18 eV has Galactic origins. Although knowledge about potential sources of cosmic rays is quite poor above ~10^15 eV, the "knee" of the cosmic-ray spectrum, there is wide consensus that up to the knee supernova remnants are the most likely candidates. Evidence for this comes from observations of non-thermal X-ray radiation, requiring synchrotron electrons with energies up to 10^14 eV, precisely in supernova remnants. To date, however, there is no conclusive evidence that they produce nuclei, the dominant component of cosmic rays, in addition to electrons. In light of this dearth of evidence, γ-ray observations of supernova remnants offer the most promising direct way to confirm whether or not these astrophysical objects are indeed the main source of cosmic-ray nuclei below the knee. Recent observations with space- and ground-based observatories have established shell-type supernova remnants as GeV-to-TeV γ-ray sources. The interpretation of these observations is, however, complicated by the different radiation processes, leptonic and hadronic, that can produce similar fluxes in this energy band, rendering the nature of the emission itself ambiguous. The aim of this work is to develop a deeper understanding of these radiation processes in a particular shell-type supernova remnant, RX J1713.7–3946, using observations of the LAT instrument onboard the Fermi Gamma-Ray Space Telescope.
Furthermore, to obtain accurate spectra and morphology maps of the emission associated with this supernova remnant, an improved model of the diffuse Galactic γ-ray emission background is developed. The analyses of RX J1713.7–3946 carried out with this improved background show that the hard Fermi-LAT spectrum cannot be ascribed to hadronic emission, leading to the conclusion that the leptonic scenario is the most natural picture for the high-energy γ-ray emission of RX J1713.7–3946. The leptonic scenario does not rule out the possibility that cosmic-ray nuclei are accelerated in this supernova remnant, but it suggests that the ambient density may not be high enough to produce significant hadronic γ-ray emission. Further investigations of other supernova remnants using the improved background developed in this work could allow compelling population studies, and hence prove or disprove the origin of Galactic cosmic-ray nuclei in these astrophysical objects. A breakthrough in the identification of the radiation mechanisms could finally be achieved with a new generation of instruments such as CTA.
In March 2010, the project CoCoCo (incipient COntinent-COntinent COllision) recorded a 650 km long amphibious N-S wide-angle seismic profile, extending from the Eratosthenes Seamount (ESM) across Cyprus and southern Turkey to the Anatolian plateau. The aim of the project is to reveal the impact of the transition from subduction to continent-continent collision of the African plate with the Cyprus-Anatolian plate. A visual quality check, frequency analysis and filtering were applied to the seismic data and revealed good data quality. Subsequent first-break picking, finite-difference ray tracing and inversion of the offshore wide-angle data lead to a first-arrival tomographic model. This model reveals (1) P-wave velocities lower than 6.5 km/s in the crust, (2) a variable crustal thickness of about 28-37 km and (3) an upper crustal reflection at 5 km depth beneath the ESM. Two land shots in Turkey, also recorded on Cyprus, airgun shots south of Cyprus, and geological and previous seismic investigations provide the information to derive a layered velocity model beneath the Anatolian plateau and for the ophiolite complex on Cyprus. The analysis of the reflections provides evidence for a north-dipping plate subducting beneath Cyprus. The main features of this layered velocity model are (1) an upper and lower crust with large lateral changes in velocity structure and thickness, (2) a Moho depth of about 38-45 km beneath the Anatolian plateau, (3) a shallow north-dipping subducting plate below Cyprus with an increasing dip and (4) a typical ophiolite sequence on Cyprus with a total thickness of about 12 km. The offshore-onshore seismic data complete and improve the information about the velocity structure beneath Cyprus and the deeper part of the offshore tomographic model. Thus, the wide-angle seismic data provide detailed insights into the 2-D geometry and velocity structures of the uplifted and overriding Cyprus-Anatolian plate.
Subsequent gravity modelling confirms and extends the crustal P-wave velocity model. The deeper part of the subducting plate is constrained by the gravity data and has a dip angle of ~28°. Finally, an integrated analysis of the geophysical and geological information allows a comprehensive interpretation of the crustal structure related to the collision process.
This thesis investigates nonlinear coupling mechanisms of acoustic oscillators that can lead to synchronization. Building on the questions raised in earlier work, theoretical and experimental studies as well as numerical simulations are used to identify the elements of sound generation in the organ pipe and the mechanisms of mutual interaction between organ pipes. On this basis, a nonlinearly coupled model of self-sustained oscillators, based entirely on aeroacoustic and fluid-dynamical first principles, is developed for the first time to describe the behaviour of two interacting organ pipes. The model calculations are compared with the experimental findings. The developed oscillator model turns out to describe the sound generation and the coupling mechanisms of organ pipes correctly in large parts. In particular, it clarifies the cause of the nonlinear relationship between coupling strength and synchronization of the coupled two-pipe system, which manifests itself in a nonlinear shape of the Arnold tongue. Using these insights, the influence of the room on the sound generation of organ pipes is considered. For this purpose, numerical simulations of the interaction of an organ pipe with various room geometries, e.g. plane, convex, concave and serrated geometries, are examined exemplarily. The influence of swell boxes on the sound generation and timbre of the organ pipe is also studied. In further, novel synchronization experiments with identically tuned organ pipes, as well as with mixtures, synchronization is investigated for various horizontal and vertical pipe spacings in the plane of sound radiation.
The spatially isotropic discontinuities in the oscillation behaviour of the coupled pipe systems, observed here for the first time, point to distance-dependent switches between anti-phase and in-phase synchronization regimes. Finally, the possibility of realistically reproducing the phenomenon of synchronization of two organ pipes by numerical simulation, i.e. by treating the compressible Navier-Stokes equations with appropriate boundary and initial conditions, is documented. This, too, is a novelty.
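The locking behaviour behind the Arnold tongue can be illustrated with a minimal phase-reduction sketch (this is the generic Adler equation for two weakly coupled self-sustained oscillators, not the aeroacoustic model developed in the thesis; all parameter values are illustrative):

```python
import math

def phase_difference(delta_omega, K, dt=0.01, steps=20000):
    """Forward-Euler integration of the Adler equation
        d(psi)/dt = delta_omega - 2*K*sin(psi),
    where psi is the phase difference of two weakly coupled self-sustained
    oscillators, delta_omega their frequency detuning and K the coupling
    strength.  Locking occurs exactly when |delta_omega| <= 2*K, i.e.
    inside the Arnold tongue."""
    psi = 0.0
    traj = []
    for _ in range(steps):
        psi += dt * (delta_omega - 2.0 * K * math.sin(psi))
        traj.append(psi)
    return traj

# Inside the Arnold tongue (|delta_omega| < 2K): the phase difference locks
# to the constant value asin(delta_omega / (2K)).
locked = phase_difference(0.3, 0.5)
# Outside the tongue: the phase difference drifts without bound.
drift = phase_difference(0.3, 0.05)
```

In this caricature the tongue boundary is the straight line |Δω| = 2K; the nonlinear (curved) tongue shape reported in the thesis is precisely what such a minimal phase model cannot capture.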
The zero-noise limit of differential equations with singular coefficients is investigated for the first time in the case when the noise is a general alpha-stable process. It is proved that extremal solutions are selected, and the probability of selection is computed. A detailed analysis of the characteristic function of an exit time from the half-line is performed, with a suitable decomposition into small and large jumps adapted to the singular drift.
Citizen participation procedures at the local level are meant to increase citizens' opportunities for involvement. At the county (Landkreis) level they have so far seen little use. This is hardly surprising, since citizens identify far less with this administrative level than with their municipalities, an effect reinforced by a lack of knowledge about responsibilities. As one of the first German counties, the Landkreis Mansfeld-Südharz (Saxony-Anhalt) conducted a participatory budgeting procedure in 2012-2013. This report evaluates this democratic experiment and makes proposals for its possible continuation. The interviews conducted by the evaluator with county council members and senior county administrators showed a generally positive assessment of the participatory budget and a widespread willingness to continue the experiment. The report develops three different scenarios for a possible continuation of participatory budgeting in the county: a "business as usual" scenario with a high degree of continuity, the development of a participatory budget in cooperation with organized civil society, or a participatory budget based on inter-municipal cooperation.
Fluid intelligence (fluid IQ), defined as the capacity for rapid problem solving and behavioral adaptation, is known to be modulated by learning and experience. Both stressful life events (SLES) and neural correlates of learning [specifically, the ventral striatal representation of prediction errors (PE), a key mediator of adaptive learning in the brain] have been shown to be associated with individual differences in fluid IQ. Here, we examine the interaction between adaptive learning signals (using a well-characterized probabilistic reversal learning task in combination with fMRI) and SLES on fluid IQ measures. We find that the correlation between the ventral striatal BOLD PE and fluid IQ, which we have previously reported, is quantitatively modulated by the amount of reported SLES. Thus, after experiencing adversity, basic neuronal learning signatures appear to align more closely with a general measure of flexible learning (fluid IQ), a finding that complements studies on the effects of acute stress on learning. The results suggest that an understanding of the neurobiological correlates of trait variables like fluid IQ needs to take socioemotional influences such as chronic stress into account.
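The prediction-error signal at the centre of this study can be sketched with a standard delta-rule (Rescorla-Wagner) learner on a probabilistic reversal task (a generic illustration, not the computational model fitted in the paper; learning rate, trial counts and reward probabilities are invented for the sketch):

```python
import random

def simulate_reversal(alpha=0.3, n_trials=200, seed=1):
    """Delta-rule value learning on a probabilistic reversal task: the
    reward probability is 0.8 in the first half of trials and 0.2 after
    the reversal.  delta is the reward prediction error (PE); a ventral
    striatal BOLD correlate of this quantity is what the study relates
    to fluid IQ.  All parameter values are illustrative."""
    rng = random.Random(seed)
    V = 0.5                        # learned value estimate
    values = []
    for t in range(n_trials):
        p_reward = 0.8 if t < n_trials // 2 else 0.2
        r = 1.0 if rng.random() < p_reward else 0.0
        delta = r - V              # prediction error
        V += alpha * delta         # value update
        values.append(V)
    return values

vals = simulate_reversal()  # V should track 0.8, then fall after the reversal
```

An efficient learner shows large (negative) prediction errors immediately after the reversal and quickly re-converges; it is this kind of trial-by-trial signal that the fMRI analysis regresses against striatal BOLD.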
Monoclonal antibodies (mAbs) are engineered immunoglobulins G (IgG) used for more than 20 years as targeted therapies in oncology, infectious diseases and (auto-)immune disorders. Their protein nature greatly influences their pharmacokinetics (PK), which exhibits typical linear and non-linear behaviors.
While it is common to use empirical modeling to analyze clinical PK data of mAbs, there is neither clear consensus nor guidance on how to, on the one hand, select the structure of classical compartment models and, on the other hand, interpret PK parameters mechanistically. The mechanistic knowledge embodied in physiologically-based PK (PBPK) models is likely to support rational classical model selection, and thus a methodology linking empirical and PBPK models is desirable. However, published PBPK models for mAbs are quite diverse with respect to the physiology of distribution spaces and the parameterization of the non-specific elimination involving the neonatal Fc receptor (FcRn) and endogenous IgG (IgGendo). The remarkable discrepancy between the simplicity of biodistribution data and the complexity of published PBPK models translates into parameter identifiability issues.
In this thesis, we address this problem with a simplified PBPK model, derived from a hierarchy of more detailed PBPK models and based on simplifications of the tissue distribution model. With the novel tissue model, we break new ground in mechanistic modeling of mAb disposition: we demonstrate that binding to FcRn is indeed linear and that it is not possible to infer which tissues are involved in the unspecific elimination of wild-type mAbs. We also provide a new approach to predict tissue partition coefficients based on mechanistic insights: we directly link tissue partition coefficients (Ktis) to data-driven and species-independent published antibody biodistribution coefficients (ABCtis), thus ensuring extrapolation from pre-clinical species to human with the simplified PBPK model. We further extend the simplified PBPK model to account for a target, which is relevant for characterizing the non-linear clearance due to mAb-target interaction.
With model reduction techniques, we reduce the dimensionality of the simplified PBPK model to design 2-compartment models, thus guiding classical model development with a physiological and mechanistic interpretation of the PK parameters. We finally derive a new scaling approach for the anatomical and physiological parameters in PBPK models that translates inter-individual variability into the design of mechanistic covariate models with a direct link to classical compartment models, which is especially useful for population PK analysis during clinical development.
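The 2-compartment models mentioned above have a closed-form solution; the following sketch evaluates the standard biexponential concentration profile in the central compartment after an IV bolus (the textbook linear model, not the thesis's reduced PBPK model; the parameter values are illustrative, not fitted mAb estimates):

```python
import math

def two_compartment_iv_bolus(dose, V1, V2, CL, Q, t):
    """Central-compartment concentration C1(t) for a linear 2-compartment
    model after an IV bolus: central volume V1, peripheral volume V2,
    clearance CL, inter-compartmental clearance Q."""
    k10 = CL / V1                      # elimination rate constant
    k12 = Q / V1                       # central -> peripheral
    k21 = Q / V2                       # peripheral -> central
    s = k10 + k12 + k21
    disc = math.sqrt(s * s - 4.0 * k10 * k21)
    lam1 = (s + disc) / 2.0            # fast (distribution) exponent
    lam2 = (s - disc) / 2.0            # slow (terminal) exponent
    C0 = dose / V1
    A = C0 * (lam1 - k21) / (lam1 - lam2)
    B = C0 * (k21 - lam2) / (lam1 - lam2)
    return A * math.exp(-lam1 * t) + B * math.exp(-lam2 * t)

# Illustrative "mAb-like" values (L and L/day): V1 = V2 = 3, CL = 0.2, Q = 0.5
profile = [two_compartment_iv_bolus(100.0, 3.0, 3.0, 0.2, 0.5, t)
           for t in (0.0, 1.0, 10.0, 50.0)]
```

The point of the thesis's reduction is that the macro-constants (lam1, lam2, A, B) of such a model can be given a physiological reading in terms of the underlying PBPK parameters, rather than remaining purely empirical fit quantities.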
This article aims at the statistical assessment of time series with large fluctuations over short times, which are assumed to stem from a continuous process perturbed by a Lévy process exhibiting heavy-tail behavior. We propose an easily implementable procedure to efficiently estimate the statistical difference between the noisy behavior of the data and a given reference jump measure in terms of so-called coupling distances. After a short introduction to Lévy processes and coupling distances, we recall basic statistical approximation results and derive rates of convergence. The procedure is then elaborated in detail in an abstract setting and applied in a case study to simulated and paleoclimate data. It indicates the dominant presence of a non-stable heavy-tailed jump Lévy component with a tail index greater than 2.
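The core idea of comparing an empirical jump distribution against a reference measure can be sketched numerically (this uses the plain 1D empirical Wasserstein-1 distance, a strong simplification of the article's coupling distances, which additionally split small and large jumps; the distributions and sample sizes are invented for the sketch):

```python
import random

def w1_distance(xs, ys):
    """Empirical 1-Wasserstein distance between two equally sized samples:
    for 1D data the optimal coupling pairs the sorted values, so W1 is the
    mean absolute difference of the order statistics."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

rng = random.Random(42)
n = 5000
# Reference jump sizes: standard exponential (light-tailed).
ref = [rng.expovariate(1.0) for _ in range(n)]
# A second light-tailed sample from the same law...
light = [rng.expovariate(1.0) for _ in range(n)]
# ...and a heavy-tailed Pareto(alpha = 1.5) sample via inverse transform.
heavy = [(1.0 - rng.random()) ** (-1.0 / 1.5) for _ in range(n)]
```

A heavy-tailed jump component pushes the empirical distance to the light-tailed reference far above the sampling noise level, which is the qualitative signal the article's estimator quantifies with convergence rates.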
The Great Hungarian Plain was a crossroads of cultural transformations that have shaped European prehistory. Here we analyse a 5,000-year transect of human genomes sampled from petrous bones, which give consistently excellent endogenous DNA yields, from 13 Hungarian Neolithic, Copper, Bronze and Iron Age burials, including two sequenced to high (~22x) and seven to ~1x coverage, to investigate the impact of these transformations on Europe's genetic landscape. These data suggest genomic shifts with the advent of the Neolithic, Bronze and Iron Ages, with interleaved periods of genome stability. The earliest Neolithic-context genome shows a European hunter-gatherer genetic signature and a restricted ancestral population size, suggesting direct contact between cultures after the arrival of the first farmers into Europe. The latest, Iron Age, sample reveals an eastern genomic influence concordant with introduced Steppe burial rites. We observe a transition towards lighter pigmentation and, surprisingly, no Neolithic presence of lactase persistence.
From the contents: - The education articles and their practical implementation in the UN women's rights convention (CEDAW), the Convention on the Rights of the Child and the Convention on the Rights of Persons with Disabilities - Internal displacement as a matter of international law - The enforcement of human rights before US courts after the Kiobel decision
Gender equality policy as a "cross-cutting policy" is one of the frequently invoked political metaphors of our time. Whether in labour-market, tax, family or education policy, gender equality is highly relevant in all of these fields. The cross-cutting character of equality policy means that many different actors, with equally different logics of action and goals, meet in this policy field. To plan equality programmes and thereby shape equality policies, the actions of these actors must be coordinated.
Using a governance approach, this thesis examines how the coordination of action between different actors in the German gender equality policy system takes place. The developments in equality policy in the Federal Republic of Germany since the 1990s are analysed and reconstructed on the basis of relevant government documents and academic secondary literature. Hierarchies, networks and negotiations, as manifestations of different forms of governance, are at the centre of the reconstruction and analysis of the actor constellations.
As a result, two different "equality governance regimes" could be identified for Germany. They are characterized by the coordination mechanism dominating in each regime: "economic self-coordination" (2001) and "mutual observation" (2003-2012). A comparison of the two regimes shows that they differ above all in their actor constellations; each is governed by its own logics of action and, as a consequence, produces different equality policy outcomes.
We study the diffusion of a tracer particle which moves in continuum space between a lattice of excluded-volume, immobile, non-inert obstacles. In particular, we analyse how the strength of the tracer-obstacle interactions and the volume occupancy of the crowders alter the diffusive motion of the tracer. From the details of the partitioning of the tracer diffusion modes between trapping states, when bound to obstacles, and bulk diffusion, we examine the degree of localisation of the tracer in the lattice of crowders. We study the properties of the tracer diffusion in terms of the ensemble and time averaged mean squared displacements, the trapping time distributions, the amplitude variation of the time averaged mean squared displacements, and the non-Gaussianity parameter of the diffusing tracer. We conclude that tracer-obstacle adsorption and binding trigger a transient anomalous diffusion. From the very narrow spread of recorded individual time averaged trajectories we exclude continuous time random walk processes as the underlying physical model of the tracer diffusion in our system. For moderate tracer-crowder attraction the motion is found to be fully ergodic, while at stronger attraction strength a transient disparity between ensemble and time averaged mean squared displacements occurs. We also put our results into perspective with findings from experimental single-particle tracking and simulations of the diffusion of tagged tracers in dense crowded suspensions. Our results have implications for the diffusion, transport, and spreading of chemical components in highly crowded environments inside living cells and other structured liquids.
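The distinction above between ensemble and time averaged mean squared displacements can be made concrete with a short sketch. This is not the paper's obstacle model: it uses plain Brownian trajectories, and all function names and parameter values are illustrative, purely to show how the two observables are computed and why they coincide for an ergodic process.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trajectories(n_traj=200, n_steps=1000, dt=0.01):
    """Free Brownian trajectories (D = 1) as a stand-in for the tracer."""
    steps = rng.normal(0.0, np.sqrt(2.0 * dt), size=(n_traj, n_steps))
    return np.cumsum(steps, axis=1)

def ensemble_msd(trajs):
    """Ensemble averaged MSD <x^2(t)> over all trajectories."""
    return np.mean(trajs ** 2, axis=0)

def time_averaged_msd(traj, lag):
    """Time averaged MSD of a single trajectory at a fixed lag."""
    disp = traj[lag:] - traj[:-lag]
    return np.mean(disp ** 2)

trajs = simulate_trajectories()
emsd = ensemble_msd(trajs)                                  # ~ 2*D*t
tamsd = np.mean([time_averaged_msd(t, 10) for t in trajs])  # ~ 2*D*(10*dt)
```

For an ergodic process such as free Brownian motion the time averaged MSD at a given lag converges to the ensemble MSD at the same time; the transient disparity reported in the abstract for strong attraction would show up as a systematic gap between `tamsd` and the corresponding value of `emsd`.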
The potential of nanosized materials has been amply demonstrated, but a closer look shows that a significant percentage of this research is devoted to oxides and metals, while the number of studies drops drastically for metallic ceramics, namely transition metal nitrides and metal carbides. The scarcity of related publications does not reflect their potential but rather the difficulties of synthesising them as dense and defect-free structures, fundamental prerequisites for advanced mechanical applications.
The present habilitation work aims to close the gap between preparation and processing, indicating novel synthetic pathways for a simpler and more sustainable synthesis of transition metal nitride (MN) and carbide (MC) based nanostructures and for easier processing thereafter. Despite their simplicity and reliability, the designed synthetic processes allow the production of functional materials with the demanded size and morphology.
This goal was achieved by exploiting classical and less classical precursors, ranging from common metal salts and molecules (e.g. urea, gelatin, agar, etc.) to more exotic materials such as leaves, filter paper and even wood. It was found that the choice of precursors and reaction conditions makes it possible to control chemical composition (going, for instance, from metal oxides to metal oxynitrides to metal nitrides, or from metal nitrides to metal carbides, up to quaternary systems), size (from 5 to 50 nm) and morphology (from simple spherical nanoparticles to rod-like shapes, fibers, layers, mesoporous and hierarchical structures, etc.). The nature of the mixed precursors also allows the preparation of metal nitride/carbide based nanocomposites, leading to multifunctional materials (e.g. MN/MC@C, MN/MC@PILs, etc.) but also allowing dispersion in liquid media. Control over composition, size and morphology is obtained by simple adjustment of the main route, but also by coupling it with processes such as electrospinning, aerosol spraying, bio-templating, etc. Last but not least, the nature of the precursor materials also allows easy processing, including printing, coating, casting, and the preparation of films and thin layers.
The designed routes are conceptually similar and all start by building up a secondary metal-ion-N/C precursor network, which converts, upon heat treatment, into an intermediate "glass". This glass stabilizes the nascent nanoparticles during their nucleation and prevents their uncontrolled growth during the heat treatment (scheme 1). In this way, one of the main problems related to the synthesis of MN/MC, i.e. the need for very high temperatures, could also be overcome (from up to 2000°C for classical syntheses down to 700°C in the present cases). The designed synthetic pathways are also conceived to allow the use of non-toxic compounds and to minimize (or even avoid) post-synthesis purification, while still yielding phase-pure and well-defined (crystalline) nanoparticles.
This research helps to simplify the preparation of MN/MC, making these systems readily available in suitable amounts for both fundamental and applied science. The prepared systems have been tested (in some cases for the first time) in many different fields: in batteries (MnN0.43@C showed a capacity stabilized at 230 mAh/g, with coulombic efficiencies close to 100%); as alternative magnetic materials (Fe3C nanoparticles were prepared with different sizes and therefore different magnetic behavior, superparamagnetic or ferromagnetic, showing a saturation magnetization of up to 130 emu/g, i.e. similar to the value expected for the bulk material); as filters and for the degradation of organic dyes (outmatching the performance of carbon); and as catalysts (both as active phase and as active support, leading to high turnover rates and, more interestingly, to tunable selectivity). Furthermore, with this route it was possible to prepare, for the first time to the best of our knowledge, well-defined and crystalline MnN0.43, Fe3C and Zn1.7GeN1.8O nanoparticles via bottom-up approaches.
Once the synthesis of these materials is made straightforward, any further modification, combination or manipulation is in principle possible, and new systems can be purposely conceived (e.g. hybrids, nanocomposites, ferrofluids, etc.).
Metabolic systems tend to exhibit steady states that can be measured in terms of their concentrations and fluxes. These measurements can be regarded as a phenotypic representation of all the complex interactions and regulatory mechanisms taking place in the underlying metabolic network. Such interactions determine the system's response to external perturbations and are responsible, for example, for its asymptotic stability or for oscillatory trajectories around the steady state. However, determining these perturbation responses in the absence of fully specified kinetic models remains an important challenge of computational systems biology. Structural kinetic modeling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a parameterised representation of the system's Jacobian matrix in which the model parameters encode information about the enzyme-metabolite interactions. Stability criteria can be derived by generating a large number of structural kinetic models (SK-models) with randomly sampled parameter sets and evaluating the resulting Jacobian matrices. The parameter space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Because the sampled parameters are equivalent to the elasticities used in metabolic control analysis (MCA), the results are easy to interpret biologically. In this project, the SKM framework was extended by several novel methodological improvements. These improvements were evaluated in a simulation study using a set of small example pathways with simple Michaelis-Menten rate laws. Afterwards, a detailed analysis of the dynamic properties of the neuronal TCA cycle was performed in order to demonstrate how the new insights obtained in this work could be used for the study of complex metabolic systems.
The first improvement was achieved by examining the biological feasibility of the elasticity combinations created during Monte Carlo sampling. Using a set of small example systems, the findings showed that the majority of sampled SK-models would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion was formulated that filters out such infeasible models, and the application of this criterion changed the conclusions of the SKM experiment. The second improvement of this work was the application of supervised machine-learning approaches to the analysis of SKM experiments. So far, SKM experiments have focused on the detection of individual enzymes in order to identify single reactions important for maintaining stability or oscillatory trajectories. In this work, this approach was extended by demonstrating how SKM enables the detection of ensembles of enzymes or metabolites that act together in an orchestrated manner to coordinate the pathway's response to perturbations. In doing so, stable and unstable states served as class labels, and classifiers were trained to detect elasticity regions associated with stability and instability. Classification was performed using decision trees and relevance vector machines (RVMs). The decision trees produced good classification accuracy in terms of model bias and generalizability. RVMs outperformed decision trees when applied to small models, but encountered severe problems when applied to larger systems because of their high runtime requirements. The decision tree rulesets were analysed statistically and individually in order to explore the role of individual enzymes and metabolites in controlling the system's trajectories around steady states. The third improvement of this work was the establishment of a relationship between the SKM framework and the related field of MCA.
In particular, it was shown how the sampled elasticities can be converted to flux control coefficients, which were then investigated for their predictive information content in classifier training. After evaluation on the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle with respect to the intrinsic mechanisms responsible for their stability or instability. The findings showed that several elasticities were jointly coordinated to control stability and that the main source of potential instabilities were mutations in the enzyme alpha-ketoglutarate dehydrogenase.
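The core SKM loop, sampling a parameterised Jacobian and classifying the steady state as stable or unstable from its eigenvalues, can be roughly sketched as follows. The three-reaction toy pathway, the feedback elasticity `f`, and all sampling ranges are invented for illustration and are not any of the models studied in this work; at a reference state with fluxes and concentrations normalised to one, the Jacobian reduces to `N @ theta`.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pathway  ->(v1) A ->(v2) B ->(v3)  with a hypothetical activation of
# the input reaction v1 by metabolite B (strength f).  At the reference
# steady state all fluxes and concentrations are scaled to 1, so the
# Jacobian of the normalised system reduces to J = N @ theta.
N = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])  # metabolites (A, B) x reactions (v1..v3)

def sample_jacobian():
    tA = rng.uniform(0.0, 1.0)  # elasticity of v2 w.r.t. A
    tB = rng.uniform(0.0, 1.0)  # elasticity of v3 w.r.t. B
    f = rng.uniform(0.0, 2.0)   # feedback elasticity of v1 w.r.t. B
    theta = np.array([[0.0, f],
                      [tA, 0.0],
                      [0.0, tB]])  # reactions x metabolites
    return N @ theta, (tA, tB, f)

def is_stable(J):
    """Steady state is asymptotically stable if all eigenvalues have
    negative real part."""
    return np.max(np.linalg.eigvals(J).real) < 0.0

results = [(is_stable(J), p) for J, p in (sample_jacobian() for _ in range(2000))]
stable_fraction = np.mean([s for s, _ in results])
```

For this toy system the steady state is stable exactly when the feedback elasticity stays below the elasticity of the consuming reaction (f < tB), so the sampled parameter space splits cleanly into stable and unstable regions; this is the kind of structure the classifiers described above are trained to detect in larger models.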
During this work I built a four-wave mixing setup for the time-resolved femtosecond spectroscopy of Raman-active lattice modes. This setup enables the study of the selective excitation of phonon polaritons. These quasi-particles arise from the coupling of electromagnetic waves and transverse optical lattice modes, the so-called phonons. The phonon polaritons were investigated in the optically non-linear, ferroelectric crystals LiNbO₃ and LiTaO₃.
The direct observation of the frequency shift of the scattered narrow-bandwidth probe pulses proves the role of the Raman interaction in the probing and excitation of phonon polaritons. I compare this experimental method with measurements in which ultra-short laser pulses are used, where the frequency shift remains obscured by the relatively broad bandwidth of these pulses. In an experiment with narrow-bandwidth probe pulses, the Stokes and anti-Stokes intensities are spectrally separated. They are assigned to the corresponding counter-propagating wavepackets of phonon polaritons, so the dynamics of these wavepackets could be studied separately. Based on these findings, I developed the mathematical description of the so-called homodyne detection of light for the case of light scattering from counter-propagating phonon polaritons.
Further, I modified the broad bandwidth of the ultra-short pump pulses using bandpass filters to generate two pump pulses with non-overlapping spectra. This enables the frequency-selective excitation of polariton modes in the sample, which allows me to observe even very weak polariton modes in LiNbO₃ or LiTaO₃ that belong to the higher branches of the dispersion relation of phonon polaritons. The experimentally determined dispersion relation of the phonon polaritons could therefore be extended and compared to theoretical models. In addition, I determined the frequency-dependent damping of phonon polaritons.
Background: Cross-sectional studies detected associations between physical fitness, living area, and sports participation in children. Yet, their scientific value is limited because the identification of cause-and-effect relationships is not possible. In a longitudinal approach, we examined the effects of living area and sports club participation on physical fitness development in primary school children from classes 3 to 6.
Methods: One hundred and seventy-two children (age: 9-12 years; sex: 69 girls, 103 boys) were tested for their physical fitness (i.e., endurance [9-min run], speed [50-m sprint], lower- [triple hop] and upper-extremity muscle strength [1-kg ball push], flexibility [stand-and-reach], and coordination [star coordination run]). Living area (i.e., urban or rural) and sports club participation were assessed using a parent questionnaire.
Results: Over the 4-year study period, urban compared to rural children showed significantly better performance development for upper- (p = 0.009, ES = 0.16) and lower-extremity strength (p < 0.001, ES = 0.22). Further, better performance development was found for endurance (p = 0.08, ES = 0.19) and lower-extremity strength (p = 0.024, ES = 0.23) in children continuously participating in sports clubs compared to their non-participating peers.
Conclusions: Our findings suggest that sports club programs with appealing arrangements appear to be a good means of promoting physical fitness in children living in rural areas.
Scientific inquiry requires that we formulate not only what we know, but also what we do not know and by how much. In climate data analysis, this involves an accurate specification of measured quantities and a subsequent analysis that consciously propagates the measurement errors at each step. This dissertation presents a thorough analytical method to quantify the errors of measurement inherent in paleoclimate data. An additional focus is the uncertainty in assessing the coupling between different factors that influence the global mean temperature (GMT).
Paleoclimate studies critically rely on `proxy variables' that record climatic signals in natural archives. However, such proxy records inherently involve uncertainties in determining the age of the signal. We present a generic Bayesian approach to analytically determine the proxy record along with its associated uncertainty, resulting in a time-ordered sequence of correlated probability distributions rather than a precise time series. We further develop a recurrence-based method to detect dynamical events from the proxy probability distributions. The methods are validated with synthetic examples and demonstrated with real-world proxy records. The proxy estimation step reveals the interrelations between proxy variability and uncertainty. The recurrence analysis of the East Asian Summer Monsoon during the last 9000 years confirms the well-known `dry' events at 8200 and 4400 BP, plus an additional significantly dry event at 6900 BP.
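A drastically simplified Monte Carlo version of the age-uncertainty propagation idea can be sketched as follows. This is not the thesis's Bayesian machinery: the tie-point depths, ages, and dating errors are invented, and plain linear interpolation stands in for the full age model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical radiometric tie points: depth (cm) vs age (years BP) with
# 1-sigma dating errors.  All numbers are purely illustrative.
depths = np.array([0.0, 50.0, 100.0])
ages = np.array([0.0, 4000.0, 9000.0])
sigma = np.array([1.0, 100.0, 150.0])

def sample_event_ages(n=10000, event_depth=75.0):
    """Propagate dating errors to the age of a feature at a given depth."""
    out = np.empty(n)
    for i in range(n):
        a = np.sort(rng.normal(ages, sigma))  # enforce a monotone age model
        out[i] = np.interp(event_depth, depths, a)
    return out

event_ages = sample_event_ages()
mean_age, std_age = event_ages.mean(), event_ages.std()
```

The spread of `event_ages` then plays the role of the probability distribution attached to each point of the proxy record, rather than a single precise date.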
We also analyze the network of dependencies surrounding GMT. We find an intricate, directed network with multiple links between the different factors at multiple time delays. We further uncover a significant feedback from the GMT to the El Niño Southern Oscillation at quasi-biennial timescales. The analysis highlights the need for a more nuanced formulation of influences between different climatic factors, as well as the limitations in trying to estimate such dependencies.
Molecular motors pulling cargos in the viscoelastic cytosol: how power strokes beat subdiffusion
(2014)
The discovery of anomalous diffusion of larger biopolymers and submicron tracers such as endogenous granules, organelles, or virus capsids in living cells, attributed to the viscoelastic nature of the cytoplasm, provokes the question of whether this complex environment equally impacts the active intracellular transport of submicron cargos by molecular motors such as kinesins: does the passive anomalous diffusion of free cargo always imply its anomalously slow active transport by motors, with the mean transport distance along the microtubule growing sublinearly rather than linearly in time? Here we analyze this question within the widely used two-state Brownian ratchet model of kinesin motors, based on continuous-state diffusion along microtubules driven by a flashing binding potential, where the cargo particle is elastically attached to the motor. Depending on the cargo size, the loading force, the amplitude of the binding potential, the turnover frequency of the molecular motor enzyme, and the linker stiffness, we demonstrate that the motor transport may turn out either normal or anomalous, as indeed measured experimentally. We show how highly efficient normal active transport mediated by motors may emerge despite the passive anomalous diffusion of the cargo, and study the intricate effects of the elastic linker. Under different, well-specified conditions the microtubule-based motor transport becomes anomalously slow and thus significantly less efficient.
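A minimal numerical caricature of a flashing ratchet can illustrate how switching an asymmetric potential on and off rectifies diffusion into directed motion. This sketch is not the paper's two-state kinesin model: it uses plain white noise instead of viscoelastic (fractional) noise, omits the cargo and the elastic linker, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sawtooth potential of period L with its minimum at x = 0 and its maximum
# (height U0, in units of kT) at x = a; overdamped dynamics with D = 1.
L, a, U0, D, dt = 1.0, 0.2, 5.0, 1.0, 1e-3

def sawtooth_force(x):
    """Force -dV/dx of the piecewise-linear sawtooth potential."""
    xm = np.mod(x, L)
    return np.where(xm < a, -U0 / a, U0 / (L - a))

def simulate(n_particles=50, n_cycles=200, t_on=0.5, t_off=0.02):
    """Euler-Maruyama integration with the potential flashing on and off."""
    x = np.zeros(n_particles)
    for _ in range(n_cycles):
        for phase_time, on in ((t_on, True), (t_off, False)):
            for _ in range(int(round(phase_time / dt))):
                drift = sawtooth_force(x) * dt if on else 0.0
                x += drift + np.sqrt(2.0 * D * dt) * rng.normal(size=n_particles)
    return x

final_x = simulate()
mean_displacement = final_x.mean()
```

During each off phase the particle diffuses symmetrically, but the short steep side of the sawtooth (at distance a = 0.2 from the minimum) is crossed far more often than the long shallow side, so when the potential reappears the particle is captured by the next well to the right more often than to the left, yielding a positive net drift.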
Embedded systems are ubiquitous in today's life. But they differ from traditional desktop systems in many aspects: these include predictable timing behavior (real-time), the management of scarce resources (memory, network), reliable communication protocols, energy management, special-purpose user interfaces (headless operation), system configuration, programming languages (to support software/hardware co-design), and modeling techniques. In this technical report, the authors present results from the lecture “Operating Systems for Embedded Computing” that was offered by the “Operating Systems and Middleware” group at HPI in winter term 2013/14. The focus of the lecture and the accompanying projects was on principles of real-time computing. Students had the chance to gather practical experience with a number of different OSes and applications and to present experiences with near-hardware programming. The projects address the entire spectrum, from bare-metal programming to harnessing a real-time OS to exercising the full software/hardware co-design cycle. Three outstanding projects are at the heart of this technical report. Project 1 focuses on the development of a bare-metal operating system for the LEGO Mindstorms EV3. While still a toy, it comes with a powerful ARM processor, 64 MB of main memory, and standard interfaces such as Bluetooth and network protocol stacks. The EV3 runs a version of Linux; sources are available from Lego's web site. However, many devices and their driver software are proprietary and not well documented. Developing a new, bare-metal OS for the EV3 requires an understanding of the EV3 boot process. Since no standard input/output devices are available, the initial debugging steps are tedious. After managing these initial steps, the project was able to adapt device drivers for a few Lego devices to an extent that a demonstrator (the Segway application) could be successfully run on the new OS. Project 2 looks at the EV3 from a different angle.
The EV3 runs a pretty decent version of Linux. In principle, the RT_PREEMPT patch can turn any Linux system into a real-time OS by modifying the behavior of a number of synchronization constructs at the heart of the OS. Priority inversion is a problem that is solved by protocols such as priority inheritance or priority ceiling; real-time OSes implement at least one of these protocols. The central idea of the project was the comparison of non-real-time and real-time variants of Linux on the EV3 hardware. A task set that showed effects of priority inversion on standard EV3 Linux would operate flawlessly on the Linux version with the RT_PREEMPT patch applied. If only patching Lego's version of Linux was that easy... Project 3 takes the notion of real-time computing more seriously. The application scenario was centered around our Carrera Digital 132 racetrack. Obtaining position information from the track, controlling individual cars, and detecting and modifying the Carrera Digital protocol required the design and implementation of custom controller hardware. What to implement in hardware, what in firmware, and what in application software: this was the central question addressed by the project.
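The priority-inversion effect such a task set demonstrates can be reproduced with a toy tick-based scheduler. The task set (low-priority L holding a lock, medium-priority M, high-priority H waiting for the lock) and all timings below are invented; the simulation only mimics the idea behind priority inheritance as provided by RT_PREEMPT's mutexes, not its implementation.

```python
def simulate(inheritance):
    """Tick scheduler: the runnable task with highest effective priority runs."""
    tasks = {
        "L": {"prio": 1, "arrival": 0, "work": 4, "lock": True},
        "M": {"prio": 2, "arrival": 3, "work": 5, "lock": False},
        "H": {"prio": 3, "arrival": 2, "work": 1, "lock": True},
    }
    remaining = {n: t["work"] for n, t in tasks.items()}
    done, holder, t = {}, None, 0
    while len(done) < len(tasks):
        ready = [n for n in tasks if tasks[n]["arrival"] <= t and n not in done]
        # Tasks needing the lock are blocked while another task holds it.
        runnable = [n for n in ready
                    if not (tasks[n]["lock"] and holder not in (None, n))]

        def eff(n):
            p = tasks[n]["prio"]
            if inheritance and n == holder:
                # Holder inherits the highest priority among blocked waiters.
                blocked = [tasks[m]["prio"] for m in ready
                           if tasks[m]["lock"] and m != n]
                p = max([p] + blocked)
            return p

        if runnable:
            cur = max(runnable, key=eff)
            if tasks[cur]["lock"] and holder is None:
                holder = cur
            remaining[cur] -= 1
            if remaining[cur] == 0:
                done[cur] = t + 1
                if holder == cur:
                    holder = None
        t += 1
    return done  # completion tick of each task

no_pi = simulate(inheritance=False)
with_pi = simulate(inheritance=True)
```

Without inheritance, M preempts L while H waits on the lock, so H's blocking time includes M's entire run; with inheritance, L temporarily runs at H's priority, finishes its critical section, and H completes much earlier.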
Automated location of seismic events is a very important task in microseismic monitoring operations as well as for local and regional seismic monitoring. Since microseismic records are generally characterised by low signal-to-noise ratios, such methods are required to be noise-robust and sufficiently accurate. Most standard automated location routines are based on the automated picking, identification and association of the first arrivals of P and S waves, and on the minimisation of the residuals between theoretical and observed arrival times of the considered seismic phases. Although current methods can accurately pick P onsets, the automatic picking of the S onset is still problematic, especially when the P coda overlaps the S-wave onset. In this thesis I developed a picking-free automated method that uses the Short-Term-Average/Long-Term-Average (STA/LTA) traces at different stations as observed data. I used the STA/LTA of several characteristic functions in order to increase the sensitivity to the P and S waves. For the P phases we use the STA/LTA traces of the vertical energy function, while for the S phases we use the STA/LTA traces of the horizontal energy trace and, further, a more optimised characteristic function obtained using the principal component analysis technique. The orientation of the horizontal components can be retrieved by a robust linear approach based on waveform comparison between stations within a network, using seismic sources outside the network (chapter 2). To locate the seismic event, we scan the space of possible hypocentral locations and origin times, and stack the STA/LTA traces along the theoretical arrival-time surface for both P and S phases. Iterating this procedure on a three-dimensional grid, we retrieve a multidimensional matrix whose absolute maximum corresponds to the spatial and temporal coordinates of the seismic event.
Location uncertainties are then estimated by perturbing the STA/LTA parameters (i.e. the lengths of both the long and the short time windows) and relocating each event several times. In order to test the location method I first applied it to a set of 200 synthetic events. Then we applied it to two different real datasets. The first is related to mining-induced microseismicity in a coal mine in northern Germany (chapter 3). In this case we successfully located 391 microseismic events with magnitudes between 0.5 and 2.0 Ml. To further validate the location method I compared the retrieved locations with those obtained by a manual picking procedure. The second dataset consists of a pilot application performed in the Campania-Lucania region (southern Italy) using a 33-station seismic network (Irpinia Seismic Network) with an aperture of about 150 km (chapter 4). We located 196 crustal earthquakes (depth < 20 km) with magnitudes in the range 1.1 < Ml < 2.7. A subset of these locations was compared with accurate locations retrieved by a manual location procedure based on the use of a double-difference technique. In both cases the results indicate good agreement with the manual locations. Moreover, the waveform-stacking location method proves noise-robust and performs better than classical location methods based on the automatic picking of the P- and S-wave first arrivals.
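The STA/LTA characteristic function at the core of the method can be sketched as follows. The window lengths, noise level, and synthetic event below are illustrative choices, not the parameters used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

def sta_lta(trace, n_sta, n_lta):
    """Ratio of short-term to long-term average energy; the LTA window
    immediately precedes the STA window ending at each sample."""
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    ratio = np.zeros(len(energy))
    for i in range(n_sta + n_lta - 1, len(energy)):
        sta = (csum[i + 1] - csum[i + 1 - n_sta]) / n_sta
        lta = (csum[i + 1 - n_sta] - csum[i + 1 - n_sta - n_lta]) / n_lta
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

# Synthetic vertical-component record: weak noise with an event at sample 500.
trace = rng.normal(0.0, 0.05, size=1000)
trace[500:600] += rng.normal(0.0, 1.0, size=100)
ratio = sta_lta(trace, n_sta=20, n_lta=100)
onset = int(np.argmax(ratio > 5.0))  # first sample where the ratio exceeds 5
```

In the location method described above, traces like `ratio` from many stations would be stacked along theoretical arrival-time surfaces rather than thresholded individually at each station.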
Theses of the dissertation
The rabbinic (Hazal) period
1. The marriage prohibition concerns two women, a pregnant woman and a nursing mother, and according to the anonymous (stammaitic) sugya in Bavli Yevamot it is a single prohibition with a single rationale. Despite this accepted view, we argue that the early halakhah contained two distinct prohibitions: the prohibition on marrying a woman pregnant by another man, and the prohibition on marrying a nursing mother whose husband has died.
We reached this conclusion on the basis of a series of proofs:
a. The anonymous sugya in tractate Yevamot raises many difficulties (which are detailed on p. of our work). These difficulties, and the fact that modern scholarship commonly dates anonymous sugyot to the end of the Amoraic period or even later, led us to conjecture that this sugya reflects a late presentation of the law's rationale. A different rationale is found in Tannaitic sources, and it apparently reflects the law's original rationale.
b. Of the three Tannaitic sources, two deal, one with the nursing mother only and the other with the pregnant woman only, and not with the two women together.
c. A linguistic analysis of the sources points to a series of significant variations, which indicate that these are distinct prohibitions.
2. In the early halakhah, the prohibition on marrying a woman pregnant by another man was regarded as a severe prohibition, and it was attached to biblical verses forbidding the moving of a boundary (hassagat gevul). At a later stage of the Tannaitic period the prohibition was interpreted as intended to secure the child's nursing, the ruling that a nursing widow may not be married was joined to it, and the attitude towards the prohibition became more lenient.
We reached this conclusion on the basis of the following:
a. In contrast to Tannaitic sources that took a lenient approach on a variety of matters, several sources present a strict approach, according to which one who transgressed and married a widow "must divorce her and may never take her back".
b. From a series of sources dealing with the law of the sotah, it is evident that the early halakhic position was that one who transgressed and married a widow contrary to the law, and who suspects that his wife has strayed, does not make her drink the sotah waters, since she is forbidden to him forever. A more lenient approach, according to which he may separate from her and take her back after a time, so that there is a point in administering the sotah waters in such a case, crystallised in the Tannaitic literature only at a later stage. In parallel, and independently of the sotah laws, a lenient approach developed among the Tannaim that recognised significant exceptions to the prohibition on marrying the widow (for example, when the child has been weaned, or when the child has been handed over to a wet nurse).
c. A Tannaitic source that dealt only with the woman pregnant by another man, and not with the nursing widow, adopted the strict approach ("he must divorce her and may never take her back") and made no mention of the lenient approach ("he may separate from her and take her back after a time").
d. Tannaitic sources in the literature of the Land of Israel (halakhic midrash, the Tosefta, and a baraita in the Yerushalmi) regarded one who transgressed and married as having violated a prohibition attached to biblical verses (in Deuteronomy and Proverbs) forbidding the moving of a boundary. We proposed understanding the moving of a boundary as describing a framework of belonging, whereby the widow belongs to her late husband in all that concerns her duty to secure his offspring.
e. Scholars (such as Y. Gilat) have pointed to a phenomenon characterising the transition from the early halakhah to the teaching of the Tannaim after the destruction of the Temple: a move from a strict approach, which does not recognise the distinction between laws of biblical (de-oraita) and rabbinic (de-rabbanan) status, to a more lenient approach that goes together with a different rationale for the early prohibitions and their reclassification as merely rabbinic.
f. We conjecture, although this cannot be conclusively proven, that a similar process befell the law under discussion: in the early halakhah the woman pregnant by another man was forbidden by force of the biblical verse prohibiting the moving of a boundary. This prohibition was considered biblical, and therefore could set aside the commandment of administering the sotah waters. At a later stage, when the concept of boundary-moving seemed less apt for the widow (among other reasons because a widow "acquires herself" upon her husband's death, in the words of the Mishnah in Kiddushin), a new rationale was proposed for the prohibition: the possible harm to the mother's milk. In light of this rationale, the prohibition on marrying a nursing widow was joined to the prohibition on marrying a woman pregnant by another man, and the prohibition as a whole came to be regarded among the Tannaim as merely rabbinic and of lesser severity. At this stage, the view that one who transgressed and married "must divorce her and may never take her back" became a minority opinion (that of R. Meir), while the majority ruled that "he may separate from her and take her back after a time".
3. At the end of the Amoraic period, and perhaps even in the Savoraic period, it was decided to adopt the stricter positions regarding the marriage prohibition. It was ruled that, apart from the case of the child's death, the prohibition admits no exceptions and applies even when the child is no longer nursing. It was likewise ruled that the required waiting period is 24 months from the child's birth. These rulings reflect a decision to adopt a strict tendency out of a range of opinions that would also have allowed a more lenient approach.
As we showed in our work, the decision in the strict direction was most likely taken in the Savoraic period. This decision is not self-evident, for there were Tannaitic and Amoraic sources on whose basis a more lenient approach could have been adopted. For example, Beit Hillel and R. Yehudah held that the prohibition lasts only 18 months, and Rabban Shimon ben Gamliel held that it could be shortened by a further three months, to only 15 months. In light of this, the Savoraic ruling that the waiting period is 24 months is far from obvious. Although we cannot point with certainty to an explanation for the adoption of the strict approach, in our work we pointed to the worsening of the Jews' situation in the relevant period as a background that may fit the adoption of the strict approach.
The Geonic period
4. In the Geonic period the disagreements between the Jews of Babylonia and those of the Land of Israel persisted, rooted in the differences between the Babylonian and the Jerusalem Talmud. These differences concern the law in the case where the child dies and, we argue, also the question of whether one who transgressed and married must give his wife a divorce (get) or whether separation alone suffices. Ultimately, the Babylonian Geonim ensured that on these matters the Bavli's position was the one accepted among the halakhic decisors.
In our work we examined a dispensation introduced in the Geonic period, and the conjectured stages of its development.
The period of the Rishonim
5. In our work we traced the tendency that characterised halakhic rulings on this law in the period of the Rishonim. We tried to explain why attempts were made to formulate leniencies in the marriage prohibition, and why, in the end, according to most Rishonim and as the halakhah was summarised in the Shulchan Arukh, it was the strict approaches to these matters that were adopted.
6. The broadening of the prohibition's scope: in contrast to the sources from the rabbinic period, from which it appears that the prohibition applied only to a widow, in the period of the Rishonim it was ruled that the prohibition applies also to a divorcée who has a child from her previous marriage, and even to an unmarried woman who bore a child outside marriage. These rulings considerably expanded the range of cases in which the prohibition applies.
In our work we set out the different approaches taken on this issue, regarding the divorcée and regarding a woman who gave birth out of wedlock.
We noted the affinity between the two different rationales mentioned at the beginning of this law's history and the disagreement over whether the prohibition applies to a divorcée and to a woman who gave birth out of wedlock. According to the approach that saw the prohibition as an encroachment on the first husband's domain, there is no room to apply it when the marriage ended in divorce or when the child was born without any marital bond between its parents. By contrast, according to the rationale set out in the Bavli, which focused on the harm liable to befall the child, this concern is relevant irrespective of the relationship between the parents, and hence there is reason to apply the marriage prohibition to a divorcée and to a woman who gave birth out of wedlock as well.
7. The affair of R. Yaakov of Cracow and its influence on the position of the Sephardic sages: R. Yaakov of Cracow held that exceptions to the prohibition could be created by handing the child over to a paid wet nurse and devising a mechanism to ensure that the wet nurse would not break her undertaking to nurse the child. This position was sharply rejected by the Ashkenazic sages, but it appears to have been accepted by the Rashba. When the Rosh emigrated from Ashkenaz to Spain, he led a change in the Sephardic view and the adoption of the strict approach that originated in Ashkenaz. The lenient positions on this issue were apparently censored so effectively that R. Yosef Karo, the author of the Shulchan Arukh, seems not to have been aware of their existence.
8. The Rosh's attitude to the dispensation established by the Geonim: we traced in detail a process that emerges from the whole of the Rosh's oeuvre: first, rejection of a dispensation attributed to the Geonim; then a hesitant adoption of the dispensation in pressing circumstances; and finally the adoption of a broader dispensation that stands on its own and does not rest on the teaching of the Geonim. We then examined how the Sephardic sages related to the broad dispensation established by the Rosh.
This issue illustrates the differing attitudes of the Sephardic and the Ashkenazic sages towards the teaching of the Babylonian Geonim, as well as their differing attitudes towards the legitimacy of halakhic innovation. On this issue, in contrast to the one discussed in the previous section, the Rosh showed flexibility and agreed to move closer to the position of the Sephardic sages (we offer a conjecture as to the explanation for this).
The modern period
9. In the modern period, following the shortening of the customary nursing period, two main streams emerged with respect to the marriage prohibition: the strict approach of the school of the Chatam Sofer, and the lenient approach. The strict school of the Chatam Sofer agreed to lift the marriage prohibition only where there was a concern that, if it were not lifted, the child might leave the framework of religious life. The Chatam Sofer's strict approach was, in our assessment, influenced by the fact that one of the early figures of the Reform movement, R. Aharon Chorin, had argued that the marriage prohibition should be abolished. In order to strengthen the force of the prohibition, the Chatam Sofer's student, the Maharam Schick, argued that it is a biblical prohibition.
10. The lenient approach: this approach is made up of rulings by various decisors who agreed to adopt leniencies in the marriage prohibition (the main leniencies are detailed in our work). These leniencies reflect the great flexibility of the interpreted text: the very same texts that had previously been understood strictly were suddenly interpreted far more leniently. We suggest that this change in interpretation stemmed from the change in the surrounding reality.
11. The adherents of the lenient approach, too, were influenced by the positions of the Chatam Sofer. This influence is expressed in the fact that, while they were willing to adopt lenient approaches that earlier decisors had rejected, they were generally unwilling to recognise the need for a dramatic change in the status of the law in light of the changed reality. In our assessment, this conservative stance leads to unworthy results, and it does not reflect a courageous examination of the relevance of the marriage prohibition to the reality of Jewish life in recent decades.
Effect of benzylglucosinolate on signaling pathways associated with type 2 diabetes prevention
(2014)
Type 2 diabetes (T2D) is a health problem throughout the world. In 2010, there were nearly 230 million individuals with diabetes worldwide, and it is estimated that in the economically advanced countries the number of cases will increase by about 50% in the next twenty years. Insulin resistance is one of the major features of T2D and is also a risk factor for metabolic and cardiovascular complications. Epidemiological and animal studies have shown that the consumption of vegetables and fruits can delay or prevent the development of the disease, although the underlying mechanisms of these effects are still unclear. Brassica species such as broccoli (Brassica oleracea var. italica) and nasturtium (Tropaeolum majus) possess a high content of bioactive phytochemicals, e.g. nitrogen-sulfur compounds (glucosinolates and isothiocyanates) and polyphenols, largely associated with the prevention of cancer. Isothiocyanates (ITCs) display their anti-carcinogenic potential by inducing detoxifying phase II enzymes and increasing glutathione (GSH) levels in tissues. In T2D, an increase in gluconeogenesis and triglyceride synthesis and a reduction in fatty acid oxidation, accompanied by the presence of reactive oxygen species (ROS), are observed; together, these are the result of an inappropriate response to insulin. Forkhead box O (FOXO) transcription factors play a crucial role in the regulation of insulin effects on gene expression and metabolism, and alterations in FOXO function could contribute to metabolic disorders in diabetes.
In this study, stably transfected human osteosarcoma cells (U-2 OS) with constitutive expression of FOXO1 protein labeled with GFP (green fluorescent protein) and human hepatoma (HepG2) cell cultures were used to evaluate the ability of benzylisothiocyanate (BITC), derived from benzylglucosinolate extracted from nasturtium, to modulate (i) the insulin-signaling pathway, (ii) the intracellular localization of FOXO1 and (iii) the expression of proteins involved in glucose metabolism, ROS detoxification, cell cycle arrest and DNA repair. BITC promoted oxidative stress and, in response, induced FOXO1 translocation from the cytoplasm into the nucleus, antagonizing the insulin effect. BITC stimulation was able to down-regulate gluconeogenic enzymes, which can be considered an anti-diabetic effect; to promote antioxidant resistance, expressed by the up-regulation of manganese superoxide dismutase (MnSOD) and detoxification enzymes; to modulate autophagy by induction of BECLIN1 and down-regulation of the mammalian target of rapamycin complex 1 (mTORC1) pathway; and to promote cell cycle arrest and DNA damage repair by up-regulation of the cyclin-dependent kinase inhibitor p21CIP and Growth Arrest / DNA Damage Repair protein (GADD45). Except for the nuclear factor (erythroid-derived 2)-like 2 (NRF2) and its influence on the gene expression of detoxification enzymes, all the observed effects were independent of FOXO1, protein kinase B (AKT/PKB) and the NAD-dependent deacetylase sirtuin-1 (SIRT1). The current study provides evidence that, besides their anticarcinogenic potential, isothiocyanates might have a role in T2D prevention. The BITC stimulus mimics the fasting state, in which insulin signaling is not triggered and FOXO proteins remain in the nucleus modulating the expression of their target genes, with the advantage of a down-regulation of gluconeogenesis instead of an increase.
These effects suggest that BITC might be considered a promising substance in the prevention or treatment of T2D; the factors behind its modulatory effects therefore need further investigation.
In the Posthomerica, references to an omnipotent fate or to the power of the gods are strikingly frequent. Modern scholarship has often treated this as Stoic. Closer reading reveals that Quintus is, on the one hand, following the Homeric concept of double motivation, according to which humans can be motivated by a deity only to an act that conforms to their character and for which they are responsible. On the other hand, Quintus gives these statements on responsibility to characters who are trying to excuse their own acts to themselves and, particularly, to others, i.e. they are motivated contextually. It would be non-Stoic to excuse oneself for a bad deed by reference to an almighty fate. It seems that Quintus, by presenting this tension, wanted the reader to reconsider and reflect on the different concepts.
The present text provides an inventory of the E-Learning activities that have taken place at the University of Potsdam so far; it also serves to identify potentials and, in a next step, to derive from them ideas and proposals for a university-wide E-Learning strategy. The aim of the inventory is to present the relevant information, to situate the University of Potsdam within the E-Learning landscape of higher education, and to assess the state of development.
Portal Wissen = Believe
(2014)
People want to know what is real. Children enjoy listening to a story, but when my children were about four years old they started asking whether the story really happened or was just invented. Likewise, only on a higher level, our academic curiosity is fuelled by our interest in knowing what is real. When we analyze poetic texts or dreams we are trying to distinguish between the facts (e.g. neurological ones or linguistic structures) and merely assumed influences. Ideally we can present results that can be logically understood by others and empirically repeated. But in most cases this is not possible. We cannot read every book and cannot look through every microscope, not even within our own discipline. In the world we live in we depend on trusting the information of others, like how to get to the train station or what the weather is like in Ulaanbaatar. This is why we are used to believing others, our friends or the news anchors. This is not childish behavior but a necessity. Of course, it is risky, because they could all be lying to us, as in a Truman Show situation. The only way we are able to know that we are in reality is by transcending our self-consciousness and accepting two propositions: first, that we are not only objects but also subjects in the consciousness of others, and second, that our dialogic relations are in turn observed by a third party that is not part of this intersubjective world.
For religious people this is “belief” - belief as the assumption that all human relations only become real, serious and beyond any doubt if those involved know they are under the eyes of God. Only before Him is something in itself and not merely “for me” or “among us”. That is why biblical language distinguishes between three forms of belief: the relationship to the world of things (“to believe that”), the relationship to the world of subjects (“to believe somebody”) and the assumption of a subjective supernatural reality (“to believe in”, or “faith”). From an academic point of view belief is a holistic hypothesis. Belief is not the opposite of knowledge but the attempt to save reality from doubt by comprehending the fragile empirical world as an expression of a stable transcendent world.
When I talk to students they often ask not only about what I know but what I believe. As a professor for Religious Studies and a believing Catholic I am caught in the middle. On the one hand, it is my duty as a professor to doubt everything, i.e. to attribute each religious text to its historical context and sociological functions. On the other hand, I, as a Christian, consider certain religious documents, in my case the Bible, an interpretable but nevertheless irreversible, revealed text about the origin of reality. On weekdays the New Testament is a collection of ancient writings among many others, on Sundays it is the revelation. You can make a clear distinction between these two perspectives but it is difficult to decide whether doubt or belief is more real.
This issue of “Portal Wissen” explores this dual relationship of belief. What is the attitude of science towards belief – is it a religious one? Where does science bring things to light that we can hardly believe or that make us believe (again)? What happens if research clears up erroneous assumptions or myths? Is science able to investigate things that are convincing but inexplicable? How can it maintain its credibility and develop even so?
These questions appear again and again in the contributions of this “Portal Wissen”. They form a manifold, exciting and surprising picture of the research projects and academics at the University of Potsdam. Believe me, it will be an enjoyable read.
Prof. Johann Hafner
Professor of Religious Studies with Focus on Christianity Dean of the Faculty of Arts
The data quality of real-world datasets needs to be constantly monitored and maintained to allow organizations and individuals to reliably use their data. Data integration projects in particular suffer from poor initial data quality and consequently consume more effort and money. Commercial products and research prototypes for data cleansing and integration help users to improve the quality of individual and combined datasets. They can be divided into standalone systems and database management system (DBMS) extensions. On the one hand, standalone systems do not interact well with DBMSs and require time-consuming data imports and exports. On the other hand, DBMS extensions are often limited by the underlying system and do not cover the full set of data cleansing and integration tasks.
We overcome both limitations by implementing a concise set of five data cleansing and integration operators on the parallel data analytics platform Stratosphere. We define the semantics of the operators, present their parallel implementation, and devise optimization techniques for individual operators and combinations thereof. Users specify declarative queries in our query language METEOR with our new operators to improve the data quality of individual datasets or to integrate them into larger datasets. By integrating the data cleansing operators into the higher-level language layer of Stratosphere, users can easily combine cleansing operators with operators from other domains, such as information extraction, into complex data flows. Through a generic description of the operators, the Stratosphere optimizer reorders operators even across domains to find better query plans.
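The idea of composable, declarative cleansing operators that chain into larger data flows can be illustrated with a minimal sketch. Note that this is not METEOR syntax and not Stratosphere code; all operator names (`scrub`, `remove_duplicates`, `pipeline`) are invented for this illustration:

```python
# Illustrative sketch of composable record-level cleansing operators
# (invented names; not the METEOR query language or Stratosphere API).

def scrub(rules):
    """Normalize individual fields, e.g. trim and lowercase strings."""
    def op(records):
        return [{k: rules.get(k, lambda v: v)(v) for k, v in r.items()}
                for r in records]
    return op

def remove_duplicates(key):
    """Keep the first record per key value (a crude duplicate filter)."""
    def op(records):
        seen, out = set(), []
        for r in records:
            if r[key] not in seen:
                seen.add(r[key])
                out.append(r)
        return out
    return op

def pipeline(*ops):
    """Chain operators into a single data flow."""
    def run(records):
        for op in ops:
            records = op(records)
        return records
    return run

clean = pipeline(
    scrub({"name": lambda v: v.strip().lower()}),
    remove_duplicates("name"),
)

data = [{"name": " Alice "}, {"name": "alice"}, {"name": "Bob"}]
print(clean(data))  # scrubbing makes the first two records duplicates
```

The point of the sketch is the composition: because each operator has a uniform records-in/records-out signature, a planner can in principle reorder or fuse them, which is the property the Stratosphere optimizer exploits.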
As a case study, we reimplemented a part of the large Open Government Data integration project GovWILD with our new operators and show that our queries run significantly faster than the original GovWILD queries, which rely on relational operators. Evaluation reveals that our operators exhibit good scalability on up to 100 cores, so that even larger inputs can be efficiently processed by scaling out to more machines. Finally, our scripts are considerably shorter than the original GovWILD scripts, which results in better maintainability of the scripts.
Masked priming research with late (non-native) bilinguals has reported facilitation effects following morphologically derived prime words (scanner - scan). However, unlike for native speakers, there are suggestions that purely orthographic prime-target overlap (scandal - scan) also produces priming in non-native visual word recognition. Our study directly compares orthographically related and derived prime-target pairs. While native readers showed morphological but not formal overlap priming, the two prime types yielded the same magnitudes of facilitation for non-natives. We argue that early word recognition processes in a non-native language are more influenced by surface-form properties than in one's native language.
Modern 3D geovisualization systems (3DGeoVSs) are complex and evolving systems that are required to be adaptable and leverage distributed resources, including massive geodata. This article focuses on 3DGeoVSs built based on the principles of service-oriented architectures, standards and image-based representations (SSI) to address practically relevant challenges and potentials. Such systems facilitate resource sharing and agile and efficient system construction and change in an interoperable manner, while exploiting images as efficient, decoupled and interoperable representations. The software architecture of a 3DGeoVS and its underlying visualization model have strong effects on the system's quality attributes and support various system life cycle activities. This article contributes a software reference architecture (SRA) for 3DGeoVSs based on SSI that can be used to design, describe and analyze concrete software architectures with the intended primary benefit of an increase in effectiveness and efficiency in such activities. The SRA integrates existing, proven technology and novel contributions in a unique manner. As the foundation for the SRA, we propose the generalized visualization pipeline model that generalizes and overcomes expressiveness limitations of the prevalent visualization pipeline model. To facilitate exploiting image-based representations (IReps), the SRA integrates approaches for the representation, provisioning and styling of and interaction with IReps. Five applications of the SRA provide proofs of concept for the general applicability and utility of the SRA. A qualitative evaluation indicates the overall suitability of the SRA, its applications and the general approach of building 3DGeoVSs based on SSI.
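The visualization pipeline model that the abstract generalizes is classically staged as filtering, mapping, and rendering. A minimal sketch under that classic model (all names are generic placeholders, not taken from the SRA; the generalized model in the thesis is considerably richer):

```python
# Minimal sketch of the classic visualization pipeline stages
# (illustrative only; stage and field names are invented placeholders).

def filter_stage(points, region):
    """Select raw data records inside a region of interest."""
    xmin, xmax = region
    return [p for p in points if xmin <= p["x"] <= xmax]

def map_stage(points):
    """Map data values to visual attributes (here: a gray level)."""
    return [{"x": p["x"], "gray": min(255, int(p["value"] * 255))}
            for p in points]

def render_stage(primitives):
    """Produce an image-like representation; here just a row of pixels."""
    return [p["gray"] for p in primitives]

raw = [{"x": 0, "value": 0.1}, {"x": 5, "value": 0.5}, {"x": 20, "value": 0.9}]
image = render_stage(map_stage(filter_stage(raw, (0, 10))))
print(image)  # gray levels for the two points inside the region
```

In a service-oriented, image-based setting such as the one described above, the output of the final stage is exactly the kind of decoupled image representation (IRep) that can be provisioned and styled independently of the data stages.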
In the presented thesis, the most advanced photon reconstruction technique of ground-based γ-ray astronomy is adapted to the H.E.S.S. 28 m telescope. The method is based on a semi-analytical model of electromagnetic particle showers in the atmosphere. The properties of cosmic γ-rays are reconstructed by comparing the camera image of the telescope with the Cherenkov emission that is expected from the shower model. To suppress the dominant background from charged cosmic rays, events are selected based on several criteria. The performance of the analysis is evaluated with simulated events. The method is then applied to two sources that are known to emit γ-rays. The first of these is the Crab Nebula, the standard candle of ground-based γ-ray astronomy. The results of this source confirm the expected performance of the reconstruction method, where the much lower energy threshold compared to H.E.S.S. I is of particular importance. A second analysis is performed on the region around the Galactic Centre. The analysis results emphasise the capabilities of the new telescope to measure γ-rays in an energy range that is interesting for both theoretical and experimental astrophysics. The presented analysis features the lowest energy threshold that has ever been reached in ground-based γ-ray astronomy, opening a new window to the precise measurement of the physical properties of time-variable sources at energies of several tens of GeV.
claspfolio 2
(2014)
Building on the award-winning, portfolio-based ASP solver claspfolio, we present claspfolio 2, a modular and open solver architecture that integrates several different portfolio-based algorithm selection approaches and techniques. The claspfolio 2 solver framework supports various feature generators, solver selection approaches, solver portfolios, and solver-schedule-based pre-solving techniques. The default configuration of claspfolio 2 relies on a lightweight version of the ASP solver clasp to generate static and dynamic instance features. The flexible open design of claspfolio 2 is a distinguishing factor even beyond ASP. As such, it provides a unique framework for comparing and combining existing portfolio-based algorithm selection approaches and techniques in a single, unified framework. Taking advantage of this, we conducted an extensive experimental study to assess the impact of different feature sets, selection approaches and base solver portfolios. In addition to gaining substantial insights into the utility of the various approaches and techniques, we identified a default configuration of claspfolio 2 that achieves substantial performance gains not only over clasp's default configuration and the earlier version of claspfolio, but also over manually tuned configurations of clasp.
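Portfolio-based algorithm selection, as described above, reduces to: compute instance features, then pick the portfolio solver predicted to perform best on instances with similar features. A minimal nearest-neighbor sketch of that idea (training data, feature vectors, and solver names are all invented; claspfolio 2's actual feature generators and selectors are far more sophisticated):

```python
# Minimal sketch of portfolio-based algorithm selection via 1-nearest
# neighbor over instance features (illustrative; not claspfolio 2 code).

import math

# Hypothetical training data: per-instance feature vectors and the
# measured runtime (seconds) of each portfolio solver on that instance.
TRAIN = [
    ((0.2, 0.9), {"solverA": 3.0, "solverB": 12.0}),
    ((0.8, 0.1), {"solverA": 40.0, "solverB": 2.5}),
]

def select_solver(features):
    """Pick the solver that was fastest on the nearest training instance."""
    _, runtimes = min(TRAIN, key=lambda t: math.dist(t[0], features))
    return min(runtimes, key=runtimes.get)

print(select_solver((0.25, 0.8)))  # nearest to the first instance -> "solverA"
```

Real selectors replace the 1-nearest-neighbor step with regression models, clustering, or pairwise classifiers, and typically prepend a static pre-solving schedule, but the interface (features in, solver choice out) is the same.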