Economists and economic policymakers alike invoke the neutrality theory of money when they call for a depoliticization of monetary policy. Both the theory of monetary neutrality and the paradigm of depoliticized monetary policy are problematic, however. The political-economic developments following the global financial and economic crisis of 2007/2008 and the recent controversies over the role and significance of money have made this abundantly clear. This thesis first discusses the conceptual foundations and theoretical models of monetary neutrality. It then questions the central theoretical assumptions and claims of the neutrality theory from a critical heterodox perspective. It is argued that money is a non-neutral productive force that is neither economically nor socially neutral. The conditions under which money is available and circulates shape the direction of economic development. Consequently, there can be no neutral money, let alone an apolitical monetary policy.
Downscaling of microfluidic cell culture and detection devices for electrochemical monitoring has mostly focused on miniaturization of the microfluidic chips, which are often designed for specific applications and therefore lack functional flexibility. We present a compact microfluidic cell culture and electrochemical analysis platform with built-in fluid handling and detection, enabling complete cell-based assays comprising on-line electrode cleaning, sterilization, surface functionalization, cell seeding, cultivation, and electrochemical real-time monitoring of cellular dynamics. To demonstrate the versatility and multifunctionality of the platform, we explored amperometric monitoring of intracellular redox activity in yeast (Saccharomyces cerevisiae) and detection of exocytotically released dopamine from rat pheochromocytoma (PC12) cells. Electrochemical impedance spectroscopy was used in both applications to monitor cell sedimentation and adhesion, as well as proliferation in the case of PC12 cells. The influence of flow rate on the signal amplitude in the detection of redox metabolism, as well as the effect of mechanical stimulation on dopamine release, were demonstrated using the programmable fluid handling capability. The platform presented here is aimed at applications built on cell-based assays, ranging from monitoring of drug effects in pharmacological studies, characterization of neural stem cell differentiation, and screening of genetically modified microorganisms to environmental monitoring.
The Arabidopsis Kinome
(2014)
Background
Protein kinases constitute a particularly large protein family in Arabidopsis, with important functions in cellular signal transduction networks. At the same time, Arabidopsis is a model plant with high frequencies of gene duplication. Here, we have conducted a systematic analysis of the Arabidopsis kinase complement, the kinome, with particular focus on gene duplication events. We matched Arabidopsis proteins to a hidden Markov model of eukaryotic kinases, computed a phylogeny of 942 Arabidopsis protein kinase domains, and mapped their origins by gene duplication.
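The HMM matching step can be illustrated with a toy forward-algorithm computation. The two-state model, its emission preferences, and the example sequences below are purely illustrative stand-ins; a real kinome scan uses a profile HMM with hundreds of match states, such as Pfam's Pkinase model.

```python
import math

# Toy sketch of HMM-based domain matching: score a sequence under a
# 2-state HMM ("domain" vs "background") with the forward algorithm.
# In this toy, the domain state simply prefers the residues G, K, E.
states = ("domain", "background")
trans = {("domain", "domain"): 0.9, ("domain", "background"): 0.1,
         ("background", "domain"): 0.1, ("background", "background"): 0.9}
start = {"domain": 0.5, "background": 0.5}
emit = {"domain": {c: (0.2 if c in "GKE" else 0.4 / 17)
                   for c in "ACDEFGHIKLMNPQRSTVWY"},
        "background": {c: 0.05 for c in "ACDEFGHIKLMNPQRSTVWY"}}

def forward_logprob(seq):
    """Log-probability of seq under the HMM (forward algorithm)."""
    alpha = {s: math.log(start[s] * emit[s][seq[0]]) for s in states}
    for c in seq[1:]:
        alpha = {s: math.log(sum(math.exp(alpha[p]) * trans[(p, s)]
                                 for p in states)) + math.log(emit[s][c])
                 for s in states}
    return math.log(sum(math.exp(a) for a in alpha.values()))

# A "kinase-like" toy sequence scores higher than a random-looking one:
print(forward_logprob("GKEGKEGKEGKE") > forward_logprob("ALWPYTRVNDSH"))
```

In a real scan, sequences are ranked by the log-odds of the domain model against a null model, and hits above a threshold are assigned to the family.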
Results
The phylogeny showed two major clades of receptor kinases and soluble kinases, each of which was divided into functional subclades. Based on this phylogeny, yet uncharacterized kinases could be associated with families, which extended the functional annotation of unknowns. Classification of gene duplications within these protein kinases revealed that representatives of cytosolic subfamilies tended to maintain segmentally duplicated genes, while some subfamilies of the receptor kinases were enriched for tandem duplicates. Although functional diversification is observed throughout most subfamilies, some instances of functional conservation among genes transposed from the same ancestor were observed. In general, a significant enrichment of essential genes was found among genes encoding protein kinases.
Conclusions
The inferred phylogeny allowed classification and annotation of yet uncharacterized kinases. The prediction and analysis of syntenic blocks and duplication events within gene families of interest can be used to link functional biology to insights from an evolutionary viewpoint. The approach undertaken here can be applied to any gene family in any organism with an annotated genome.
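The enrichment of essential genes reported in the results is typically assessed with a one-sided hypergeometric test; the sketch below shows the computation with invented counts, not the study's actual numbers.

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """One-sided hypergeometric p-value: probability of observing at
    least k essential genes when drawing n genes (here: the kinases)
    from a genome of N genes, of which K are essential."""
    tail = sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1))
    return tail / comb(N, n)

# Illustrative counts only (expected ~20 essential kinases by chance):
p = hypergeom_enrichment_p(N=5000, K=500, n=200, k=40)
print(p < 1e-4)  # strong enrichment for these made-up counts
```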
Within an interdisciplinary student project, a framework for mobile pervasive learning games was developed. Using the historic site of Sanssouci Park as an example, a learning game for school students was implemented on this basis. The planned evaluation is intended to measure the learning effectiveness of geo-based mobile learning games. To this end, the intensity of the flow experience will be compared with that of a location-bound alternative implementation.
Portal Wissen = Believing
(2014)
People want to know what is real. Children like being told a story, but by the age of four at the latest, mine asked whether the story had really happened or was just made up. This continues later in life: our scientific curiosity, too, is fueled by the desire to find out what is real. Even when we study poetic texts or dreams, we do so with the intention of distinguishing the real linguistic structures or neurological factors from merely supposed ones. Ideally, we can present results that others can follow logically and reproduce empirically. Most of the time, however, this is impossible. We cannot read every book or look into every microscope, not even within our own discipline. How much more, then, do we depend in everyday life on trusting what others tell us when we want to know the way to the train station or whether Ulan Bator is beautiful. That is why we have become accustomed to believing others, from our friends to the news anchor. This is not childish behavior but a necessity. Of course it is risky, for everyone else could be lying to us, as in "The Truman Show". In reality, we only know ourselves once we leave our self-consciousness behind and accept, first, that we are not only objects but also subjects in the consciousness of others, and second, that all our dialogical relationships are in turn observed by a third party who is not part of this world.
For religious people, this is faith: faith as the assumption that all human relationships only become real, serious, and beyond doubt when they know themselves to be before the eyes of God. Only before him is something what it is in itself, and not merely "for me" or "between us". Biblical language therefore distinguishes three forms of belief: the relation to the world of things ("believing that"), the relation to the world of subjects ("believing someone"), and the assumption of a subject-like transcendent reality ("believing in"). From the standpoint of the philosophy of science, faith is thus a total hypothesis. Faith is not the opposite of knowledge but the attempt to rescue reality from doubt by understanding the fragile empirical world as the expression of a stable transcendent one.
In conversations, students often want to know not only what I know but also what I believe. As a scholar of religion and at the same time a believing Catholic, I sit between two stools: On the one hand, it is my task as a professor to doubt everything, that is, to trace every religious text back to its historical contexts and sociological functions. On the other hand, the Christian in me regards certain religious documents, in my case the Bible, as an interpretable yet irreversible, revealed text that speaks of the origin of reality. On weekdays, the New Testament is an ancient collection of writings among many others; on Sunday, it is revelation. The two can be clearly distinguished, but it is hard to decide whether doubting or believing is the more real.
This issue explores this twofold relationship to belief: How does science relate to belief, whether religious or not? Where does science bring things to light that we can hardly believe, or that make us believe (again)? What happens when research clears up mistaken assumptions or myths? Is science able to get to the bottom of things that are convincing yet inexplicable? How can it remain credible itself and still continue to develop?
These questions surface again and again in the contributions to this issue of "Portal Wissen". They form a diverse, exciting, and also surprising picture of the research projects and researchers at the University of Potsdam. Believe me, a stimulating read awaits you!
Prof. Dr. Johann Hafner
Professor of Religious Studies with a Focus on Christianity
Dean of the Faculty of Arts
Geometric electroelasticity
(2014)
In this work a differential geometric formulation of the theory of electroelasticity is developed which also includes thermal and magnetic influences. We study the motion of bodies consisting of an elastic material that are deformed by the influence of mechanical forces, heat, and an external electromagnetic field. To this end, physical balance laws (conservation of mass, balance of momentum, angular momentum, and energy) are established. These provide an equation that describes the motion of the body during the deformation. Here the body and the surrounding space are modeled as Riemannian manifolds, and we allow the body to have a lower dimension than the surrounding space. In this way one is not restricted, as usual, to the description of the deformation of three-dimensional bodies in three-dimensional space, but can also describe the deformation of membranes and deformation in a curved space. Moreover, we formulate so-called constitutive relations that encode the properties of the material used. Balance of energy, as a scalar law, can easily be formulated on a Riemannian manifold. The remaining balance laws are then obtained by demanding that balance of energy be invariant under the action of arbitrary diffeomorphisms of the surrounding space. This generalizes a result by Marsden and Hughes that pertains to bodies of the same dimension as the surrounding space and does not allow for the presence of electromagnetic fields. Usually, works on electroelasticity use the entropy inequality to decide which otherwise admissible deformations are physically possible and which are not. It is also employed to derive restrictions on the possible forms of the constitutive relations describing the material. Unfortunately, opinions on the physically correct statement of the entropy inequality diverge when electromagnetic fields are present.
Moreover, it is unclear how to formulate the entropy inequality for a membrane subjected to an electromagnetic field. We therefore show that the use of the entropy inequality can be replaced by the demand that, for a given process, balance of energy be invariant under the action of arbitrary diffeomorphisms of the surrounding space and under linear rescalings of the temperature. On the one hand, this demand yields the desired restrictions on the form of the constitutive relations; on the other hand, it requires much weaker assumptions than the arguments in the physics literature that employ the entropy inequality. Again, our result generalizes a theorem of Marsden and Hughes, and this time it is, like theirs, only valid for bodies of the same dimension as the surrounding space.
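For orientation, the balance laws named above take the following classical flat-space (Eulerian) integral forms in standard continuum mechanics notation; these are the textbook versions (cf. Marsden and Hughes), not the manifold-valued, electromagnetically coupled formulation developed in the thesis.

```latex
\begin{aligned}
&\frac{d}{dt}\int_{U}\rho \,dv = 0
  &&\text{(conservation of mass)}\\
&\frac{d}{dt}\int_{U}\rho\, v \,dv
  = \int_{U}\rho\, b \,dv + \int_{\partial U}\sigma n \,da
  &&\text{(balance of momentum)}\\
&\frac{d}{dt}\int_{U}\rho\, x\times v \,dv
  = \int_{U}\rho\, x\times b \,dv + \int_{\partial U} x\times\sigma n \,da
  &&\text{(balance of angular momentum)}\\
&\frac{d}{dt}\int_{U}\rho\Bigl(e+\tfrac{1}{2}\lvert v\rvert^{2}\Bigr)dv
  = \int_{U}\rho\,\bigl(b\cdot v + r\bigr)dv
  + \int_{\partial U}\bigl(\sigma n\cdot v - q\cdot n\bigr)da
  &&\text{(balance of energy)}
\end{aligned}
```

Here \(\rho\) is the mass density, \(v\) the velocity, \(b\) the body force, \(\sigma\) the Cauchy stress, \(e\) the internal energy density, \(r\) the heat supply, and \(q\) the heat flux, over an arbitrary control volume \(U\).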
Picosecond X-ray absorption spectroscopy (XAS) is used to investigate the electronic and structural dynamics initiated by plasmon excitation of 1.8 nm diameter Au nanoparticles (NPs) functionalised with 1-hexanethiol. We show that 100 ps after photoexcitation the transient XAS spectrum is consistent with an 8% expansion of the Au–Au bond length and a large increase in disorder associated with melting of the NPs. Recovery of the ground state occurs with a time constant of ∼1.8 ns, arising from thermalisation with the environment. Simulations reveal that the transient spectrum exhibits no signature of charge separation at 100 ps and allow us to estimate an upper limit for the quantum yield (QY) of this process of <0.1.
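The reported recovery can be pictured as a single-exponential decay with the ∼1.8 ns time constant; the delay times in the sketch below are arbitrary examples, not measured points.

```python
import math

# Single-exponential ground-state recovery with the reported time
# constant of ~1.8 ns.
TAU_NS = 1.8

def excited_fraction(t_ns):
    """Fraction of the photoexcited population remaining at delay t."""
    return math.exp(-t_ns / TAU_NS)

print(excited_fraction(0.1))  # ~0.95 still excited at the 100 ps delay
print(excited_fraction(5.0))  # largely recovered after ~3 time constants
```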
Synchronization is a fundamental phenomenon in nature. It can be considered a general property of self-sustained oscillators: they adjust their rhythm in the presence of an interaction.
In this work we investigate complex regimes of synchronization phenomena by means of theoretical analysis and numerical modeling, as well as through the analysis of experimental data.
As the subject of our investigation we consider the chimera state, in which spontaneous symmetry breaking of an initially homogeneous lattice of oscillators splits the system into two parts with different dynamics. The chimera state as a new synchronization phenomenon was first found in systems of non-locally coupled oscillators and has attracted much attention over the last decade. Recent studies indicate, however, that this state is also possible in globally coupled systems. In the first part of this work, we show under which conditions a chimera-like state appears in a system of globally coupled identical oscillators with intrinsic delayed feedback. The results explain how initially monostable oscillators become effectively bistable in the presence of the coupling and create a mean field that sustains the coexistence of synchronized and desynchronized states. We also discuss other examples in which a chimera-like state arises from the frequency dependence of the phase shift in a bistable system.
In the second part, we investigate this topic further by modeling the influence of an external periodic force on an oscillator with intrinsic delayed feedback. We perform a stability analysis of the synchronized state and construct Arnold tongues. The results explain the formation of the chimera-like state and the hysteretic behavior of the synchronization region. We also consider two parameter sets of the oscillator, with symmetric and asymmetric Arnold tongues, corresponding to the monostable and bistable regimes of the oscillator.
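A minimal numerical picture of an oscillator with intrinsic delayed feedback is an Euler-integrated delay equation for the phase; the parameters below are illustrative, not those used in the thesis.

```python
import math

# Euler sketch of a phase oscillator with delayed self-feedback,
#   dphi/dt = omega + eps * sin(phi(t - tau) - phi(t)).
omega, eps, tau, dt = 1.0, 0.5, 2.0, 0.01
delay_steps = int(tau / dt)
phi = [0.0] * (delay_steps + 1)      # constant history for t <= 0

for _ in range(20000):               # integrate up to t = 200
    delayed = phi[-delay_steps - 1]  # phi(t - tau)
    phi.append(phi[-1] + dt * (omega + eps * math.sin(delayed - phi[-1])))

# Effective rotation frequency over the final stretch: for these
# parameters the delayed feedback pulls it below the natural frequency.
freq = (phi[-1] - phi[-5000]) / (4999 * dt)
print(freq < omega)
```

The locked frequency solves the transcendental condition Ω = ω + ε·sin(−Ωτ), which for these values lies near 0.55, well below ω = 1.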
In the third part, we present the results of work carried out in collaboration with our colleagues from the Psychology Department of the University of Potsdam. The project aimed to study the effect of the cardiac rhythm on human time perception using synchronization analysis. Our contribution was a statistical analysis of the data obtained from an experiment on a free time-interval reproduction task. We examined how one's heartbeat influences time perception and searched for possible phase synchronization between cardiac cycles and time reproduction responses. The findings support the prediction that cardiac cycles can serve as input signals and are used for the reproduction of time intervals in the range of several seconds.
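The phase-synchronization analysis can be sketched as follows: each response time is mapped to a phase within its enclosing cardiac (R-R) interval, and the mean resultant length R quantifies phase clustering (R near 1 means strong locking, near 0 none). The beat and response times below are synthetic demo data, not the experimental recordings.

```python
import math, random

def cardiac_phase(t, beats):
    """Phase in [0, 2*pi) of event time t inside its R-R interval."""
    for b0, b1 in zip(beats, beats[1:]):
        if b0 <= t < b1:
            return 2 * math.pi * (t - b0) / (b1 - b0)
    raise ValueError("event outside recorded beats")

def resultant_length(phases):
    """Mean resultant length of a set of phases (circular statistics)."""
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

random.seed(0)
beats = [float(i) for i in range(101)]                # one beat per second
responses = [b + 0.5 + random.gauss(0, 0.05) for b in beats[:100]]
R = resultant_length([cardiac_phase(t, beats) for t in responses])
print(R > 0.8)  # responses cluster near mid-cycle, so R is high
```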
We present an electrochemical MIP sensor for tamoxifen (TAM), a nonsteroidal anti-estrogen, which is based on the electropolymerisation of an o-phenylenediamine/resorcinol mixture directly on the electrode surface in the presence of the template molecule. Up to now, only bulk MIPs for TAM have been described in the literature, where they are applied for separation in chromatography columns. Electropolymerisation of the monomers in the presence of TAM generated a film which completely suppressed the reduction of ferricyanide. Removal of the template gave a markedly increased ferricyanide signal, which was again suppressed after rebinding, as expected for filling of the cavities by target binding. The decrease of the ferricyanide peak of the MIP electrode depended linearly on the TAM concentration between 1 and 100 nM. The TAM-imprinted electrode showed a 2.3-fold higher recognition of the template molecule itself as compared to its metabolite 4-hydroxytamoxifen, and no cross-reactivity with the anticancer drug doxorubicin was found. Measurements at +1.1 V caused fouling of the electrode surface, whilst pretreatment of TAM with peroxide in the presence of HRP generated an oxidation product that was reducible at 0 mV, thus circumventing polymer formation and electrochemical interferences.
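A linear calibration like the one reported (peak decrease vs. TAM concentration, 1 to 100 nM) can be fitted by least squares; the data points below are invented for illustration, not measured values.

```python
# Least-squares sketch of a linear sensor calibration.
conc = [1, 5, 10, 25, 50, 75, 100]                   # nM
signal = [0.9, 4.8, 10.3, 24.6, 50.9, 74.2, 99.5]    # peak decrease, a.u.

n = len(conc)
mx, my = sum(conc) / n, sum(signal) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, signal))
         / sum((x - mx) ** 2 for x in conc))
intercept = my - slope * mx

# Reading an unknown sample off the calibration line:
unknown_nM = (42.0 - intercept) / slope
print(round(slope, 3), round(unknown_nM, 1))
```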
The large-scale green synthesis of graphene-type two-dimensional materials is still challenging. Herein, we describe the ionothermal synthesis of carbon-based composites from fructose in the iron-containing ionic liquid 1-butyl-3-methylimidazolium tetrachloridoferrate(III), [Bmim][FeCl4], which serves as solvent, catalyst, and template for product formation. The resulting composites consist of oligo-layer graphite nanoflakes and iron carbide particles. The mesoporosity, strong magnetic moment, and high specific surface area of the composites make them attractive for water purification with facile magnetic separation. Moreover, Fe3C-free graphite can be obtained via acid etching, providing access to fairly large amounts of graphite material. The current approach is versatile and scalable, and thus opens the door to the larger-scale ionothermal synthesis of materials that, although not made via a sustainable process, are useful for water treatment, such as the removal of organic molecules.
The synthesis of two novel types of π-expanded coumarins has been developed. A modified Knoevenagel bis-condensation afforded 3,9-dioxa-perylene-2,8-diones. Subsequent oxidative aromatic coupling or a light-driven electrocyclization reaction led to dibenzo-1,7-dioxacoronene-2,8-dione. Unparalleled synthetic simplicity, straightforward purification, and superb optical properties give these perylene and coronene analogs the potential for various applications.
Anti-doping institutes worldwide strive to identify athletes who use prohibited substances or methods. The necessary test systems are continuously refined, and new methods are established in response to new active compounds from the pharmaceutical industry. The aim of this thesis was to develop a parallel multi-component analysis based on antigen-antibody reactions, primarily to reduce the required sample volume and assay time compared with a standard detection procedure. Besides the use of a multiplex approach and microarray technology, further challenges were the accuracy of all measured parameters, the stability of the experimental setup, and the performance in a single-blind study. The requirement that the multiplex approach yield no false signals despite structurally similar analytes was met through the targeted combination of specific antibodies. To this end, cross-reactivity tests on the microarray were complemented by successful Western blot experiments. Antibodies that met the requirements in these experiments were used to determine the lowest detectable concentration. By optimizing the assay conditions, in particular by adding Tween to the washing solution, the background fluorescence was reduced on both glass and plastic substrates, thus improving the signal-to-background ratio. In the experiments to determine the limit of quantification, concentrations of 10 mU/ml were determined for intact human chorionic gonadotropin (hCG-i), 3.6 mU/ml for its beta subunit (hCG-beta), and 10 mU/ml for luteinizing hormone (LH). The value determined in serum for hCG-i corresponds to the threshold of 5 mU/ml in urine required by the World Anti-Doping Agency (WADA).
In addition to determining the limits of quantification, these were measured with respect to matrix effects occurring in serum and blood. As the microarray cross-reactivity experiments show, LH, hCG-i, and hCG-beta can also be measured in serum and blood. A performance analysis in a single-blind design with 130 serum samples was likewise realized with this system. The evaluated samples were subsequently analyzed using a receiver operating characteristic curve, and the diagnostic specificity was determined. For the LH measurements, a sensitivity and specificity of 100% were achieved; all negative and positive samples were thus interpreted unambiguously. For hCG-beta, a specificity of 100% and a sensitivity of 97% were achieved. The hCG-i samples were measured with a specificity of 100% and a sensitivity of 97.5%. To demonstrate that this setup delivers stable signals over several weeks when measuring identical samples, a twelve-week stability test was successfully performed for all parameters in serum and blood. In summary, a multi-component analysis was successfully developed in this thesis as a multiplex approach on a microarray. The performance analysis and the stability test already indicate the potential applicability of this assay in the context of doping analysis.
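Diagnostic sensitivity and specificity as used in the blinded performance analysis follow directly from a confusion table. The counts below are hypothetical, chosen only to reproduce the reported hCG-beta figures (100% specificity, 97% sensitivity) within the 130-sample total; the study's actual positive/negative split is not given here.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split: 100 positives, 30 negatives (130 samples total).
sens, spec = sens_spec(tp=97, fn=3, tn=30, fp=0)
print(sens, spec)  # 0.97 1.0
```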
Graphitic carbon nitride, g-C₃N₄, is a promising organic photo-catalyst for a variety of redox reactions. In order to improve its efficiency in a systematic manner, however, a fundamental understanding of the microscopic interaction between catalyst, reactants and products is crucial. Here we present a systematic study of water adsorption on g-C₃N₄ by means of density functional theory and the density functional based tight-binding method as a prerequisite for understanding photocatalytic water splitting. We then analyze this prototypical redox reaction on the basis of a thermodynamic model providing an estimate of the overpotential for both water oxidation and H⁺ reduction. While the latter is found to occur readily upon irradiation with visible light, we derive a prohibitive overpotential of 1.56 eV for the water oxidation half reaction, comparing well with the experimental finding that in contrast to H₂ production O₂ evolution is only possible in the presence of oxidation cocatalysts.
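The thermodynamic-model estimate can be sketched as a computational-hydrogen-electrode-style analysis of the four-electron water oxidation. The four step free energies below are hypothetical, chosen only so that the largest step reproduces the 1.56 eV overpotential quoted above; they are not the values computed in the study.

```python
# Overpotential = largest proton-coupled electron-transfer step minus
# the equilibrium potential per electron.
E_EQ = 1.23  # eV per electron, equilibrium potential of water oxidation

dG = [1.00, 2.79, 0.63, 0.50]          # eV, hypothetical step energies
assert abs(sum(dG) - 4 * E_EQ) < 1e-6  # the four steps must sum to 4 x 1.23 eV

overpotential = max(dG) - E_EQ
print(round(overpotential, 2))         # -> 1.56
```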
Portal Wissen = Time
(2014)
“What then is time?”, Augustine of Hippo sighs melancholically in Book XI of “Confessions” and continues, “If no one asks me, I know; if I want to explain it to a questioner, I don’t know.” Even today, 1584 years after Augustine, time still appears mysterious. Treatises about the essence of time fill whole libraries – and this magazine.
However, questions of essence are alien to modern sciences. Time is – at least in physics – unproblematic: "Time is defined so that motion looks simple," one explains briefly and dryly, thereby waving goodbye to Augustine's riddle and to the Newtonian concept of absolute time, whose mathematical flow earthly instruments can in any case only approximate.
In our everyday language and even in science we still speak of the flow of time, but time has not been a natural given for quite a while now. It is rather a conventional order parameter for change and movement. Processes are arranged by using a class of processes as a counting system in order to compare other processes and to organize them with the help of the temporal categories "before", "during", and "after".
During Galileo’s time one’s own pulse was seen as the time standard for the flight of cannon balls. More sophisticated examination methods later made this seem too impractical: the distance-time diagrams of free-flying cannon balls turned out to be rather imprecise, difficult to replicate, and in no way “simple”. Nowadays, we use caesium atoms. A process is said to take one second when a caesium-133 atom completes 9,192,631,770 periods of the radiation corresponding to the transition between two hyperfine levels of the ground state. A meter is the length of the path travelled by light in a vacuum in exactly 1/299,792,458 of a second. Fortunately, these data are hard-coded in the Global Positioning System GPS so users do not have to reenter them each time they want to know where they are. In the future, however, they might have to download an app because the time standard has been replaced by sophisticated transitions in ytterbium.
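The two definitions quoted above can be combined in a small arithmetic check: how far light travels during a single caesium oscillation, and how many oscillations elapse while light covers one metre.

```python
# Both constants are exact by definition of the metre and the second.
C = 299_792_458          # m/s, speed of light in vacuum
CS_FREQ = 9_192_631_770  # Hz, Cs-133 hyperfine transition frequency

length_per_tick = C / CS_FREQ       # metres light covers per Cs oscillation
ticks_per_metre = CS_FREQ / C       # Cs oscillations per metre of light travel
print(length_per_tick)              # about 0.0326 m, i.e. ~3.3 cm
print(ticks_per_metre)              # about 30.66 oscillations
```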
The conventional character of the time concept should not tempt us to believe that everything is somehow relative and, as a result, arbitrary. The relation of one’s own pulse to an atomic clock is absolute and as real as the relation of an hourglass to the path of the sun. The exact sciences are relational sciences. They are not about the thing-in-itself as Newton and Kant dreamt, but rather about relations, as Leibniz and, later, Mach pointed out.
It is not surprising that the physical time standard turned out to be rather impractical for other scientists. The psychology of time perception tells us – and you will all agree – that the perceived age is quite different from the physical age. The older we get the shorter the years seem. If we simply assume that perceived duration is inversely related to physical age and that a 20-year old also perceives a physical year as a psychological one, we come to the surprising discovery that at 90 years we are 90 years old. With an assumed life expectancy of 90 years, 67% (or 82%) of your felt lifetime is behind you at the age of 20 (or 40) physical years.
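The back-of-the-envelope model above can be checked numerically. The felt rate is taken as inversely proportional to physical age and calibrated so that a 20-year-old feels one year per year, as the text states; accumulating felt time from age one is an added assumption of this sketch (the 1/t rate diverges at zero).

```python
import math

CALIBRATION_AGE = 20   # age at which one felt year equals one physical year
LIFE_EXPECTANCY = 90   # assumed physical life span in years

def felt_years(age, start=1.0):
    """Felt time accumulated between `start` and `age`,
    with felt rate CALIBRATION_AGE / age."""
    return CALIBRATION_AGE * math.log(age / start)

total = felt_years(LIFE_EXPECTANCY)
print(round(total))                         # -> 90: at 90 one "is" 90
print(round(100 * felt_years(20) / total))  # -> 67 % behind you at age 20
print(round(100 * felt_years(40) / total))  # -> 82 % behind you at age 40
```

The closed form reproduces both figures from the text because 20·ln(90) is almost exactly 90.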
Before we start to wallow in melancholy in the face of the “relativity of time”, let me again quote Augustine: “But at any rate this much I dare affirm I know: that if nothing passed there would be no past time; if nothing were approaching, there would be no future time; if nothing were, there would be no present time.” Well, or as Bob Dylan sings, “The times they are a-changin’”.
I wish you an exciting time reading this issue.
Prof. Martin Wilkens
Professor of Quantum Optics
Manifestations of Spatial Identity in Former Sudeten German Areas of the Czech Republic
(2014)
Das tschechische Grenzgebiet ist eine der Regionen in Europa, die in der Folge des Zweiten Weltkrieges am gravierendsten von Umbrüchen in der zuvor bestehenden Bevölkerungsstruktur betroffen waren. Der erzwungenen Aussiedlung eines Großteils der ansässigen Bevölkerung folgten die Neubesiedlung durch verschiedenste Zuwanderergruppen sowie teilweise langanhaltende Fluktuationen der Einwohnerschaft. Die Stabilisierung der Bevölkerung stand sodann unter dem Zeichen der sozialistischen Gesellschafts- und Wirtschaftsordnung, die die Lebensweise und Raumwahrnehmung der neuen Einwohner nachhaltig prägte. Die Grenzöffnung von 1989, die politische Transformation sowie die Integration der Tschechischen Republik in die Europäische Union brachten neue demographische und sozioökonomische Entwicklungen mit sich. Sie schufen aber auch die Bedingungen dafür, sich neu und offen auch mit der spezifischen Geschichte des ehemaligen Sudetenlandes sowie mit dem Zustand der gegenwärtigen Gesellschaft in diesem Gebiet auseinanderzusetzen.
Using two example regions, this thesis examines which spatial conceptions and attachments to place exist among the population now living in the former Sudeten German areas, and how the differing spatial-structural conditions influence them. Particular attention is paid to the social component of the formation of spatial identity, that is, to the role of attributions of meaning to spatial elements within social communication and interaction. This appears especially relevant in an area characterized by a certain heterogeneity of its inhabitants with regard to their ethnic, cultural, or biographical backgrounds. Finally, the study determines which impulses a pronounced spatial identity may provide for the development of the area.
Reviewed work:
George, Rosemary Marangoly, Indian English and the Fiction of National Literature - Cambridge: Cambridge University Press, 2013. - Hb. viii, 285 pp. - (Zeitschrift für Anglistik und Amerikanistik ; 62(4)) ISBN 978-1-107-04000-7.
Magnetite is an iron oxide that is ubiquitous in rocks and is usually deposited as small nanoparticulate matter among other rock material. It differs from most other iron oxides in that it contains both divalent and trivalent iron; consequently, it has a special crystal structure and unique magnetic properties. These properties are used for paleoclimatic reconstructions, where naturally occurring magnetite helps in understanding former geological ages. Magnetic properties are also exploited in bio- and nanotechnological applications: synthetic magnetite serves as a contrast agent in MRI, is used in biosensing and hyperthermia, and is employed in storage media.
Magnetic properties are strongly size-dependent, and achieving size control under preferably mild synthesis conditions is of interest for obtaining particles with the required properties. Using a custom-made setup, it was possible to synthesize stable single-domain magnetite nanoparticles with the co-precipitation method. Furthermore, it was shown that magnetite formation is temperature-dependent, resulting in larger particles at higher temperatures. However, mechanistic understanding of the details is still incomplete.
Formation of magnetite from solution was shown to proceed from nanoparticulate matter rather than from solvated ions. The theoretical framework of such processes has only started to be described, partly due to the lack of kinetic and thermodynamic data. Magnetite nanoparticles were synthesized at different temperatures, and an Arrhenius plot was used to determine an activation energy for crystal growth of 28.4 kJ mol⁻¹, which led to the conclusion that nanoparticle diffusion is the rate-determining step.
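The activation-energy extraction mentioned above follows the standard Arrhenius analysis: the slope of ln k versus 1/T equals -Ea/R. A minimal sketch with hypothetical rate constants (not the thesis data):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def activation_energy(temps_K, rate_consts):
    """Ea from the slope of ln k versus 1/T (ordinary least squares)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rate_consts]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R  # J/mol

# hypothetical rate constants generated with Ea = 28.4 kJ/mol
temps = [283.0, 298.0, 313.0, 333.0]
ks = [1e7 * math.exp(-28.4e3 / (R * T)) for T in temps]
print(round(activation_energy(temps, ks) / 1e3, 1))  # 28.4 (kJ/mol)
```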
Furthermore, a study of the alteration of magnetite particles of different sizes as a function of their storage conditions is presented. The magnetic properties depend not only on particle size but also on the structure of the oxide, because magnetite oxidizes to maghemite under environmental conditions. The dynamics of this process have not been well described. Smaller nanoparticles are shown to oxidize more rapidly than larger ones, and the lower the storage temperature, the less oxidation is measured. In addition, the magnetic properties of the altered particles are not dramatically decreased, suggesting that this alteration will not impair the use of such nanoparticles as medical carriers.
Finally, the effect of biological additives on magnetite formation was investigated. Magnetotactic bacteria are able to synthesize and align magnetite nanoparticles of well-defined size and morphology thanks to the involvement of special proteins with specific binding properties. Based on this model of morphology control, phage display experiments were performed to determine peptide sequences that preferentially bind to (111) magnetite faces, with the aim of controlling the shape of magnetite nanoparticles during formation. Magnetotactic bacteria are also able to control the intracellular redox potential with proteins called magnetochromes. MamP is such a protein, and its oxidizing nature was studied in vitro via biomimetic magnetite formation experiments based on ferrous ions; magnetite and further trivalent iron oxides were found.
This work helps in understanding basic mechanisms of magnetite formation and gives insight into non-classical crystal growth. In addition, it is shown that the alteration of magnetite nanoparticles is mainly due to oxidation to maghemite and does not significantly degrade the magnetic properties. Finally, the biomimetic experiments help in understanding the role of MamP within the bacteria, and a first step was taken towards morphology control in magnetite formation via co-precipitation.
New porous materials based on covalently connected monomers are presented. The key step of the synthesis is an acetalisation reaction. In previous years we used acetalisation reactions extensively to build up various molecular rods. Building on this approach, we investigated porous polymeric materials. Here we present the results of these studies on the synthesis of 1D polyacetals and porous 3D polyacetals. Scrambling experiments with 1D acetals proved that exchange reactions occur between different building blocks (evidenced by MALDI-TOF mass spectrometry). Based on these results, we synthesized porous 3D polyacetals under the same mild conditions.
Stress drop is a key factor in earthquake mechanics and engineering seismology. However, stress drop calculations based on fault slip can be significantly biased, particularly by the subjectively determined smoothing conditions in traditional least-squares slip inversion. In this study, we introduce a mechanically constrained Bayesian approach to simultaneously invert for fault slip and stress drop based on geodetic measurements. A Gaussian distribution for stress drop is implemented in the inversion as a prior. We performed several synthetic tests to evaluate the stability and reliability of the inversion approach, considering different fault discretizations, fault geometries, utilized datasets, and variability of the slip direction. We finally apply the approach to the 2010 M8.8 Maule earthquake and invert for coseismic slip and stress drop simultaneously. Two fault geometries from the literature are tested. Our results indicate that the derived slip models based on both fault geometries are similar, showing major slip north of the hypocenter and relatively weak slip in the south, as indicated in the slip models of other studies. The derived mean stress drop is 5-6 MPa, close to the stress drop of ~7 MPa that was independently determined from force balance in this region by Luttrell et al. (J Geophys Res, 2011). These findings indicate that stress drop values can be consistently extracted from geodetic data.
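The role of the Gaussian stress-drop prior can be illustrated with a one-parameter toy version of such a mechanically constrained inversion; all numbers are hypothetical, and the real inversion is of course high-dimensional:

```python
def map_slip(d, g, sigma_d, drop_prior, c, sigma_drop):
    """MAP estimate of a single slip value s for data d = g*s + noise,
    with a Gaussian prior on the stress drop c*s (toy illustration).
    Minimizes (d - g*s)^2/sigma_d^2 + (c*s - drop_prior)^2/sigma_drop^2."""
    num = g * d / sigma_d**2 + c * drop_prior / sigma_drop**2
    den = g**2 / sigma_d**2 + c**2 / sigma_drop**2
    return num / den

# a loose prior reproduces the pure least-squares answer d/g,
# a tight prior pulls the slip towards drop_prior/c
print(round(map_slip(2.0, 1.0, 0.1, 5.0, 1.0, 1e6), 3))   # 2.0
print(round(map_slip(2.0, 1.0, 0.1, 5.0, 1.0, 1e-6), 3))  # 5.0
```

The prior thus acts as a soft constraint whose weight is set by its variance, which is the mechanism the abstract describes.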
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
Background Transcatheter aortic-valve implantation (TAVI) is an established alternative therapy in patients with severe aortic stenosis and a high surgical risk. Despite a rapid growth in its use, very few data exist about the efficacy of cardiac rehabilitation (CR) in these patients. We assessed the hypothesis that patients after TAVI benefit from CR, compared to patients after surgical aortic-valve replacement (sAVR).
Methods From September 2009 to August 2011, 442 consecutive patients after TAVI (n=76) or sAVR (n=366) were referred to a 3-week CR. Data regarding patient characteristics as well as changes in functional status (6-min walk test (6-MWT), bicycle exercise test) and emotional status (Hospital Anxiety and Depression Scale) were retrospectively evaluated and compared between groups after propensity score adjustment.
Results Patients after TAVI were significantly older (p<0.001), more often female (p<0.001), and more often had coronary artery disease (p=0.027), renal failure (p=0.012), and a pacemaker (p=0.032). During CR, the distance in the 6-MWT (both groups p<0.001) and exercise capacity (sAVR p<0.001, TAVI p<0.05) increased significantly in both groups. Only patients after sAVR demonstrated a significant reduction in anxiety and depression (p<0.001). After propensity score adjustment, changes were not significantly different between sAVR and TAVI, with the exception of the 6-MWT (p=0.004).
Conclusions Patients after TAVI benefit from cardiac rehabilitation despite their older age and comorbidities. CR is a helpful tool for maintaining independence in activities of daily living and participation in socio-cultural life.
Background: Chronic kidney disease (CKD) is a frequent comorbidity among elderly patients and those with cardiovascular disease. CKD carries prognostic relevance. We aimed to describe patient characteristics, risk factor management and control status of patients in cardiac rehabilitation (CR), differentiated by presence or absence of CKD.
Design and methods: Data from 92,071 inpatients with adequate information to calculate the glomerular filtration rate (GFR) based on the Cockcroft-Gault formula were analyzed at the beginning and the end of a 3-week CR stay. CKD was defined as an estimated GFR <60 ml/min/1.73 m².
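For illustration, the Cockcroft-Gault estimate named above can be sketched as follows; strictly it estimates creatinine clearance in ml/min, which the study uses as a GFR proxy, and the example patients are hypothetical:

```python
def cockcroft_gault(age, weight_kg, serum_creat_mg_dl, female):
    """Estimated creatinine clearance (ml/min) by the Cockcroft-Gault
    formula: (140 - age) * weight / (72 * Scr), times 0.85 for women."""
    crcl = (140 - age) * weight_kg / (72.0 * serum_creat_mg_dl)
    return 0.85 * crcl if female else crcl

# the study's CKD criterion is an estimate below 60
print(round(cockcroft_gault(58, 80, 1.0, female=False), 1))  # 91.1 -> no CKD
print(round(cockcroft_gault(72, 60, 1.4, female=True), 1))   # 34.4 -> CKD
```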
Results: Compared with non-CKD patients, CKD patients were significantly older (72.0 versus 58.0 years) and more often had diabetes mellitus, arterial hypertension, and atherothrombotic manifestations (previous stroke, peripheral arterial disease), but fewer were current or previous smokers or had a family history of CHD. Exercise capacity was much lower in CKD (59 vs. 92 Watts). Fewer patients with CKD were treated with percutaneous coronary intervention (PCI), but more had coronary artery bypass graft (CABG) surgery. Patients with CKD less frequently received statins, acetylsalicylic acid (ASA), clopidogrel, beta blockers, and angiotensin converting enzyme (ACE) inhibitors than non-CKD patients, and more frequently received angiotensin receptor blockers, insulin, and oral anticoagulants. In CKD, mean low density lipoprotein cholesterol (LDL-C), total cholesterol, and high density lipoprotein cholesterol (HDL-C) were slightly higher at baseline, while triglycerides were substantially lower. This lipid pattern did not change at the discharge visit, but overall control rates for all described parameters (with the exception of HDL-C) improved substantially. At discharge, systolic blood pressure (BP) was higher in CKD (124 versus 121 mmHg) and diastolic BP was lower (72 versus 74 mmHg). At discharge, 68.7% of CKD versus 71.9% of non-CKD patients had LDL-C <100 mg/dl. Physical fitness on exercise testing improved substantially in both groups. When the Modification of Diet in Renal Disease (MDRD) formula was used for CKD classification, there was no clinically relevant change in these results.
Conclusion: Within a short period of 3-4 weeks, CR led to substantial improvements in key risk factors such as lipid profile, blood pressure, and physical fitness for all patients, even if CKD was present.
The International Association for the Evaluation of Educational Achievement (IEA) was formed in the 1950s (Postlethwaite, 1967). Since that time, the IEA has conducted many studies in the area of mathematics, such as the First International Mathematics Study (FIMS) in 1964, the Second International Mathematics Study (SIMS) in 1980-1982, and a series of studies beginning with the Third International Mathematics and Science Study (TIMSS), which has been conducted every 4 years since 1995. According to Stigler et al. (1999), in the FIMS and the SIMS, U.S. students achieved low scores in comparison with students in other countries (p. 1). The TIMSS 1995 “Videotape Classroom Study” was therefore a complement to the earlier studies, conducted to learn “more about the instructional and cultural processes that are associated with achievement” (Stigler et al., 1999, p. 1). The TIMSS Videotape Classroom Study is known today as the TIMSS Video Study. From the findings of the TIMSS 1995 Video Study, Stigler and Hiebert (1999) likened teaching to “mountain ranges poking above the surface of the water,” implying that we might see the mountaintops but not the hidden parts underneath (pp. 73-78). By watching the videotaped lessons from Germany, Japan, and the United States again and again, they discovered that “the systems of teaching within each country look similar from lesson to lesson. At least, there are certain recurring features [or patterns] that typify many of the lessons within a country and distinguish the lessons among countries” (pp. 77-78). They also discovered that “teaching is a cultural activity,” so the systems of teaching “must be understood in relation to the cultural beliefs and assumptions that surround them” (pp. 85, 88). From this viewpoint, one of the purposes of this dissertation was to study some cultural aspects of mathematics teaching and relate the results to mathematics teaching and learning in Vietnam.
Another research purpose was to carry out a video study in Vietnam to identify the characteristics of Vietnamese mathematics teaching and compare them with those of other countries. In particular, this dissertation carried out the following research tasks:
- Studying the characteristics of teaching and learning in different cultures and relating the results to mathematics teaching and learning in Vietnam
- Introducing the TIMSS, the TIMSS Video Study, and the advantages of using video studies to investigate mathematics teaching and learning
- Carrying out the video study in Vietnam to identify the image, scripts and patterns, and the lesson signature of eighth-grade mathematics teaching in Vietnam
- Comparing some aspects of mathematics teaching in Vietnam and other countries and identifying the similarities and differences across countries
- Studying the demands and challenges of innovating mathematics teaching methods in Vietnam – lessons from the video studies
Hopefully, this dissertation will be a useful reference for pre-service teachers at education universities to understand the nature of teaching and to develop their teaching careers.
Modern motor vehicles are equipped with a multitude of sensors required for smooth technical operation. These include vehicle-specific sensors (e.g., engine speed and vehicle speed) as well as environment-specific sensors (e.g., air pressure and ambient temperature). Increasing technical connectivity makes it possible to use these data from the vehicle electronics outside the vehicle for a wide range of purposes.
This thesis aims to contribute to making this new kind of mass data usable as geoinformation in the sense of the "Extended Floating Car Data" (XFCD) concept and to applying it to spatio-temporal visualizations (for visual analysis). In this context, the perspective of environmental and traffic monitoring is specifically considered, with requirements and potentials examined by means of expert interviews. The question arises which data the vehicle electronics can supply and how these can be captured, processed, visualized, and made publicly available in a largely automated way. In addition to the theoretical and technical foundations of data acquisition and use, the focus lies on methods of cartographic visualization. A further question is whether a technical implementation is possible using open source software exclusively. The result of the thesis is a two-part approach comprising, on the one hand, the visualization for an exemplary application scenario and, on the other hand, a prototypical implementation ranging from data acquisition in the vehicle, using the legally mandated on-board diagnostics ("On Board Diagnose", OBD) interface and a smartphone-based workflow, to web-based visualization.
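The on-board diagnostics interface mentioned above exposes standardized parameters (SAE J1979 mode-01 PIDs). As a minimal sketch of the data-acquisition step, hypothetical raw response bytes are decoded into engine speed and vehicle speed:

```python
# decode two standard OBD-II mode-01 PIDs from raw response data bytes
# (scaling per SAE J1979: engine RPM = (256*A + B) / 4, speed = A km/h)
def decode_pid(pid, data):
    if pid == 0x0C:                      # engine RPM
        return (256 * data[0] + data[1]) / 4.0
    if pid == 0x0D:                      # vehicle speed in km/h
        return float(data[0])
    raise ValueError("unsupported PID")

print(decode_pid(0x0C, [0x1A, 0xF8]))  # 1726.0 rpm
print(decode_pid(0x0D, [0x55]))        # 85.0 km/h
```

In a real pipeline such decoded values would then be timestamped, geotagged via the smartphone's GPS, and forwarded to the visualization backend.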
This article addresses two aspects of leadership research that have so far received little attention: leadership behavior in the public sector and the factors that influence leadership behavior. Using a case study of the German Federal Employment Agency (Bundesagentur für Arbeit), hypotheses about determinants of leadership behavior are developed exploratively. The study finds that an often assumed leadership gap in the public sector cannot be confirmed. The pronounced leadership behavior observed in the case study is attributed to the particular design of the agency's management system, including its performance management system and its leadership selection and development. The article closes with recommendations for further research on leadership in the public sector.
Deciphering the functioning of biological networks is one of the central tasks in systems biology. In particular, signal transduction networks are crucial for the understanding of the cellular response to external and internal perturbations. Importantly, in order to cope with the complexity of these networks, mathematical and computational modeling is required. We propose a computational modeling framework in order to achieve more robust discoveries in the context of logical signaling networks. More precisely, we focus on modeling the response of logical signaling networks by means of automated reasoning using Answer Set Programming (ASP). ASP provides a declarative language for modeling various knowledge representation and reasoning problems. Moreover, available ASP solvers provide several reasoning modes for assessing the multitude of answer sets. Therefore, leveraging its rich modeling language and its highly efficient solving capacities, we use ASP to address three challenging problems in the context of logical signaling networks: learning of (Boolean) logical networks, experimental design, and identification of intervention strategies. Overall, the contribution of this thesis is three-fold. Firstly, we introduce a mathematical framework for characterizing and reasoning on the response of logical signaling networks. Secondly, we contribute to a growing list of successful applications of ASP in systems biology. Thirdly, we present software providing a complete pipeline for automated reasoning on the response of logical signaling networks.
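As a toy illustration of the kind of logical signaling network modeled here (the thesis encodes such networks declaratively in ASP; the Python below is only a caricature), consider a hypothetical three-node cascade with an inhibitor, iterated synchronously to a fixpoint:

```python
# toy Boolean signaling network: a hypothetical EGF -> Raf -> MEK -> ERK
# cascade with a Raf inhibitor; updates are synchronous
rules = {
    "raf": lambda s: s["egf"] and not s["inhibitor"],
    "mek": lambda s: s["raf"],
    "erk": lambda s: s["mek"],
}

def step(state):
    new = dict(state)
    for node, f in rules.items():
        new[node] = f(state)
    return new

def fixpoint(state, max_iter=10):
    """Iterate synchronous updates until the state stops changing."""
    for _ in range(max_iter):
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt
    return state

start = {"egf": True, "inhibitor": False, "raf": False, "mek": False, "erk": False}
print(fixpoint(start)["erk"])  # True: the signal reaches ERK
```

Learning such rules from perturbation data, choosing informative experiments, and finding interventions that switch a readout like "erk" are exactly the three problems the thesis tackles with ASP.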
This study aims to further the mechanistic understanding of toxic modes of action after chronic inorganic arsenic exposure. Therefore, long-term incubation studies in cultured cells were carried out to display chronically attained changes that cannot be observed in the generally applied in vitro short-term incubation studies. In particular, the cytotoxic, genotoxic, and epigenetic effects of an up to 21-day incubation of human urothelial (UROtsa) cells with pico- to nanomolar concentrations of iAsIII and its metabolite thio-DMAV were compared. After 21 days of incubation, cytotoxic effects were strongly enhanced in the case of iAsIII and might partly be due to glutathione depletion and genotoxic effects on the chromosomal level. These results are in strong contrast to cells exposed to thio-DMAV: cells seemed to be able to adapt to this arsenical, as indicated among others by an increase in the cellular glutathione level. Most interestingly, picomolar concentrations of both iAsIII and thio-DMAV caused global DNA hypomethylation in UROtsa cells, which was quantified in parallel by 5-medC immunostaining and a newly established, reliable, high-resolution mass spectrometry (HRMS)-based test system. This is the first time that epigenetic effects have been reported for thio-DMAV; iAsIII-induced epigenetic effects occur at concentrations at least 8000-fold lower than reported in vitro before. The fact that both arsenicals cause DNA hypomethylation at very low, exposure-relevant concentrations in human urothelial cells suggests that this epigenetic effect might contribute to inorganic arsenic-induced carcinogenicity, which has to be further investigated in future studies.
The complementary advantages of high-rate Global Positioning System (GPS) and accelerometer observations for measuring seismic ground motion have been recognised in previous research. Here we propose an approach for the tight integration of GPS and accelerometer measurements. The baseline shifts of the accelerometer are introduced as unknown parameters and estimated as a random walk process in the Precise Point Positioning (PPP) solution. To demonstrate the performance of the new strategy, we carried out several experiments using collocated GPS receivers and accelerometers. The experimental results show that the baseline shifts of the accelerometer are automatically corrected and that high-precision coseismic information on strong ground motion can be obtained in real time. Additionally, the convergence and precision of the PPP solution are improved by the combined processing.
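Estimating a parameter as a random-walk process, as done for the baseline shifts here, is commonly realized with a Kalman filter. A minimal one-dimensional sketch with hypothetical residual measurements (a caricature, not the paper's PPP estimator):

```python
# minimal random-walk Kalman filter: track a slowly drifting baseline bias
# from noisy residual measurements (a 1-D caricature of the PPP estimation)
def kalman_random_walk(measurements, q=1e-4, r=0.01):
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q               # predict: random-walk process noise inflates variance
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with measurement z
        p *= (1.0 - k)       # posterior variance
        estimates.append(x)
    return estimates

# a constant 0.5 baseline shift is recovered quickly
est = kalman_random_walk([0.5] * 200)
print(round(est[-1], 3))  # 0.5
```

The process-noise setting q controls how fast the estimated baseline is allowed to wander, which is the tuning knob behind the random-walk model in the abstract.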
Software maintenance encompasses any changes made to a software system after its initial deployment and is thereby one of the key phases in the typical software-engineering lifecycle. In software maintenance, we primarily need to understand structural and behavioral aspects, which are difficult to obtain, e.g., by code reading. Software analysis is therefore a vital tool for maintaining these systems: it provides the – preferably automated – means to extract and evaluate information from artifacts such as software structure, runtime behavior, and related processes. However, such analysis typically results in massive raw data, so that even experienced engineers face difficulties directly examining, assessing, and understanding these data. Among other things, they require tools with which to explore the data if no clear question can be formulated beforehand. For this, software analysis and visualization provide their users with powerful interactive means. These enable the automation of tasks and, particularly, the acquisition of valuable and actionable insights into the raw data. For instance, one means of exploring runtime behavior is trace visualization. This thesis aims at extending and improving the tool set for visual software analysis by concentrating on several open challenges in the fields of dynamic and static analysis of software systems. It develops a series of concepts and tools for the exploratory visualization of the respective data to support users in finding and retrieving information on the system artifacts concerned. This is a difficult task due to the lack of appropriate visualization metaphors; in particular, the visualization of complex runtime behavior poses various questions and challenges of both a technical and a conceptual nature.
This work focuses on a set of visualization techniques for visually representing control-flow related aspects of software traces from shared-memory software systems: A trace-visualization concept based on icicle plots aids in understanding both single-threaded as well as multi-threaded runtime behavior on the function level. The concept’s extensibility further allows the visualization and analysis of specific aspects of multi-threading such as synchronization, the correlation of such traces with data from static software analysis, and a comparison between traces. Moreover, complementary techniques for simultaneously analyzing system structures and the evolution of related attributes are proposed. These aim at facilitating long-term planning of software architecture and supporting management decisions in software projects by extensions to the circular-bundle-view technique: An extension to 3-dimensional space allows for the use of additional variables simultaneously; interaction techniques allow for the modification of structures in a visual manner. The concepts and techniques presented here are generic and, as such, can be applied beyond software analysis for the visualization of similarly structured data. The techniques' practicability is demonstrated by several qualitative studies using subject data from industry-scale software systems. The studies provide initial evidence that the techniques' application yields useful insights into the subject data and its interrelationships in several scenarios.
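The data underlying an icicle plot of function-level runtime behavior can be illustrated by folding a flat enter/exit trace into per-function inclusive times; the trace below is hypothetical:

```python
# fold a flat enter/exit call trace into inclusive durations per function,
# the quantity an icicle plot lays out along the time axis (hypothetical trace)
events = [
    ("enter", "main", 0), ("enter", "parse", 1), ("exit", "parse", 4),
    ("enter", "run", 5), ("exit", "run", 9), ("exit", "main", 10),
]

def inclusive_times(events):
    stack, times = [], {}
    for kind, fn, t in events:
        if kind == "enter":
            stack.append((fn, t))
        else:                      # "exit": close the frame opened last
            name, t0 = stack.pop()
            times[name] = times.get(name, 0) + (t - t0)
    return times

print(inclusive_times(events))  # {'parse': 3, 'run': 4, 'main': 10}
```

An icicle plot then stacks these frames by call depth, with each rectangle's width proportional to its inclusive duration; a multi-threaded variant would keep one such stack per thread.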
Bolivia is one of the poorest countries in Latin America. This study analyzes whether rural poverty increases the incidence of food insecurity and whether food insecurity perpetuates the condition of poverty among the rural poor in Bolivia. In order to achieve this aim, the risks that households face and the capacity of households to implement coping strategies in order to mitigate vulnerability shocks are identified. We suggest that efforts by households to become food secure may be difficult in rural areas because of poverty and the vulnerability associated with a lack of physical assets, low levels of human capital, poor infrastructure, and poor health; as well as the precarious regional environment aggravating the severity of vulnerability to food insecurity.
In processing and data storage, mainly ferromagnetic (FM) materials are used. As physical limits are approached, new concepts have to be found for faster and smaller switches, higher data densities, and greater energy efficiency. Some of the discussed new concepts involve the material classes of correlated oxides and materials with antiferromagnetic coupling. Their applicability depends critically on their switching behavior, i.e., how fast and how energy-efficiently material properties can be manipulated. This thesis presents investigations of ultrafast non-equilibrium phase transitions in such new materials. In transition metal oxides (TMOs), the coupling of different degrees of freedom and the resulting low-energy excitation spectrum often lead to spectacular changes of macroscopic properties (colossal magnetoresistance, superconductivity, metal-to-insulator transitions), often accompanied by nanoscale order of spins, charges, and orbital occupation and by lattice distortions, which make these materials attractive. Magnetite served as a prototype for functional TMOs showing a metal-to-insulator transition (MIT) at T = 123 K. By probing the charge and orbital order as well as the structure after an optical excitation, we found that the electronic order and the structural distortion, characteristics of the insulating phase in thermal equilibrium, are destroyed within the experimental resolution of 300 fs. The MIT itself occurs on a 1.5 ps timescale. This shows that MITs in functional materials are several thousand times faster than switching processes in semiconductors. Recently, ferrimagnetic and antiferromagnetic (AFM) materials have attracted interest. It was shown in ferrimagnetic GdFeCo that the transfer of angular momentum between two opposed FM subsystems with different time constants leads to a switching of the magnetization after laser pulse excitation.
In addition, it was theoretically predicted that demagnetization dynamics in AFM materials should occur faster than in FM materials, as no net angular momentum has to be transferred out of the spin system. We investigated two different AFM materials in order to learn more about their ultrafast dynamics. In Ho, a metallic AFM below T ≈ 130 K, we found that the AFM order can be destroyed not only faster but also ten times more energy-efficiently than order in comparable FM metals. In EuTe, an AFM semiconductor below T ≈ 10 K, we compared the loss of magnetization and the laser-induced structural distortion in one and the same experiment. Our experiment shows that they are effectively disentangled. An exception is an ultrafast release of lattice dynamics, which we assign to the release of magnetostriction. The results presented here were obtained with time-resolved resonant soft x-ray diffraction at the Femtoslicing source of the Helmholtz-Zentrum Berlin and at the free-electron laser in Stanford (LCLS). In addition, the development and setup of a new UHV diffractometer for these experiments are reported.
In the field of disk-based parallel database management systems, a great variety of solutions exists, based either on a shared-storage or a shared-nothing architecture. In contrast, main memory-based parallel database management systems are dominated solely by the shared-nothing approach, as it preserves the in-memory performance advantage by processing data locally on each server. We argue that this unilateral development will cease due to the combination of the following three trends: a) Today's network technology features remote direct memory access (RDMA) and narrows the performance gap between accessing main memory in a local server and in a remote server to, and even below, a single order of magnitude. b) Modern storage systems scale gracefully, are elastic, and provide high availability. c) A modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory. Exploiting these characteristics in the context of a main-memory parallel database management system is desirable. The advent of RDMA-enabled network technology makes the creation of a parallel main memory DBMS based on a shared-storage approach feasible.
This thesis describes building a columnar database on shared main memory-based storage. The thesis discusses the resulting architecture (Part I), the implications on query processing (Part II), and presents an evaluation of the resulting solution in terms of performance, high-availability, and elasticity (Part III).
In our architecture, we use Stanford's RAMCloud as shared storage, with the self-designed and developed in-memory AnalyticsDB as the relational query processor on top. AnalyticsDB encapsulates data access and operator execution via an interface that allows seamless switching between local and remote main memory, while RAMCloud provides not only storage capacity but also processing power. Combining both aspects allows pushing down the execution of database operators into the storage system. We describe how the columnar data processed by AnalyticsDB is mapped to RAMCloud's key-value data model and how the performance advantages of columnar data storage can be preserved.
The combination of fast network technology and the possibility to execute database operators in the storage system opens the discussion for site selection. We construct a system model that allows the estimation of operator execution costs in terms of network transfer, data processed in memory, and wall time. This can be used for database operators that work on one relation at a time - such as a scan or materialize operation - to discuss the site selection problem (data pull vs. operator push). Since a database query translates to the execution of several database operators, it is possible that the optimal site selection varies per operator. For the execution of a database operator that works on two (or more) relations at a time, such as a join, the system model is enriched by additional factors such as the chosen algorithm (e.g. Grace- vs. Distributed Block Nested Loop Join vs. Cyclo-Join), the data partitioning of the respective relations, and their overlapping as well as the allowed resource allocation.
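The pull-versus-push trade-off described by such a system model can be caricatured in a few lines; the cost terms and parameter values below are illustrative assumptions, not the thesis's calibrated model:

```python
# toy site-selection cost model for a scan: data pull ships the relation and
# scans locally; operator push scans in the storage node and ships matches only
# (all parameter values are illustrative assumptions)
def scan_cost(rows, row_bytes, selectivity, net_bw, cpu_rate, pull):
    process = rows / cpu_rate                      # seconds spent scanning
    shipped = rows * row_bytes if pull else rows * selectivity * row_bytes
    return process + shipped / net_bw              # seconds total

# a selective scan favors pushing the operator into the storage system
pull_cost = scan_cost(1e6, 100, 0.01, 1e9, 1e8, pull=True)
push_cost = scan_cost(1e6, 100, 0.01, 1e9, 1e8, pull=False)
print(push_cost < pull_cost)  # True
```

As the text notes, a full query plan chains many operators, so the optimal site can flip from operator to operator, and joins add further factors (algorithm choice, partitioning, overlap) on top of this simple picture.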
We present an evaluation on a cluster with 60 nodes where all nodes are connected via RDMA-enabled network equipment. We show that query processing performance is about 2.4x slower if everything is done via the data pull operator execution strategy (i.e. RAMCloud is being used only for data access) and about 27% slower if operator execution is also supported inside RAMCloud (in comparison to operating only on main memory inside a server without any network communication at all). The fast-crash recovery feature of RAMCloud can be leveraged to provide high-availability, e.g. a server crash during query execution only delays the query response for about one second. Our solution is elastic in a way that it can adapt to changing workloads a) within seconds, b) without interruption of the ongoing query processing, and c) without manual intervention.
This study examines the course and driving forces of recent vegetation change in the Mongolian steppe. A sediment core covering the last 55 years from a small closed-basin lake in central Mongolia was analyzed for its multi-proxy record at annual resolution. Pollen analysis shows that the highest abundances of planted Poaceae and the highest vegetation diversity occurred during 1977-1992, reflecting agricultural development in the lake area. A decrease in diversity and an increase in Artemisia abundance after 1992 indicate enhanced vegetation degradation in recent times, most probably because of overgrazing and farmland abandonment. Human impact is the main factor in the vegetation degradation of the past decades, as revealed by a series of redundancy analyses, while climate change and soil erosion play subordinate roles. High Pediastrum (a green alga) influx, high atomic total organic carbon/total nitrogen (TOC/TN) ratios, abundant coarse detrital grains, and the decrease of δ13Corg and δ15N since about 1977, but particularly after 1992, indicate that abundant terrestrial organic matter and nutrients were transported into the lake and caused lake eutrophication, presumably because of intensified land use. Thus, we infer that the transition to a market economy in Mongolia since the early 1990s not only caused dramatic vegetation degradation but also affected the lake ecosystem through anthropogenic changes in the catchment area.
Flood damage has increased significantly and is expected to rise further in many parts of the world. For assessing potential changes in flood risk, this paper presents an integrated model chain quantifying flood hazards and losses while considering climate and land use changes. In the case study region, risk estimates for the present and the near future illustrate that changes in flood risk by 2030 are relatively low compared to historic periods. While the impact of climate change on the flood hazard and risk by 2030 is slight or negligible, strong urbanisation associated with economic growth contributes to a remarkable increase in flood risk. Therefore, it is recommended to routinely consider land use scenarios and economic developments when assessing future flood risks. Further, an adapted and sustainable risk management is necessary to counter rising flood losses, in which non-structural measures are becoming more and more important. The case study demonstrates that adaptation by non-structural measures such as stricter land use regulations or enhancement of private precaution is capable of reducing flood risk by around 30 %. Ignoring flood risks, in contrast, always leads to further increasing losses, by 17 % under our assumptions. These findings underline that private precaution and land use regulation could be taken into account as low-cost adaptation strategies to global climate change in many flood-prone areas. Since such measures reduce flood risk regardless of climate or land use changes, they can also be recommended as no-regret measures.
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, raising temperatures inside the data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate our proposed techniques, we use simulation and real workload traces of web applications and HPC applications and compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following: a proactive resource provisioning technique based on robust optimization that increases the hosts' availability for hosting new VMs while minimizing idle energy consumption. Additionally, this technique mitigates undesirable changes in the power state of the hosts, which enhances the hosts' reliability by avoiding failures during power state changes. The proposed technique exploits the range-based prediction algorithm for implementing robust optimization, taking into consideration the uncertainty of demand. An adaptive range-based prediction for predicting workloads with high fluctuations in the short term. The range prediction is implemented in two ways: standard deviation and median absolute deviation. The range is adjusted via an adaptive confidence window to cope with workload fluctuations.
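The range-based prediction idea can be sketched in a few lines: instead of a point forecast, the next demand is predicted as an interval whose width is derived from either the standard deviation or the median absolute deviation of recent observations, as named above. The window size and the scaling factor `k` are illustrative assumptions, not the dissertation's tuned parameters.

```python
# Illustrative sketch of a range-based workload prediction with two
# spread variants: standard deviation and median absolute deviation.
import statistics

def range_prediction(history, window=10, variant="std", k=1.0):
    """Predict the next demand as a (low, high) range instead of a point."""
    recent = history[-window:]
    center = statistics.mean(recent)
    if variant == "std":
        spread = statistics.pstdev(recent)      # population std deviation
    else:
        med = statistics.median(recent)          # MAD around the median
        spread = statistics.median(abs(x - med) for x in recent)
    return center - k * spread, center + k * spread

cpu = [30, 32, 31, 33, 29, 30, 32, 31, 30, 32]   # recent CPU demand (%)
print(range_prediction(cpu, variant="std"))
print(range_prediction(cpu, variant="mad"))
```

A wider predicted range reserves more headroom on a host, trading idle capacity for fewer SLA violations when demand spikes.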
A robust VM consolidation technique for efficient energy and performance management that achieves an equilibrium between energy and performance trade-offs. Our technique reduces the number of VM migrations compared to recently proposed techniques, which also contributes to a reduction in energy consumption by the network infrastructure. Additionally, our technique reduces SLA violations and the number of power state changes. A generic model for the network of a data center to simulate the communication delay and its impact on VM performance, as well as network energy consumption. In addition, a generic model for a server's memory bus, including latency and energy consumption models for different memory frequencies, which allows simulating the memory delay and its influence on VM performance, as well as memory energy consumption. Communication-aware and energy-efficient consolidation for parallel applications to enable the dynamic discovery of communication patterns and reschedule VMs via migration based on the determined communication patterns. A novel dynamic pattern discovery technique is implemented, based on signal processing of the network utilization of VMs instead of using information from the hosts' virtual switches or initiation from VMs. The results show that our proposed approach reduces the network's average utilization, achieves energy savings by reducing the number of active switches, and provides better VM performance compared to CPU-based placement. Memory-aware VM consolidation for independent VMs, which exploits the diversity of VMs' memory access to balance the memory-bus utilization of hosts. The proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their memory-bus utilization using VM migration to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory and the proposed MLB technique are combined to achieve better energy savings.
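The reactive core of the MLB idea can be sketched as follows. The data structures and the imbalance threshold are illustrative assumptions: when one host's aggregate memory-bus utilization far exceeds another's, the heaviest memory-bus consumer is migrated from the hot host to the cold one.

```python
# Minimal sketch of memory-bus load balancing (MLB): rebalance VMs by
# their memory-bus utilization share. Thresholds are invented.

def mlb_step(hosts, imbalance_threshold=0.2):
    """hosts: dict host -> dict vm -> memory-bus utilization share."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    hot = max(load, key=load.get)
    cold = min(load, key=load.get)
    if load[hot] - load[cold] <= imbalance_threshold:
        return None                               # balanced enough
    vm = max(hosts[hot], key=hosts[hot].get)      # heaviest memory-bus user
    hosts[cold][vm] = hosts[hot].pop(vm)          # "migrate" the VM
    return vm, hot, cold

hosts = {"h1": {"a": 0.5, "b": 0.3}, "h2": {"c": 0.1}}
print(mlb_step(hosts))   # moves the heaviest VM off the hot host
```

Calling `mlb_step` repeatedly until it returns `None` converges toward balanced memory-bus utilization; a real system would additionally weigh migration cost against the expected gain.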
The paper is devoted to asymptotic analysis of the Dirichlet problem for a second order partial differential equation containing a small parameter multiplying the highest order derivatives. It corresponds to a small perturbation of a dynamical system having a stationary solution in the domain. We focus on the case where the trajectories of the system go into the domain and the stationary solution is a proper node.
Permafrost, defined as ground that is frozen for at least two consecutive years, is a distinct feature of the terrestrial unglaciated Arctic. It covers approximately one quarter of the land area of the Northern Hemisphere (23,000,000 km²). Arctic landscapes, especially those underlain by permafrost, are threatened by climate warming and may degrade in different ways, including active layer deepening, thermal erosion, and the development of rapid thaw features. In Siberian and Alaskan late Pleistocene ice-rich Yedoma permafrost, rapid and deep thaw processes (called thermokarst) can mobilize deep organic carbon (below 3 m depth) through surface subsidence due to loss of ground ice. Increased permafrost thaw could cause a feedback loop of global significance if the stored frozen organic carbon is reintroduced into the active carbon cycle as greenhouse gases, accelerating warming and inducing more permafrost thaw and carbon release. To address this concern, the major objective of the thesis was to enhance the understanding of the origin of Yedoma as well as to assess the associated organic carbon pool size and carbon quality (with respect to degradability). The key research questions were:
- How did Yedoma deposits accumulate?
- How much organic carbon is stored in the Yedoma region?
- What is the susceptibility of the Yedoma region's carbon for future decomposition?
To address these three research questions, an interdisciplinary approach was applied, including detailed field studies and sampling in Siberia and Alaska as well as methods of sedimentology, organic biogeochemistry, remote sensing, statistical analyses, and computational modeling. To provide a pan-Arctic context, this thesis additionally includes results both from a newly compiled northern circumpolar carbon database and from a model assessment of carbon fluxes in a warming Arctic.
The Yedoma samples show a homogeneous grain-size composition. All samples were poorly sorted with a multi-modal grain-size distribution, indicating various (re-) transport processes. This contradicts the popular pure loess deposition hypothesis for the origin of Yedoma permafrost. The absence of large-scale grinding processes via glaciers and ice sheets in northeast Siberian lowlands, processes which are necessary to create loess as material source, suggests the polygenetic origin of Yedoma deposits.
Based on the largest available data set of the key parameters, including organic carbon content, bulk density, ground ice content, and deposit volume (thickness and coverage) from Siberian and Alaskan study sites, this thesis further shows that deep frozen organic carbon in the Yedoma region consists of two distinct major reservoirs, Yedoma deposits and thermokarst deposits (formed in thaw-lake basins). Yedoma deposits contain ~80 Gt and thermokarst deposits ~130 Gt organic carbon, or a total of ~210 Gt. Depending on the approach used for calculating uncertainty, the range for the total Yedoma region carbon store is ±75 % and ±20 % for conservative single and multiple bootstrapping calculations, respectively. Despite the fact that these findings reduce the Yedoma region carbon pool by nearly a factor of two compared to previous estimates, this frozen organic carbon is still capable of inducing a permafrost carbon feedback to climate warming. The complete northern circumpolar permafrost region contains between 1100 and 1500 Gt organic carbon, of which ~60 % is perennially frozen and decoupled from the short-term carbon cycle.
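The bootstrapping approach used above for the uncertainty range can be sketched generically: resample the observed values with replacement many times and report a percentile interval of the resampled means. The sample values, resample count, and confidence level below are invented for illustration and are not the thesis's data.

```python
# Sketch of a single-parameter bootstrap uncertainty estimate, in the
# spirit of the carbon-pool calculation above. All numbers are made up.
import random
random.seed(0)

def bootstrap_range(values, n_boot=10_000, ci=0.95):
    """Percentile confidence interval of the mean via resampling."""
    means = []
    for _ in range(n_boot):
        sample = [random.choice(values) for _ in values]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int((1 - ci) / 2 * n_boot)]
    hi = means[int((1 + ci) / 2 * n_boot)]
    return lo, hi

site_carbon = [12.0, 18.5, 9.3, 22.1, 15.7, 11.2, 19.8, 14.4]  # fictitious
print(bootstrap_range(site_carbon, n_boot=2000))
```

A "multiple" bootstrap in the sense described above would additionally resample each input parameter (carbon content, bulk density, ice content, deposit volume) before combining them, which typically narrows the relative uncertainty compared to treating the product as a single quantity.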
When thawed and reintroduced into the active carbon cycle, the quality of the organic matter becomes relevant. Investigations of Yedoma and thermokarst organic matter quality showed no depth-dependent quality trend, evidence that after freezing the ancient organic matter is preserved at constant quality. The applied alkane- and fatty-acid-based biomarker proxies, including the carbon-preference and the higher-land-plant-fatty-acid indices, show a broad range of organic matter quality and thus no significantly different quality of the organic matter stored in thermokarst deposits compared to Yedoma deposits. This lack of quality differences shows that organic matter biodegradability depends on different decomposition trajectories and the previous decomposition/incorporation history. Finally, the fate of the organic matter was assessed by implementing deep carbon pools and thermokarst processes in a permafrost carbon model. Under various warming scenarios for the northern circumpolar permafrost region, model results show a carbon release from permafrost regions of up to ~140 Gt by the year 2100 and ~310 Gt by the year 2300. The additional warming caused by the carbon release from newly thawed permafrost contributes 0.03 to 0.14°C by the year 2100, and the model simulations predict that a further increase by the 23rd century will add 0.4°C to global mean surface air temperatures.
In conclusion, Yedoma deposit formation during the late Pleistocene was dominated by water-related (alluvial/fluvial/lacustrine) as well as aeolian processes under periglacial conditions. The circumarctic permafrost region, including the Yedoma region, contains a substantial amount of currently frozen organic carbon. The carbon of the Yedoma region is well-preserved and therefore available for decomposition after thaw. A missing quality-depth trend shows that permafrost preserves the quality of ancient organic matter. When the organic matter is mobilized by deep degradation processes, the northern permafrost region may add up to 0.4°C to the global warming by the year 2300.
Organic semiconductors possess new, remarkable material properties that make them interesting for fundamental research as well as for current technological developments (e.g. organic light-emitting diodes, organic solar cells). Owing to the strong conformational freedom of conjugated polymer chains, the multitude of possible arrangements and the weak intermolecular interactions usually lead to low structural order in the solid state. At the same time, the morphology directly influences the electronic structure of organic semiconductors, which typically manifests itself in a marked reduction of the charge-carrier mobility compared with their inorganic counterparts. The mobility of charges in the semiconductor thus represents one of the limiting factors for the performance and efficiency of functional organic devices. In 2009, a new donor/acceptor copolymer based on naphthalene diimide and bithiophene was introduced [P(NDI2OD‑T2)], which is distinguished by its exceptionally high charge-carrier mobility. In this work, the charge-carrier mobility in P(NDI2OD‑T2) is determined, and the transport is characterized by a low energetic disorder. Although this material was initially described as amorphous, a detailed analysis of the optical properties of P(NDI2OD‑T2) shows that ordered precursors of supramolecular structures (aggregates) already exist in solution. Quantum-chemical calculations corroborate the observed spectral changes. NMR spectroscopy confirms the formation of the aggregates independently of optical spectroscopy. Analytical ultracentrifugation of P(NDI2OD‑T2) solutions suggests that aggregation takes place within individual chains, accompanied by a reduction of the hydrodynamic radius.
The formation of supramolecular structures also plays a significant role in film formation and at the same time prevents the preparation of amorphous P(NDI2OD‑T2) films. Through chemical modification of the P(NDI2OD‑T2) chain and different processing methods, a change in the degree of crystallinity and, simultaneously, in the orientation of the crystalline domains was achieved and quantified by X-ray diffraction. High-resolution electron microscopy measurements directly image the lattice planes and their embedding in the semicrystalline structures. Combining the different methods yields an overall picture of the short- and long-range order in P(NDI2OD‑T2). By measuring the electron mobility of these layers, the anisotropy of charge transport along the crystallographic directions of P(NDI2OD‑T2) is characterized, and the importance of intramolecular interactions for efficient charge transport is worked out. At the same time it becomes clear how the use of larger, planar functional groups leads to higher charge-carrier mobilities, which, compared with classical semicrystalline polymers, are less sensitive to structural disorder in the film.
Despite remarkable progress made in the past century, which has revolutionized our understanding of the universe, numerous open questions remain in theoretical physics. Particularly important is the fact that the theories describing the fundamental interactions of nature are incompatible. Einstein's theory of general relativity describes gravity as a dynamical spacetime, which is curved by matter and whose curvature determines the motion of matter. On the other hand we have quantum field theory, in the form of the standard model of particle physics, where particles interact via the remaining interactions - electromagnetic, weak and strong - on a flat, static spacetime without gravity. A theory of quantum gravity is hoped to cure this incompatibility by heuristically replacing classical spacetime with 'quantum spacetime'. Several approaches attempt to define such a theory, with differing underlying premises and ideas, and it is not clear which is to be preferred. Yet a minimal requirement is compatibility with the classical theory they attempt to generalize. Interestingly, many of these models rely on discrete structures in their definition or postulate discreteness of spacetime to be fundamental. Besides the direct advantages discretisations provide, e.g. permitting numerical simulations, they come with serious caveats requiring thorough investigation: in general, discretisations break the fundamental diffeomorphism symmetry of gravity and are generically not unique. Both complicate establishing the connection to the classical continuum theory. The main focus of this thesis lies in the investigation of this relation for spin foam models. This is done at different levels of the discretisation / triangulation, ranging from a few simplices up to the continuum limit. In the regime of very few simplices we confirm and deepen the connection of spin foam models to discrete gravity. Moreover, we discuss dynamical principles, e.g. diffeomorphism invariance in the discrete, to fix the ambiguities of the models. In order to satisfy these conditions, the discrete models have to be improved in a renormalisation procedure, which also allows us to study their continuum dynamics. Applied to simplified spin foam models, we uncover a rich, non-trivial fixed point structure, which we summarize in a phase diagram. Inspired by these methods, we propose a method to consistently construct the continuum theory, which comes with a unique vacuum state.
This work introduces concepts and corresponding tool support to enable a complementary approach in dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid tedious work, the literature recommends averting unexpected recovery demands by following a structured and disciplined approach, which consists of the application of various best practices, including working on only one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying recommended practices selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications: it is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools has been accompanied by regular performance and usability tests.
In addition, this work investigates whether the proposed tools affect programmers' performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated measurement setup, the study examined the effect of providing CoExist on programming performance. The results of analyzing 88 hours of programming suggest that built-in recovery support as provided with CoExist has a positive effect on programming performance in explorative programming tasks.
Galaxies are observational probes of the Large Scale Structure. Their gravitational motions trace the total matter density and therefore the Large Scale Structure. Besides, studies of structure formation and galaxy evolution rely on numerical cosmological simulations. Still, only one universe, observable from a single position in time and space, is available for comparison with simulations. The related cosmic variance affects our ability to interpret the results. Simulations constrained by observational data are a remedy to this problem, and achieving such simulations is the goal of the Cosmic flows and CLUES projects. Cosmic flows builds catalogs of accurate distance measurements to map deviations from the expansion. These measurements are mainly obtained with the galaxy luminosity-rotation rate correlation. We present the calibration of that relation in the mid-infrared with observational data from the Spitzer Space Telescope. The resulting accurate distance estimates will be included in the third catalog of the project. In the meantime, two catalogs up to 30 and 150 Mpc/h have been released. We report improvements and applications of the CLUES method on these two catalogs. The technique is based on the constrained realization algorithm. The cosmic displacement field is computed with the Zel'dovich approximation. The latter is then reversed to relocate reconstructed three-dimensional constraints to their precursors' positions in the initial field. The size of the second catalog (8000 galaxies within 150 Mpc/h) highlighted the importance of minimizing the observational biases. Tests on mock catalogs built from cosmological simulations yield a method to minimize observational bias. Finally, for the first time, cosmological simulations are constrained solely by peculiar velocities. The process is successful, as the resulting simulations resemble the Local Universe.
The major attractors and voids are simulated at positions within a few megaparsecs of the observed ones, thus reaching the limit imposed by linear theory.
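The Zel'dovich step and its reversal, as described above, can be illustrated with a one-dimensional toy example: particles move along straight lines, x = q + D·ψ(q), where q is the initial (Lagrangian) position, D the growth factor, and ψ the displacement field; inverting the map relocates present-day constraints to their precursor positions. The displacement field and numbers below are assumptions for illustration only.

```python
# Toy 1-D illustration of the Zel'dovich approximation and its reversal.
import math

def psi(q):
    """An assumed smooth displacement field (purely illustrative)."""
    return 0.05 * math.sin(q)

def forward(q, D):
    """Zel'dovich map: straight-line motion x = q + D * psi(q)."""
    return q + D * psi(q)

def reverse(x, D, iters=20):
    """Invert x = q + D*psi(q) by fixed-point iteration (contraction
    since |D * psi'| << 1 here)."""
    q = x
    for _ in range(iters):
        q = x - D * psi(q)
    return q

q0, D = 1.3, 0.8
x = forward(q0, D)
print(f"evolved position {x:.6f}, recovered initial {reverse(x, D):.6f}")
```

In the actual reconstruction the same reversal is applied to a three-dimensional displacement field, so that constraints measured today are imposed on the initial conditions of the simulation.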
The looping of polymers such as DNA is a fundamental process in the molecular biology of living cells, whose interior is characterised by a high degree of molecular crowding. We here investigate in detail the looping dynamics of flexible polymer chains in the presence of different degrees of crowding. From the analysis of the looping–unlooping rates and the looping probabilities of the chain ends we show that the presence of small crowders typically slows down the chain dynamics but larger crowders may in fact facilitate the looping. We rationalise these non-trivial and often counterintuitive effects of the crowder size on the looping kinetics in terms of an effective solution viscosity and standard excluded volume. It is shown that for small crowders the effect of an increased viscosity dominates, while for big crowders we argue that confinement effects (caging) prevail. The tradeoff between both trends can thus result in the impediment or facilitation of polymer looping, depending on the crowder size. We also examine how the crowding volume fraction, chain length, and the attraction strength of the contact groups of the polymer chain affect the looping kinetics and hairpin formation dynamics. Our results are relevant for DNA looping in the absence and presence of protein mediation, DNA hairpin formation, RNA folding, and the folding of polypeptide chains under biologically relevant high-crowding conditions.
This work deals with so-called relative-like clauses (relativähnliche Sätze) in Early New High German and thus contributes to research on subordination in older German. Relative-like clauses are formally characterized by a clause-initial anaphoric d-element and the final position of the finite verb. Semantically, they refer to the preceding clause as a whole by continuing or commenting on it in a particular way. In previous research, these clauses have been analyzed typologically as main clauses with verb-final order (cf. Maurer 1926, Behaghel 1932, and Lötscher 2000). Following a detailed discussion of the formal markers of dependency in older German, and on the basis of an extensive corpus-based study, this work shows that relative-like clauses in Early New High German can also be analyzed as dependent clauses, analogous to continuative relative clauses in present-day German. Continuative relative clauses in present-day German likewise contain a clause-initial anaphoric element that refers to what is said in the preceding clause, and they also exhibit verb-final order (on the grammar of continuative relative clauses cf. in particular Brandt 1990 and Holler 2005). Beyond the investigation of relative-like clauses, this work deals in detail with the formal markers of dependency in older German, such as verb-final order, complementizers, and the afinite construction.
The purpose of this thesis is to develop an automated inversion scheme to derive point and finite source parameters for weak earthquakes, here meaning earthquakes with magnitudes at or below the bottom magnitude threshold of standard source inversion routines. The adopted inversion approaches rely entirely on existing inversion software, the methodological work mostly targeting the development and tuning of optimized inversion flows. The resulting inversion scheme is tested on very different datasets, which allows discussing the source inversion problem at different scales. The first application, dealing with mining-induced seismicity, addresses source parameter determination at a local scale, with source-sensor distances of less than 3 km. In this context, weak seismicity corresponds to events below magnitude MW 2.0, which are rarely targets of automated source inversion routines. The second application considers a regional dataset, namely the aftershock sequence of the 2010 Maule earthquake (Chile), using broadband stations at regional distances below 300 km. In this case, the magnitudes of the target aftershocks range down to MW 4.0. This dataset is considered a weak seismicity case here, since such moderate seismicity is generally investigated only by moment tensor inversion routines, with no attempt to resolve source duration or finite source parameters. In this work, automated multi-step inversion schemes are applied to both datasets with the aim of resolving point source parameters, using both double couple (DC) and full moment tensor (MT) models, as well as source duration and finite source parameters. A major result of the analysis of weaker events is the increased size of the resulting moment tensor catalogues, whose interpretation may become non-trivial.
For this reason, a novel focal mechanism clustering approach is used to automatically classify focal mechanisms, allowing the investigation of the most relevant and repetitive rupture features. The inversion of the mining-induced seismicity dataset reveals the repeated occurrence of similar rupture processes, where the source geometry is controlled by the shape of the mined panel. Moreover, moment tensor solutions indicate a significant contribution of tensile processes. The second application also highlights some characteristic geometrical features of the fault planes, which are generally consistent with the orientation of the slab. The additional inversion for source duration allowed us to verify, for moment-normalized earthquakes in subduction zones, the empirical correlation between decreasing rupture duration and increasing source depth, which had so far only been observed for larger events.