For the selective removal of heavy metals from industrial wastewaters and process solutions of the metal-working industry, synthetic metal-complexing functional polymers, with iminodiacetic acid (IDA) as the active species, have been used successfully for years to eliminate interfering cations. Steadily increasing demands on the quality of the waters to be treated call for high-performance selective exchangers that preserve the properties of process solutions (e.g. pH, salt content). The aim of the investigations was to examine more closely the structural influence of the matrix on loading, capacity, selectivity, and kinetics by varying the matrix and the experimental conditions. Starting from a monodisperse primary substitution of a styrene-divinylbenzene copolymer, functional groups were incorporated in a targeted manner (synthesis with differentiated degrees of substitution, TK/N 1-2) in an attempt to systematically analyze the influence of the matrix's degree of substitution on the properties of the ion exchangers. Methodologically, the experiments were first carried out in batch mode and then in column mode, while the matrix was characterized in parallel. The behavior of the functional anchor groups as a function of the solution pH (pH range 2-5) was investigated, and the optimal enrichment pH, the maximum loading (capacity), and the selectivity of the differently substituted samples were determined for the heavy-metal ions Cu, Zn, Ni, Cd, Pb, and Co. The static experiments were followed by dynamic column studies. Their aim was to determine the breakthrough behavior and breakthrough capacity at the optimal pH as a function of the degree of substitution, both for the single metal ions (Cu, Ni, Zn) and for selected pairs (Cu/Ni, Cu/Zn, Ni/Zn). All ion exchangers were used exclusively in the Ca form.
Aim The aim of the present study was to examine young female volleyball players' body build, physical abilities, technical skills, and psychophysiological properties in relation to their performance at competitions. The sample consisted of 46 female volleyball players aged 13-16 years. 49 basic anthropometric measurements were taken, and 65 proportions and body composition characteristics were calculated. 9 physical ability tests, 9 volleyball technical skills tests, and 21 psychophysiological tests were carried out. Game performance was recorded with the computer program Game, which made it possible to register the performance of technical elements for each player and calculated an index of proficiency for each girl for each element. The first control group consisted of 74 female volleyball players aged 13-15 years, for whom reduced anthropometry was carried out and 28 games were recorded. The second control group consisted of 586 ordinary schoolgirls aged 13-16 years, for whom full anthropometry was carried out. Results In order to systematize all anthropometric characteristics, we first studied the anthropometric structure of the body as a whole. It turned out to be a coherent system in which all variables correlate significantly with one another and in which the leading characteristics are height and weight. We therefore based the classification on the mean height and weight of the whole sample and formed a 5-class SD classification. Three classes represent concordance between height and weight: small height - small weight, medium height - medium weight, big height - big weight. The other two classes represent discordance between height and weight: pycnomorphs and leptomorphs. We were able to show that a gradual increase in height and weight brought about a statistically significant increase in length, breadth, and depth measurements, circumferences, bone thicknesses, and skinfolds.
There were also systematic changes in indices and body composition characteristics. Pycnomorphs and leptomorphs likewise showed differences specific to their body types in body measurements and body composition. The results of all tests were submitted to basic statistical analysis, and correlations were computed between all tests (volleyball technical skills, psychophysiological abilities, physical abilities), all basic anthropometric variables (n = 49), and all proportions and body composition characteristics (n = 65). All anthropometric measurements and test results were correlated with the index of proficiency for all elements of the game. The best linear regression models were calculated for predicting proficiency in the different elements of the game; body build and all kinds of tests contributed to these predictions. The most essential models for predicting performance in attack, block, and feint were the anthropometric and psychophysiological ones. The studied complex of body build characteristics and test results determines the players' proficiency at competitions, is an important tool for monitoring a player's individual development, makes it possible to select volleyball players from among schoolgirls, and represents the whole body-constitutional model of a young female volleyball player. Outlook Our outlook for the future is to continue recording all Estonian championship games with the computer program Game, to continue the players' anthropometric measuring and psychophysiological testing at competitions, and to compile a national register for assessing the development of individual players and teams.
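The regression modelling described above can be illustrated with a minimal sketch. The height and proficiency values below are invented for illustration (the study fitted multi-variable models over 49 anthropometric and 65 derived measures); only the ordinary-least-squares mechanics of a single-predictor model are shown.

```python
# Sketch: simple linear regression predicting a proficiency index from one
# anthropometric predictor (body height). All numbers are hypothetical.
heights = [160.0, 165.0, 170.0, 175.0, 180.0]   # cm, invented
proficiency = [0.40, 0.45, 0.55, 0.60, 0.70]    # index values, invented

n = len(heights)
mean_x = sum(heights) / n
mean_y = sum(proficiency) / n

# Ordinary least squares for y = a + b*x.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(heights, proficiency)) \
    / sum((x - mean_x) ** 2 for x in heights)
a = mean_y - b * mean_x

predicted = a + b * 172.0  # predicted proficiency index for a 172 cm player
```

In the study itself, separate multi-predictor models of this kind were selected for each technical element of the game.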
Numerous recent publications on the psychological meaning of "if" have proposed a probabilistic interpretation of conditional sentences. According to the proponents of probabilistic approaches, sentences like "If the weather is nice, I will be at the beach tomorrow" (or "If p, then q" in the abstract version) express a high probability of the consequent (being at the beach) given the antecedent (nice weather). When people evaluate conditional sentences, they are assumed to do so by deriving the conditional probability P(q|p) using a procedure called the Ramsey test. This view contradicts the hitherto dominant Mental Model Theory (MMT; Johnson-Laird, 1983), which proposes that conditional sentences refer to possibilities in the world that are represented in the form of mental models. Whereas probabilistic approaches have gained a lot of momentum in explaining the interpretation of conditionals, there is still no conclusive probabilistic account of conditional reasoning. This thesis investigates the potential of a comprehensive probabilistic account of conditionals that covers the interpretation of conditionals as well as the conclusions drawn from these conditionals when they are used as premises in an inference task. The first empirical chapter of this thesis, Chapter 2, presents a further investigation of the interpretation of conditionals. A plain version of the Ramsey test as proposed by Evans and Over (2004) was tested against a similarity-sensitive version of the Ramsey test (Oberauer, 2006) in two experiments using variants of the probabilistic truth table task (Experiments 2.1 and 2.2). When it comes to deciding whether an instance is relevant for the evaluation of a conditional, similarity seems to play a minor role. Once the decision about relevance is made, believability judgments of the conditional seem to be unaffected by the similarity manipulation, and judgments are based on the frequency of instances, in the way predicted by the plain Ramsey test.
In Chapter 3, the contradicting predictions of the probabilistic approaches to conditional reasoning of Verschueren et al. (2005), Evans and Over (2004), and Oaksford and Chater (2001) are tested against each other. Results from the probabilistic truth table task modified for inference tasks support the account of Oaksford and Chater (Experiment 3.1). A learning version of the task and a design with everyday conditionals yielded results unpredicted by any of the theories (Experiments 3.2-3.4). Based on these results, a new probabilistic two-stage model of conditional reasoning is proposed. To preclude claims that the use of the probabilistic truth table task (or variants thereof) favors judgments reflecting conditional probabilities, Chapter 4 combines methodologies used by proponents of the MMT with the probabilistic truth table task. In three experiments (4.1-4.3) it could be shown, for believability judgments of the conditional and for inferences drawn from it, that causal information about counterexamples only prevails when no frequencies of exceptional cases are available. Experiment 4.4 extends these findings to everyday conditionals. A probabilistic estimation process based on frequency information is used to explain the results of all tasks. The findings are consistent with a probabilistic approach to conditionals and, moreover, constitute an explanatory challenge for the MMT. Given all the evidence gathered in this dissertation, it seems justified to draw the picture of a comprehensive probabilistic view of conditionals quite optimistically. Probability estimates not only explain the believability people assign to a conditional sentence, they also explain to what extent people are willing to draw conclusions from those sentences.
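The plain Ramsey test that runs through these chapters can be sketched numerically: to evaluate "If p, then q", only the p-cases are considered, and the believability of the conditional is estimated as the relative frequency of q among them, i.e. P(q|p). The frequency counts below are invented for illustration, not taken from the experiments.

```python
# Sketch of the "plain" Ramsey test: evaluate "If p, then q" as P(q|p),
# estimated from frequencies of instance types as presented in a
# probabilistic truth table task. All counts are illustrative.
freq = {
    ("p", "q"): 30,        # antecedent true, consequent true
    ("p", "not-q"): 10,    # antecedent true, consequent false
    ("not-p", "q"): 25,    # antecedent false: irrelevant under the Ramsey test
    ("not-p", "not-q"): 35,
}

# The Ramsey test ignores not-p cases and takes the ratio of p&q cases
# to all p cases.
p_cases = freq[("p", "q")] + freq[("p", "not-q")]
conditional_probability = freq[("p", "q")] / p_cases  # P(q|p) = 30/40
```

On this view, a participant shown these frequencies should judge the conditional's believability to be 0.75, regardless of how many not-p cases are present.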
The present work is concerned with the synthesis of monodisperse, multifunctional poly(amidoamines) (PAAs). The class of PAAs is particularly interesting for biomedical applications, since it is mostly non-toxic, shows very low immunogenicity, and exhibits enhanced cell membrane permeability. However, the use of linear PAAs has so far been limited, because their synthesis gives access only to highly polydisperse systems with a strictly alternating or statistical distribution of functionalities. It is therefore of great interest to improve this polymer class by enabling sequence-defined assembly and the integration of new functionalities. To make this possible, functional diacid and diamine building blocks were added stepwise to a polymeric support resin, in a manner comparable to established solid-phase peptide synthesis. This sequential assembly enables the synthesis of monodisperse PAAs and control over the monomer sequence. The choice of monomer building blocks and their functionalities can be made anew for each addition and thus determines the sequence of functionalities in the polymer backbone. The chemistry used corresponds to standard peptide chemistry, so that the synthesis could be fully automated with an automated peptide synthesizer. The use of special support resins preloaded with a synthetic polymer block such as PEO, or with a peptide, allows the direct synthesis of PEO-PAA and peptide-PAA block copolymers. Since the PAAs prepared here were later to be tested for their suitability as multivalent polycations in gene therapy, building blocks were initially chosen that allow the incorporation of various amine functionalities. The building blocks must be chosen such that they are compatible with the chemistry of the peptide synthesizer and guarantee quantitative addition without side or termination reactions.
In addition, peptide sequences and disulfide units can be incorporated into the PAA chain, which can be used, for example, for the selective degradation of the polymer in the organism. In summary, the PAA systems presented in this work offer great potential as non-viral vectors for gene transfection. They are non-toxic and show cell uptake efficiencies of up to 77%. Although their gene transfer efficiency is still very low compared with established polymer vectors, the experiments carried out so far already point to a possible cause, namely the poor release of the genetic material within the cell. A solution to this problem is offered by further modification of the PAA systems through the incorporation of predetermined breaking points. These breaking points enable programmed degradation of the polymer within the cell, which should considerably facilitate the release of the genetic material from the carrier. Possible breaking points include peptide units that can be cleaved selectively by enzymes, or disulfide units, as already introduced as building blocks for the PAA synthesis (cf. Chapter 4.4). Since a reducing electrochemical potential exists only inside the cell, disulfide units, for example, are cleaved only there and offer sufficient stability outside the cell to maintain the polyplex structure. Beyond an application in gene therapy, the PAA systems presented here offer the advantage of allowing a systematic investigation of structure-property relationships of the polyplexes. Several correlations between the chemical structure of the PAA segments and the type and strength of DNA compaction were demonstrated. The compaction strength, in turn, showed a clear influence on the internalization rate and thus also on the transfection efficiency.
Moreover, the PEO block showed a drastic influence on the stabilization of the polyplexes and on their intracellular release upon addition of chloroquine. Nevertheless, owing to the complexity of these interdependencies, many mechanisms of transfection remain poorly understood, and it must be the task of future work to further explore the potential of the monodisperse PAA systems introduced here. For example, correlating the chain length with the parameters of polyplex formation, cell uptake, and transfection efficiency would be of great interest. Furthermore, the incorporation of breaking points, such as short peptide sequences or the disulfide units already introduced here, offers new possibilities for targeted release and programmed degradation that need to be investigated in more detail. Beyond gene transfection, other fields of application are conceivable for monodisperse multifunctional PAAs, since they enable controllable and tunable interactions.
Nowadays, colloidal rods can be synthesized in large amounts. The rods are typically cylindrical, and their length ranges from several nanometers to a few micrometers. In solution, systems of colloidal rodlike molecules or aggregates can form liquid-crystalline phases with long-range orientational and spatial order. In the present work, we investigate structure formation and fractionation in systems of rodlike colloids with the help of Monte Carlo simulations in the NPT ensemble. Purely repulsive interactions can successfully be mimicked by the hard rod model, which has been studied extensively in the past. In many cases, however, attractive interactions such as van der Waals or depletion forces cannot be neglected. In the first part of this work, the phase behavior of monodisperse attractive rods is characterized for different interaction strengths. Phase diagrams as a function of rod length and pressure are presented. Most systems of synthesized mesoscopic rods have a polydisperse length distribution as a consequence of the longitudinal growth process of the rods. For many technical and research applications, a rather small polydispersity is desired in order to obtain well-defined material properties. The polydispersity can be reduced by a spatial demixing (fractionation) of long and short rods. Fractionation and structure formation are studied in a tridisperse and a polydisperse bulk suspension of rods. We observe that the resulting structures depend markedly on the interaction strength, and that fractionation is strongly enhanced with increasing interaction strength. Suspensions are typically confined in a container. We therefore also examine the influence of adjacent substrates on systems of tridisperse and polydisperse rod suspensions. Three different substrate types are studied in detail: a planar wall, a corrugated substrate, and a substrate with rectangular cavities. We analyze the fluid structure close to the substrate and substrate-controlled fractionation.
The spatial arrangement of long and short rods in front of the substrate depends sensitively on the substrate structure and the pressure. Rods with a predefined length are segregated at substrates with rectangular cavities.
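A minimal sketch of Monte Carlo simulation in the NPT ensemble, the method the work builds on, is given below. For brevity it uses hard disks in 2D rather than rods, so the hard-core overlap test reduces to a distance check, but it shows the same move structure: particle displacement moves plus volume moves accepted with the NPT weight. All parameter values are assumptions for illustration only.

```python
import math
import random

# NPT Monte Carlo sketch for hard disks in 2D with periodic boundaries.
random.seed(1)

N, sigma = 16, 1.0          # number of disks, hard-core diameter (assumed)
pressure, beta = 1.0, 1.0   # reduced pressure and inverse temperature
L = 8.0                     # initial box edge length

# Start from a square lattice so there are no initial overlaps.
pos = [((i % 4 + 0.5) * L / 4, (i // 4 + 0.5) * L / 4) for i in range(N)]

def overlaps(pos, L):
    """True if any pair of disks overlaps under periodic boundary conditions."""
    for i in range(N):
        for j in range(i + 1, N):
            dx = (pos[i][0] - pos[j][0] + L / 2) % L - L / 2
            dy = (pos[i][1] - pos[j][1] + L / 2) % L - L / 2
            if dx * dx + dy * dy < sigma * sigma:
                return True
    return False

for step in range(2000):
    if random.random() < 0.9:                 # particle displacement move
        i = random.randrange(N)
        old = pos[i]
        pos[i] = ((old[0] + random.uniform(-0.2, 0.2)) % L,
                  (old[1] + random.uniform(-0.2, 0.2)) % L)
        if overlaps(pos, L):
            pos[i] = old                      # reject: hard-core overlap
    else:                                     # volume (box rescaling) move
        new_L = L + random.uniform(-0.05, 0.05)
        if new_L > 0:
            s = new_L / L
            trial = [(x * s, y * s) for x, y in pos]
            dV = new_L ** 2 - L ** 2          # 2D "volume" is the box area
            # NPT acceptance weight: exp(-beta*P*dV + N*ln(V'/V))
            acc = math.exp(-beta * pressure * dV
                           + N * math.log(new_L ** 2 / L ** 2))
            if not overlaps(trial, new_L) and random.random() < min(1.0, acc):
                pos, L = trial, new_L
```

In the thesis, the same scheme applies with rod-rod overlap and attraction criteria in place of the disk distance check, and with rod orientations as additional degrees of freedom.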
In this thesis the interplay between hydrodynamic transport and specific adhesion is investigated theoretically. An important biological motivation for this work is the rolling adhesion of white blood cells, which is investigated experimentally in flow chambers. There, specific adhesion is mediated by weak bonds between complementary molecular building blocks that are either located on the cell surface (receptors) or attached to the bottom plate of the flow chamber (ligands). The model system under consideration is a hard sphere covered with receptors moving above a planar ligand-bearing wall. The motion of the sphere is influenced by a simple shear flow, deterministic forces, and Brownian motion. An algorithm is presented that allows the numerical simulation of this motion as well as of the formation and rupture of bonds between receptors and ligands. The algorithm spatially resolves receptors and ligands, which opens up the possibility of applying the results to flow chamber experiments with patterned substrates based on modern nanotechnological developments. In the first part, the influence of the flow rate, as well as of the number and geometry of receptors and ligands, on the probability of initial binding is studied. This is done by determining the mean time that elapses until the first encounter between a receptor and a ligand occurs. It turns out that, besides the number of receptors, the height by which the receptors are elevated above the surface of the sphere plays an especially important role. These findings are in good agreement with observations of actual biological systems such as white blood cells or malaria-infected red blood cells. Then, the influence on the motion of the sphere of bonds that have formed between receptors and ligands but rupture easily in response to force is studied. It is demonstrated that different states of motion, for example rolling, can be distinguished.
The appearance of these states as a function of important model parameters is then systematically investigated. Furthermore, it is shown which bond property increases the ability of cells to roll stably over a large range of applied flow rates. Finally, the model is applied to another biological process, the transport of spherical cargo particles by molecular motors. In analogy to the systems described so far, molecular motors can be considered as bonds that are able to move actively. In this part of the thesis the mean distance over which the cargo particles are transported is determined.
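Force-assisted rupture of weak receptor-ligand bonds is commonly modelled with Bell's law, k_off(F) = k0 * exp(F * x_b / kT): the off-rate grows exponentially with the applied load. The sketch below uses this standard model with invented parameter values (not those of the thesis) and draws exponentially distributed bond lifetimes, showing that loading shortens the mean lifetime.

```python
import math
import random

# Sketch: mean lifetime of a single slip bond under constant force,
# using Bell's law for the force-dependent off-rate. Parameters are
# illustrative, not taken from the thesis.
random.seed(2)

k0 = 1.0    # zero-force off-rate (1/s), assumed
x_b = 1.0   # reactive compliance of the bond (nm), assumed
kT = 4.1    # thermal energy at room temperature (pN*nm)

def mean_rupture_time(force_pN, trials=10000):
    """Average many exponentially distributed lifetimes at rate k_off(F)."""
    k_off = k0 * math.exp(force_pN * x_b / kT)
    return sum(random.expovariate(k_off) for _ in range(trials)) / trials

t_free = mean_rupture_time(0.0)     # close to 1/k0
t_loaded = mean_rupture_time(10.0)  # shorter: force accelerates rupture
```

In a rolling-adhesion simulation, draws of this kind decide at each time step whether a loaded bond at the rear of the contact zone ruptures while new bonds form at the front.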
The innovation of information technology has changed many aspects of our life. In the health care field, high-quality, large volumetric image data can be obtained, managed, and communicated with computer-integrated devices to support medical care. In this dissertation I propose several promising methods that could assist physicians in processing, observing, and communicating image data. They fall into my three research areas: telemedicine integration, medical image visualization, and image segmentation. These methods are also demonstrated in demo software that I developed. One research focus is the medical information storage standard used in telemedicine, DICOM, which is the predominant standard for the storage and communication of medical images. I propose a novel 3D image data storage method, which is lacking in the current DICOM standard. I also created a mechanism to make use of non-standard or private DICOM files. In this thesis I present several rendering techniques for medical image visualization that offer different display modes, both 2D and 3D: cutting through the data volume at arbitrary angles, rendering the surface shell of the data, and rendering the semi-transparent volume of the data. A hybrid segmentation approach, designed for the semi-automated segmentation of radiological images such as CT and MRI, is proposed in this thesis to extract an organ or region of interest from the image. This approach combines region-based and boundary-based methods. The hybrid approach consists of three steps: the first step obtains a coarse segmentation via fuzzy affinity and generates a homogeneity operator; the second step divides the image with a Voronoi diagram and reclassifies the regions using the operator, refining the segmentation from the previous step; the third step handles vague boundaries with a level set model.
Topics for future research are mentioned at the end, including a new supplement to the DICOM standard for storing segmentation information, visualization of multimodal image information, and extension of the segmentation approach to higher dimensions.
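The first, region-based step of a hybrid approach of this kind can be illustrated with a simplified sketch: seeded region growing with a plain intensity-homogeneity criterion standing in for the fuzzy-affinity computation (the Voronoi reclassification and level-set steps are omitted). The tiny image and the threshold below are invented for illustration.

```python
from collections import deque

# Simplified region-based segmentation: grow a region from a seed pixel,
# adding 4-connected neighbors whose intensity is within `tol` of the
# seed value. The image is a toy stand-in for a CT/MRI slice.
image = [
    [10, 11, 12, 90, 91],
    [10, 12, 11, 92, 90],
    [11, 10, 13, 91, 93],
    [50, 51, 52, 90, 92],
]

def grow_region(image, seed, tol):
    """Return the set of pixels connected to `seed` within `tol` of its value."""
    rows, cols = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - base) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

organ = grow_region(image, seed=(0, 0), tol=5)  # the low-intensity 3x3 block
```

The coarse region produced this way would then be refined by the second and third steps, which handle misclassified regions and vague boundaries.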
The Andean orogen is the most outstanding example of mountain building caused by the subduction of oceanic below continental lithosphere. The Andes formed by the subduction of the Nazca and Antarctic oceanic plates under the South American continent over at least ~200 million years. Tectonic and climatic conditions vary markedly along this north-south-oriented plate boundary, which thus represents an ideal natural laboratory to study tectonic and climatic segmentation processes and their possible feedbacks. Most of the seismic energy on Earth is released by earthquakes in subduction zones, such as the giant 1960 Mw 9.5 event in south-central Chile. However, the segmentation mechanisms of surface deformation during and between these giant events have remained poorly understood. The Andean margin is a key area to study seismotectonic processes because of its along-strike variability under similar plate kinematic boundary conditions. Active deformation has been widely studied in the central part of the Andes, but the south-central sector of the orogen has attracted fewer research efforts. This study focuses on tectonics at the Neogene and late Quaternary time scales in the Main Cordillera and coastal forearc of the south-central Andes. For both domains I document the existence of previously unrecognized active faults and present estimates of deformation rates and fault kinematics. Furthermore, these data are correlated to address fundamental mountain building processes like strain partitioning and large-scale segmentation. In the Main Cordillera domain and at the Neogene timescale, I integrate structural and stratigraphic field observations with published isotopic ages to propose four main phases of coupled styles of tectonics and distribution of volcanism and magmatism. These phases can be related to the geometry and kinematics of plate convergence.
At the late Pleistocene timescale, I integrate field observations with lake seismic and bathymetric profiles from the Lago Laja region, located near the Andean drainage divide. These data reveal Holocene extensional faults, which define the Lago Laja fault system. This fault system has no significant strike-slip component, contrasting with the Liquiñe-Ofqui dextral intra-arc system to the south, where Holocene strike-slip markers are ubiquitous. This contrast in structural style along the arc is coincident with a marked change in along-strike fault geometries in the forearc, across the Arauco Peninsula. On this basis I propose that a net gradient in the degree of partitioning of oblique subduction occurs across the Arauco transition zone. To the north, the margin-parallel component of oblique convergence is distributed in a wide zone of diffuse deformation, while to the south it is partitioned along an intra-arc, margin-parallel strike-slip fault zone. In the coastal forearc domain and at the Neogene timescale, I integrate structural and stratigraphic data from field observations, industry reflection-seismic profiles, and boreholes to emphasize the influence of climate-driven filling of the trench on the mechanics and kinematics of the margin. I show that forearc basins in the 34-45°S segment record Eocene to early Pliocene extension and subsidence followed by ongoing uplift and contraction since the late Pliocene. I interpret the first stage as caused by tectonic erosion due to high plate convergence rates and reduced trench fill. The subsequent stage, in turn, is related to accretion caused by low convergence rates and the rapid increase in trench fill after the onset of Patagonian glaciations and climate-driven exhumation at ~6-5 Ma.
On the late Quaternary timescale, I integrate offshore seismic profiles with the distribution of deformed marine terraces from Isla Santa María, dated by the radiocarbon method, to show that inverted reverse faulting controls the coastal geomorphology and segmentation of surface deformation. There, a cluster of microearthquakes illuminates one of these reverse faults, which presumably reaches the plate interface. Furthermore, I use Charles Darwin's accounts of coseismic uplift during the 1835 M>8 earthquake to propose that this active reverse fault has been mechanically coupled to the megathrust. This has important implications for the assessment of seismic hazards in this and other similar regions. These results underscore the need to study plate-boundary deformation processes at various temporal and spatial scales and to integrate geomorphologic, structural, stratigraphic, and geophysical data sets in order to understand the present distribution and causes of tectonic segmentation.
Answer Set Programming (ASP) emerged in the late 1990s as a new logic programming paradigm, having its roots in nonmonotonic reasoning, deductive databases, and logic programming with negation as failure. The basic idea of ASP is to represent a computational problem as a logic program whose answer sets correspond to solutions, and then to use an answer set solver for finding answer sets of the program. ASP is particularly suited for solving NP-complete search problems. Among these, we find applications to product configuration, diagnosis, and graph-theoretical problems, e.g. finding Hamiltonian cycles. Along different lines of ASP research, many extensions of the basic formalism have been proposed. The most intensively studied one is the modelling of preferences in ASP. Preferences constitute a natural and effective way of selecting preferred solutions from among a plethora of solutions to a problem. For example, preferences have been successfully used for timetabling, auctioning, and product configuration. In this thesis, we concentrate on preferences within answer set programming. Among several formalisms and semantics for preference handling in ASP, we focus on ordered logic programs with the underlying D-, W-, and B-semantics. In this setting, preferences are defined among the rules of a logic program. They select preferred answer sets among the (standard) answer sets of the underlying logic program. Up to now, those preferred answer sets have been computed either via a compilation method or by meta-interpretation. Hence, the question arises whether and how preferences can be integrated into an existing ASP solver. To answer this question, we develop an operational graph-based framework for the computation of answer sets of logic programs. Then, we integrate preferences into this operational approach. We empirically observe that our integrative approach performs better than the compilation method or meta-interpretation in most cases.
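The notion of an answer set can be made concrete with a small sketch: for a ground normal logic program, a candidate interpretation is an answer set iff it equals the least model of its Gelfond-Lifschitz reduct (the rules whose negative body is disjoint from the candidate, with negation stripped). The toy program below, a choice between atoms a and b, is illustrative only and is checked by brute-force enumeration, not by a solver.

```python
from itertools import chain, combinations

# A ground normal program as (head, positive_body, negative_body) triples.
program = [
    ("a", [], ["b"]),   # a :- not b.
    ("b", [], ["a"]),   # b :- not a.
    ("c", ["a"], []),   # c :- a.
]
atoms = {"a", "b", "c"}

def least_model(definite_rules):
    """Fixpoint of the immediate-consequence operator of a definite program."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in model and all(p in model for p in pos):
                model.add(head)
                changed = True
    return model

def is_answer_set(candidate):
    """Gelfond-Lifschitz check: keep rules whose negative body is disjoint
    from the candidate, drop the negation, and compare least models."""
    reduct = [(h, pos) for h, pos, neg in program
              if not any(n in candidate for n in neg)]
    return least_model(reduct) == candidate

subsets = chain.from_iterable(combinations(sorted(atoms), r) for r in range(4))
answer_sets = [set(s) for s in subsets if is_answer_set(set(s))]
# The two answer sets are {b} and {a, c}.
```

A rule-level preference, say preferring the first rule over the second, would then select {a, c} as the single preferred answer set; the D-, W-, and B-semantics differ in exactly how such selections are made.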
Another research issue in ASP is optimization methods that remove redundancies, as also found in database query optimizers. For these purposes, the rather recently suggested notion of strong equivalence for ASP can be used. If a program is strongly equivalent to a subprogram of itself, then one can always use the subprogram instead of the original program, a technique which serves as an effective optimization method. Up to now, strong equivalence has not been considered for logic programs with preferences. In this thesis, we tackle this issue and generalize the notion of strong equivalence to ordered logic programs. We give necessary and sufficient conditions for the strong equivalence of two ordered logic programs. Furthermore, we provide program transformations for ordered logic programs and show to what extent preferences can be simplified. Finally, we present two new applications for preferences within answer set programming. First, we define new procedures for group decision making, which we apply to the problem of scheduling a group meeting. As a second new application, we reconstruct within ASP a linguistic problem appearing in German dialects. Regarding linguistic studies, there is an ongoing debate about how unique the rule systems of language are in human cognition. The reconstruction of grammatical regularities with tools from computer science has consequences for this debate: if grammars can be modelled this way, then they share core properties with other non-linguistic rule systems.