@phdthesis{D'Agata2014, author = {D'Agata, Valeria Costanza}, title = {A partire dalla Somaestetica di Shusterman}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-72650}, school = {Universit{\"a}t Potsdam}, pages = {280}, year = {2014}, abstract = {With his proposal of somaesthetics, articulated fundamentally into analytic, pragmatic and practical branches, Richard Shusterman intends first of all to provide a methodological framework, a unified orientation capable of tracing, reconstructing and bringing to light, within heterogeneous theoretical reflections and somatic practices, the shared need to restore the bodily dimension as a primary mode of being in the world. Recovering Baumgarten's conception of Aesthetica as inferior gnoseology, art of the analogue of reason, science of sensible knowledge, somaesthetics seeks to give new impulse to the deepest root of aesthetics and philosophy, one that grasps life in its process of continuous metamorphosis and regeneration, in that vital breath which, however conscious it may become, can never be fully grasped by discursive reason, situated rather in that primordial space in which consciousness and body belong to each other, in which the subject cannot yet be individualised because it is fused with the environment, and cannot be entirely privatised because it is intrinsically shaped by the social fabric to which it in turn dynamically gives form. 
Starting, then, from this revaluation of the concept of Aisthesis, the discipline of somaesthetics aims at an intensification of sensibility, perception, emotion and feeling, locating precisely in the soma the source of those "inferior" faculties, irreducible to the purely intellectual ones, which give access to the qualitative dimensions of experience and allow the human being to emerge and mature as an indivisible being, one that cannot be encountered by a thought that denies its unity in the name of fictitious and lacerating dichotomies. In the body, in fact, sociocultural rules, conventions, norms and values silently take root, determining and at times limiting the configuration and expression of the sensations, perceptions, cognitions, thoughts, actions, volitions and dispositions of a subject who has always been embedded in a Mitwelt (shared world); it is therefore precisely to the body that we must turn in order to reconfigure more authentic modes of expression of a subject who creates dynamic equilibria to maintain a coherent relation with the wider social, cultural and environmental context. Its openness to dialogue with heterogeneous philosophical positions and its intrinsic multidisciplinarity explain the centrality of somaesthetics in the contemporary international debate in aesthetics: addressing both theoretical formulation and concrete practical application, it seeks to revalue the soma as intelligent, sentient, intentional and active corporeality, irreducible to the sinful notion of caro (mere physical flesh devoid of life and sensation). 
Through reflection on, and the practice of, techniques of somatic awareness, the ways come to the fore in which an ever more conscious relationship with one's own corporeality, as mediately experienced and immediately lived and felt, offers authentic occasions for the progressive realisation of oneself first of all as a person: capable of self-cultivation, of conscious reflection on one's embodied habits, of creative self-restructuring, and of intensified sensory perception and appreciation, both in concrete everyday action and in the more properly aesthetic dimension of artistic reception, enjoyment and creation. The essentially pragmatist orientation of Shusterman's reflection thus traces a fundamentally relational conception of aesthetics, able to place itself within the continuously becoming movement of genuine transformation and passage between the physical, bodily, psychic and spiritual dimensions of the subject, whose interaction, and whose mutual flowing into one another, can be profoundly enriched through a progressive and ever-growing awareness of the richness of the bodily dimension as intentional, perceptive, sentient and volitional, as much as vulnerable, limiting, transient and pathic. The present work retraces and deepens some of Shusterman's principal references, focusing mainly on the pragmatist root of his proposal and on its confrontation with the German-language debate spanning aesthetics, philosophical anthropology, neophenomenology and medical anthropology, in order to regain a notion of the soma which, precisely through contrast, through the irreducible impact with the annihilating power of limit situations and of crisis, can acquire a more complex and richer harmonising value for the intrinsic and manifold dimensions that constitute the fabric of embodied subjectivity. 
In particular, the first chapter (1. Somaestetica) clarifies the essentially pragmatist roots of Shusterman's proposal and shows how deeply rooted modes of experience can be destructured and thus reconfigured, bringing to consciousness habits and ways of living that become fixed at the somatic level, for the most part unnoticed. The comparison with the notion of habitus, whose invisible and socially determined somatic matrix Pierre Bourdieu brilliantly brings to light, reveals how every human manifestation is sustained by the incorporation of norms, beliefs and values that determine and at times limit the expression, the development, and even the predispositions and inclinations of individuals. And it is precisely by intervening at this level that freedom can be restored to our choices, opening the way to the essentially qualitative dimensions of experience, which in Dewey's sense is a unitary and cohesive holistic whole forming the background of organism-environment relations, an inextricable interweaving of theory and praxis, particular and universal, psyche and soma, reason and emotion, the perceptual and the conceptual: in short, that immediate bodily knowledge which structures the background against which consciousness manifests itself.}, language = {it} } @phdthesis{TabaresJimenez2021, author = {Tabares Jimenez, Ximena del Carmen}, title = {A palaeoecological approach to savanna dynamics and shrub encroachment in Namibia}, doi = {10.25932/publishup-49281}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-492815}, school = {Universit{\"a}t Potsdam}, pages = {121}, year = {2021}, abstract = {The spread of shrubs in Namibian savannas raises questions about the resilience of these ecosystems to global change. This makes it necessary to understand the past dynamics of the vegetation, since there is no consensus on whether shrub encroachment is a new phenomenon, nor on its main drivers. 
However, the lack of long-term vegetation datasets for the region and the scarcity of suitable palaeoecological archives make reconstructing the past vegetation and land cover of these savannas a challenge. To help meet this challenge, this study addresses three main research questions: 1) Is pollen analysis a suitable tool to reflect the vegetation change associated with shrub encroachment in savanna environments? 2) Does the current encroached landscape correspond to an alternative stable state of savanna vegetation? 3) To what extent do pollen-based quantitative vegetation reconstructions reflect changes in past land cover? The research focuses on north-central Namibia where, despite being the region most affected by shrub invasion, particularly since the 21st century, little is known about the dynamics of this phenomenon. Field-based vegetation data were compared with modern pollen data to assess their correspondence in terms of composition and diversity along precipitation and grazing intensity gradients. In addition, two sediment cores from Lake Otjikoto were analysed to reveal changes in vegetation composition that have occurred in the region over the past 170 years and their possible drivers. For this, a multiproxy approach (fossil pollen, sedimentary ancient DNA (sedaDNA), biomarkers, compound-specific carbon (δ13C) and deuterium (δD) isotopes, bulk carbon isotopes (δ13Corg), grain size, geochemical properties) was applied at high taxonomic and temporal resolution. REVEALS modelling of the fossil pollen record from Lake Otjikoto was run to quantitatively reconstruct past vegetation cover. For this, we first derived pollen productivity estimates (PPEs) of the most relevant savanna taxa in the region using the extended R-value model and two pollen dispersal options (Gaussian plume model and Lagrangian stochastic model). The REVEALS-based vegetation reconstruction was then validated using remote-sensing-based regional vegetation data. 
The results show that modern pollen reflects the composition of the vegetation well, but diversity less well. Interestingly, precipitation and grazing explain a significant amount of the compositional change in the pollen and vegetation spectra. The multiproxy record shows that a state change from open Combretum woodland to encroached Terminalia shrubland can occur within a century, and that the transition between states spans around 80 years and is characterized by a unique vegetation composition. This transition is supported by gradual environmental changes induced by management (i.e. broad-scale logging for the mining industry, selective grazing and reduced fire activity associated with intensified farming) and related land-use change. Derived environmental changes (i.e. reduced soil moisture, reduced grass cover, changes in species composition and competitiveness, reduced fire intensity) may have affected the resilience of open Combretum woodlands, making them more susceptible to shifting to an encroached state through stochastic events, such as consecutive wet or drought years, and through elevated pCO2 concentrations. We assume that the resulting encroached state was further stabilized by feedback mechanisms that favour the establishment and competitiveness of woody vegetation. The REVEALS-based quantitative estimates of plant taxa indicate the predominance of a semi-open landscape throughout the 20th century and a reduction in grass cover below 50\% since the 21st century, associated with the spread of encroacher woody taxa. Cover estimates show a close match with regional vegetation data, providing support for the vegetation dynamics inferred from the multiproxy analyses. Reasonable PPEs were obtained for all woody taxa, but not for Poaceae. In conclusion, pollen analysis is a suitable tool for reconstructing past vegetation dynamics in savannas. 
However, because pollen cannot identify grasses beyond the family level, a multiproxy approach, particularly the use of sedaDNA, is required. I was able to separate stable encroached states from mere woodland phases, to identify drivers, and to speculate about related feedbacks. In addition, the REVEALS-based quantitative vegetation reconstruction clearly reflects the magnitude of the changes in vegetation cover that occurred during the last 130 years, despite the limitations of some PPEs. This research provides new insights into pollen-vegetation relationships in savannas and highlights the importance of multiproxy approaches when reconstructing past vegetation dynamics in semi-arid environments. It also provides the first time series with sufficient taxonomic resolution to show changes in vegetation composition during shrub encroachment, as well as the first quantitative reconstruction of past land cover in the region. These results help to identify the different stages in savanna dynamics and can be used to calibrate predictive models of vegetation change, which are highly relevant to land management.}, language = {en} } @phdthesis{Gebauer2008, author = {Gebauer, Denis}, title = {A novel view on the early stage of crystallization}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-19818}, school = {Universit{\"a}t Potsdam}, year = {2008}, abstract = {This thesis provides a novel view on the early stage of crystallization, utilizing calcium carbonate as a model system. Calcium carbonate is of great economic, scientific and ecological importance: it is a major contributor to water hardness and the most abundant biomineral, and it forms huge amounts of geological sediments, thus binding large amounts of carbon dioxide. The primary experiments are based on the evolution of supersaturation via slow addition of dilute calcium chloride solution into dilute carbonate buffer. 
The time-dependent measurement of the Ca2+ potential and concurrent titration at constant pH facilitate the calculation of the amount of calcium and carbonate ions bound in pre-nucleation-stage clusters, which had never before been detected experimentally, and in the new phase after nucleation, respectively. Analytical ultracentrifugation independently proves the existence of pre-nucleation-stage clusters, and shows that the clusters forming at pH = 9.00 have an approximate time-averaged size of altogether 70 calcium and carbonate ions. Both experiments show that pre-nucleation-stage cluster formation can be described by means of equilibrium thermodynamics. Effectively, the cluster formation equilibrium is characterized physico-chemically by means of a multiple-binding equilibrium of calcium ions to a 'lattice' of carbonate ions. The evaluation yields the standard Gibbs energy for the formation of calcium/carbonate ion pairs in clusters, which exhibits a maximal value of approximately 17.2 kJ mol^-1 at pH = 9.75, corresponding to a minimal binding strength in clusters at this pH value. Nucleated calcium carbonate particles are amorphous at first and subsequently become crystalline. At high binding strength in clusters, only calcite (the thermodynamically stable polymorph) is finally obtained, while with decreasing binding strength in clusters, vaterite (the thermodynamically least stable polymorph) and presumably aragonite (the polymorph of intermediate thermodynamic stability) are additionally obtained. Concurrently, two different solubility products of nucleated amorphous calcium carbonate (ACC) are detected at low and at high binding strength in clusters (ACC I: 3.1 x 10^-8 M^2; ACC II: 3.8 x 10^-8 M^2), respectively, indicating the precipitation of at least two different ACC species, with the clusters providing the precursor species of ACC. It is plausible that ACC I may relate to calcitic ACC, i.e. 
ACC exhibiting short-range order similar to the long-range order of calcite, and that ACC II may relate to vateritic ACC, each subsequently transforming into the corresponding crystalline polymorph as discussed in the literature. Detailed analysis of nucleated particles forming at minimal binding strength in clusters (pH = 9.75) by means of SEM, TEM, WAXS and light microscopy shows that predominantly vaterite with traces of calcite forms. The crystalline particles of early stages are composed of nano-crystallites of approximately 5 to 10 nm in size, which are aligned with high mutual order, as in mesocrystals. The analysis of precipitation at pH = 9.75 in the presence of additives (polyacrylic acid (pAA) as a model compound for scale inhibitors, and peptides exhibiting calcium carbonate binding affinity as model compounds for crystal modifiers) shows that ACC I and ACC II are precipitated in parallel: pAA stabilizes ACC II particles against crystallization, leading to their dissolution in favour of crystals that form from ACC I, so that exclusively calcite is finally obtained. Analogously, the peptide additives inhibit the formation of calcite, and in the case of one of the peptide additives exclusively vaterite is finally obtained. These findings show that classical nucleation theory is hardly applicable to the nucleation of calcium carbonate. The metastable system is stabilized remarkably due to cluster formation, and the nucleation-relevant species are the clusters forming by means of equilibrium thermodynamics, not single ions. Most likely, cluster formation is a common phenomenon occurring during the precipitation of sparingly soluble compounds, as qualitatively shown for calcium oxalate and calcium phosphate. 
This finding is important for the fundamental understanding of crystallization and of nucleation inhibition and modification by additives, with impact on materials of great scientific and industrial importance, as well as for a better understanding of mass transport in crystallization. It can provide a novel basis for simulation and modelling approaches. New mechanisms of scale formation in bio- and geomineralization, and also of scale inhibition, need to be considered on the basis of the newly reported reaction channel.}, language = {en} } @phdthesis{RudolphMohr2013, author = {Rudolph-Mohr, Nicole}, title = {A novel non-invasive optical method for quantitative visualization of pH and oxygen dynamics in soils}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-66993}, school = {Universit{\"a}t Potsdam}, year = {2013}, abstract = {In soils and sediments there is a strong coupling between local biogeochemical processes and the distribution of water, electron acceptors, acids and nutrients. The two are closely related and affect each other from small to larger scales. Soil structures such as aggregates, roots, layers or macropores enhance the patchiness of these distributions. At the same time, it is difficult to access the spatial distribution and temporal dynamics of these parameters. Non-invasive imaging techniques with high spatial and temporal resolution overcome these limitations, and new non-invasive techniques are needed to study the dynamic interaction of plant roots with the surrounding soil, as well as the complex physical and chemical processes in structured soils. In this study we developed an efficient non-destructive in-situ method to determine biogeochemical parameters relevant to plant roots growing in soil: a quantitative fluorescence imaging method suitable for visualizing the spatial and temporal pH changes around roots. 
We adapted the fluorescence imaging set-up and coupled it with neutron radiography to simultaneously study root growth, oxygen depletion by respiration activity and root water uptake. The combined set-up was subsequently applied to a structured soil system to map the patchy structure of oxic and anoxic zones induced by a chemical oxygen consumption reaction at spatially varying water contents. Moreover, results from a similar fluorescence imaging technique for nitrate detection were complemented by a numerical modeling study in which we used the imaging data to simulate biodegradation under anaerobic, nitrate-reducing conditions.}, language = {en} } @phdthesis{Falter2016, author = {Falter, Daniela}, title = {A novel approach for large-scale flood risk assessments}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-90239}, school = {Universit{\"a}t Potsdam}, pages = {95}, year = {2016}, abstract = {In the past, floods were mainly managed by flood control mechanisms. The focus was set on the reduction of the flood hazard; the potential consequences were of minor interest. Nowadays, river flooding is increasingly seen from the risk perspective, including possible consequences. Moreover, the large-scale picture of flood risk has become increasingly important for disaster management planning, national risk assessments and the (re-)insurance industry. It is therefore widely accepted that risk-oriented flood management approaches at the basin scale are needed. However, large-scale flood risk assessment methods for areas of several 10,000 km² are still in their early stages. Traditional flood risk assessments are performed reach-wise, assuming constant probabilities for the entire reach or basin. This might be helpful on a local basis, but where large-scale patterns are important this approach is of limited use. Assuming a T-year flood (e.g. 100 years) for the entire river network is unrealistic and would lead to an overestimation of flood risk at the large scale. 
Additionally, due to the lack of damage data, the probability of peak discharge or rainfall is usually used as a proxy for damage probability when deriving flood risk. With a continuous and long-term simulation of the entire flood risk chain, the spatial variability of probabilities could be considered and flood risk could be derived directly from damage data in a consistent way. The objective of this study is the development and application of a full flood risk chain, appropriate for the large scale and based on long-term, continuous simulation. The novel approach of 'derived flood risk based on continuous simulations' is introduced, in which a synthetic discharge time series is used as input into flood impact models and flood risk is derived directly from the resulting synthetic damage time series. The bottleneck at this scale is the hydrodynamic simulation. To find suitable hydrodynamic approaches for the large scale, a benchmark study with simplified 2D hydrodynamic models was performed. A raster-based approach with inertia formulation and a relatively high resolution of 100 m, in combination with a fast 1D channel routing model, was chosen. To investigate the suitability of the continuous simulation of a full flood risk chain at the large scale, all model parts were integrated into a new framework, the Regional Flood Model (RFM). RFM consists of the hydrological model SWIM, a 1D hydrodynamic river network model, a 2D raster-based inundation model and the flood loss model FELMOps+r. Subsequently, the model chain was applied to the Elbe catchment, one of the largest catchments in Germany. For the proof of concept, a continuous simulation was performed for the period 1990-2003. Results were evaluated and, as far as possible, validated against observed data for this period. Although each model part introduced its own uncertainties, results and runtime were generally found to be adequate for the purpose of continuous simulation at the large catchment scale. 
Finally, RFM was applied to a meso-scale catchment in the east of Germany to perform, for the first time, a flood risk assessment with the novel approach of 'derived flood risk assessment based on continuous simulations'. To this end, RFM was driven by long-term synthetic meteorological input data generated by a weather generator. A virtual climate time series of 100 x 100 years was generated and served as input to RFM, which in turn provided 100 x 100 years of spatially consistent river discharge series, inundation patterns and damage values. On this basis, flood risk curves and expected annual damage could be derived directly from damage data, providing a large-scale picture of flood risk. In contrast to traditional flood risk analysis, where homogeneous return periods are assumed for the entire basin, the presented approach provides a coherent large-scale picture of flood risk in which the spatial variability of occurrence probability is respected. Additionally, data and methods are consistent. Catchment and floodplain processes are represented in a holistic way: antecedent catchment conditions are implicitly taken into account, as are physical processes like storage effects, flood attenuation or channel-floodplain interactions and related damage-influencing effects. Finally, the simulation of a virtual period of 100 x 100 years, and consequently a large data set of flood loss events, enabled the calculation of flood risk directly from damage distributions. Problems associated with transferring probabilities of rainfall or peak runoff to probabilities of damage, as in traditional approaches, are thus bypassed. 
RFM and the 'derived flood risk approach based on continuous simulations' have the potential to provide flood risk statements for national planning, re-insurance aspects or other questions where spatially consistent, large-scale assessments are required.}, language = {en} } @phdthesis{Ghasemzadeh2005, author = {Ghasemzadeh, Mohammad}, title = {A new algorithm for the quantified satisfiability problem, based on zero-suppressed binary decision diagrams and memoization}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-6378}, school = {Universit{\"a}t Potsdam}, year = {2005}, abstract = {Quantified Boolean formulas (QBFs) play an important role in theoretical computer science. QBF extends propositional logic in such a way that many advanced forms of reasoning can be easily formulated and evaluated. In this dissertation we present ZQSAT, an algorithm for evaluating quantified Boolean formulas. ZQSAT is based on zero-suppressed binary decision diagrams (ZBDDs), a variant of BDDs, and an adapted version of the DPLL algorithm. It has been implemented in C using the CUDD (Colorado University Decision Diagram) package. The capability of ZBDDs to store sets of subsets efficiently enabled us to store the clauses of a QBF very compactly and allowed us to embed memoization into the DPLL algorithm. This let us implement the search algorithm in such a way that the results of all previously solved subformulas can be stored and reused with little overhead. ZQSAT can solve some sets of standard QBF benchmark problems (known to be hard for DPLL-based algorithms) faster than the best existing solvers. In addition to prenex-CNF, ZQSAT accepts prenex-NNF formulas. 
We show and prove how this capability can be exponentially beneficial.}, subject = {Bin{\"a}res Entscheidungsdiagramm}, language = {en} } @phdthesis{Thiede2019, author = {Thiede, Tobias}, title = {A multiscale analysis of additively manufactured lattice structures}, doi = {10.25932/publishup-47041}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-470418}, school = {Universit{\"a}t Potsdam}, pages = {xi, 97, LIII}, year = {2019}, abstract = {Additive manufacturing (AM) in terms of laser powder-bed fusion (L-PBF) offers new prospects regarding the design of parts and therefore enables the production of lattice structures. Such lattice structures are to be implemented in various industrial applications (e.g. gas turbines), for example for material savings or as cooling channels. However, internal defects, residual stress and structural deviations from the nominal geometry are unavoidable. In this work, the structural integrity of lattice structures manufactured by means of L-PBF was investigated non-destructively with a multiscale approach. A workflow for quantitative 3D powder analysis in terms of particle size, particle shape, particle porosity, inter-particle distance and packing density was established. Synchrotron computed tomography (CT) was used to correlate the packing density with the particle size and particle shape. It was also observed that at least about 50\% of the powder porosity was released during production of the struts. Struts, the basic components of lattice structures, were investigated by means of laboratory CT, with a focus on the influence of the build angle on part porosity and surface quality. The surface topography analysis was advanced by the quantitative characterisation of re-entrant surface features. This characterisation was compared with conventional surface parameters, showing their complementary information, but also the need for AM-specific surface parameters. 
The mechanical behaviour of the lattice structure was investigated with in-situ CT under compression and successive digital volume correlation (DVC). The deformation was found to be knot-dominated: the lattice therefore folds unit-cell layer by unit-cell layer. The residual stress was determined experimentally for the first time in such lattice structures. Neutron diffraction was used for the non-destructive 3D stress investigation. The principal stress directions and values were determined as a function of the number of measured directions. While a significant uni-axial stress state was found in the strut, a more hydrostatic stress state was found in the knot. In both cases, strut and knot, at least seven measured directions were needed to find reliable principal stress directions.}, language = {en} } @phdthesis{Zass2021, author = {Zass, Alexander}, title = {A multifaceted study of marked Gibbs point processes}, doi = {10.25932/publishup-51277}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-512775}, school = {Universit{\"a}t Potsdam}, pages = {vii, 104}, year = {2021}, abstract = {This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood--Salsburg equations, the Dobrushin contraction principle and disagreement percolation. We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. 
The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight. We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques, and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which such a problem has exactly one solution. Finally, we further explore the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness: applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation. 
Throughout this thesis, we illustrate our theoretical results with various examples from both classical statistical mechanics and stochastic geometry.}, language = {en} } @phdthesis{Mauri2014, author = {Mauri, Marco}, title = {A model for sigma factor competition in bacterial cells}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-72098}, school = {Universit{\"a}t Potsdam}, pages = {167}, year = {2014}, abstract = {Bacteria respond to changing environmental conditions by switching the global pattern of expressed genes. In response to specific environmental stresses, the cell activates several stress-specific molecules such as sigma factors. These reversibly bind the RNA polymerase to form the so-called holoenzyme and direct it towards the appropriate stress response genes. In exponentially growing E. coli cells, the majority of the transcriptional activity is carried out by the housekeeping sigma factor, while stress responses are often under the control of alternative sigma factors. Different sigma factors compete for binding to a limited pool of RNA polymerase (RNAP) core enzymes, providing a mechanism for cross-talk between genes or gene classes via the sharing of expression machinery. To quantitatively analyze the contribution of sigma factor competition to global changes in gene expression, we develop a thermodynamic model that describes binding between sigma factors and core RNAP at equilibrium, transcription, non-specific binding to DNA and the modulation of the availability of the molecular components. Association of the housekeeping sigma factor with RNAP is generally favored by its abundance and its higher binding affinity to the core. In order to promote transcription by alternative sigma subunits, the bacterial cell reversibly modulates the transcriptional efficiency through several strategies, such as anti-sigma factors, 6S RNA and, more generally, any kind of transcriptional regulator (e.g. activators or inhibitors). 
By shifting the outcome of sigma factor competition for the core, these modulators bias the transcriptional program of the cell. The model is validated by comparison with in vitro competition experiments, with which it shows excellent agreement. We observe that transcription is affected via the modulation of the concentrations of the different types of holoenzymes, so saturated promoters are only weakly affected by sigma factor competition. However, in the case of overlapping promoters or promoters recognized by two types of sigma factors, we find that even saturated promoters are strongly affected. Active transcription effectively lowers the affinity between the sigma factor driving it and the core RNAP, resulting in complex cross talk effects and raising the question of how relevant their in vitro measurements are in the cell. We also estimate that sigma factor competition is not strongly affected by non-specific binding of core RNAPs, sigma factors, and holoenzymes to DNA. Finally, we analyze the role of increased core RNAP availability upon the shut-down of ribosomal RNA transcription during stringent response. We find that passive up-regulation of alternative sigma-dependent transcription is not only possible, but also displays hypersensitivity based on sigma factor competition. 
Our theoretical analysis thus provides support for a significant role of passive control during that global switch of the gene expression program and gives new insights into RNAP partitioning in the cell.}, language = {en} } @phdthesis{Gerling2022, author = {Gerling, Marten Tobias}, title = {A microfluidic system for high-precision image-based live cell sorting using dielectrophoretic forces}, doi = {10.25932/publishup-58742}, url = {http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-587421}, school = {Universit{\"a}t Potsdam}, pages = {vii, 87, VI}, year = {2022}, abstract = {An important goal in biotechnology and (bio-) medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required. This precise differentiation is a challenge for a growing number of available methods. Since the physiological properties of the cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, this represents a non-addressable parameter, requiring new methods for the identification and isolation of target cells. Consequently, a variety of new flow-based methods have been developed and presented in recent years utilising 2D imaging data to identify target cells within a sample. As these methods aim for high throughput, the devices developed typically require highly complex fluid handling techniques, making them expensive while offering limited image quality. In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). 
In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making the system both affordable and user-friendly. With the developed cell sorting system, cells could be sorted reliably and efficiently according to their cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95\% and about 85\% of the sorted cells could be recovered from the system. The results obtained were in good agreement with theoretical considerations. The achieved throughput of the system was up to 12,000 cells per hour. Cell viability studies indicated a high biocompatibility of the system. The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.}, language = {en} }