The simulation of broad-band (0.1 to 10+ Hz) ground shaking over deep and spatially extended sedimentary basins at regional scales is challenging. We evaluate the ground shaking from a potential M 6.5 earthquake in the southern Lower Rhine Embayment, one of the most important areas of earthquake recurrence north of the Alps, close to the city of Cologne in Germany. In a first step, information from geological investigations, seismic experiments and boreholes is combined to derive a harmonized 3D velocity and attenuation model of the sedimentary layers. Three alternative approaches are then applied and compared to evaluate the impact of the sedimentary cover on ground-motion amplification. The first approach builds on existing response-spectra ground-motion models whose amplification factors take the influence of the sedimentary layers into account empirically, through a standard parameterization. In the second approach, site-specific 1D amplification functions are computed from the 3D basin model. Using a random vibration theory approach, we adjust the empirical response spectra predicted for soft-rock conditions by local site amplification factors; amplifications and the associated ground motions are predicted both in the Fourier and in the response-spectra domain. In the third approach, hybrid physics-based ground-motion simulations are used to predict time histories for soft-rock conditions, which are subsequently modified using the 1D site-specific amplification functions computed in the second approach. At large distances and short periods, the differences between the three approaches become less notable because of the significant attenuation within the sedimentary layers. At intermediate and long periods, generic empirical ground-motion models predict lower levels of amplification from sedimentary soils than methods that take site-specific 1D amplification functions into account. In the near-source region, hybrid physics-based ground-motion simulations illustrate the potentially large variability of ground motion due to finite-source effects.
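To illustrate what a 1D site amplification function is in the simplest case, the sketch below evaluates the textbook elastic SH-wave transfer function of a single sediment layer over a rock half-space at vertical incidence. The layer parameters are purely hypothetical and this is not the 3D basin model, attenuation treatment, or random-vibration-theory adjustment used in the study; it is only meant to show the resonance-and-impedance structure such functions capture.

```python
import numpy as np

# Hypothetical single-layer site; all values are illustrative only.
H = 300.0         # sediment thickness (m)
vs_sed = 400.0    # sediment shear-wave velocity (m/s)
rho_sed = 1900.0  # sediment density (kg/m^3)
vs_rock = 2000.0  # rock shear-wave velocity (m/s)
rho_rock = 2400.0 # rock density (kg/m^3)

# Impedance ratio (sediment over rock); smaller values give stronger amplification.
alpha = (rho_sed * vs_sed) / (rho_rock * vs_rock)

def amplification(f):
    """Elastic 1D SH transfer function of one layer over a half-space
    (vertical incidence, no damping)."""
    kH = 2.0 * np.pi * f * H / vs_sed
    return 1.0 / np.sqrt(np.cos(kH) ** 2 + (alpha * np.sin(kH)) ** 2)

freqs = np.linspace(0.1, 10.0, 500)
amp = amplification(freqs)
f0 = vs_sed / (4.0 * H)  # fundamental resonance frequency of the layer
print(f"f0 = {f0:.2f} Hz, peak amplification ~ {1.0 / alpha:.1f}")
print(f"max amplification over 0.1-10 Hz: {amp.max():.1f}")
```

In the study itself, the amplification functions are computed from full 1D profiles extracted from the 3D velocity and attenuation model rather than from a single layer, and are then applied to the soft-rock motions as described above.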
Optimization is a core part of technological advancement and is usually heavily aided by computers. However, since many optimization problems are hard, it is unrealistic to expect an optimal solution within reasonable time. Hence, heuristics are employed, that is, computer programs that try to produce solutions of high quality quickly. One special class is that of estimation-of-distribution algorithms (EDAs), which are characterized by maintaining a probabilistic model of the problem domain that they evolve over time. In an iterative fashion, an EDA uses its model to generate a set of solutions, which it then uses to refine the model such that the probability of producing good solutions increases.
In this thesis, we theoretically analyze the class of univariate EDAs over the Boolean domain, that is, over the space of all length-n bit strings. In this setting, the probabilistic model of a univariate EDA consists of an n-dimensional probability vector in which each component denotes the probability of sampling a 1 at the corresponding position when generating a bit string.
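To make the model concrete, the following minimal sketch implements a univariate EDA of the UMDA type over length-n bit strings: in each generation it samples a population from the probability vector, selects the best solutions, and sets each frequency to the observed 1-frequency among them. The population sizes, the margins, and the OneMax objective are illustrative assumptions, not the specific settings analyzed in the thesis.

```python
import random

n = 20             # number of bit positions (problem size)
lam, mu = 50, 25   # offspring population size and number of selected solutions

def onemax(x):
    """Classic benchmark: number of 1-bits in the string."""
    return sum(x)

# Univariate model: one frequency per bit position, initialized to 1/2.
p = [0.5] * n

for _ in range(200):
    # Sample lam bit strings, each bit independently with its own frequency.
    pop = [[1 if random.random() < p[i] else 0 for i in range(n)] for _ in range(lam)]
    # Select the mu best solutions and set each frequency to the observed
    # 1-frequency among them (UMDA-style update).
    best = sorted(pop, key=onemax, reverse=True)[:mu]
    p = [sum(x[i] for x in best) / mu for i in range(n)]
    # Clamp frequencies to the margins 1/n and 1 - 1/n so no bit gets fixed forever.
    p = [min(max(f, 1 / n), 1 - 1 / n) for f in p]

print(max(onemax(x) for x in pop))
```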
My contribution follows two main directions: first, we analyze general inherent properties of univariate EDAs. Second, we determine the expected run times of specific EDAs on benchmark functions from theory. In the first part, we characterize when EDAs are unbiased with respect to the problem encoding. We then consider a setting where all solutions look equally good to an EDA, and we show that the probabilistic model of an EDA quickly evolves into an incorrect model if it is always updated such that it does not change in expectation.
In the second part, we first show that the algorithms cGA and MMAS-fp are able to efficiently optimize a noisy version of the classical benchmark function OneMax. We perturb the function by adding Gaussian noise with variance σ², and we prove that the algorithms are able to generate the true optimum in time polynomial in σ² and the problem size n. For MMAS-fp, we generalize this result to linear functions. Further, we prove a run time of Ω(n log n) for the algorithm UMDA on (noise-free) OneMax. Last, we introduce a new algorithm that is able to optimize the benchmark functions OneMax and LeadingOnes both in time O(n log n), which is a novelty for heuristics in the domain we consider.
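For intuition about the noisy setting, a compact genetic algorithm (cGA) iteration on OneMax with additive Gaussian noise can be sketched as follows. The hypothetical population size K, the noise level, and the iteration budget are illustrative choices and not the parameterization used in the proofs.

```python
import random

n = 50        # problem size
sigma = 2.0   # standard deviation of the additive Gaussian noise (illustrative)
K = 100       # hypothetical population size of the cGA

def noisy_onemax(x):
    """OneMax perturbed by additive Gaussian noise with variance sigma^2."""
    return sum(x) + random.gauss(0.0, sigma)

p = [0.5] * n
for _ in range(20000):
    # Sample two offspring independently from the frequency vector.
    x = [1 if random.random() < p[i] else 0 for i in range(n)]
    y = [1 if random.random() < p[i] else 0 for i in range(n)]
    if noisy_onemax(x) < noisy_onemax(y):
        x, y = y, x  # make x the (noisy) winner of the comparison
    for i in range(n):
        if x[i] != y[i]:
            # Shift the frequency by 1/K towards the winner's bit value,
            # keeping it inside the margins [1/n, 1 - 1/n].
            p[i] += (1 / K) if x[i] == 1 else -(1 / K)
            p[i] = min(max(p[i], 1 / n), 1 - 1 / n)

best_guess = [1 if f > 0.5 else 0 for f in p]
print(sum(best_guess), "out of", n)
```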
Affect Disposition(ing)
(2018)
The “affective turn” has been primarily concerned not with what affect is, but with what it does. This article focuses on yet another shift, towards how affect gets organized, i.e., how it is produced, classified, and controlled. It proposes a genealogical as well as a critical approach to the organization of affect and distinguishes between several “affect disposition(ing) regimes”, meaning paradigms of how to interpret and manage affects, e.g., encoding them as byproducts of demonic possession, judging them with reference to a moralistic framework, or subsuming them under an industrial regime. Bernard Stiegler’s concept of psychopower will be engaged at one point and expanded to include social media and affective technologies, especially Affective Computing. Finally, the industrialization and cybernetization of affect will be contrasted with poststructuralist interpretations of affects as events.
In the current paradigm of cosmology, the formation of large-scale structures is mainly driven by non-radiating dark matter, which makes up the dominant part of the matter budget of the Universe. Cosmological observations, however, rely on the detection of luminous galaxies, which are biased tracers of the underlying dark matter. In this thesis I present cosmological reconstructions of both the dark matter density field that forms the cosmic web and the cosmic velocities; for both I present the theoretical formalism and the results of applying it to cosmological simulations and to a galaxy redshift survey.

Our method is founded on a statistical approach in which a given galaxy catalogue is interpreted as a biased realization of the underlying dark matter density field. The inference is performed computationally on a mesh grid by sampling from a probability density function that describes the joint posterior distribution of the matter density and the three-dimensional velocity field. The statistical background of our method is described in the chapter "Implementation of argo", which gives an introduction to sampling methods, paying special attention to Markov Chain Monte Carlo techniques.

In the chapter "Phase-Space Reconstructions with N-body Simulations", I introduce and implement a novel biasing scheme that relates the galaxy number density to the underlying dark matter. I decompose the bias into a deterministic part, described by a non-linear, scale-dependent analytic expression, and a stochastic part, modelled by a negative binomial (NB) likelihood function that accounts for deviations from Poissonity. Both bias components had already been studied theoretically, but had so far never been tested in a reconstruction algorithm. I test these new contributions against N-body simulations to quantify the improvements and show that, compared to state-of-the-art methods, the stochastic bias is indispensable for obtaining unbiased reconstructions in the power spectrum at wave numbers k ≥ 0.15 h Mpc^−1. In the second part of this chapter I describe and validate our approach to infer the three-dimensional cosmic velocity field jointly with the dark matter density. I use linear perturbation theory for the large-scale bulk flows and a dispersion term to model virialized galaxy motions, showing that our method accurately recovers the real-space positions of the redshift-space distorted galaxies. I analyze the results with the isotropic and also the two-dimensional power spectrum.

Finally, in the chapter "Phase-Space Reconstructions with Galaxy Redshift Surveys", I show how I combine all findings and results and apply the method to the CMASS (for Constant (stellar) Mass) galaxy catalogue of the Baryon Oscillation Spectroscopic Survey (BOSS). I describe how our method accounts for observational selection effects within the reconstruction algorithm. I also demonstrate that a renormalization of the prior distribution function is mandatory to account for higher-order contributions in the structure-formation model, and a redshift-dependent bias factor is theoretically motivated and implemented in our method. The various refinements yield unbiased dark matter results up to scales of k ≤ 0.2 h Mpc^−1 in the power spectrum and isotropize the galaxy catalogue down to distances of r ∼ 20 h^−1 Mpc in the correlation function.
We further test the results of our cosmic velocity field reconstruction by comparing them to a synthetic mock galaxy catalogue, finding a strong correlation between the mock and the reconstructed velocities. The applications of both the density field free of redshift-space distortions and the velocity reconstructions are very broad: they can be used for improved analyses of the baryon acoustic oscillations, for environmental studies of the cosmic web, and for studies of the kinematic Sunyaev-Zel'dovich and integrated Sachs-Wolfe effects.
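For reference, one common parameterization of a negative binomial likelihood for cell counts, of the kind used to model deviations from Poissonity, relates the observed galaxy count N_i in cell i to an expected count \lambda_i (set by a deterministic bias relation applied to the dark matter density) and a stochasticity parameter \beta; the exact form and parameter choices adopted in the thesis may differ from this generic expression:

P(N_i \mid \lambda_i, \beta) = \frac{\Gamma(\beta + N_i)}{N_i!\,\Gamma(\beta)} \left(\frac{\beta}{\beta + \lambda_i}\right)^{\beta} \left(\frac{\lambda_i}{\beta + \lambda_i}\right)^{N_i}, \qquad \langle N_i \rangle = \lambda_i, \quad \mathrm{Var}(N_i) = \lambda_i + \frac{\lambda_i^2}{\beta},

so that the Poisson case is recovered in the limit \beta \to \infty, while finite \beta adds the super-Poissonian scatter described above.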
This edited volume collects the contributions to the 10th workshop of early-career scholars in Slavic studies held within the framework of the Junges Forum Slavistischer Literaturwissenschaft (JFSL), which took place at the Universität Trier from 26 to 28 March 2010. It presents an overview of current research directions and topics in German-language Slavic studies which, despite the continuing dominance of Russian studies, shows a growing tendency towards studies of various Slavic literatures. The contributions can be divided into three broad areas. The first part, 'Textures', contains literary studies that engage with the text-immanent effects of literary works. The text as a fabric is analysed with regard to the density and interweaving of its threads, such as the emergence and transmission of particular motifs and topoi, the decoding of intertextual references, or processes of allegorization and symbolization. The second part brings together, under the heading 'Identities', work from culturally oriented literary studies that pursues questions of literary identity formation by means of concepts of gender, space, memory, and postcolonialism; it examines aesthetic renderings of dispositifs of power, the formation of hierarchies, and mechanisms of exclusion. The contributions of the third part, 'Theories', either reflect on literary scholarship and its theories of aesthetics or import theory from disciplines such as philosophy, structuralist psychoanalysis, the neurosciences, historiography, and translation studies, which prove fruitful for the analysis of literary texts and thereby broaden our understanding of literature.
One of the rules of thumb of colloid and surface physics is that most surfaces are charged when in contact with a solvent, usually water. This is the case, for instance, in charge-stabilized colloidal suspensions, where the surfaces of the colloidal particles are charged (usually with a charge of hundreds to thousands of e, the elementary charge), in monolayers of ionic surfactants sitting at an air-water interface (where the water-loving head groups become charged by releasing counterions), or in bilayers containing charged phospholipids (as in cell membranes). In this work, we look at model systems that, although simplified versions of reality, are expected to capture some of the physical properties of real charged systems (colloids and electrolytes).

We initially study the simple double layer, composed of a charged wall in the presence of its counterions. The charges at the wall are smeared out and the dielectric constant is the same everywhere. The Poisson-Boltzmann (PB) approach gives asymptotically exact counterion density profiles around charged objects in the weak-coupling limit of low-valent counterions, surfaces with low charge density and high temperature (or small Bjerrum length). Using Monte Carlo simulations, we obtain the profiles around the charged wall and compare them with both Poisson-Boltzmann theory (in the low-coupling limit) and the novel strong-coupling (SC) theory in the opposite limit of high couplings. In the latter limit, the simulations show that SC theory does indeed lead to asymptotically correct density profiles. We also compare the Monte Carlo data with previously calculated corrections to Poisson-Boltzmann theory, and we discuss in detail the methods used to perform the computer simulations.

After studying the simple double layer in detail, we introduce a dielectric jump at the charged wall and investigate its effect on the counterion density distribution. As we show, the Poisson-Boltzmann description of the double layer remains a good approximation at low coupling values, while the strong-coupling theory leads to the correct density profiles close to the wall (and at all couplings). For very large couplings, only systems where the difference between the dielectric constants of the wall and of the solvent is small are well described by SC theory.

Another experimentally relevant modification of the simple double layer is to make the charges at the plane discrete. The counterions are still assumed to be point-like, but we constrain the distance of closest approach between the ions in the plane and the counterions to a minimum distance D. The ratio between D and the distance between neighboring ions in the plane is, as we will see, one of the important quantities determining the influence of the discrete nature of the wall charges on the density profiles. Another parameter that plays an important role, as in the previous case, is the coupling: as we demonstrate, systems with a higher coupling parameter are more susceptible to discretization effects than systems with a low coupling parameter.

After studying the isolated double layer, we look at the interaction between two double layers. The system is composed of two equally charged walls at distance d, with the counterions confined between them. The charge at the walls is smeared out and the dielectric constant is the same everywhere.
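For orientation, the weak- and strong-coupling limits invoked above for a single charged wall with point-like counterions of valence q are usually organized in terms of the Bjerrum length \ell_B, the Gouy-Chapman length \mu = 1/(2\pi q \ell_B \sigma_s) and the coupling parameter \Xi = 2\pi q^3 \ell_B^2 \sigma_s, where \sigma_s is the surface charge density; these are standard textbook definitions quoted here for context, not results specific to this thesis. In rescaled units \tilde z = z/\mu and \tilde\rho = \rho/(2\pi \ell_B \sigma_s^2), the two limiting counterion profiles read

\tilde\rho_{\mathrm{PB}}(\tilde z) = \frac{1}{(1+\tilde z)^2} \quad (\Xi \ll 1), \qquad \tilde\rho_{\mathrm{SC}}(\tilde z) = e^{-\tilde z} \quad (\Xi \gg 1),

both of which satisfy the contact-value condition \tilde\rho(0) = 1.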
Using Monte Carlo simulations we obtain the inter-plate pressure across the global parameter space, and the pressure is shown to be negative (attractive) under certain conditions. The simulations also show that the equilibrium plate separation (where the pressure changes from attractive to repulsive) exhibits a novel unbinding transition. We compare the Monte Carlo results with the strong-coupling theory, which is shown to describe well the bound states of systems with moderate and high couplings. The regime where the two walls are very close to each other is also shown to be well described by the SC theory. Finally, using a field-theoretic approach, we derive the exact low-density ("virial") expansion of a binary mixture of positively and negatively charged hard spheres (two-component hard-core plasma, TCPHC). The free energy obtained is valid for systems where the diameters d_+ and d_- and the charge valences q_+ and q_- of the positive and negative ions are unconstrained, i.e., the same expression can be used to treat dilute salt solutions (where typically d_+ ~ d_- and q_+ ~ q_-) as well as colloidal suspensions (where the difference in size and valence between macroions and counterions can be very large). We also discuss some applications of our results.
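For the two-plate geometry, the leading-order strong-coupling pressure referred to above takes a compact form in the same rescaled units; this is a standard literature result quoted for context rather than a derivation from this thesis. For two identically charged plates at separation d with only their counterions in between,

\frac{\beta P}{2\pi \ell_B \sigma_s^2} = \frac{2\mu}{d} - 1,

which changes sign at the equilibrium separation d^* = 2\mu: the plates repel for d < 2\mu and attract for d > 2\mu, consistent with the attractive regime and the bound states described above.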