In the current paradigm of cosmology, the formation of large-scale structures is mainly driven by non-radiating dark matter, which makes up the dominant part of the matter budget of the Universe. Cosmological observations, however, rely on the detection of luminous galaxies, which are biased tracers of the underlying dark matter. In this thesis I present cosmological reconstructions of both the dark matter density field that forms the cosmic web and the cosmic velocity field, covering both the theoretical formalism and the results of its application to cosmological simulations and to a galaxy redshift survey. The foundation of our method is a statistical approach in which a given galaxy catalogue is interpreted as a biased realization of the underlying dark matter density field. The inference is performed computationally on a mesh grid by sampling from a probability density function that describes the joint posterior distribution of the matter density and the three-dimensional velocity field. The statistical background of our method is described in the chapter "Implementation of argo", which gives an introduction to sampling methods, paying special attention to Markov chain Monte Carlo techniques. In the chapter "Phase-Space Reconstructions with N-body Simulations", I introduce and implement a novel biasing scheme to relate the galaxy number density to the underlying dark matter. I decompose the bias into a deterministic part, described by a non-linear and scale-dependent analytic expression, and a stochastic part, for which I present a negative binomial (NB) likelihood function that models deviations from Poissonity. Both bias components had been studied theoretically before, but neither had been tested in a reconstruction algorithm.
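The stochastic part of such a biasing scheme can be illustrated with a small sketch. The snippet below evaluates a negative binomial log-likelihood for galaxy counts on a mesh, using a hypothetical power-law deterministic bias and an over-dispersion parameter beta whose beta → 0 limit recovers the Poisson case; the parameter values and the power-law form are illustrative assumptions, not the fitted model of the thesis.

```python
import math

def nb_loglike(counts, delta, nbar, bias=1.5, beta=0.3):
    """Negative binomial log-likelihood of galaxy counts per mesh cell.

    counts : observed galaxy counts per cell
    delta  : dark-matter overdensity per cell (delta > -1)
    nbar   : mean number of galaxies per cell
    bias, beta : illustrative power-law bias and over-dispersion
                 parameter (hypothetical values, not fitted ones)
    """
    r = 1.0 / beta  # dispersion; beta -> 0 recovers the Poisson limit
    total = 0.0
    for k, d in zip(counts, delta):
        lam = nbar * (1.0 + d) ** bias          # deterministic bias part
        p = r / (r + lam)                       # NB success probability
        # log NB pmf with mean lam and variance lam * (1 + beta * lam)
        total += (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                  + r * math.log(p) + k * math.log1p(-p))
    return total
```

The variance lam * (1 + beta * lam) exceeds the Poisson variance lam, which is exactly the kind of deviation from Poissonity the NB likelihood is meant to capture.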
I test these new contributions against N-body simulations to quantify the improvements and show that, compared to state-of-the-art methods, the stochastic bias is indispensable at wave numbers of k ≥ 0.15 h Mpc^−1 in order to obtain unbiased power spectra from the reconstructions. In the second part of the chapter "Phase-Space Reconstructions with N-body Simulations", I describe and validate our approach to inferring the three-dimensional cosmic velocity field jointly with the dark matter density. I use linear perturbation theory for the large-scale bulk flows and a dispersion term to model virialized galaxy motions, showing that our method accurately recovers the real-space positions of the redshift-space distorted galaxies. I analyze the results with the isotropic and the two-dimensional power spectrum. Finally, in the chapter "Phase-Space Reconstructions with Galaxy Redshift Surveys", I show how I combine all findings and apply the method to the CMASS (Constant (stellar) Mass) galaxy catalogue of the Baryon Oscillation Spectroscopic Survey (BOSS). I describe how our method accounts for the observational selection effects inside our reconstruction algorithm. I further demonstrate that a renormalization of the prior distribution function is mandatory to account for higher-order contributions in the structure formation model, and a redshift-dependent bias factor is theoretically motivated and implemented into our method. These refinements yield unbiased dark matter results up to scales of k ≤ 0.2 h Mpc^−1 in the power spectrum and isotropize the galaxy catalogue down to distances of r ∼ 20 h^−1 Mpc in the correlation function. We further test the results of our cosmic velocity field reconstruction by comparing them to a synthetic mock galaxy catalogue, finding a strong correlation between the mock and the reconstructed velocities.
The applications of both the density field without redshift-space distortions and the velocity reconstructions are very broad: they can be used for improved analyses of the baryon acoustic oscillations, for environmental studies of the cosmic web, and for the kinematic Sunyaev-Zel'dovich or integrated Sachs-Wolfe effect.
The mirror stage is one of Jacques Lacan's most well-received metapsychological models in the English-speaking world. In its many renditions Lacan elucidates the different forms of identification that lead to the construction of the Freudian ego. This article utilizes Lacan's mirror stage to provide a novel perspective on autistic embodiment. It develops an integrative model that accounts for the progression of four distinct forms of autistic identification in the mirror stage; these forms provide the basis for the development of four different clinical trajectories in the treatment of autism. This model is posed as an alternative to the clinical and diagnostic framework associated with the autistic spectrum disorder.
Affect Disposition(ing)
(2018)
The “affective turn” has been primarily concerned not with what affect is, but what it does. This article focuses on yet another shift towards how affect gets organized, i.e., how it is produced, classified, and controlled. It proposes a genealogical as well as a critical approach to the organization of affect and distinguishes between several “affect disposition(ing) regimes”—meaning paradigms of how to interpret and manage affects, for e.g., encoding them as byproducts of demonic possession, judging them in reference to a moralistic framework, or subsuming them under an industrial regime. Bernard Stiegler’s concept of psychopower will be engaged at one point and expanded to include social media and affective technologies, especially Affective Computing. Finally, the industrialization and cybernetization of affect will be contrasted with poststructuralist interpretations of affects as events.
Epistemic logic programs constitute an extension of the stable model semantics that deals with new constructs called subjective literals. Informally speaking, a subjective literal allows checking whether some objective literal is true in all or in some stable models. As can be imagined, the associated semantics has proved to be non-trivial, since the truth of subjective literals may interfere with the set of stable models it is supposed to query. As a consequence, no clear agreement has been reached, and different semantic proposals have been made in the literature. Unfortunately, comparison among these proposals has been limited to a study of their effect on individual examples, rather than identifying general properties to be checked. In this paper, we propose an extension of the well-known splitting property for logic programs to the epistemic case. We formally define when an arbitrary semantics satisfies the epistemic splitting property and examine some of the consequences that can be derived from it, including its relation to conformant planning and to epistemic constraints. Interestingly, we prove (through counterexamples) that most of the existing approaches fail to fulfill the epistemic splitting property, except the original semantics proposed by Gelfond in 1991 and a recent proposal by the authors, called Founded Autoepistemic Equilibrium Logic.
Partial clones
(2020)
A set C of operations defined on a nonempty set A is said to be a clone if C is closed under composition of operations and contains all projection mappings. The concept of a clone is one of the fundamental concepts of algebra and has important applications in computer science. A clone can also be regarded as a many-sorted algebra where the sorts are the n-ary operations defined on the set A for all natural numbers n >= 1 and the operations are the so-called superposition operations S^n_m for natural numbers m, n >= 1, together with the projection operations as nullary operations. Clones generalize monoids of transformations defined on the set A and satisfy three clone axioms. The most important axiom is the superassociative law, a generalization of the associative law. If the superposition operations are partial, i.e. not everywhere defined, one obtains, instead of the many-sorted clone algebra, partial many-sorted algebras: the partial clones. Linear terms, linear tree languages, and linear formulas form partial clones. In this paper, we give a survey of partial clones and their properties.
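As a small illustration, the superposition operations S^n_m and the projections can be sketched for total operations on a set; the function names below are ours, and the partial case would additionally allow superposition to be undefined on some arguments (as happens for linear terms):

```python
def projection(n, i):
    """The n-ary projection e^n_i onto the i-th argument (1-indexed)."""
    return lambda *args: args[i - 1]

def superposition(f, gs):
    """Superposition S^n_m: compose an n-ary operation f with a list gs
    of n m-ary operations, yielding the m-ary operation
    (x_1, ..., x_m) -> f(g_1(x_1, ..., x_m), ..., g_n(x_1, ..., x_m))."""
    return lambda *args: f(*(g(*args) for g in gs))

# Example over A = {0, 1}: composing addition mod 2 with the two
# swapped binary projections yields the commuted operation.
xor = lambda x, y: (x + y) % 2
swapped = superposition(xor, [projection(2, 2), projection(2, 1)])
```

The superassociative law then states that composing f with the g's and afterwards with the h's gives the same operation as composing f with the already-composed superpositions of the g's with the h's.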
Estimation-of-distribution algorithms (EDAs) are randomized search heuristics that create a probabilistic model of the solution space, which is updated iteratively based on the quality of the solutions sampled according to the model. As previous works show, this iteration-based perspective can lead to erratic updates of the model, in particular to bit frequencies randomly approaching a boundary value. In order to overcome this problem, we propose a new EDA based on the classic compact genetic algorithm (cGA) that takes into account a longer history of samples and updates its model only with respect to information it classifies as statistically significant. We prove that this significance-based cGA (sig-cGA) optimizes the commonly regarded benchmark functions OneMax (OM), LeadingOnes, and BinVal all in quasilinear time, a result shown for no other EDA or evolutionary algorithm so far. For the recently proposed stable compact genetic algorithm, an EDA that tries to prevent erratic model updates by imposing a bias toward the uniformly distributed model, we prove that it optimizes OM only in a time exponential in its hypothetical population size. Similarly, we show that the convex search algorithm cannot optimize OM in polynomial time.
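For orientation, a minimal version of the classic cGA (without the significance mechanism of the sig-cGA) can be sketched as follows; the population-size parameter K and the stopping rule are illustrative choices, not the values analyzed in the paper:

```python
import random

def cga_onemax(n=20, K=None, max_iters=100_000, seed=1):
    """Minimal classic cGA on OneMax (a sketch, not the sig-cGA).

    A frequency vector p starts at 0.5 per bit; each iteration samples
    two solutions and nudges p by 1/K per differing bit toward the
    better one, keeping p inside the borders [1/n, 1 - 1/n].
    """
    rng = random.Random(seed)
    K = K or 7 * n  # hypothetical population-size parameter
    p = [0.5] * n
    for _ in range(max_iters):
        x = [1 if rng.random() < pi else 0 for pi in p]
        y = [1 if rng.random() < pi else 0 for pi in p]
        if sum(x) < sum(y):
            x, y = y, x  # x is now the winner
        for i in range(n):
            if x[i] != y[i]:
                step = 1.0 / K if x[i] == 1 else -1.0 / K
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + step))
        if all(pi >= 1 - 1 / n for pi in p):
            return True  # model has concentrated on the optimum
    return False
```

The erratic behavior criticized above shows up in this plain update rule: every pair of samples moves the frequencies, so on positions where the fitness signal is weak they perform a random walk toward the borders.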
While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations. A naturally occurring problem is that of bloat, that is, the unnecessary growth of solution lengths, which may slow down the optimization process. So far, mathematical runtime analyses could not deal well with bloat and required explicit assumptions limiting it.
In this paper, we provide the first mathematical runtime analysis of a GP algorithm that does not require any assumptions on the bloat. Previous performance guarantees were only proven conditionally, for runs in which no strong bloat occurs. Together with improved analyses for the case with bloat restrictions, our results show that such assumptions on the bloat are not necessary and that the algorithm is efficient without an explicit bloat control mechanism.
More specifically, we analyze the performance of the (1 + 1) GP on the two benchmark functions ORDER and MAJORITY. When using lexicographic parsimony pressure as bloat control, we show a tight runtime estimate of O(T_init + n log n) iterations both for ORDER and MAJORITY. For the case without bloat control, the bounds O(T_init log T_init + n (log n)^3) and Omega(T_init + n log n) (and Omega(T_init log T_init) for n = 1) hold for MAJORITY.
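The role of lexicographic parsimony pressure can be illustrated with a toy (1+1) GP on a MAJORITY-like problem; this sketch uses a flat list of signed literals instead of GP trees and is only meant to show the acceptance rule (fitter, or equally fit and not longer), not to reproduce the analyzed algorithm:

```python
import random

def majority_fitness(prog, n):
    """MAJORITY on a flat list of signed literals: variable i counts if
    its positive occurrences are at least as many as its negative ones
    (and it occurs positively at all)."""
    pos, neg = [0] * n, [0] * n
    for var, positive in prog:
        (pos if positive else neg)[var] += 1
    return sum(1 for i in range(n) if pos[i] >= neg[i] and pos[i] > 0)

def one_plus_one_gp(n=10, max_iters=200_000, seed=0):
    """(1+1) GP sketch with lexicographic parsimony pressure: an
    offspring replaces the parent if it is fitter, or equally fit and
    not longer (which keeps bloat in check)."""
    rng = random.Random(seed)
    prog, fit = [], 0
    for _ in range(max_iters):
        child = list(prog)
        op = rng.randrange(3)
        if op == 0 or not child:    # insert a random literal
            child.insert(rng.randint(0, len(child)),
                         (rng.randrange(n), rng.random() < 0.5))
        elif op == 1:               # delete a random literal
            child.pop(rng.randrange(len(child)))
        else:                       # substitute a random literal
            child[rng.randrange(len(child))] = (rng.randrange(n),
                                                rng.random() < 0.5)
        cfit = majority_fitness(child, n)
        if cfit > fit or (cfit == fit and len(child) <= len(prog)):
            prog, fit = child, cfit
        if fit == n:
            break
    return prog
```

Without the length tie-break, neutral insertions would be accepted and program length could grow unchecked, which is exactly the bloat the parsimony pressure suppresses.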
Optimization is a core part of technological advancement and is usually heavily aided by computers. However, since many optimization problems are hard, it is unrealistic to expect an optimal solution within reasonable time. Hence, heuristics are employed, that is, computer programs that try to produce solutions of high quality quickly. One special class is that of estimation-of-distribution algorithms (EDAs), which are characterized by maintaining a probabilistic model over the problem domain, which they evolve over time. In an iterative fashion, an EDA uses its model to generate a set of solutions, which it then uses to refine the model such that the probability of producing good solutions is increased.
In this thesis, we theoretically analyze the class of univariate EDAs over the Boolean domain, that is, over the space of all length-n bit strings. In this setting, the probabilistic model of a univariate EDA consists of an n-dimensional probability vector where each component denotes the probability to sample a 1 for that position in order to generate a bit string.
My contribution follows two main directions: First, we analyze general inherent properties of univariate EDAs. Second, we determine the expected run times of specific EDAs on benchmark functions from theory. In the first part, we characterize when EDAs are unbiased with respect to the problem encoding. We then consider a setting in which all solutions look equally good to an EDA, and we show that the probabilistic model of an EDA quickly evolves into an incorrect model if it is always updated such that it does not change in expectation.
In the second part, we first show that the algorithms cGA and MMAS-fp are able to efficiently optimize a noisy version of the classical benchmark function OneMax. We perturb the function by adding Gaussian noise with a variance of σ², and we prove that the algorithms are able to generate the true optimum in a time polynomial in σ² and the problem size n. For the MMAS-fp, we generalize this result to linear functions. Further, we prove a lower bound of Ω(n log n) on the run time of the algorithm UMDA on (noise-free) OneMax. Last, we introduce a new algorithm that is able to optimize the benchmark functions OneMax and LeadingOnes, both in O(n log n), which is a novelty for heuristics in the domain we consider.
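The noise model used above is easy to state in code: an evaluation returns the true OneMax value plus a Gaussian perturbation of variance σ². The averaging helper below is a standard baseline for coping with such noise (the algorithms in the text instead cope implicitly through their probabilistic models); both functions are illustrative sketches:

```python
import random

def noisy_onemax(x, sigma2, rng=random):
    """OneMax perturbed by additive Gaussian noise of variance sigma2."""
    return sum(x) + rng.gauss(0.0, sigma2 ** 0.5)

def averaged_fitness(x, sigma2, m, rng=random):
    """Mean of m independent noisy evaluations; the noise variance of
    the average shrinks to sigma2 / m."""
    return sum(noisy_onemax(x, sigma2, rng) for _ in range(m)) / m
```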
For theoretical analyses there are two specifics distinguishing GP from many other areas of evolutionary computation: the variable-size representations, which in particular can yield bloat (i.e. the growth of individuals with redundant parts), and the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has had surprisingly little share in this work. We analyze a simple crossover operator in combination with randomized local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); we denote the resulting algorithm Concatenation Crossover GP. We consider three variants of the well-studied MAJORITY test function, adding large plateaus in different ways to the fitness landscape and thus providing a test bed for analyzing the interplay of variation operators and bloat control mechanisms in a setting with local optima. We show that the Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants, independent of employing bloat control.
Background: Students' self-concept of ability is an important predictor of their achievement emotions. However, little is known about how learning environments affect these interrelations.
Aims: Referring to Pekrun's control-value theory, this study investigated whether teacher-reported teaching quality at the classroom level would moderate the relation between student-level mathematics self-concept at the beginning of the school year and students' achievement emotions at the middle of the school year.
Sample: Data of 807 ninth and tenth graders (53.4% girls) and their mathematics teachers (58.1% male) were analysed.
Method: Students and teachers completed questionnaires at the beginning of the school year and at the middle of the school year. Multi-level modelling and cross-level interaction analyses were used to examine the longitudinal relations between self-concept, teacher-perceived teaching quality, and achievement emotions as well as potential interaction effects.
Results: Mathematics self-concept significantly and positively related to enjoyment in mathematics and negatively related to anxiety. Teacher-reported structuredness decreased students' anxiety. Mathematics self-concept only had a significant and positive effect on students' enjoyment at high levels of teacher-reported cognitive activation and at high levels of structuredness.
Conclusions: High teaching quality can be seen as a resource that strengthens the positive relations between academic self-concept and positive achievement emotions.
One of the rules of thumb of colloid and surface physics is that most surfaces are charged when in contact with a solvent, usually water. This is the case, for instance, in charge-stabilized colloidal suspensions, where the surfaces of the colloidal particles are charged (usually with a charge of hundreds to thousands of e, the elementary charge), in monolayers of ionic surfactants sitting at an air-water interface (where the water-loving head groups become charged by releasing counterions), or in bilayers containing charged phospholipids (such as cell membranes). In this work, we look at some model systems that, although simplified versions of reality, are expected to capture some of the physical properties of real charged systems (colloids and electrolytes). We initially study the simple double layer, composed of a charged wall in the presence of its counterions. The charges at the wall are smeared out and the dielectric constant is the same everywhere. The Poisson-Boltzmann (PB) approach gives asymptotically exact counterion density profiles around charged objects in the weak-coupling limit of systems with low-valent counterions, surfaces with low charge density, and high temperature (or small Bjerrum length). Using Monte Carlo simulations, we obtain the profiles around the charged wall and compare them with both Poisson-Boltzmann theory (in the low-coupling limit) and the novel strong-coupling (SC) theory in the opposite limit of high couplings. In the latter limit, the simulations show that SC theory does lead to asymptotically correct density profiles. We also compare the Monte Carlo data with previously calculated corrections to Poisson-Boltzmann theory, and we discuss in detail the methods used to perform the computer simulations. After studying the simple double layer in detail, we introduce a dielectric jump at the charged wall and investigate its effect on the counterion density distribution.
As we will show, the Poisson-Boltzmann description of the double layer remains a good approximation at low coupling values, while the strong-coupling theory is shown to lead to the correct density profiles close to the wall (and at all couplings). For very large couplings, only systems where the difference between the dielectric constants of the wall and of the solvent is small are well described by SC theory. Another experimentally relevant modification to the simple double layer is to make the charges at the plane discrete. The counterions are still assumed to be point-like, but we constrain the distance of approach between ions in the plane and counterions to a minimum distance D. The ratio between D and the distance between neighboring ions in the plane is, as we will see, one of the important quantities determining the influence of the discrete nature of the charges at the wall on the density profiles. Another parameter that plays an important role, as in the previous case, is the coupling: as we will demonstrate, systems with a higher coupling parameter are more subject to discretization effects than systems with a low one. After studying the isolated double layer, we look at the interaction between two double layers. The system is composed of two equally charged walls at distance d, with the counterions confined between them. The charge at the walls is smeared out and the dielectric constant is the same everywhere. Using Monte Carlo simulations, we obtain the inter-plate pressure in the global parameter space, and the pressure is shown to be negative (attraction) under certain conditions. The simulations also show that the equilibrium plate separation (where the pressure changes from attractive to repulsive) exhibits a novel unbinding transition. We compare the Monte Carlo results with the strong-coupling theory, which is shown to describe well the bound states of systems with moderate and high couplings.
The regime where the two walls are very close to each other is also shown to be well described by the SC theory. Finally, using a field-theoretic approach, we derive the exact low-density ("virial") expansion of a binary mixture of positively and negatively charged hard spheres (two-component hard-core plasma, TCPHC). The free energy obtained is valid for systems where the diameters d_+ and d_- and the charge valences q_+ and q_- of positive and negative ions are unconstrained, i.e., the same expression can be used to treat dilute salt solutions (where typically d_+ ~ d_- and q_+ ~ q_-) as well as colloidal suspensions (where the difference in size and valence between macroions and counterions can be very large). We also discuss some applications of our results.
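For the single charged wall, the two limiting density profiles compared above have simple closed forms in rescaled (Gouy-Chapman) units. These are standard results of the PB and leading-order SC theories; the sketch below only restates them and checks their normalization, it is not code from the thesis:

```python
import math

def pb_profile(z):
    """Poisson-Boltzmann counterion density at a single charged wall in
    rescaled units (z in Gouy-Chapman lengths, density in units of the
    contact density): rho(z) = 1 / (1 + z)^2."""
    return 1.0 / (1.0 + z) ** 2

def sc_profile(z):
    """Leading-order strong-coupling profile in the same units: a bare
    exponential, rho(z) = exp(-z)."""
    return math.exp(-z)
```

Both profiles equal 1 at contact and integrate to 1 over z in [0, ∞), expressing the same charge neutrality condition in the weak- and strong-coupling limits; the two theories differ in how fast the density decays away from the wall.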
The simulation of broad-band (0.1 to 10+ Hz) ground-shaking over deep and spatially extended sedimentary basins at regional scales is challenging. We evaluate the ground-shaking of a potential M 6.5 earthquake in the southern Lower Rhine Embayment, one of the most important areas of earthquake recurrence north of the Alps, close to the city of Cologne in Germany. In a first step, information from geological investigations, seismic experiments, and boreholes is combined to derive a harmonized 3D velocity and attenuation model of the sedimentary layers. Three alternative approaches are then applied and compared to evaluate the impact of the sedimentary cover on ground-motion amplification. The first approach builds on existing response spectra ground-motion models whose amplification factors empirically take into account the influence of the sedimentary layers through a standard parameterization. In the second approach, site-specific 1D amplification functions are computed from the 3D basin model. Using a random vibration theory approach, we adjust the empirical response spectra predicted for soft rock conditions by local site amplification factors: amplifications and associated ground-motions are predicted both in the Fourier and in the response spectra domain. In the third approach, hybrid physics-based ground-motion simulations are used to predict time histories for soft rock conditions, which are subsequently modified using the 1D site-specific amplification functions computed in method 2. For large distances and at short periods, the differences between the three approaches become less notable due to the significant attenuation of the sedimentary layers. At intermediate and long periods, generic empirical ground-motion models provide lower levels of amplification from sedimentary soils compared to methods taking into account site-specific 1D amplification functions.
In the near-source region, hybrid physics-based ground-motion models illustrate the potentially large variability of ground-motion due to finite-source effects.
The volume collects the contributions to the 10th meeting of early-career scholars in Slavic studies, held within the framework of the Junges Forum Slavistischer Literaturwissenschaft (JFSL) at the Universität Trier from 26 to 28 March 2010. It presents an overview of current research directions and topics in German-language Slavic studies which, despite the continuing dominance of Russian studies, shows a growing tendency towards studies of the various Slavic literatures. The contributions fall into three large areas: The first part, 'Textures', contains literary studies that engage with the text-immanent effects of literary works. The text as a fabric is analyzed for its thread density and interweaving, such as the emergence and transmission of particular motifs and topoi, the decoding of intertextual references, or processes of allegorization and symbolization. The second part, under the heading 'Identities', brings together works from culturally oriented literary studies that pursue questions of literary identity formation through concepts of gender, space, memory, and postcolonialism. They examine aesthetic realizations of power dispositifs, the formation of hierarchies, and mechanisms of exclusion. The contributions of the third part, 'Theories', either reflect on literary research and its theories of aesthetics or import theory from various disciplines, such as philosophy, structuralist psychoanalysis, the neurosciences, historiography, or translation studies, that prove fruitful for the analysis of literary texts and thereby broaden our understanding of literature.