Affect Disposition(ing)
(2018)
The “affective turn” has been primarily concerned not with what affect is, but with what it does. This article focuses on yet another shift, towards how affect gets organized, i.e., how it is produced, classified, and controlled. It proposes a genealogical as well as a critical approach to the organization of affect and distinguishes between several “affect disposition(ing) regimes”, meaning paradigms of how to interpret and manage affects, e.g., encoding them as byproducts of demonic possession, judging them in reference to a moralistic framework, or subsuming them under an industrial regime. Bernard Stiegler’s concept of psychopower is engaged and expanded to include social media and affective technologies, especially Affective Computing. Finally, the industrialization and cybernetization of affect is contrasted with poststructuralist interpretations of affects as events.
One of the rules of thumb of colloid and surface physics is that most surfaces are charged when in contact with a solvent, usually water. This is the case, for instance, for charge-stabilized colloidal suspensions, where the surfaces of the colloidal particles are charged (usually with a charge of hundreds to thousands of e, the elementary charge), for monolayers of ionic surfactants sitting at an air-water interface (where the water-loving head groups become charged by releasing counterions), and for bilayers containing charged phospholipids (as in cell membranes). In this work, we look at model systems that, although simplified versions of reality, are expected to capture some of the physical properties of real charged systems (colloids and electrolytes). We initially study the simple double layer, composed of a charged wall in the presence of its counterions. The charges at the wall are smeared out and the dielectric constant is the same everywhere. The Poisson-Boltzmann (PB) approach gives asymptotically exact counterion density profiles around charged objects in the weak-coupling limit of systems with low-valent counterions, surfaces with low charge density, and high temperature (or small Bjerrum length). Using Monte Carlo simulations, we obtain the profiles around the charged wall and compare them with both Poisson-Boltzmann theory (in the low-coupling limit) and the novel strong-coupling (SC) theory in the opposite limit of high couplings. In the latter limit, the simulations show that SC indeed leads to asymptotically correct density profiles. We also compare the Monte Carlo data with previously calculated corrections to the Poisson-Boltzmann theory, and we discuss in detail the methods used to perform the computer simulations. After studying the simple double layer in detail, we introduce a dielectric jump at the charged wall and investigate its effect on the counterion density distribution.
As we will show, the Poisson-Boltzmann description of the double layer remains a good approximation at low coupling values, while the strong-coupling theory is shown to lead to the correct density profiles close to the wall (and at all couplings). For very large couplings, only systems where the difference between the dielectric constants of the wall and of the solvent is small are shown to be well described by SC. Another experimentally relevant modification of the simple double layer is to make the charges at the plane discrete. The counterions are still assumed to be point-like, but we constrain the distance of approach between ions in the plane and counterions to a minimum distance D. The ratio between D and the distance between neighboring ions in the plane is, as we will see, one of the important quantities determining the influence of the discrete nature of the charges at the wall on the density profiles. Another parameter that plays an important role, as in the previous case, is the coupling: as we will demonstrate, systems with a higher coupling parameter are more subject to discretization effects than systems with a low coupling parameter. After studying the isolated double layer, we look at the interaction between two double layers. The system is composed of two equally charged walls at distance d, with the counterions confined between them. The charge at the walls is smeared out and the dielectric constant is the same everywhere. Using Monte Carlo simulations, we obtain the inter-plate pressure in the global parameter space, and the pressure is shown to be negative (attraction) under certain conditions. The simulations also show that the equilibrium plate separation (where the pressure changes from attractive to repulsive) exhibits a novel unbinding transition. We compare the Monte Carlo results with the strong-coupling theory, which is shown to describe well the bound states of systems with moderate and high couplings.
The regime where the two walls are very close to each other is also shown to be well described by the SC theory. Finally, using a field-theoretic approach, we derive the exact low-density ("virial") expansion of a binary mixture of positively and negatively charged hard spheres (two-component hard-core plasma, TCPHC). The free energy obtained is valid for systems where the diameters d_+ and d_- and the charge valences q_+ and q_- of positive and negative ions are unconstrained, i.e., the same expression can be used to treat dilute salt solutions (where typically d_+ ~ d_- and q_+ ~ q_-) as well as colloidal suspensions (where the difference in size and valence between macroions and counterions can be very large). We also discuss some applications of our results.
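The weak- and strong-coupling limits can be made concrete with the standard rescaled single-wall profiles: the mean-field PB profile decays algebraically, while the leading-order SC profile decays exponentially. This is a sketch of textbook limiting results, not the thesis's simulation code:

```python
import math

# Rescaled single-wall counterion profiles: lengths in units of the
# Gouy-Chapman length mu = 1/(2*pi*q*l_B*sigma), densities in units of
# the contact density 2*pi*l_B*sigma^2.

def rho_pb(z):
    """Mean-field Poisson-Boltzmann profile: algebraic decay."""
    return 1.0 / (1.0 + z) ** 2

def rho_sc(z):
    """Leading-order strong-coupling profile: exponential decay."""
    return math.exp(-z)

# Both profiles share the contact value rho(0) = 1 and the same
# normalization (electroneutrality), but differ strongly far from the wall.
dz = 1e-3
norm_pb = sum(rho_pb(i * dz) * dz for i in range(200_000))
norm_sc = sum(rho_sc(i * dz) * dz for i in range(200_000))
```

Both numerical integrals come out close to 1, reflecting that each limiting theory carries the full counterion charge; the difference between the limits shows up in the tails, where the SC profile falls off much faster.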
Background: Students' self-concept of ability is an important predictor of their achievement emotions. However, little is known about how learning environments affect these interrelations.
Aims: Referring to Pekrun's control-value theory, this study investigated whether teacher-reported teaching quality at the classroom level would moderate the relation between student-level mathematics self-concept at the beginning of the school year and students' achievement emotions at the middle of the school year.
Sample: Data of 807 ninth and tenth graders (53.4% girls) and their mathematics teachers (58.1% male) were analysed.
Method: Students and teachers completed questionnaires at the beginning of the school year and at the middle of the school year. Multi-level modelling and cross-level interaction analyses were used to examine the longitudinal relations between self-concept, teacher-perceived teaching quality, and achievement emotions as well as potential interaction effects.
Results: Mathematics self-concept related significantly and positively to enjoyment in mathematics and negatively to anxiety. Teacher-reported structuredness decreased students' anxiety. Mathematics self-concept had a significant and positive effect on students' enjoyment only at high levels of teacher-reported cognitive activation and at high levels of structuredness.
Conclusions: High teaching quality can be seen as a resource that strengthens the positive relations between academic self-concept and positive achievement emotions.
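The cross-level interaction described above can be illustrated with a minimal simulation. All variable names, sample sizes, and effect sizes below are invented for illustration, and a real analysis would use multi-level models with random effects rather than plain least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n_class, n_per = 40, 20                  # hypothetical: classrooms x students
quality = rng.normal(size=n_class)       # classroom-level teaching quality (z-scored)

rows = []
for c in range(n_class):
    sc = rng.normal(size=n_per)          # student-level self-concept
    # assumed "true" model: the self-concept -> enjoyment slope grows with
    # teaching quality (cross-level interaction coefficient = 0.3)
    enjoy = (0.5 * sc + 0.2 * quality[c] + 0.3 * sc * quality[c]
             + rng.normal(scale=0.5, size=n_per))
    for s in range(n_per):
        rows.append((sc[s], quality[c], enjoy[s]))

# design matrix: intercept, self-concept, teaching quality, interaction
X = np.array([(1.0, s, q, s * q) for s, q, _ in rows])
y = np.array([e for _, _, e in rows])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates the cross-level interaction and should land near 0.3
```

The recovered `beta[3]` being positive mirrors the finding that the self-concept–enjoyment relation strengthens under high teaching quality.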
For theoretical analyses, two specifics distinguish GP from many other areas of evolutionary computation: the variable-size representations, which in particular make bloat possible (i.e., the growth of individuals with redundant parts), and the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has had surprisingly little share in this work. We analyze a simple crossover operator in combination with randomized local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); we denote the resulting algorithm Concatenation Crossover GP. We consider three variants of the well-studied MAJORITY test function, adding large plateaus in different ways to the fitness landscape and thus providing a test bed for analyzing the interplay of variation operators and bloat-control mechanisms in a setting with local optima. We show that Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants, independent of employing bloat control. (C) 2019 Elsevier B.V. All rights reserved.
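A toy sketch of the ingredients named above, under an assumed flat encoding (individuals as lists of signed literals, simplifying the paper's tree-based representation): the MAJORITY fitness, lexicographic parsimony pressure, and concatenation crossover.

```python
from collections import Counter

def majority_fitness(ind, n):
    """MAJORITY (toy version): variable i counts as expressed if the
    literal x_i (encoded +i) occurs at least once and at least as often
    as its negation (encoded -i)."""
    c = Counter(ind)
    return sum(1 for i in range(1, n + 1) if c[i] >= 1 and c[i] >= c[-i])

def better(a, b, n):
    """Lexicographic parsimony pressure: first maximize fitness,
    then (on ties) minimize the number of leaves."""
    return (majority_fitness(a, n), -len(a)) >= (majority_fitness(b, n), -len(b))

def concat_crossover(p1, p2):
    """Concatenation crossover: simply append one parent to the other."""
    return p1 + p2

n = 3
p1 = [1, -2, 2, 2]   # expresses x1 and x2
p2 = [3, -1]         # expresses x3
child = concat_crossover(p1, p2)  # expresses all three variables
```

Concatenation preserves every expressed variable of both parents, while the parsimony comparison rejects offspring that gain length without gaining fitness, which is the interplay the analysis studies.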
Partial clones
(2020)
A set C of operations defined on a nonempty set A is said to be a clone if C is closed under composition of operations and contains all projection mappings. The concept of a clone is one of the fundamental concepts of algebra and has important applications in computer science. A clone can also be regarded as a many-sorted algebra where the sorts are the n-ary operations defined on the set A for all natural numbers n >= 1, and the operations are the so-called superposition operations S^n_m for natural numbers m, n >= 1 together with the projection operations as nullary operations. Clones generalize monoids of transformations defined on the set A and satisfy three clone axioms. The most important axiom is the superassociative law, a generalization of the associative law. If the superposition operations are partial, i.e., not everywhere defined, one obtains, instead of the many-sorted clone algebra, partial many-sorted algebras: the partial clones. Linear terms, linear tree languages, and linear formulas form partial clones. In this paper, we give a survey of partial clones and their properties.
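The superposition operations and the superassociative law can be checked numerically in a small sketch, with ordinary Python functions standing in for n-ary operations (illustration only; the survey treats these axiomatically):

```python
def superpose(f, gs):
    """Superposition S^n_m: substitute the m-ary operations gs = (g_1, ..., g_n)
    into the n-ary operation f, yielding an m-ary operation."""
    return lambda *xs: f(*(g(*xs) for g in gs))

def proj(i):
    """Projection (0-indexed): returns its i-th argument."""
    return lambda *xs: xs[i]

add = lambda x, y: x + y
mul = lambda x, y: x * y

# superassociative law: S(S(f, gs), hs) == S(f, (S(g, hs) for g in gs))
f = add
gs = [mul, add]
hs = [lambda x, y: x - y, lambda x, y: 2 * x + y]
lhs = superpose(superpose(f, gs), hs)
rhs = superpose(f, [superpose(g, hs) for g in gs])
```

Evaluating `lhs` and `rhs` at any pair of arguments gives the same value, which is exactly the superassociative law; substituting the projections into `f` recovers `f` itself, the other clone axiom visible here.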
In the current paradigm of cosmology, the formation of large-scale structures is mainly driven by non-radiating dark matter, which makes up the dominant part of the matter budget of the Universe. Cosmological observations, however, rely on the detection of luminous galaxies, which are biased tracers of the underlying dark matter. In this thesis I present cosmological reconstructions of both the dark matter density field that forms the cosmic web and the cosmic velocity field, covering for each the theoretical formalism and the results of its application to cosmological simulations as well as to a galaxy redshift survey. Our method rests on a statistical approach in which a given galaxy catalogue is interpreted as a biased realization of the underlying dark matter density field. The inference is performed computationally on a mesh grid by sampling from a probability density function that describes the joint posterior distribution of the matter density and the three-dimensional velocity field. The statistical background of our method is described in the chapter ”Implementation of argo”, where an introduction to sampling methods is given, paying special attention to Markov Chain Monte Carlo techniques. In the chapter ”Phase-Space Reconstructions with N-body Simulations”, I introduce and implement a novel biasing scheme to relate the galaxy number density to the underlying dark matter, which I decompose into a deterministic part, described by a non-linear and scale-dependent analytic expression, and a stochastic part, modelled by a negative binomial (NB) likelihood function that captures deviations from Poissonity. Both bias components had already been studied theoretically, but had so far never been tested in a reconstruction algorithm.
I test these new contributions against N-body simulations to quantify the improvements and show that, compared to state-of-the-art methods, the stochastic bias is indispensable at wave numbers of k ≥ 0.15 h Mpc^−1 in order to obtain unbiased power spectra from the reconstructions. In the second part of the chapter ”Phase-Space Reconstructions with N-body Simulations”, I describe and validate our approach to inferring the three-dimensional cosmic velocity field jointly with the dark matter density. I use linear perturbation theory for the large-scale bulk flows and a dispersion term to model virialized galaxy motions, showing that our method accurately recovers the real-space positions of the redshift-space-distorted galaxies. I analyze the results with the isotropic as well as the two-dimensional power spectrum. Finally, in the chapter ”Phase-space Reconstructions with Galaxy Redshift Surveys”, I show how I combine all findings and apply the method to the CMASS (Constant (stellar) Mass) galaxy catalogue of the Baryon Oscillation Spectroscopic Survey (BOSS). I describe how our method accounts for observational selection effects inside the reconstruction algorithm. I further demonstrate that a renormalization of the prior distribution function is mandatory to account for higher-order contributions in the structure formation model, and a redshift-dependent bias factor is theoretically motivated and implemented into our method. The various refinements yield unbiased dark matter results up to scales of k ≤ 0.2 h Mpc^−1 in the power spectrum and isotropize the galaxy catalogue down to distances of r ∼ 20 h^−1 Mpc in the correlation function. We further test the results of our cosmic velocity field reconstruction by comparing them to a synthetic mock galaxy catalogue, finding a strong correlation between the mock and the reconstructed velocities.
The applications of both the density field without redshift-space distortions and the velocity reconstructions are very broad; they can be used for improved analyses of the baryonic acoustic oscillations, environmental studies of the cosmic web, and the kinematic Sunyaev-Zel’dovich or integrated Sachs-Wolfe effect.
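The negative binomial likelihood used above to model deviations from Poissonity can be sketched in a standard mean/over-dispersion parameterization (the thesis's exact form and parameter names may differ); its variance λ(1 + λ/β) exceeds the Poisson value λ:

```python
import math

def nb_logpmf(N, lam, beta):
    """Negative binomial log-pmf with mean lam and over-dispersion beta:
    variance = lam * (1 + lam / beta); beta -> infinity recovers Poisson."""
    return (math.lgamma(N + beta) - math.lgamma(beta) - math.lgamma(N + 1)
            + beta * math.log(beta / (beta + lam))
            + N * math.log(lam / (beta + lam)))

lam, beta = 4.0, 2.0
probs = [math.exp(nb_logpmf(N, lam, beta)) for N in range(200)]
mean = sum(N * p for N, p in enumerate(probs))
var = sum((N - mean) ** 2 * p for N, p in enumerate(probs))
# mean ~ lam = 4, var ~ lam * (1 + lam/beta) = 12: super-Poissonian counts
```

The extra variance is what lets the likelihood absorb stochasticity in the galaxy–matter relation that a pure Poisson model would misattribute to the density field.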
The simulation of broad-band (0.1 to 10+ Hz) ground-shaking over deep and spatially extended sedimentary basins at regional scales is challenging. We evaluate the ground-shaking of a potential M 6.5 earthquake in the southern Lower Rhine Embayment, one of the most important areas of earthquake recurrence north of the Alps, close to the city of Cologne in Germany. In a first step, information from geological investigations, seismic experiments, and boreholes is combined to derive a harmonized 3D velocity and attenuation model of the sedimentary layers. Three alternative approaches are then applied and compared to evaluate the impact of the sedimentary cover on ground-motion amplification. The first approach builds on existing response-spectra ground-motion models whose amplification factors empirically take into account the influence of the sedimentary layers through a standard parameterization. In the second approach, site-specific 1D amplification functions are computed from the 3D basin model. Using a random vibration theory approach, we adjust the empirical response spectra predicted for soft-rock conditions by local site amplification factors: amplifications and the associated ground-motions are predicted both in the Fourier and in the response-spectra domain. In the third approach, hybrid physics-based ground-motion simulations are used to predict time histories for soft-rock conditions, which are subsequently modified using the 1D site-specific amplification functions computed in method 2. For large distances and at short periods, the differences between the three approaches become less notable due to the significant attenuation of the sedimentary layers. At intermediate and long periods, generic empirical ground-motion models provide lower levels of amplification from sedimentary soils compared to methods taking into account site-specific 1D amplification functions.
In the near-source region, hybrid physics-based ground-motion models illustrate the potentially large variability of ground-motion due to finite-source effects.
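The second and third approaches apply a 1D amplification function in the frequency domain. As a rough sketch (not the authors' implementation; the amplification function and every parameter below are invented for illustration), a rock-condition time history can be modified by multiplying its Fourier spectrum by A(f) and transforming back:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01                                # sampling interval in seconds (assumed)
rock = rng.normal(size=2048)             # stand-in for a rock-condition time history
freqs = np.fft.rfftfreq(rock.size, d=dt)

def amplification(f):
    """Toy 1D amplification function: a single resonance near 1 Hz.
    Purely illustrative; a real A(f) comes from the 1D soil column."""
    return 1.0 + 2.0 * np.exp(-((f - 1.0) ** 2) / (2 * 0.2 ** 2))

# multiply the spectrum by A(f), then return to the time domain
soil = np.fft.irfft(np.fft.rfft(rock) * amplification(freqs), n=rock.size)
```

Since A(f) here is at least 1 at every frequency, the "soil" record carries more energy than the "rock" one, concentrated around the resonance; a site-specific A(f) would instead amplify the frequencies selected by the basin model.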