In this talk, I would like to share my experiences gained from participating in four CSP solver competitions and the second ASP solver competition. In particular, I’ll talk about how various programming techniques can make huge differences in solving some of the benchmark problems used in the competitions. These techniques include global constraints, table constraints, and problem-specific propagators and labeling strategies for selecting variables and values. I’ll present these techniques with experimental results from B-Prolog and other CLP(FD) systems.
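Two of the techniques named above can be conveyed in a language-neutral form. The following is a minimal Python sketch (not B-Prolog code; all names are illustrative) of a first-fail labeling strategy combined with simple forward-checking propagation, applied to the 6-queens problem:

```python
# Minimal sketch (illustrative only) of two techniques from the talk:
# first-fail variable selection and propagation via forward checking.

def solve(domains, conflict):
    """Backtracking search over {variable: set-of-values} domains.

    conflict(v1, a, v2, b) -> True if v1=a and v2=b are inconsistent.
    """
    if all(len(d) == 1 for d in domains.values()):
        sol = {v: next(iter(d)) for v, d in domains.items()}
        ok = all(not conflict(u, sol[u], v, sol[v])
                 for u in sol for v in sol if u != v)
        return sol if ok else None
    # First-fail labeling: branch on the variable with the smallest
    # remaining (non-singleton) domain.
    var = min((v for v in domains if len(domains[v]) > 1),
              key=lambda v: len(domains[v]))
    for val in sorted(domains[var]):            # value-ordering heuristic
        # Forward checking: prune values inconsistent with var = val.
        new = {v: ({val} if v == var else
                   {w for w in domains[v] if not conflict(var, val, v, w)})
               for v in domains}
        if all(new.values()):                   # no domain wiped out
            sol = solve(new, conflict)
            if sol is not None:
                return sol
    return None

def queens_conflict(r1, c1, r2, c2):
    """Do queens in rows r1, r2 at columns c1, c2 attack each other?"""
    return c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

n = 6
solution = solve({r: set(range(n)) for r in range(n)}, queens_conflict)
print(solution)  # a valid placement, one queen (column) per row
```

Real CLP(FD) systems propagate far more strongly (e.g. a global all_different constraint with arc consistency); the labeling hook shown here is only analogous to the first-fail option found in most CLP(FD) labeling predicates.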
Integration of digital elevation models and satellite images to investigate geological processes.
(2006)
In order to better understand the geological boundary conditions for ongoing or past surface processes, geologists face two important questions: 1) How can we gain additional knowledge about geological processes by analyzing digital elevation models (DEM) and satellite images? 2) Do these efforts present a viable approach for more efficient research? Here, we will present case studies at a variety of scales and levels of resolution to illustrate how we can substantially complement and enhance classical geological approaches with remote sensing techniques. Commonly, satellite- and DEM-based studies are used as a first step in assessing areas of geologic interest. While in the past the analysis of satellite imagery (e.g. Landsat TM) and aerial photographs was carried out to characterize the regional geologic setting, particularly structure and lithology, geologists have increasingly ventured into a process-oriented approach. This entails assessing structures and geomorphic features with a concept that includes active tectonics or tectonic activity on time scales relevant to humans. In addition, these efforts involve analyzing and quantifying the processes acting at the surface by integrating different remote sensing and topographic data (e.g. SRTM-DEM, SSM/I, GPS, Landsat 7 ETM, Aster, Ikonos…). A combined structural and geomorphic study in the hyperarid Atacama desert demonstrates the use of satellite and digital elevation data for assessing geological structures formed by long-term (millions of years) feedback mechanisms between erosion and crustal bending (Zeilinger et al., 2005). The medium-term change of landscapes over hundreds of thousands to millions of years in a more humid setting is shown in an example from southern Chile.
Based on an analysis of rivers/watersheds combined with landscape parameterization using digital elevation models, the geomorphic evolution and change in drainage pattern in the coastal Cordillera can be quantified and put into the context of seismotectonic segmentation of a tectonically active region. This has far-reaching implications for earthquake rupture scenarios and hazard mitigation (K. Rehak, see poster at the IMAF Workshop). Two examples illustrate short-term processes on decadal, centennial and millennial time scales: One study uses orogen-scale precipitation gradients derived from remotely sensed passive microwave data (Bookhagen et al., 2005a). It demonstrates how debris flows were triggered as the response of slopes to abnormally strong rainfall in the interior parts of the Himalaya during intensified monsoons. The area of the orogen that receives high amounts of precipitation during intensified monsoons also contains numerous landslide deposits of up to 1 km³ in volume that were generated during intensified monsoon phases at about 27 and 9 ka (Bookhagen et al., 2005b). Another project in the Swiss Alps compared sets of aerial photographs recorded in different years. By calculating high-resolution surfaces, the mass transport in a landslide could be reconstructed (M. Schwab, Universität Bern). All these examples, although representing only a short and limited selection of projects using remote sensing data in geology, share the common goal of quantifying geological processes. With increasing data resolution and new sensors, future projects will enable us to recognize even more patterns and/or structures indicative of geological processes in tectonically active areas. This is crucial for the analysis of natural hazards like earthquakes, tsunamis and landslides, as well as those hazards that are related to climatic variability.
The integration of remotely sensed data at different spatial and temporal scales with field observations is becoming increasingly important. Many presently highly populated and increasingly utilized regions are subject to significant environmental pressure and often constitute areas of concentrated economic value. Combined remote sensing and ground-truthing in these regions is particularly important, as geologic, seismic and hydrologic data may be limited here due to the recency of infrastructural development. Monitoring ongoing processes and evaluating the remotely sensed data in terms of recurrence of events will greatly enhance our ability to assess and mitigate natural hazards.
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, workshop, 9–10 February 2006
In this paper, we present a finite-state approach to constituency and therewith an analysis of coordination phenomena involving so-called non-constituents. We show that non-constituents can be seen as parts of fully-fledged constituents and therefore be coordinated in the same way. We have implemented an algorithm based on finite state automata that generates an LFG grammar assigning valid analyses to non-constituent coordination structures in the German language.
Nested complementation plays an important role in expressing counter-free, i.e. star-free, and first-order definable languages and their hierarchies. In addition, methods that compile phonological rules into finite-state networks use double-nested complementation or “double negation”. This paper reviews how the double-nested complementation extends to a relatively new operation, generalized restriction (GR), coined by the author (Yli-Jyrä and Koskenniemi 2004). This operation encapsulates a double-nested complementation and elimination of a concatenation marker, diamond, whose finite occurrences align concatenations in the arguments of the operation. The paper demonstrates that the GR operation has an interesting potential in expressing regular languages, various kinds of grammars, bimorphisms and relations. This motivates a further study of optimized implementation of the operator.
Generalized Two-Level Grammar (GTWOL) provides a new method for compilation of parallel replacement rules into transducers. The current paper identifies the role of generalized lenient composition (GLC) in this method. Thanks to the GLC operation, the compilation method becomes bipartite and easily extendible to capture various application modes. In the light of three notions of obligatoriness, a modification to the compilation method is proposed. We argue that the bipartite design makes implementation of parallel obligatoriness, directionality, length and rank based application modes extremely easy, which is the main result of the paper.
We analyze anaphoric phenomena in the context of building an input understanding component for a conversational system for tutoring mathematics. In this paper, we report the results of data analysis of two sets of corpora of dialogs on mathematical theorem proving. We exemplify anaphoric phenomena, identify factors relevant to anaphora resolution in our domain, and propose extensions to the input interpretation component to support it.
The most massive stars are those with the shortest but most active lives. One group of massive stars, the Luminous Blue Variables (LBVs), of which only a few objects are known, is of particular interest concerning the stability of stars. They have high mass-loss rates and are close to being unstable. This is even more likely as rotation becomes an important factor in the stellar evolution of these stars. Through massive stellar winds and sometimes giant eruptions, LBV nebulae are formed. Various aspects of the evolution in the LBV phase lead, besides the large-scale morphological and kinematical differences, to a diversity of small structures like clumps, rims, and outflows in these nebulae.
INTEGRAL tripled the number of super-giant high-mass X-ray binaries (sgHMXB) known in the Galaxy by revealing absorbed and fast transient (SFXT) systems. Quantitative constraints on the wind clumping of massive stars can be obtained from the study of the hard X-ray variability of SFXT. A large fraction of the hard X-ray emission is emitted in the form of flares with a typical duration of 3 ksec, frequency of 7 days and luminosity of $10^{36}$ erg/s. Such flares are most probably emitted by the interaction of a compact object orbiting at $\sim10~R_*$ with wind clumps ($10^{22 ... 23}$ g) representing a large fraction of the stellar mass-loss rate. The density ratio between the clumps and the inter-clump medium is $10^{2 ... 4}$. The parameters of the clumps and of the inter-clump medium, derived from the SFXT flaring behavior, are in good agreement with the macro-clumping scenario and line-driven instability simulations. SFXTs are likely to have larger orbital radii than classical sgHMXB.
We present the results of Monte Carlo mass-loss predictions for massive stars covering a wide range of stellar parameters. We critically test our predictions against a range of observed mass-loss rates – in light of the recent discussions on wind clumping. We also present a model to compute the clumping-induced polarimetric variability of hot stars and we compare this with observations of Luminous Blue Variables, for which polarimetric variability is larger than for O and Wolf-Rayet stars. Luminous Blue Variables constitute an ideal testbed for studies of wind clumping and wind geometry, as well as for wind strength calculations, and we propose they may be direct supernova progenitors.
The interdisciplinary workshop STOCHASTIC PROCESSES WITH APPLICATIONS IN THE NATURAL SCIENCES was held at the Universidad de los Andes in Bogotá from December 5 to December 9, 2016. It brought together researchers from Colombia, Germany, France, Italy, and Ukraine, who communicated recent progress in mathematical research related to stochastic processes with applications in biophysics.
The present volume collects three of the four courses held at this meeting by Angelo Valleriani, Sylvie Rœlly and Alexei Kulik.
A particular aim of this collection is to inspire young scientists in setting up research goals within the wide scope of fields represented in this volume.
Angelo Valleriani, PhD in high energy physics, is group leader of the team "Stochastic processes in complex and biological systems" at the Max Planck Institute of Colloids and Interfaces, Potsdam.
Sylvie Rœlly, Docteur en Mathématiques, is the head of the chair of Probability at the University of Potsdam.
Alexei Kulik, Doctor of Sciences, is a leading researcher at the Institute of Mathematics of the Ukrainian National Academy of Sciences.
Recent studies of massive O-type stars present clear evidence of inhomogeneous and clumped winds. O-type (H-rich) central stars of planetary nebulae (CSPNs) are in some ways the low-mass, low-luminosity analogues of those massive stars. In this contribution, we present preliminary results of our ongoing multi-wavelength (FUV, UV and optical) study of the winds of Galactic CSPNs. Particular emphasis will be given to the clumping factors derived by means of optical lines (Hα and He II 4686) and “classic” FUV (and UV) lines.
Magnetic fields influence the dynamics of hot-star winds and create large-scale structure. Based on numerical magnetohydrodynamic (MHD) simulations, we model the wind of θ¹ Ori C, and then use the SEI method to compute synthetic line profiles for a range of viewing angles as a function of rotational phase. The resulting dynamic spectrum for a moderately strong line shows a distinct modulation, but with a phase that seems at odds with available observations.
While there is strong evidence for clumping in the winds of massive hot stars, very little is known about clumping in the winds from central stars. We have checked [WC]-type CSPN winds for clumping by inspecting the electron-scattering line wings. For at least three stars we found indications of wind inhomogeneities.
Scrambling and interfaces
(2013)
This paper proposes a novel analysis of the Russian OVS construction and argues that the parametric variation in the availability of OVS cross-linguistically depends on the type of relative interpretative argument prominence that a language encodes via syntactic structure. When thematic and information-structural prominence relations do not coincide, only one of them can be structurally/linearly represented. The relation that is not structurally/linearly encoded must be made visible at the PF interface either via prosody or morphology.
Claiming that cross-speaker "but" can signal correction in dialogue, we start by describing the types of corrections "but" can communicate by focusing on the Speech Act (SA) communicated in the previous turn and address the ways in which "but" can correct what is communicated. We address whether "but" corrects the proposition, the direct SA or the discourse relation communicated in the previous turn. We will also briefly address other relations signalled by cross-turn "but". After presenting a typology of the situations "but" can correct, we will address how these corrections can be modelled in the Information State model of dialogue, motivating this work by showing how it can be used to potentially avoid misunderstandings. We wrap up by showing how the model presented here updates beliefs in the Information State representation of the dialogue and can be used to facilitate response deliberation.
Throughout the years 2020 and 2021, schools were temporarily closed to slow the spread of SARS-CoV-2. For some periods, children were locked out of sports in schools (physical education lessons, school sports working groups) and of organized sports in sports clubs, which often resulted in physical inactivity. Did these restrictions affect children’s physical fitness? The EMOTIKON project (www.uni-potsdam.de/emotikon) annually assesses the physical fitness (cardiorespiratory endurance [6-minute-run test], coordination [star-run test], speed [20-m sprint test], lower [standing long jump test] and upper [ball push test] limb muscle power, and balance [one-legged stance test]) of all third graders in the Federal State of Brandenburg, Germany. Participation is mandatory for all public primary schools. In the falls from 2016 to 2021, 83,476 keyage children (i.e., school enrollment according to the legal key date, between eight and nine years old in third grade) from 512 schools were assessed with the EMOTIKON test battery. We tested the Covid pandemic effect on a composite score of the four highly correlated physical fitness tests assessing cardiorespiratory endurance, coordination, speed and powerLOW, and on another composite score of the three running tests (cardiorespiratory endurance, coordination, speed), as well as separately on all six physical fitness components. Secular trends for each of the physical fitness components and differences between schools and children were taken into account in linear mixed models. We found a negative Covid pandemic effect on the two composite physical fitness scores, as well as on cardiorespiratory endurance, coordination, and speed. We found a positive Covid pandemic effect on powerLOW. Coordination was associated with the largest negative Covid pandemic effect, also passing the threshold of smallest meaningful change (SMC, i.e., 0.2 Cohen’s d) when accumulated across two years.
Given the educational context, Covid pandemic effects were also compared relative to the expected age-related development of the physical fitness components between eight and nine years. The Covid pandemic-related developmental costs/gains ranged from three to seven months relative to a longitudinal age effect, and from five to seventeen months relative to a cross-sectional age effect. We propose that a longitudinal assessment yields a more reliable estimate of the developmental (age-related) gain than a cross-sectional one. Therefore, we consider the smaller Covid pandemic-related developmental costs/gains to be more credible. Interestingly, on the school level, "fitter" schools (with a relatively higher grand mean) exhibited larger negative Covid pandemic effects than schools with a lower physical fitness score. Negative Covid pandemic effects for the three run tasks were also found by Bähr et al. (2022), who tested the physical fitness of 16,496 Thuringian third graders from 292 schools with the same six physical fitness tests used in EMOTIKON. Our results may be used to prioritize health-related interventions.
Luminous Blue Variables show strong changes in their stellar wind on time scales of typically years to decades when they expand and contract radially at approximately constant luminosity. Micro-variability on shorter time scales and with smaller amplitudes can be observed superimposed on the larger-scale radial changes. I will show long-term time series of high-resolution spectra which we have collected over the past 20 years for many of the well-known LBVs, together with a few time series with weekly sampling (HR Car, R40, R71, R110, R127, S Dor) covering time windows of up to a few months. Wind variability is seen on short and intermediate time scales, with the line profiles changing from P Cygni to inverse P Cygni and double-peaked profiles, sometimes for the same star and spectral line. On longer time scales the ionisation levels of all chemical elements change drastically due to the strong change of the temperature at the stellar surface. While in the long term the characteristic radial changes may have an impact on the overall mass-loss rates, the variabilities and asymmetries on short and intermediate time scales may cause false estimates of the mass-loss rates when confronting models with the observed line profiles.
Aspect-oriented middleware is a promising technology for the realisation of dynamic reconfiguration in heterogeneous distributed systems. However, like other dynamic reconfiguration approaches, AO-middleware-based reconfiguration requires that the consistency of the system is maintained across reconfigurations. AO-middleware-based reconfiguration is an ongoing research topic and several consistency approaches have been proposed. However, most of these approaches tend to be targeted at specific contexts, whereas for distributed systems it is crucial to cover a wide range of operating conditions. In this paper we propose an approach that offers distributed, dynamic reconfiguration in a consistent manner, and features a flexible framework-based consistency management approach to cover a wide range of operating conditions. We evaluate our approach by investigating its configurability and transparency, and we also quantify the performance overheads of the associated consistency mechanisms.
Recent models of Information Structure (IS) identify a low level contrast feature that functions within the topic and focus of the utterance. This study investigates the exact nature of this feature based on empirical evidence from a controlled read speech experiment on the prosodic realization of different levels of contrast in Modern Greek. Results indicate that only correction is truly contrastive, and that it is similarly realized in both topic and focus, suggesting that contrast is an independent IS dimension. Non-default focus position is further identified as a parameter that triggers a prosodically marked rendition, similar to correction.
We report on new mass-loss rate estimates for O stars in six massive binaries using the amplitude of orbital-phase dependent, linear-polarimetric variability caused by electron scattering off free electrons in the winds. Our estimated mass-loss rates for luminous O stars are independent of clumping. They suggest similar clumping corrections as for WR stars and do not support the recently proposed reduction in mass-loss rates of O stars by one or two orders of magnitude.
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2013. JWST will find the first stars and galaxies that formed in the early universe, connecting the Big Bang to our own Milky Way galaxy. JWST will peer through dusty clouds to see stars forming planetary systems, connecting the Milky Way to our own Solar System. JWST’s instruments are designed to work primarily in the infrared range of 1 - 28 μm, with some capability in the visible range. JWST will have a large mirror, 6.5 m in diameter, and will be diffraction-limited at 2 μm (0.1 arcsec resolution). JWST will be placed in an L2 orbit about 1.5 million km from the Earth. The instruments will provide imaging, coronagraphy, and multi-object and integral-field spectroscopy across the 1 - 28 μm wavelength range. The breakthrough capabilities of JWST will enable new studies of massive star winds from the Milky Way to the early universe.
I discuss observational evidence – independent of the direct spectral diagnostics of stellar winds themselves – suggesting that mass-loss rates for O stars need to be revised downward by roughly a factor of three or more, in line with recent observed mass-loss rates for clumped winds. These independent constraints include the large observed mass-loss rates in LBV eruptions, the large masses of evolved massive stars like LBVs and WNH stars, WR stars in lower metallicity environments, observed rotation rates of massive stars at different metallicity, supernovae that seem to defy expectations of high mass-loss rates in stellar evolution, and other clues. I pay particular attention to the role of feedback that would result from higher mass-loss rates, driving the star to the Eddington limit too soon, and therefore making higher rates appear highly implausible. Some of these arguments by themselves may have more than one interpretation, but together they paint a consistent picture that steady line-driven winds of O-type stars have lower mass-loss rates and are significantly clumped.
A wide range of additional forward-chaining applications could be realized with deductive databases if their rule formalism, their immediate consequence operator, and their fixpoint iteration process were more flexible. Deductive databases normally represent knowledge using stratified Datalog programs with default negation. But many practical applications of forward chaining require an extensible set of user-defined built-in predicates. Moreover, they often need function symbols for building complex data structures, and the stratified fixpoint iteration has to be extended by aggregation operations. We present a new language, Datalog*, which extends Datalog by stratified meta-predicates (including default negation), function symbols, and user-defined built-in predicates, which are implemented and evaluated top-down in Prolog. All predicates are subject to the same backtracking mechanism. The bottom-up fixpoint iteration can aggregate the derived facts after each iteration based on user-defined Prolog predicates.
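The bottom-up fixpoint iteration at the heart of such systems can be sketched compactly. The following Python sketch (our own illustrative names; plain positive Datalog only, without the meta-predicates, function symbols, or aggregation that Datalog* adds) iterates the immediate consequence operator to a fixpoint on a transitive-closure example:

```python
# Naive bottom-up evaluation of positive Datalog: iterate the immediate
# consequence operator T_P until no new facts are derived.
# Atoms are (predicate, (arg, ...)); strings starting uppercase are variables.

def match(atom, fact, subst):
    """Try to unify an atom against a ground fact, extending subst."""
    pred, args = atom
    fpred, fargs = fact
    if pred != fpred or len(args) != len(fargs):
        return None
    s = dict(subst)
    for a, f in zip(args, fargs):
        if a[0].isupper():                  # variable
            if s.setdefault(a, f) != f:
                return None
        elif a != f:                        # constant mismatch
            return None
    return s

def immediate_consequence(rules, facts):
    """One application of T_P: all facts derivable in a single step."""
    derived = set(facts)
    for head, body in rules:
        substs = [{}]
        for atom in body:                   # join the body atoms left to right
            substs = [s2 for s in substs for f in facts
                      if (s2 := match(atom, f, s)) is not None]
        for s in substs:
            pred, args = head
            derived.add((pred, tuple(s.get(a, a) for a in args)))
    return derived

def fixpoint(rules, facts):
    facts = set(facts)
    while True:
        new = immediate_consequence(rules, facts)
        if new == facts:
            return facts
        facts = new

edges = {("edge", ("a", "b")), ("edge", ("b", "c")), ("edge", ("c", "d"))}
rules = [
    (("path", ("X", "Y")), [("edge", ("X", "Y"))]),
    (("path", ("X", "Z")), [("edge", ("X", "Y")), ("path", ("Y", "Z"))]),
]
paths = {args for p, args in fixpoint(rules, edges) if p == "path"}
print(sorted(paths))  # all six reachable pairs over a -> b -> c -> d
```

In Datalog* the body atoms would instead be evaluated top-down in Prolog, and a user-defined aggregation step could compress `facts` after each iteration.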
A constraint programming system combines two essential components: a constraint solver and a search engine. The constraint solver reasons about satisfiability of conjunctions of constraints, and the search engine controls the search for solutions by iteratively exploring a disjunctive search tree defined by the constraint program. The Monadic Constraint Programming framework gives a monadic definition of constraint programming where the solver is defined as a monad threaded through the monadic search tree. Search and search strategies can then be defined as first-class objects that can themselves be built or extended by composable search transformers. Search transformers give a powerful and unifying approach to viewing search in constraint programming, and the resulting constraint programming system is first class and extremely flexible.
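The framework itself is defined in Haskell; the following Python sketch (illustrative names only, and much simplified) conveys two of its core ideas: the search tree is a first-class value, the queuing discipline of the explorer is the search strategy, and a search transformer such as a depth bound rewrites one tree into another:

```python
from collections import deque

# A search tree is ("sol", value), ("fail",), or ("choice", [subtrees]).

def solutions(tree, frontier):
    """Explore a search tree; the frontier discipline IS the strategy:
    a list (stack) gives depth-first, a deque (queue) breadth-first."""
    frontier.append(tree)
    while frontier:
        node = frontier.pop() if isinstance(frontier, list) else frontier.popleft()
        if node[0] == "sol":
            yield node[1]
        elif node[0] == "choice":
            for sub in node[1]:
                frontier.append(sub)

def depth_bound(tree, d):
    """A composable search transformer: prune choices below depth d."""
    if tree[0] != "choice":
        return tree
    if d == 0:
        return ("fail",)
    return ("choice", [depth_bound(sub, d - 1) for sub in tree[1]])

tree = ("choice", [("sol", 1),
                   ("choice", [("sol", 2), ("choice", [("sol", 3)])])])
print(list(solutions(tree, deque())))             # -> [1, 2, 3] (BFS)
print(list(solutions(depth_bound(tree, 2), [])))  # -> [2, 1] (DFS, bounded)
```

Because `depth_bound` maps trees to trees, such transformers compose like the monadic search transformers of the paper, though without the solver monad that the Haskell framework threads through the tree.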
Playing with information: how political games encourage the player to cross the magic circle
(2008)
The concept of the magic circle suggests that the experience of play is separated from reality. However, in order to interact with a game’s rule system, the player has to make meaningful interpretations of its representations – and representations are never neutral. Games with political content refer in their representations explicitly to social discourses. Cues within their representational layers provoke the player to link the experience of play to mental concepts of reality.
We present preliminary results of a tailored atmosphere analysis of six Galactic WC stars using UV, optical, and mid-infrared Spitzer IRS data. With these data, we are able to sample regions from 10 to 10³ stellar radii, and thus to determine wind clumping in different parts of the wind. Ultimately, the derived wind parameters will be used to accurately measure neon abundances, and thereby to test predicted nuclear-reaction rates.
Morphological analyses based on word-syntax approaches can encounter difficulties with long-distance dependencies. The reason is that in some cases an affix has to have access to the inner structure of the form with which it combines. One solution is the percolation of features from the inner morphemes to the outer morphemes with some process of feature unification. However, the obstacle of percolation constraints or stipulated features has led some linguists to argue in favour of other frameworks such as, e.g., realizational morphology or parallel approaches like optimality theory. This paper proposes a linguistic analysis of two long-distance dependencies in the morphology of Russian verbs, namely secondary imperfectivization and deverbal nominalization. We show how these processes can be reanalysed as local dependencies. Although finite-state frameworks are not bound by such linguistically motivated considerations, we present an implementation of our analysis as proposed in [1] that does not complicate the grammar or enlarge the network disproportionately.
Goal-oriented dialog as a collaborative subordinated activity involving collective acceptance
(2006)
Modeling dialog as a collaborative activity consists notably in specifying the content of the Conversational Common Ground and the kind of social mental state involved. In previous work (Saget, 2006), we claimed that Collective Acceptance is the proper social attitude for modeling Conversational Common Ground in the particular case of goal-oriented dialog. Here, we provide a formalization of Collective Acceptance, together with elements needed to integrate this attitude into a rational model of dialog, and finally a model of referential acts as part of a collaborative activity. The particular case of reference has been chosen in order to exemplify our claims.
In a production experiment and two follow-up perception experiments on read German we investigated the (de-)coding of discourse-new, inferentially and textually accessible and given discourse referents by prosodic means. Results reveal that a decrease in the referent’s level of givenness is reflected by an increase in its prosodic prominence (expressed by differences in the status and type of accent used) providing evidence for the relevance of different intermediate types of information status between the poles given and new. Furthermore, perception data indicate that the degree of prosodic prominence can serve as the decisive cue for decoding a referent’s level of givenness.
We present one-dimensional, time-dependent models of the clumps generated by the line-deshadowing instability. In order to follow the clumps out to distances of more than 1000 R∗, we use an efficient moving-box technique. We show that, within the approximations, the wind can remain clumped well into the formation region of the radio continuum.
In semi-arid savannas, unsustainable land use can lead to degradation of entire landscapes, e.g. in the form of shrub encroachment. This leads to habitat loss and is assumed to reduce species diversity. In BIOTA phase 1, we investigated the effects of land use on population dynamics on farm scale. In phase 2 we scale up to consider the whole regional landscape consisting of a diverse mosaic of farms with different historic and present land use intensities. This mosaic creates a heterogeneous, dynamic pattern of structural diversity at a large spatial scale. Understanding how the region-wide dynamic land use pattern affects the abundance of animal and plant species requires the integration of processes on large as well as on small spatial scales. In our multidisciplinary approach, we integrate information from remote sensing, genetic and ecological field studies as well as small scale process models in a dynamic region-wide simulation tool.
Gamma-rays can be produced by the interaction of a relativistic jet and the matter of the stellar wind in the subclass of massive X-ray binaries known as “microquasars”. The relativistic jet is ejected from the surroundings of the compact object and interacts with cold protons from the stellar wind, producing pions that then quickly decay into gamma-rays. Since the resulting gamma-ray emissivity depends on the target density, the detection of rapid variability in microquasars with GLAST and the new generation of Cherenkov imaging arrays could be used to probe the clumped structure of the stellar wind. In particular, we show here that the relative fluctuation in gamma rays may scale with the square root of the ratio of porosity length to binary separation, $\sqrt{h/a}$, implying for example a ca. 10% variation in gamma-ray emission for a quite moderate porosity, h/a ∼ 0.01.
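As a quick numerical check of the quoted scaling (illustrative only, using the relation as stated in the abstract), the relative fluctuation $\sqrt{h/a}$ evaluates as follows for a few porosity values:

```python
import math

# Relative gamma-ray flux fluctuation ~ sqrt(h/a), where h is the porosity
# length and a the binary separation (scaling quoted in the abstract).
for h_over_a in (0.01, 0.04, 0.25):
    print(f"h/a = {h_over_a:<4} -> ~{100 * math.sqrt(h_over_a):.0f}% variation")
# h/a = 0.01 gives ~10%, matching the example in the text.
```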
Massive stars usually form in groups such as OB associations. Their fast stellar winds collectively sweep up the surrounding interstellar medium (ISM) to generate superbubbles. Observations suggest that the evolution of superbubbles in the surrounding ISM can be very irregular. Numerical simulations considering these conditions could help to understand the evolution of these superbubbles and to clarify the dynamics of these objects, as well as the difference between the observed X-ray luminosities and those predicted by the standard model (Weaver et al. 1977).
The H.E.S.S. collaboration recently reported the discovery of VHE γ-ray emission coincident with the young stellar cluster Westerlund 2. This system is known to host a population of hot, massive stars and, most particularly, the WR binary WR 20a. Particle acceleration to TeV energies in Westerlund 2 can be accomplished in several alternative scenarios; we therefore only discuss energetic constraints based on the total available kinetic energy in the system, the actual mass-loss rates of the respective cluster members, and the implied gamma-ray production from processes such as inverse Compton scattering or neutral pion decay. From the inferred gamma-ray luminosity of the order of $10^{35}$ erg/s, implications for the efficiency of converting available kinetic energy into non-thermal radiation associated with stellar winds in the Westerlund 2 cluster are discussed under consideration of either the presence or absence of wind clumping.
Observational evidence exists that the winds of massive stars are clumped. Many massive star systems are known to be sites of non-thermal particle production, as indicated by their synchrotron emission in the radio band. As a consequence they are also considered candidate sites for non-thermal high-energy photon production up to gamma-ray energies. The present work considers the expected effects of wind clumpiness on the emitting relativistic particle spectrum in colliding-wind systems, built up from the pool of thermal wind particles through diffusive particle acceleration and taking into account inverse Compton and synchrotron losses. In comparison to a homogeneous wind, a clumpy wind causes flux variations of the emitting particle spectrum when a clump enters the wind collision region. It is found that the spectral features associated with this variability move temporally from low to high energy bands, with the time shift between any two spectral bands depending on clump size, filling factor, and the energy dependence of particle energy gains and losses.
Fluvial systems are one of the major features shaping a landscape. They adjust to the prevailing tectonic and climatic setting and therefore are very sensitive markers of changes in these systems. If their response to tectonic and climatic forcing is quantified and the climatic signal is excluded, it is possible to derive a local deformation history. Here, we investigate fluvial terraces and erosional surfaces in the southern Chilean forearc to assess the long-term geomorphic and hence tectonic evolution. Remote sensing and field studies of the Nahuelbuta Range show that the long-term deformation of the Chilean forearc is manifested by breaks in topography, sequences of differentially uplifted marine, alluvial and strath terraces, as well as tectonically modified river courses and drainage basins. We used SRTM 90 m data as the basic elevation information for extracting and delineating drainage networks. We calculated hypsometric curves as an indicator of basin uplift, stream-length gradient indices to identify stream segments with anomalous slopes, and longitudinal river profiles as well as DS-plots to identify knickpoints and other anomalies. In addition, we investigated topography with elevation-slope graphs, profiles, and DEMs to reveal erosional surfaces. During the first field trip we measured palaeoflow directions, performed pebble counting and sampled the fluvial terraces in order to apply cosmogenic nuclide dating (¹⁰Be, ²⁶Al) as well as provenance analyses. Our preliminary analysis of the Coastal Cordillera indicates a clear segmentation between the northern and southern parts of the Nahuelbuta Range. The Lanalhue Fault, a NW-SE striking fault zone oblique to the plate boundary, defines the segment boundary. Furthermore, we find a complex drainage re-organisation, including a drainage reversal and a wind gap on the divide between the Tirúa and Pellahuén basins east of the town of Tirúa.
The coastal basins lost most of their Andean sediment supply areas that existed in Tertiary and partly during early Pleistocene time. Between the Bío-Bío and Imperial rivers no Andean river is currently able to traverse the Coastal Cordillera, suggesting ongoing Quaternary uplift of the entire range. From the spatial distribution of geomorphic surfaces in this region two uplift signals may be derived: (1) a long-term differential uplift process, active since the Miocene and possibly caused by underplating of subducted trench sediments, and (2) a younger, local uplift affecting only the northern part of the Nahuelbuta Range that may be caused by the interaction of the forearc with the subduction of the Mocha Fracture Zone at the latitude of the Arauco peninsula. Our approach thus provides a means to decipher the characteristics of forearc development at active convergent margins using long-term geomorphic indicators. Furthermore, we expect our ongoing assessment to constrain repeatedly active zones of deformation.
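The hypsometric analysis used above as an uplift indicator can be sketched in a few lines; a minimal illustration assuming a list of DEM elevation samples for one basin (the function name is hypothetical, not from the study):

```python
def hypsometric_integral(elevations):
    """Approximate hypsometric integral of a drainage basin:
    (mean elevation - min) / (max - min), computed from DEM samples.
    Higher values indicate less dissected, potentially uplifting basins."""
    z_min, z_max = min(elevations), max(elevations)
    z_mean = sum(elevations) / len(elevations)
    return (z_mean - z_min) / (z_max - z_min)

# Evenly distributed elevations give an integral of 0.5:
print(hypsometric_integral([0, 25, 50, 75, 100]))  # 0.5
```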
Avatime, a Kwa language of Ghana, has an additive particle tsyɛ that at first sight looks similar to additive particles such as too and also in English. However, on closer inspection, the Avatime particle behaves differently. Contrary to what is usually claimed about additive particles, tsyɛ does not only associate with focused elements. Moreover, unlike its English equivalents, tsyɛ does not come with a requirement of identity between the expressed proposition and an alternative. Instead, it indicates that the proposition it occurs in is similar to or compatible with a presupposed alternative proposition.
Clumping in O-star winds
(2007)
Discussion : X-rays
(2007)
We exploit time-series $FUSE$ spectroscopy to {\it uniquely} probe spatial structure and clumping in the fast wind of the central star of the H-rich planetary nebula NGC~6543 (HD~164963). Episodic and recurrent optical depth enhancements are discovered in the P{\sc v} absorption troughs, with some evidence for a $\sim$ 0.17-day modulation time-scale. The characteristics of these features are essentially identical to the 'discrete absorption components' (DACs) commonly seen in the UV lines of massive OB stars, suggesting that the temporal structures seen in NGC~6543 likely have a physical origin similar to that operating in massive, luminous stars. The mechanism for forming coherent perturbations in the outflows is therefore apparently operating equally in radiation-pressure-driven winds of widely differing momenta ($\dot{M} v_\infty R_\star^{0.5}$) and flow times, as represented by OB stars and CSPN.
Decisions for the conservation of biodiversity and the sustainable management of natural resources are typically made at large scales, i.e. the landscape level. However, understanding and predicting the effects of land use and climate change on scales relevant to decision-making requires including both large-scale vegetation dynamics and small-scale processes, such as soil-plant interactions. Integrating the results of multiple BIOTA subprojects enabled us to incorporate the necessary data from soil science, botany, socio-economics and remote sensing into a high-resolution, process-based and spatially explicit model. Using an example from a sustainably used research farm and a communally used, degraded farming area in semiarid southern Namibia, we show the power of simulation models as a tool to integrate processes across disciplines and scales.
This paper focuses on the way computer games refer to the context of their formation and asks how they might stimulate the user's understanding of the world around him. The central question is: Do computer games have the potential to inspire our reflection on moral and ethical issues? And if so, by which means do they achieve this? Drawing on concepts of ethical criticism in literary studies as proposed by Wayne C. Booth and Martha Nussbaum, I will argue in favor of an ethical criticism of computer games. Two aspects will be brought into focus: the ethical reflection in the artifact as a whole, and the recipient's emotional involvement. The paper aims at evaluating the interaction of game content and game structure in order to give an adequate insight into the way computer games function and affect us.
The emergence of information extraction (IE) oriented pattern engines has been observed during the last decade. Most of them rely heavily on finite-state devices. This paper introduces ExPRESS, a new extraction pattern engine whose rules are regular expressions over flat feature structures. The underlying pattern language is a blend of two previously introduced IE-oriented pattern formalisms, namely JAPE, used in the widely known GATE system, and the unification-based XTDL formalism used in SProUT. A brief technical overview of ExPRESS, its pattern language and the pool of its native linguistic components is given. Furthermore, the implementation of the grammar interpreter is also addressed.
This paper explores the role of the intentional stance in games, arguing that any question of artificial intelligence has as much to do with co-opting the player's interpretation of actions as intelligent as with any actual fixed-state systems attached to agents. It demonstrates how, using a few simple and, in system terms, cheap tricks, existing AI can be both supported and enhanced. These include representational characteristics, importing behavioral expectations from real life, constraining these expectations using diegetic devices, and managing social interrelationships to create the illusion of a greater intelligence than is ever actually present. It is concluded that complex artificial intelligence is often of less importance to the experience of intelligent agents in play than the creation of a space where the intentional stance can be evoked and supported.
We present an extension to a comprehensive context model that has been successfully employed in a number of practical conversational dialogue systems. The model supports the task of multimodal fusion as well as that of reference resolution in a uniform manner. Our extension consists of integrating implicitly mentioned concepts into the context model and we show how they serve as candidates for reference resolution.
In honour of Seymour Papert
(2018)
Forth is nice and flexible, but to a philosopher and teacher educator Logo is the more impressive language. Both are relatives of Lisp, but Forth has a reverse Polish notation whereas Logo has an infix notation. Logo allows top-down programming, Forth only bottom-up. Logo enables recursive programming, Forth does not. Logo includes turtle graphics, Forth has nothing comparable. So what to do if you can't get Logo and have no information about its inner architecture? This is a case of "empirical modelling": How can you model observable results of the behaviour of Logo in terms of Forth? The main steps to solve this problem are shown in the first part of the paper.
The second part of the paper discusses the problem of modelling and shows that the modelling of making and the modelling of recognition have the same mathematical structure. So "empirical modelling" can also serve for modelling desired behaviour of technical systems.
The last part of the paper will show that the heuristic potential of a problem which should be modeled is more important than the programming language. The Picasso construal shows, in a very simple way, how children of different ages can model emotional relations in human behaviour with a simple Logo system.
Large open-source software projects involve developers with a wide variety of backgrounds and expertise. Such software projects furthermore include many internal APIs that developers must understand and use properly. According to the intended purpose of these APIs, they are more or less frequently used, and used by developers with more or less expertise. In this paper, we study the impact of usage patterns and developer expertise on the rate of defects occurring in the use of internal APIs. For this preliminary study, we focus on memory management APIs in the Linux kernel, as the use of these has been shown to be highly error prone in previous work. We study defect rates and developer expertise, to consider e.g., whether widely used APIs are more defect prone because they are used by less experienced developers, or whether defects in widely used APIs are more likely to be fixed.
In this work an extension of the CSSR algorithm using Maximum Entropy Models is introduced. Preliminary experiments performing Named Entity Recognition with this new system are presented.
Dynamical simulation of the “velocity-porosity” reduction in observed strength of stellar wind lines
(2007)
I use dynamical simulations of the line-driven instability to examine the potential role of the resulting flow structure in reducing the observed strength of wind absorption lines. Instead of the porosity-length formalism used to model effects on continuum absorption, I suggest that reductions in line strength can be better characterized in terms of a velocity clumping factor that is insensitive to spatial scales. Examples of dynamic spectra computed directly from instability simulations do exhibit a net reduction in absorption, but only at a modest 10-20% level that falls well short of the factor of ca. 10 required by recent analyses of P v lines.
X-ray spectroscopy is a sensitive probe of stellar winds. X-rays originate from optically thin shock-heated plasma deep inside the wind and propagate outwards through the absorbing cool material. Recent analyses of the line ratios from He-like ions in the X-ray spectra of O-stars highlighted problems with this general paradigm: the measured line ratios of the highest ions are consistent with the hottest X-ray emitting plasma being located very close to the base of the wind, perhaps indicating the presence of a corona, while measurements from lower ions conform with the wind-embedded shock model. Generally, to correctly model the emerging X-ray spectra, detailed knowledge of the cool-wind opacities based on stellar atmosphere models is a prerequisite. A nearly grey stellar wind opacity for the X-rays is deduced from the analyses of high-resolution X-ray spectra. This indicates that the stellar winds are strongly clumped. Furthermore, the nearly symmetric shape of X-ray emission line profiles can be explained if the wind clumps are radially compressed. In massive binaries the orbital variations of X-ray emission make it possible to probe the opacity of the stellar wind; the results support the picture of strong wind clumping. In high-mass X-ray binaries, the stochastic X-ray variability and the extent of the stellar-wind region photoionized by X-rays provide further strong evidence that stellar winds consist of dense clumps.
We present the tool Kato which is, to the best of our knowledge, the first tool for plagiarism detection that is directly tailored for answer-set programming (ASP). Kato aims at finding similarities between (segments of) logic programs to help detect cases of plagiarism. Currently, the tool is realised for DLV programs, but it is designed to handle various logic-programming syntax versions. We review the basic features and the underlying methodology of the tool.
The optical spectrum of Eta Carinae (η Car) is prominent in H I, He I and Fe II wind lines, all of which vary both in absorption and emission with phase. The phase dependence is a consequence of the interaction between the two objects in the η Car binary (η Car A and B). The binary system is enshrouded by ejecta from previous mass ejection events and, consequently, η Car B is not directly observable. We have traced the He I lines over η Car's spectroscopic period, using HST/STIS data obtained with medium spectral, but high angular, resolving power, and created a radial velocity curve for the system. The He I lines are formed in the core of the system and appear to be a composite of multiple features formed in spatially separated regions. The sources of their irregular line profiles are still not fully understood, but can be attributed to emission/absorption near the wind-wind interface and/or a direct consequence of η Car A's massive, clumpy wind. This paper will discuss the spectral variability, the narrow emission structure of the He I lines, and how the clumpiness of the winds may impede the construction of the reliable radial velocity curve necessary for characterizing especially η Car B.
Metacommunicative circles
(2008)
The paper uses Gregory Bateson’s concept of metacommunication to explore the boundaries of the ‘magic circle’ in play and computer games. It argues that the idea of a self-contained “magic circle” ignores the constant negotiations among players which establish the realm of play. The “magic circle” is no fixed ontological entity but is set up by metacommunicative play. The paper further pursues the question if metacommunication could also be found in single-player computer games, and comes to the conclusion that metacommunication is implemented in single-player games by the means of metalepsis.
This first volume of the DIGAREC Series holds the proceedings of the conference "The Philosophy of Computer Games", held at the University of Potsdam from May 8-10, 2008. The contributions of the conference address three fields of computer game research that are philosophically relevant and, likewise, to which philosophical reflection is crucial. These are: ethics and politics, the action-space of games, and the magic circle. All three topics are interlinked and constitute the paradigmatic object of computer games: whereas the first describes computer games from the outside, looking at the cultural effects of games as well as at moral practices acted out with them, the second describes computer games from the inside, i.e. how they are constituted as a medium. The third, finally, discusses the way in which a border between these two realms, games and non-games, persists or is already transgressed with respect to a general performativity.
Extending Alexander Galloway’s analysis of the action-image in videogames, this essay explores the concept in relation to its source: the analysis of cinema by the French philosopher Gilles Deleuze. The applicability of the concept to videogames will, therefore, be considered through a comparison between the First Person Shooter S.T.A.L.K.E.R. and Andrey Tarkovsky’s film Stalker. This analysis will compellingly explore the nature of videogame-action, its relation to player-perceptions and its location within the machinic and ludic schema.
Hα observations of Rigel obtained on 184 nights during the past ten years with the 1-m telescope and échelle spectrograph of Ritter Observatory are surveyed. The line profiles were classified in terms of morphology. About 1/4 of them are of P Cygni type, about 15% inverse P Cygni, about 25% double-peaked, about 1/3 pure absorption, and a few are single emission lines. Transformation of the profile from one type to another typically takes a few days. Although the line stays in absorption for extended intervals, only one high-velocity absorption event of the intensity reported by Kaufer et al. (1996a) was observed, in late 2006. Late in this event, Hα absorption occurred farther to the red than the red wing of a plausible photospheric absorption component, an indication of infalling material. In general, as the absorption events come to an end, the emission typically returns with an inverse P Cygni profile. The Hα profile class shows no obvious correlation with the radial velocity of C II λ6578, a photospheric absorption line.
General Discussion
(2007)
In the old days (pre ∼1990) hot stellar winds were assumed to be smooth, which made life fairly easy and bothered no one. Then, after suspicious behaviour had been revealed, e.g. stochastic temporal variability in broadband polarimetry of single hot stars, it took the emerging CCD technology developed in the preceding decades (∼1970-80s) to reveal that these winds were far from smooth. It was mainly high-S/N, time-dependent spectroscopy of strong optical recombination emission lines in WR stars, and also in a few OB and other stars with strong hot winds, that indicated that all hot stellar winds are likely pervaded by thousands of multiscale (compressible supersonic turbulent?) structures, whose driver is probably some kind of radiative instability. Quantitative estimates of clumping-independent mass-loss rates came from various fronts, mainly those depending directly on density (e.g. electron-scattering wings of emission lines, UV spectroscopy of weak resonance lines, and binary-star properties including orbital-period changes, electron scattering, and X-ray fluxes from colliding winds) rather than the more common, easier-to-obtain but clumping-dependent density-squared diagnostics (e.g. free-free emission in the IR/radio and recombination lines, of which the favourite has always been Hα). Many big questions still remain, such as: What do the clumps really look like? Do clumping properties change as one recedes from the mother star? Is clumping universal? Does the relative clumping correction depend on $\dot{M}$ itself?
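The distinction between density and density-squared diagnostics drawn above can be illustrated with the standard clumping correction; a sketch under the usual assumption of a void interclump medium, where <ρ²> = f_cl <ρ>² (function and parameter names are hypothetical):

```python
def corrected_mdot(mdot_rho2_diagnostic: float, f_cl: float) -> float:
    """Mass-loss rate corrected for wind clumping: density-squared
    diagnostics (e.g. Halpha, free-free radio emission) overestimate
    Mdot by a factor sqrt(f_cl) for clumping factor f_cl >= 1."""
    return mdot_rho2_diagnostic / f_cl ** 0.5

# A smooth-wind estimate of 1e-5 Msun/yr with f_cl = 4 drops by a factor 2:
print(corrected_mdot(1e-5, 4.0))  # 5e-06
```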
This paper approaches the debate over the notion of “magic circle” through an exploratory analysis of the unfolding of identities/differences in gameplay through Derrida’s différance. Initially, différance is related to the notion of play and identity/difference in Derrida’s perspective. Next, the notion of magic circle through Derrida’s play is analyzed, emphasizing the dynamics of différance to understand gameplay as process; questioning its boundaries. Finally, the focus shifts toward the implications of the interplay of identities and differences during gameplay.
A key problem for models of dialogue is to explain the mechanisms involved in generating and responding to clarification requests. We report a 'Maze task' experiment that investigates the effect of 'spoof' clarification requests on the development of semantic co-ordination. The results provide evidence of both local and global semantic co-ordination phenomena that are not captured by existing dialogue co-ordination models.
We review the effects of clumping on the profiles of resonance doublets. By allowing the ratio of the doublet oscillator strengths to be a free parameter, we demonstrate that doublet profiles contain more information than is normally utilized. In clumped (or porous) winds, this ratio can lie between unity and the ratio of the f-values, and can change as a function of velocity and time, depending on the fraction of the stellar disk that is covered by material moving at a particular velocity at a given moment. Using these insights, we present the results of SEI modeling of a sample of B supergiants, ζ Pup, and a time series for a star whose terminal velocity is low enough to make the components of its Si IV λλ1400 doublet independent. These results are interpreted within the framework of the Oskinova et al. (2007) model, and demonstrate how the doublet profiles can be used to extract information about wind structure.
We discuss the results of time-resolved spectroscopy of three presumably single Population I Wolf-Rayet stars in the Small Magellanic Cloud, where the ambient metallicity is $\sim 1/5 Z_\odot$. We were able to detect and follow numerous small-scale wind-embedded inhomogeneities in all observed stars. The general properties of the moving features, such as their velocity dispersions, emissivities and average accelerations, closely match the corresponding characteristics of small-scale inhomogeneities in the winds of Galactic Wolf-Rayet stars.
How does a shared lexicon arise in a population of agents with differing lexicons, and how can this shared lexicon be maintained over multiple generations? In order to gain some insight into these questions, we present an ALife model in which the lexicon dynamics of populations that possess and that lack metacommunicative interaction (MCI) capabilities are compared. We ran a series of experiments on multi-generational populations whose initial state involved agents possessing distinct lexicons. These experiments reveal some clear differences in the lexicon dynamics of populations that acquire words solely by introspection contrasted with populations that learn using MCI or a mixed strategy of introspection and MCI. The lexicon diverges at a faster rate for an introspective population, eventually collapsing to one single form which is associated with all meanings. This contrasts sharply with MCI-capable populations, in which a lexicon is maintained and every meaning is associated with a unique word. We also investigated the effect of increasing the meaning space and showed that it speeds up the lexicon divergence for all populations irrespective of their acquisition method.
Demonstratives, in particular gestures that "only" accompany speech, are not a big issue in current theories of grammar. If we deal with gestures, fixing their function is one big problem, the other one is how to integrate the representations originating from different channels and, ultimately, how to determine their composite meanings. The growing interest in multi-modal settings, computer simulations, human-machine interfaces and VR applications increases the need for theories of multimodal structures and events. In our workshop contribution we focus on the integration of multimodal contents and investigate different approaches dealing with this problem such as Johnston et al. (1997) and Johnston (1998), Johnston and Bangalore (2000), Chierchia (1995), Asher (2005), and Rieser (2005).
Classical SDRT (Asher and Lascarides, 2003) discussed essential features of dialogue like adjacency pairs or corrections and up-dating. Recent work in SDRT (Asher, 2002, 2005) aims at the description of natural dialogue. We use this work to model situated communication, i.e. dialogue, in which sub-sentential utterances and gestures (pointing and grasping) are used as conventional modes of communication. We show that in addition to cognitive modelling in SDRT, capturing mental states and speech-act related goals, special postulates are needed to extract meaning out of contexts. Gestural meaning anchors Discourse Referents in contextually given domains. Both sorts of meaning are fused with the meaning of fragments to get at fully developed dialogue moves. This task accomplished, the standard SDRT machinery, tagged SDRSs, rhetorical relations, the up-date mechanism, and the Maximize Discourse Coherence constraint generate coherent structures. In sum, meanings from different verbal and non-verbal sources are assembled using extended SDRT to form coherent wholes.
This paper suggests an approach to studying the rhetoric of persuasive computer games through comparative analysis. A comparison of the military propaganda game AMERICA’S ARMY to similar shooter games reveals an emphasis on discipline and constraints in all main aspects of the games, demonstrating a preoccupation with ethos more than pathos. Generalizing from this, a model for understanding game rhetoric through balances of freedom and constraints is proposed.
Both Alpine and Mediterranean areas are considered sensitive to so-called global change, understood as the combination of climate and land-use changes. All panels on climate evolution predict future scenarios of increasing frequency and magnitude of floods, which are likely to lead to huge geomorphic adjustments of river channels; a major metamorphosis of fluvial systems is thus expected as a result of global change. Such pressures are likely to give rise to major ecological and economic changes and challenges that governments need to address as a matter of priority. Changes in river flow regimes associated with global change are therefore ushering in a new era in which there is a critical need to evaluate hydro-geomorphological hazards from headwaters to lowland areas (flooding is not just a problem of being under water). A key question is how our understanding of the hazards associated with global change can be improved; improvement has to come from integrated research that includes the climatological and physical conditions that could influence hydrology and sediment generation, and hence the conveyance of water and sediments (including the river's capacity, i.e. the amount of sediment, and competence, i.e. channel deformation), as well as the vulnerabilities and economic repercussions of changing hydrological hazards (including the evaluation of hydro-geomorphological risks).
Within this framework, the purpose of this international symposium is to bring together researchers from several disciplines, such as hydrology, fluvial geomorphology, hydraulic engineering, environmental science, geography, economics and any other related discipline, to discuss the effects of global change on the river system in relation to floods. The symposium is organized by means of invited talks given by prominent experts, oral lectures, poster sessions and discussion sessions for each individual topic; it aims to improve our understanding of how rivers are likely to evolve as a result of global change and hence to address the associated hazards of fluvial environmental change concerning flooding.
Four main topics will be addressed:
- Modelling global change (i.e. climate and land-use) at relevant spatial (regional, local) and temporal (from the long-term to the single-event) scales.
- Measuring and modelling river floods from the hydrological, sediment transport (both suspended and bedload) and channel morphology points of view at different spatial (from the catchment to the reach) and temporal (from the long-term to the single-event) scales.
- Evaluation and assessment of current and future river flooding hazards and risks in a global change perspective.
- Catchment management to face river floods in a changing world.
We are very pleased to welcome you to Potsdam. We hope you will enjoy your participation in the International Symposium on the Effects of Global Change on Floods, Fluvial Geomorphology and Related Hazards in Mountainous Rivers and have an exciting and profitable experience. Finally, we would like to thank all speakers, participants, supporters, and sponsors for their contributions, which will surely make this event a remarkable and fruitful meeting. We acknowledge the valuable support of the European Commission (Marie Curie Intra-European Fellowship, Project "Floodhazards", PIEF-GA-2013-622468, Seventh EU Framework Programme) and the Deutsche Forschungsgemeinschaft (Research Training Group "Natural Hazards and Risks in a Changing World", NatRiskChange, GRK 2043/1); without their help and your cooperation, this symposium would be neither possible nor successful.
We apply the 3-dimensional radiative transport code Wind3D to 3D hydrodynamic models of Corotating Interaction Regions to fit the detailed variability of Discrete Absorption Components observed in the Si IV UV resonance lines of HD 64760 (B0.5 Ib). We discuss important effects of the hydrodynamic input parameters on these large-scale equatorial wind structures, which determine the detailed morphology of the DACs computed with 3D transfer. The best-fit model reveals that the CIR in HD 64760 is produced by a source at the base of the wind that lags behind the stellar surface rotation. The non-corotating coherent wind structure is an extended density wave produced by a local increase of only 0.6% in the smooth symmetric wind mass-loss rate.
Most play spaces support completely different actions than we normally would think of when moving through real space, out of play. This paper therefore discusses the relationship between selected game rules and game spaces in connection to the behaviors, or possible behaviors, of the player. Space will be seen as a modifier or catalyst of player behavior. Six categories of game space are covered: Joy of movement, exploration, tactical, social, performative, and creative spaces. Joy of movement is examined in detail, with a briefer explanation of the other categories.
Clumping in Galactic WN stars: a comparison of mass loss rates from UV/optical & radio diagnostics
(2007)
The mass-loss rates and other parameters for a large sample of Galactic WN stars have been revised by Hamann et al. (2006), using the most up-to-date Potsdam Wolf-Rayet (PoWR) model atmospheres. For a sub-sample of these stars, measurements of their radio free-free emission exist. After harmonizing the adopted distances and terminal wind velocities, we compare the mass-loss rates obtained from the two diagnostics. The differences are discussed as a possible consequence of a different clumping contrast in the line-forming and radio-emitting regions.
This text compares the special characteristics of game space in computer-generated environments with those of non-computerized play situations. In doing so, it challenges the concept of the magic circle as a deliberately delineated sphere of play with specific rules to be upheld by the players. Computer games, too, provide a virtual playing environment containing the rules of the game as well as the various possibilities for action. But both the hardware and the software facilitate the player’s actions rather than constraining them. This makes computer games fundamentally different: in contrast to traditional game spaces or boundaries, the computer-generated environment does not rely on the players’ awareness to uphold these rules. Thus, there is no magic circle.
Landscape aesthetics drawing on philosophy and psychology allow us to understand computer games from a new angle. The landscapes of computer games can be understood as environments or as images. This difference creates two options: 1. we experience either environments or images, or 2. we experience landscape simultaneously as both. Psychologically, the first option can be backed by a Vygotskian framework (this option highlights certain non-mainstream subject positions), the second by a Piagetian one (highlighting cognitive mapping of game worlds).
We present XMM-Newton Reflection Grating Spectrometer observations of pairs of X-ray emission line profiles from the O star ζ Pup that originate from the same He-like ion. The two profiles in each pair have different shapes and cannot both be consistently fit by models assuming the same wind parameters. We show that the differences in profile shape can be accounted for in a model including the effects of resonance scattering, which affects the resonance line in the pair but not the intercombination line. This implies that resonance scattering is also important in single resonance lines, where its effect is difficult to distinguish from a low effective continuum optical depth in the wind. Thus, resonance scattering may help reconcile X-ray line profile shapes with literature mass-loss rates.
We present a formal analysis of iconic coverbal gesture. Our model describes the incomplete meaning of gesture that is derivable from its form, and the pragmatic reasoning that yields a more specific interpretation. Our formalism builds on established models of discourse interpretation to capture key insights from the descriptive literature on gesture: synchronous speech and gesture express a single thought, but while the form of iconic gesture is an important clue to its interpretation, the content of gesture can be resolved only by linking it to its context.
Mass accretion onto compact objects through accretion disks is a common phenomenon in the universe. It is seen in all energy domains, from active galactic nuclei through cataclysmic variables (CVs) to young stellar objects. Because CVs are fairly easy to observe, they provide an ideal opportunity to study accretion disks in great detail and thus also help us to understand accretion in other energy ranges. Mass accretion in these objects is often accompanied by mass outflow from the disks. This accretion disk wind, at least in CVs, is thought to be radiatively driven, similar to O star winds. WOMPAT, a 3-D Monte Carlo radiative transfer code for accretion disk winds of CVs, is presented.
HPI Future SOC Lab
(2015)
The Future SOC Lab at HPI is a cooperation between the Hasso Plattner Institute and various industry partners. Its mission is to enable and promote exchange between the research community and industry.
The lab provides interested researchers, free of charge, with an infrastructure of the latest hardware and software for research purposes. This includes technologies that are in part not yet available on the market and that would typically be unaffordable within a regular university budget, for example servers with up to 64 cores and 2 TB of main memory. These offerings are aimed in particular at researchers in computer science and business informatics. Focus areas include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of the research projects conducted in 2015. Selected projects presented their results on April 15, 2015 and November 4, 2015 at the Future SOC Lab Day events.
The rigorous development, application, and validation of distributed hydrological models requires evaluating data in a spatially distributed way. In particular, spatial model predictions, such as the distribution of soil moisture, runoff-generating areas, nutrient-contributing areas, or erosion rates, have to be assessed against spatially distributed observations. Model inputs, such as the distribution of modelling units derived by GIS and remote sensing analyses, should likewise be evaluated against ground-based observations of landscape characteristics. So far, however, quantitative methods of spatial field comparison have rarely been used in hydrology. In this paper, we present algorithms for comparing observed and simulated spatial hydrological data. The methods can be applied to binary and categorical data on regular grids. They comprise cell-by-cell algorithms, cell-neighbourhood approaches that account for fuzziness of location, and multi-scale algorithms that evaluate the similarity of spatial fields at changing resolution. All methods provide a quantitative measure of the similarity of two maps. The comparison methods are applied in two mountainous catchments in southern Germany (Brugga, 40 km²) and Austria (Löhnersbach, 16 km²). As an example of binary hydrological data, the distribution of saturated areas is analyzed in both catchments. For categorical data, vegetation zones that are associated with different runoff generation mechanisms are analyzed in the Löhnersbach. Mapped spatial patterns are compared to simulated patterns from terrain index calculations and from satellite image analysis. We discuss how particular features of visual similarity between the spatial fields are captured by the quantitative measures, leading to recommendations on suitable algorithms in the context of evaluating distributed hydrological models.
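To make the two simplest classes of comparison concrete, here is a minimal sketch of a cell-by-cell agreement score and a cell-neighbourhood score that tolerates fuzziness of location, for binary maps on a regular grid. This is an illustration of the general idea only, not the authors' implementation; the function names, the toy maps, and the one-cell search radius are invented for the example.

```python
import numpy as np

def cellwise_agreement(obs, sim):
    """Fraction of grid cells where the two binary maps agree."""
    obs, sim = np.asarray(obs, bool), np.asarray(sim, bool)
    return float(np.mean(obs == sim))

def fuzzy_agreement(obs, sim, radius=1):
    """Fraction of simulated 'true' cells with an observed 'true' cell
    within `radius` cells (accounts for fuzziness of location)."""
    obs, sim = np.asarray(obs, bool), np.asarray(sim, bool)
    rows, cols = np.nonzero(sim)
    hits = 0
    for r, c in zip(rows, cols):
        # look at the (2*radius+1)^2 neighbourhood, clipped at map edges
        window = obs[max(r - radius, 0):r + radius + 1,
                     max(c - radius, 0):c + radius + 1]
        hits += int(window.any())
    return hits / max(rows.size, 1)

# Toy maps: the simulated saturated area is shifted by one cell.
obs = np.array([[1, 0, 0],
                [0, 1, 0],
                [0, 0, 0]])
sim = np.array([[0, 1, 0],
                [0, 1, 0],
                [0, 0, 0]])

print(cellwise_agreement(obs, sim))  # 7 of 9 cells agree
print(fuzzy_agreement(obs, sim))     # the shift is forgiven within 1 cell
```

The contrast between the two scores on the shifted toy map is exactly the motivation for neighbourhood approaches: a strict cell-by-cell measure penalizes small locational errors that a fuzzy measure treats as agreement.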
We study the influence of clumping on the predicted wind structure of O-type stars. For this purpose we artificially include clumping into our stationary wind models. When the clumps are assumed to be optically thin, the radiative line force increases compared to corresponding unclumped models, with a similar effect on either the mass-loss rate or the terminal velocity (depending on the onset of clumping). Optically thick clumps, alternatively, might be able to decrease the radiative force.
The influence of the wind on the total continuum of OB supergiants is discussed. For wind velocity distributions with β > 1.0, the wind can strongly influence the total continuum emission, even at optical wavelengths. Comparing the continuum emission of clumped and unclumped winds, especially for stars with high β values, yields flux differences of up to 30%, with a maximum in the near-IR. Continuum observations at these wavelengths are therefore an ideal tool to discriminate between clumped and unclumped winds of OB supergiants.
We describe an experiment to gather original data on geometrical aspects of pointing. In particular, we focus on the concept of the pointing cone, a geometrical model of the spatial extension of a pointing gesture. Our setting employed methodological and technical procedures of a new type to integrate data from annotations as well as from tracker recordings. We combined exact information on position and orientation with raters’ classifications. Our first results seem to challenge classical linguistic and philosophical theories of demonstration, in that they suggest separating pointing from reference.
Developmental Gains in Physical Fitness Components of Keyage and Older-than-Keyage Third-Graders
(2022)
Children who were enrolled according to legal enrollment dates (i.e., keyage third-graders aged eight to nine years) exhibit a positive linear physical fitness development (Fühner et al., 2021). In contrast, children who were enrolled with a delay of one year or who repeated a grade (i.e., older-than-keyage [OTK] children aged nine to ten years in third grade) appear to exhibit poorer physical fitness than could be expected given their chronological age (Fühner et al., 2022). Because Fühner et al. (2022) compared the performance of OTK children to predicted test scores extrapolated from the data of keyage children, the observed physical fitness of these children could indicate either a delayed physical fitness development or physiological or psychological changes occurring during the tenth year of life. We investigate four hypotheses about this effect. (H1) OTK children are biologically younger than keyage children. A formula transforming OTK children’s chronological age into a proxy for their biological age brings some of the observed cross-sectional age-related development in line with the development predicted from the data of keyage children, but large negative group differences remain. Hypotheses 2 to 4 were tested with a longitudinal assessment. (H2) Physiological changes due to biological maturation or psychological factors cause a stagnation of physical fitness development in the tenth year of life. H2 predicts a decline in performance from third to fourth grade also for keyage children. (H3) OTK children exhibit an age-related (temporary) developmental delay in the tenth year of life but later catch up to the performance of age-matched keyage children. H3 predicts a larger developmental gain from third to fourth grade for OTK than for keyage children. (H4) OTK children exhibit a sustained physical fitness deficit and do not catch up over time.
H4 predicts a positive development for both keyage and OTK children, with no greater development for OTK than for keyage children. The longitudinal study was based on a subset of children from the EMOTIKON project (www.uni-potsdam.de/emotikon). The physical fitness (cardiorespiratory endurance [6-minute-run test], coordination [star-run test], speed [20-m sprint test], lower [standing long jump test] and upper [ball push test] limb muscle power, and balance [one-legged-stance test]) of 1,274 children (1,030 keyage and 244 OTK children) from 32 schools was tested in third grade and retested one year later in fourth grade. Results: (a) Both keyage and OTK children exhibit a positive longitudinal development from third to fourth grade in all six physical fitness components. (b) There is no evidence for a different longitudinal development of keyage and OTK children. (c) Keyage children (approximately 9.5 years old in fourth grade) outperform age-matched OTK children (approximately 9.5 years old in third grade) in all six physical fitness components. The results show that the physical fitness of OTK children is indeed impaired, and they support a sustained difference in physical fitness between the groups of keyage and OTK children (H4).
In this paper, doubling in Russian Sign Language (RSL) and Sign Language of the Netherlands (NGT) is discussed. In both sign languages, different constituents (including verbs, nouns, adjectives, adverbs, and whole clauses) can be doubled. It is shown that doubling in both languages has common functions and exhibits a similar structure, despite some differences. On this basis, a unified pragmatic explanation for many doubling phenomena on both the discourse and the clause-internal levels is provided, namely that the main function of doubling in both RSL and NGT is foregrounding of the doubled information.
We model the line profile variability (lpv) in spectra of clumped stellar atmospheres using the Stochastic Clump Model (SCM) of the winds of early-type stars. In this model, the formation of dense inhomogeneities (clumps) in line-driven winds is treated as a stochastic process. The emission from clumps is assumed to contribute mainly to the intensities of emission lines in the stellar spectra. It is shown that, in the framework of the SCM, it is possible to reproduce both the mean line profiles and a common pattern of the lpv.
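The core idea, that a mean profile and its epoch-to-epoch variability both emerge from randomly formed clumps, can be illustrated with a deliberately minimal numerical sketch. This is not the SCM itself: the Poisson clump count, uniform velocity distribution, Gaussian bump shape, and all parameter values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.linspace(-1.0, 1.0, 201)  # Doppler velocity in units of terminal speed

def epoch_profile(n_mean=30, width=0.05):
    """One observing epoch: a Poisson-distributed number of clumps,
    each adding a small Gaussian emission bump at a random velocity."""
    n = rng.poisson(n_mean)
    profile = np.zeros_like(v)
    for vc in rng.uniform(-1.0, 1.0, n):
        profile += np.exp(-0.5 * ((v - vc) / width) ** 2)
    return profile

# Many epochs: the average is smooth, single epochs are bumpy.
profiles = np.array([epoch_profile() for _ in range(200)])
mean_profile = profiles.mean(axis=0)   # analogue of the mean emission line
variability = profiles.std(axis=0)     # analogue of the lpv amplitude
```

Averaging over epochs washes out the individual bumps into a smooth mean line, while the standard deviation across epochs traces where clump-induced variability is strongest, which is the qualitative behaviour the abstract describes.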