Personal papers and estates (Nachlässe) are private property and are therefore not subject to any deposit obligation. The wishes of the person leaving the estate regarding the further preservation of his or her written legacy consequently take precedence over all of our wishes. We cannot demand; we can only ask, make our case through our own services, and convince future estate holders or their heirs. The lack of institutional responsibility for the acquisition of such estates creates friction between the institutions that seek to acquire them: archives, libraries, museums, and collections. The desire to acquire the estate of a particular person, whether scientist, artist, or politician, therefore always exists at several places at once. Unfortunately, chance then usually decides where the estate will be kept and made available for research in the future. This raises the question of whether we should hope and wait for such chance, or whether we should instead pursue a committed acquisition policy coordinated among the archives. ------------ Contributions on the topic "Personal papers at university archives and archives of scientific institutions", presented at the spring meeting of Fachgruppe 8, "Archivists at university archives and archives of scientific institutions", on 16/17 June at the Universität Potsdam.
We introduce a simple approach extending the input language of Answer Set Programming (ASP) systems by multi-valued propositions. Our approach is implemented as a (prototypical) preprocessor translating logic programs with multi-valued propositions into logic programs with Boolean propositions only. Our translation is modular and heavily benefits from the expressive input language of ASP. The resulting approach, along with its implementation, allows for solving interesting constraint satisfaction problems in ASP, showing a good performance.
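The kind of translation this abstract describes can be sketched in a few lines: a multi-valued proposition over a finite domain becomes one Boolean atom per value, plus a choice rule enforcing that exactly one of them holds. The function below is a minimal illustration of that idea; the proposition name, the domain, and the generated gringo-style syntax are our assumptions, not the paper's actual preprocessor.

```python
def translate(prop, domain):
    """Translate a multi-valued proposition into Boolean ASP atoms.

    A proposition `prop` ranging over `domain` becomes one Boolean
    atom per value together with an exactly-one choice rule, so the
    rewritten program can be handed to any plain Boolean ASP solver.
    (Illustrative sketch only; the paper's preprocessor and concrete
    syntax are assumptions here.)
    """
    atoms = ["%s_%s" % (prop, v) for v in domain]
    choice = "1 { %s } 1." % "; ".join(atoms)  # exactly one value holds
    return atoms, choice

atoms, rule = translate("color", ["red", "green", "blue"])
print(atoms)  # ['color_red', 'color_green', 'color_blue']
print(rule)   # 1 { color_red; color_green; color_blue } 1.
```

Because the rewriting is purely local to each multi-valued proposition, it is modular in exactly the sense the abstract claims.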
We summarize Chandra observations of the emission line profiles from 17 OB stars. The lines tend to be broad and unshifted. The forbidden/intercombination line ratios arising from helium-like ions provide radial distance information for the X-ray emission sources, while the H-like to He-like line ratios provide X-ray temperatures, and thus also source temperature versus radius distributions. OB stars usually show power law differential emission measure distributions versus temperature. In models of bow shocks, we find a power law differential emission measure, a wide range of ion stages, and bow shock flow around the clumps that provides transverse velocities comparable to HWHM values. We find that the bow shock results for the line profile properties are consistent with the observations of X-ray line emission for a broad range of OB star properties.
Luminous Blue Variables show strong changes in their stellar wind on time scales of typically years to decades when they expand and contract radially at approximately constant luminosity. Micro-variability on shorter time scales and with smaller amplitudes can be observed superimposed on the larger scale radial changes. I will show long-term time series of high resolution spectra which we have collected over the past 20 years for many of the well known LBVs, together with a few time series with weekly sampling (HR Car, R40, R71, R110, R127, S Dor) covering time windows of up to a few months. Wind variability is seen on short and intermediate time scales, with the line profiles changing from P Cygni to inverse P Cygni and double-peaked profiles, sometimes for the same star and spectral line. On longer time scales the ionisation levels of all chemical elements change drastically due to the strong change of the temperature at the stellar surface. While in the long term the characteristic radial changes may have an impact on the overall mass-loss rates, the variability and asymmetries on short and intermediate time scales may cause false estimates of the mass-loss rates when confronting models with the observed line profiles.
The most massive stars are those with the shortest but most active lives. One group of massive stars, the Luminous Blue Variables (LBVs), of which only a few objects are known, is of particular interest concerning the stability of stars. They have a high mass-loss rate and are close to being unstable. This is all the more likely as rotation becomes an important factor in the stellar evolution of these stars. Through massive stellar winds and sometimes giant eruptions, LBV nebulae are formed. Various aspects of the evolution in the LBV phase lead, besides the large scale morphological and kinematical differences, to a diversity of small structures like clumps, rims, and outflows in these nebulae.
We discuss the results of time-resolved spectroscopy of three presumably single Population I Wolf-Rayet stars in the Small Magellanic Cloud, where the ambient metallicity is $\sim 1/5 Z_\odot$. We were able to detect and follow numerous small-scale wind-embedded inhomogeneities in all observed stars. The general properties of the moving features, such as their velocity dispersions, emissivities and average accelerations, closely match the corresponding characteristics of small-scale inhomogeneities in the winds of Galactic Wolf-Rayet stars.
The influence of the wind on the total continuum of OB supergiants is discussed. For wind velocity distributions with β > 1.0, the wind can have a strong influence on the total continuum emission, even at optical wavelengths. Comparing the continuum emission of clumped and unclumped winds, especially for stars with high β values, yields flux differences of up to 30%, with the maximum in the near-IR. Continuum observations at these wavelengths are therefore an ideal tool to discriminate between clumped and unclumped winds of OB supergiants.
In this talk, I would like to share my experiences gained from participating in four CSP solver competitions and the second ASP solver competition. In particular, I’ll talk about how various programming techniques can make huge differences in solving some of the benchmark problems used in the competitions. These techniques include global constraints, table constraints, and problem-specific propagators and labeling strategies for selecting variables and values. I’ll present these techniques with experimental results from B-Prolog and other CLP(FD) systems.
The H.E.S.S. collaboration recently reported the discovery of VHE γ-ray emission coincident with the young stellar cluster Westerlund 2. This system is known to host a population of hot, massive stars, and, most particularly, the WR binary WR 20a. Particle acceleration to TeV energies in Westerlund 2 can be accomplished in several alternative scenarios; therefore we only discuss energetic constraints based on the total available kinetic energy in the system, the actual mass loss rates of the respective cluster members, and the implied gamma-ray production from processes such as inverse Compton scattering or neutral pion decay. From the inferred gamma-ray luminosity of the order of $10^{35}$ erg/s, implications for the efficiency of converting available kinetic energy into non-thermal radiation associated with stellar winds in the Westerlund 2 cluster are discussed under consideration of either the presence or absence of wind clumping.
Constitutional Jurisdiction in the Russian Federation and in the Federal Republic of Germany
(2013)
The conference volume contains the papers and discussion contributions of the round-table on constitutional jurisdiction held at the Kutafin Moscow State Law University on 9 and 10 October 2012. It covers selected questions of legal history and legal policy as well as current legal problems of constitutional jurisdiction in the Russian Federation and the Federal Republic of Germany, from the perspective of both legal practice and scholarship: in particular the development of constitutional jurisdiction in history and at present; the status, legal nature, and tasks of the constitutional courts in the subjects of the Federation and in the German Länder; and constitutional courts and legislation. In addition, special questions of constitutional jurisdiction are discussed, e.g. the institution of the Plenipotentiary Representative of the President before the Constitutional Court in Russia, interim relief granted by the BVerfG, and legal protection against excessively long proceedings before the BVerfG in Germany.
Verbal or visual? How information is distributed across speech and gesture in spatial dialog
(2006)
In spatial dialog, as in direction giving, humans make frequent use of speech-accompanying gestures. Some gestures convey largely the same information as speech while others complement speech. This paper reports a study on how speakers distribute meaning across speech and gesture, and on which factors this depends. Utterance meaning and the wider dialog context were tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, while feedback signals by the addressee had no influence.
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, workshop of 9-10 February 2006
Gamma-rays can be produced by the interaction of a relativistic jet and the matter of the stellar wind in the subclass of massive X-ray binaries known as “microquasars”. The relativistic jet is ejected from the surroundings of the compact object and interacts with cold protons from the stellar wind, producing pions that then quickly decay into gamma-rays. Since the resulting gamma-ray emissivity depends on the target density, the detection of rapid variability in microquasars with GLAST and the new generation of Cherenkov imaging arrays could be used to probe the clumped structure of the stellar wind. In particular, we show here that the relative fluctuation in gamma rays may scale with the square root of the ratio of porosity length to binary separation, $\sqrt{h/a}$, implying for example a ca. 10% variation in gamma ray emission for a quite moderate porosity, h/a ∼ 0.01.
We present an analysis of student language input in a corpus of tutoring dialogue in the domain of symbolic differentiation. Our focus on procedural tutoring makes the dialogue comparable to collaborative problem-solving (CPS). Existing CPS models describe the process of negotiating plans and goals, which also fits procedural tutoring. However, we provide a classification of student utterances and corpus annotation which shows that approximately 28% of non-trivial student language in this corpus is not accounted for by existing models, and addresses other functions, such as evaluating past actions or correcting mistakes. Our analysis can be used as a foundation for improving models of tutoring dialogue.
Various properties of programs implemented in Constraint Handling Rules (CHR) have already been investigated. Proving these properties in CHR is considerably simpler than proving them in any type of imperative programming language, which motivated the proposal of a methodology for mapping imperative programs into equivalent CHR programs. The equivalence of both programs implies that if a property is satisfied for one, then it is satisfied for the other. The mapping methodology can be put to other beneficial uses. One such use is the automatic generation of global constraints, in an attempt to demonstrate the benefits of having a rule-based implementation for constraint solvers.
Generalized Two-Level Grammar (GTWOL) provides a new method for compilation of parallel replacement rules into transducers. The current paper identifies the role of generalized lenient composition (GLC) in this method. Thanks to the GLC operation, the compilation method becomes bipartite and easily extendible to capture various application modes. In the light of three notions of obligatoriness, a modification to the compilation method is proposed. We argue that the bipartite design makes implementation of parallel obligatoriness, directionality, length and rank based application modes extremely easy, which is the main result of the paper.
We study the influence of clumping on the predicted wind structure of O-type stars. For this purpose we artificially include clumping into our stationary wind models. When the clumps are assumed to be optically thin, the radiative line force increases compared to corresponding unclumped models, with a similar effect on either the mass-loss rate or the terminal velocity (depending on the onset of clumping). Optically thick clumps, alternatively, might be able to decrease the radiative force.
Mass loss is a very important aspect of the life of massive stars. After briefly reviewing its importance, we discuss the impact of the recently proposed downward revision of mass loss rates due to clumping (difficulty in forming Wolf-Rayet stars and production of critically rotating stars). Although a small reduction might be allowed, large reduction factors around ten are disfavoured. We then discuss the possibility of significant mass loss at very low metallicity due to stars reaching break-up velocities and especially due to the metal enrichment of the surface of the star via rotational and convective mixing. This significant mass loss may help the first very massive stars avoid the fate of a pair-creation supernova, the chemical signature of which is not observed in extremely metal poor stars. The chemical composition of the very low metallicity winds is very similar to that of the most metal poor star known to date, HE1327-2326, and offers an interesting explanation for the origin of the metals in this star. We also discuss the importance of mass loss in the context of long and soft gamma-ray bursts and pair-creation supernovae. Finally, we would like to stress that mass loss in the cooler parts of the HR diagram (luminous blue variable and yellow and red supergiant stages) is much more uncertain than in the hot part. More work needs to be done in these areas to better constrain the evolution of the most massive stars.
We review the effects of clumping on the profiles of resonance doublets. By allowing the ratio of the doublet oscillator strengths to be a free parameter, we demonstrate that doublet profiles contain more information than is normally utilized. In clumped (or porous) winds, this ratio can lie between unity and the ratio of the f-values, and can change as a function of velocity and time, depending on the fraction of the stellar disk that is covered by material moving at a particular velocity at a given moment. Using these insights, we present the results of SEI modeling of a sample of B supergiants, ζ Pup, and a time series for a star whose terminal velocity is low enough to make the components of its Si IV λλ1400 doublet independent. These results are interpreted within the framework of the Oskinova et al. (2007) model, and demonstrate how the doublet profiles can be used to extract information about wind structure.
During the last few years there was a tremendous growth of scientific activities in the fields related to both Physics and Control theory: nonlinear dynamics, micro- and nanotechnologies, self-organization and complexity, etc. New horizons were opened and new exciting applications emerged. Experts with different backgrounds starting to work together need more opportunities for information exchange to improve mutual understanding and cooperation. The Conference "Physics and Control 2007" is the third international conference focusing on the borderland between Physics and Control with emphasis on both theory and applications. With its 2007 address at Potsdam, Germany, the conference is located for the first time outside of Russia. The major goal of the Conference is to bring together researchers from different scientific communities and to gain some general and unified perspectives in the studies of controlled systems in physics, engineering, chemistry, biology and other natural sciences. We hope that the Conference helps experts in control theory to get acquainted with new interesting problems, and helps experts in physics and related fields to know more about ideas and tools from the modern control theory.
Temporal propositions are mapped to sets of strings that witness (in a precise sense) the propositions over discrete linear Kripke frames. The strings are collected into regular languages to ensure the decidability of entailments given by inclusions between languages. (Various notions of bounded entailment are shown to be expressible as language inclusions.) The languages unwind computations implicit in the logical (and temporal) connectives via a system of finite-state constraints adapted from finite-state morphology. Applications to Hybrid Logic and non-monotonic inertial reasoning are briefly considered.
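The central idea above, entailment checked as inclusion between witness languages, can be illustrated with a toy encoding: strings over a two-letter alphabet witness temporal propositions ('p' marks moments where p holds, '-' moments where it fails), and a bounded entailment is decided by brute-force inclusion up to a length bound. The alphabet, the particular regular expressions, and the enumeration strategy are our own illustrative choices, not the paper's construction.

```python
import itertools
import re

# Witness languages over the alphabet {p, -}.
ALWAYS_P = re.compile(r"p+$")            # p holds at every moment
EVENTUALLY_P = re.compile(r"[-p]*p[-p]*$")  # p holds at some moment

def bounded_entails(lang_a, lang_b, n):
    """Bounded entailment as language inclusion up to length n:
    every witness of lang_a must also be a witness of lang_b."""
    for k in range(1, n + 1):
        for s in map("".join, itertools.product("p-", repeat=k)):
            if lang_a.match(s) and not lang_b.match(s):
                return False
    return True

print(bounded_entails(ALWAYS_P, EVENTUALLY_P, 6))  # True
print(bounded_entails(EVENTUALLY_P, ALWAYS_P, 6))  # False
```

Replacing the brute-force loop with a genuine automaton inclusion test (product with the complement) would give the decidability argument the abstract relies on.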
This paper presents a system for the detection and correction of syntactic errors. It combines a robust morphosyntactic analyser and two groups of finite-state transducers specified using the Xerox Finite State Tool (xfst). One of the groups is used for the description of syntactic error patterns while the second one is used for the correction of the detected errors. The system has been tested on a corpus of real texts, containing both correct and incorrect sentences, with good results.
This paper describes the key aspects of the system SynCoP (Syntactic Constraint Parser) developed at the Berlin-Brandenburgische Akademie der Wissenschaften. The parser combines syntactic tagging and chunking by means of constraint grammar using weighted finite-state transducers (WFSTs). Chunks are interpreted as local dependency structures within syntactic tagging. The linguistic theories are formulated as criteria which are formalized by a semiring; these criteria allow structural preferences and gradual grammaticality. The parser is essentially a cascade of WFSTs. To find the most likely syntactic readings, a best-path search is used.
We exploit time-series $FUSE$ spectroscopy to {\it uniquely} probe spatial structure and clumping in the fast wind of the central star of the H-rich planetary nebula NGC~6543 (HD~164963). Episodic and recurrent optical depth enhancements are discovered in the P{\sc v} absorption troughs, with some evidence for a $\sim$ 0.17-day modulation time-scale. The characteristics of these features are essentially identical to the 'discrete absorption components' (DACs) commonly seen in the UV lines of massive OB stars, suggesting the temporal structures seen in NGC~6543 likely have a physical origin that is similar to that operating in massive, luminous stars. The mechanism for forming coherent perturbations in the outflows is therefore apparently operating equally in radiation-pressure-driven winds of widely differing momenta ($\dot{M} v_\infty R_\star^{0.5}$) and flow times, as represented by OB stars and CSPN.
The interest in extensions of the logic programming paradigm beyond the class of normal logic programs is motivated by the need for an adequate representation and processing of knowledge. One of the most difficult problems in this area is to find an adequate declarative semantics for logic programs. In the present paper a general preference criterion is proposed that selects the 'intended' partial models of generalized logic programs; it is a conservative extension of the stationary semantics for normal logic programs of [Prz91]. The presented preference criterion defines a partial model of a generalized logic program as intended if it is generated by a stationary chain. It turns out that the stationary generated models coincide with the stationary models on the class of normal logic programs. The general well-founded semantics of such a program is defined as the set-theoretical intersection of its stationary generated models. For normal logic programs the general well-founded semantics equals the well-founded semantics.
Since Harris' parser in the late 50s, multiword units have been progressively integrated into parsers. Nevertheless, for the most part, they are still restricted to compound words, which are more stable and less numerous. In fact, language is full of semi-fixed expressions that also form basic semantic units: semi-fixed adverbial expressions (e.g. of time) and collocations. Like compounds, the identification of these structures limits the combinatorial complexity induced by lexical ambiguity. In this paper, we detail an experiment that largely integrates these notions in a finite-state procedure of segmentation into super-chunks, preliminary to a parser. We show that the chunker, developed for French, reaches 92.9% precision and 98.7% recall. Moreover, multiword units realize 36.6% of the attachments within nominal and prepositional phrases.
Classical SDRT (Asher and Lascarides, 2003) discussed essential features of dialogue like adjacency pairs or corrections and updating. Recent work in SDRT (Asher, 2002, 2005) aims at the description of natural dialogue. We use this work to model situated communication, i.e. dialogue in which sub-sentential utterances and gestures (pointing and grasping) are used as conventional modes of communication. We show that in addition to cognitive modelling in SDRT, capturing mental states and speech-act related goals, special postulates are needed to extract meaning out of contexts. Gestural meaning anchors Discourse Referents in contextually given domains. Both sorts of meaning are fused with the meaning of fragments to get at fully developed dialogue moves. This task accomplished, the standard SDRT machinery, tagged SDRSs, rhetorical relations, the update mechanism, and the Maximize Discourse Coherence constraint generate coherent structures. In sum, meanings from different verbal and non-verbal sources are assembled using extended SDRT to form coherent wholes.
Received views of utterance context in pragmatic theory characterize the occurrent subjective states of interlocutors using notions like common knowledge or mutual belief. We argue that these views are not compatible with the uncertainty and robustness of context-dependence in human-human dialogue. We present an alternative characterization of utterance context as objective and normative. This view reconciles the need for uncertainty with received intuitions about coordination and meaning in context, and can directly inform computational approaches to dialogue.
In semi-arid savannas, unsustainable land use can lead to degradation of entire landscapes, e.g. in the form of shrub encroachment. This leads to habitat loss and is assumed to reduce species diversity. In BIOTA phase 1, we investigated the effects of land use on population dynamics at the farm scale. In phase 2 we scale up to consider the whole regional landscape, consisting of a diverse mosaic of farms with different historic and present land use intensities. This mosaic creates a heterogeneous, dynamic pattern of structural diversity at a large spatial scale. Understanding how the region-wide dynamic land use pattern affects the abundance of animal and plant species requires the integration of processes on large as well as on small spatial scales. In our multidisciplinary approach, we integrate information from remote sensing, genetic and ecological field studies as well as small scale process models in a dynamic region-wide simulation tool. ------------ Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, workshop of 9-10 February 2006.
The P v λλ1118, 1128 resonance doublet is an extraordinarily useful diagnostic of O-star winds, because it bypasses the traditional problems associated with determining mass-loss rates from UV resonance lines. We discuss critically the assumptions and uncertainties involved with using P v to diagnose mass-loss rates, and conclude that the large discrepancies between mass-loss rates determined from P v and the rates determined from "density squared" emission processes pose a significant challenge to the "standard model" of hot-star winds. The disparate measurements can be reconciled if the winds of O-type stars are strongly clumped on small spatial scales, which in turn implies that mass-loss rates based on Hα or radio emission are too large by up to an order of magnitude.
We present XMM-Newton Reflection Grating Spectrometer observations of pairs of X-ray emission line profiles from the O star ζ Pup that originate from the same He-like ion. The two profiles in each pair have different shapes and cannot both be consistently fit by models assuming the same wind parameters. We show that the differences in profile shape can be accounted for in a model including the effects of resonance scattering, which affects the resonance line in the pair but not the intercombination line. This implies that resonance scattering is also important in single resonance lines, where its effect is difficult to distinguish from a low effective continuum optical depth in the wind. Thus, resonance scattering may help reconcile X-ray line profile shapes with literature mass-loss rates.
We study the time variability of emission lines in three WNE stars: WR 2 (WN2), WR 3 (WN3ha) and WR 152 (WN3). While WR 2 shows no variability above the noise level, the other two stars do show variations, which in WR 152 are like those of other WR stars but are very fast in WR 3. From these motions, we deduce a value of β ∼ 1 for WR 3, like that seen in O stars, and β ∼ 2-3 for WR 152, which is intermediate between other WR stars and WR 3.
Deductive databases need general formulas in rule bodies, not only conjunctions of literals. This has been well known since the work of Lloyd and Topor about extended logic programming. Of course, formulas must be restricted in such a way that they can be effectively evaluated in finite time, and produce only a finite number of new tuples (in each iteration of the TP-operator: the fixpoint can still be infinite). It is also necessary to respect binding restrictions of built-in predicates: many of these predicates can be executed only when certain arguments are ground. Whereas for standard logic programming rules, questions of safety, allowedness, and range-restriction are relatively easy and well understood, the situation for general formulas is a bit more complicated. We give a syntactic analysis of formulas that guarantees the necessary properties.
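A minimal version of such a safety analysis, restricted to flat rule bodies rather than the general formulas the abstract treats, can be sketched as follows. The representation of a rule by its sets of variables is our simplification; real analyses must recurse through the logical connectives.

```python
def is_safe(head_vars, positive_vars, negative_vars, builtin_vars):
    """Range-restriction check in the spirit of the abstract: every
    variable occurring in the head, under negation, or in a built-in
    call must also occur in a positive body literal, so the rule can
    only derive finitely many tuples.  (Simplified sketch.)"""
    bound = set(positive_vars)
    return (set(head_vars) <= bound
            and set(negative_vars) <= bound
            and set(builtin_vars) <= bound)

# p(X,Y) :- q(X), r(Y), not s(Y), X < Y.   -- safe
print(is_safe({"X", "Y"}, {"X", "Y"}, {"Y"}, {"X", "Y"}))  # True
# p(X,Z) :- q(X), not s(Z).                -- unsafe: Z is unbound
print(is_safe({"X", "Z"}, {"X"}, {"Z"}, set()))            # False
```

The binding restrictions of built-in predicates mentioned above correspond to the third check: a comparison like X < Y may only run once both arguments are ground.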
Mass accretion onto compact objects through accretion disks is a common phenomenon in the universe. It is seen in all energy domains from active galactic nuclei through cataclysmic variables (CVs) to young stellar objects. Because CVs are fairly easy to observe, they provide an ideal opportunity to study accretion disks in great detail and thus help us to understand accretion also in other energy ranges. Mass accretion in these objects is often accompanied by mass outflow from the disks. This accretion disk wind, at least in CVs, is thought to be radiatively driven, similar to O star winds. WOMPAT, a 3-D Monte Carlo radiative transfer code for accretion disk winds of CVs is presented.
By quantitatively fitting simple emission line profile models that include both atomic opacity and porosity to the Chandra X-ray spectrum of ζ Pup, we are able to explore the trade-offs between reduced mass-loss rates and wind porosity. We find that reducing the mass-loss rate of ζ Pup by roughly a factor of four, to 1.5 × 10⁻⁶ M⊙ yr⁻¹, enables simple non-porous wind models to provide good fits to the data. If, on the other hand, we take the literature mass-loss rate of 6 × 10⁻⁶ M⊙ yr⁻¹, then to produce X-ray line profiles that fit the data, extreme porosity lengths – of h∞ ≈ 3 R∗ – are required. Moreover, these porous models do not provide better fits to the data than the non-porous, low optical depth models. Additionally, such huge porosity lengths do not seem realistic in light of 2-D numerical simulations of the wind instability.
In their efforts to manage agricultural land in a site-specific manner, a growing number of farms collect information on the spatio-temporal distribution of soil and plant properties on their fields. This information is used directly (real-time approach) or indirectly (map approach) to dose fertilizers and pesticides (precision agriculture). Data are collected primarily with vehicle-mounted sensors and with VIS and NIR aerial images taken from light aircraft. The first farms are purchasing processed satellite remote-sensing data from service providers. Agricultural and agro-technical research aims to identify the underlying patterns (e.g. of yield potential) and thereby keep the farms' effort for regular data collection low. ------------ Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, workshop of 9-10 February 2006
INTEGRAL tripled the number of super-giant high-mass X-ray binaries (sgHMXB) known in the Galaxy by revealing absorbed and fast transient (SFXT) systems. Quantitative constraints on the wind clumping of massive stars can be obtained from the study of the hard X-ray variability of SFXT. A large fraction of the hard X-ray emission is emitted in the form of flares with a typical duration of 3 ksec, frequency of 7 days and luminosity of $10^{36}$ erg/s. Such flares are most probably emitted by the interaction of a compact object orbiting at $\sim10~R_*$ with wind clumps ($10^{22 ... 23}$ g) representing a large fraction of the stellar mass-loss rate. The density ratio between the clumps and the inter-clump medium is $10^{2 ... 4}$. The parameters of the clumps and of the inter-clump medium, derived from the SFXT flaring behavior, are in good agreement with macro-clumping scenario and line-driven instability simulations. SFXT are likely to have larger orbital radius than classical sgHMXB.
Preface
(2010)
The workshops on (constraint) logic programming (WLP) are the annual meeting of the Society of Logic Programming (GLP e.V.) and bring together researchers interested in logic programming, constraint programming, and related areas like databases, artificial intelligence and operations research. In this decade, previous workshops took place in Dresden (2008), Würzburg (2007), Vienna (2006), Ulm (2005), Potsdam (2004), Dresden (2002), Kiel (2001), and Würzburg (2000). Contributions to the workshops deal with all theoretical, experimental, and application aspects of constraint programming (CP) and logic programming (LP), including foundations of constraint/logic programming. Some of the special topics are constraint solving and optimization, extensions of functional logic programming, deductive databases, data mining, nonmonotonic reasoning, interaction of CP/LP with other formalisms like agents, XML, JAVA, program analysis, program transformation, program verification, meta programming, parallelism and concurrency, answer set programming, implementation and software techniques (e.g., types, modularity, design patterns), applications (e.g., in production, environment, education, internet), constraint/logic programming for semantic web systems and applications, reasoning on the semantic web, data modelling for the web, semistructured data, and web query languages.
A wide range of additional forward chaining applications could be realized with deductive databases if their rule formalism, their immediate consequence operator, and their fixpoint iteration process were more flexible. Deductive databases normally represent knowledge using stratified Datalog programs with default negation. But many practical applications of forward chaining require an extensible set of user-defined built-in predicates. Moreover, they often need function symbols for building complex data structures, and the stratified fixpoint iteration has to be extended by aggregation operations. We present a new language, Datalog*, which extends Datalog by stratified meta-predicates (including default negation), function symbols, and user-defined built-in predicates, which are implemented and evaluated top-down in Prolog. All predicates are subject to the same backtracking mechanism. The bottom-up fixpoint iteration can aggregate the derived facts after each iteration based on user-defined Prolog predicates.
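The bottom-up fixpoint iteration with a per-iteration aggregation hook that this abstract describes can be sketched with a naive TP-style loop. The rule representation (a function from a fact set to new facts) and the `aggregate` hook below are illustrative assumptions, not Datalog*'s actual syntax or evaluation machinery.

```python
def fixpoint(facts, rules, aggregate=None):
    """Naive bottom-up fixpoint: apply every rule to the current fact
    set until nothing new is derived; after each iteration an optional
    user-supplied `aggregate` hook may rewrite the derived facts, in
    the spirit of per-iteration aggregation.  (Illustrative sketch.)"""
    facts = set(facts)
    while True:
        derived = set()
        for rule in rules:
            derived |= rule(facts)
        if aggregate:
            derived = aggregate(derived)
        if derived <= facts:
            return facts
        facts |= derived

# Transitive closure of an edge relation as a single rule.
edges = {("edge", 1, 2), ("edge", 2, 3)}

def trans(facts):
    e = {(a, b) for (p, a, b) in facts if p in ("edge", "path")}
    return {("path", a, c) for (a, b) in e for (b2, c) in e if b == b2}

result = fixpoint(edges, [trans])
print(("path", 1, 3) in result)  # True
```

An aggregation hook could, for instance, keep only the minimum-cost fact per key after each iteration, which is exactly the kind of operation a plain stratified iteration cannot express.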
In recent years, statistical machine translation has demonstrated its usefulness within a wide variety of translation applications. Along this line, phrase-based alignment models have become the reference to follow in order to build competitive systems. Finite-state models are always an interesting framework because there are well-known efficient algorithms for their representation and manipulation. This document is a contribution to the evolution of finite-state models towards a phrase-based approach. The inference of stochastic transducers that are based on bilingual phrases is carefully analysed from a finite-state point of view. In addition, the algorithmic phenomena that have to be taken into account in order to deal with such phrase-based finite-state models at decoding time are detailed in depth.
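To make the phrase-based idea concrete, here is a minimal sketch of monotone phrase-based decoding in Python. The phrase table, its scores, and all names are invented for illustration; real systems additionally handle reordering and language-model scores, which this toy omits.

```python
# Toy monotone phrase-based decoder: find the segmentation of the source
# sentence into known phrases that maximizes the product of phrase
# probabilities. Phrase table and scores are invented for demonstration.

PHRASE_TABLE = {
    ("la", "casa"): [("the house", 0.7)],
    ("la",):        [("the", 0.9)],
    ("casa",):      [("house", 0.8)],
    ("verde",):     [("green", 0.9)],
}

def translate(words):
    """Dynamic program over source positions, monotone phrase order."""
    n = len(words)
    best = [None] * (n + 1)        # best[i] = (score, translation so far)
    best[0] = (1.0, [])
    for i in range(n):
        if best[i] is None:
            continue
        score, out = best[i]
        for j in range(i + 1, n + 1):
            phrase = tuple(words[i:j])
            for target, p in PHRASE_TABLE.get(phrase, []):
                cand = (score * p, out + [target])
                if best[j] is None or cand[0] > best[j][0]:
                    best[j] = cand
    return " ".join(best[n][1]) if best[n] else None

print(translate(["la", "casa", "verde"]))
```

Viewed through the finite-state lens of the abstract, each phrase-table entry plays the role of a weighted transducer edge consuming a source phrase and emitting a target phrase.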
In the most abstract definition of its operational semantics, the declarative and concurrent programming language CHR is trivially non-terminating for a significant class of programs. Common refinements of this definition, which close the gap to real-world implementations, compromise on declarativity and/or concurrency. Building on recent work and the notion of persistent constraints, we introduce an operational semantics that avoids trivial non-termination without compromising on these essential features.
We present an algorithm that computes a function assigning consecutive integers to the trees recognized by a deterministic, acyclic, bottom-up finite-state tree automaton. Such a function is called a minimal perfect hash function. It can be used to identify the trees recognized by the automaton, and its value may serve as an index into other data structures. We also present an algorithm for inverted hashing, i.e., recovering a tree from its hash value.
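A simplified, hedged analogue of this ranking idea can be shown for the strings accepted by an acyclic *string* automaton (the paper generalizes to bottom-up tree automata, which this sketch does not cover). The automaton, its states, and all function names below are invented for illustration.

```python
# Minimal perfect hashing over an acyclic DFA: each accepted string is
# mapped to its rank in lexicographic order, using per-state counts of
# accepted continuations. Illustrative string-automaton analogue only.
from functools import lru_cache

# Acyclic DFA as {state: {symbol: next_state}}; accepts {"ac", "bc"}.
DELTA = {0: {"a": 1, "b": 2}, 1: {"c": 3}, 2: {"c": 3}, 3: {}}
FINALS = {3}
START = 0

@lru_cache(maxsize=None)
def count(state):
    """Number of accepted strings readable from `state` (finite: acyclic)."""
    n = 1 if state in FINALS else 0
    return n + sum(count(t) for t in DELTA[state].values())

def hash_word(word):
    """Rank of `word` among all accepted strings, in lexicographic order."""
    h, state = 0, START
    for sym in word:
        if state in FINALS:
            h += 1                  # the empty continuation ranks first
        for s, t in sorted(DELTA[state].items()):
            if s == sym:
                state = t
                break
            h += count(t)           # skip words through smaller symbols
        else:
            raise ValueError("word not accepted")
    if state not in FINALS:
        raise ValueError("word not accepted")
    return h
```

Inverted hashing runs the same walk in reverse: at each state, pick the transition whose cumulative count brackets the remaining hash value.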
A constraint programming system combines two essential components: a constraint solver and a search engine. The constraint solver reasons about satisfiability of conjunctions of constraints, and the search engine controls the search for solutions by iteratively exploring a disjunctive search tree defined by the constraint program. The Monadic Constraint Programming framework gives a monadic definition of constraint programming where the solver is defined as a monad threaded through the monadic search tree. Search and search strategies can then be defined as first-class objects that can themselves be built or extended by composable search transformers. Search transformers give a powerful and unifying approach to viewing search in constraint programming, and the resulting constraint programming system is first-class and extremely flexible.
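The core idea of a first-class search tree transformed by composable search transformers can be sketched in Python (a hedged illustration; the framework in the abstract is Haskell-based and monadic, and all names here are invented):

```python
# The search tree as a first-class value: leaves are solutions, Fail is a
# dead end, Choice is a disjunction. A "search transformer" maps a tree
# to a new tree, which any strategy can then explore. Sketch only.

class Leaf:
    def __init__(self, value): self.value = value

class Fail:
    pass

class Choice:
    def __init__(self, left, right): self.left, self.right = left, right

def dfs(tree):
    """Plain depth-first enumeration of solutions."""
    if isinstance(tree, Leaf):
        yield tree.value
    elif isinstance(tree, Choice):
        yield from dfs(tree.left)
        yield from dfs(tree.right)

def depth_bounded(tree, limit):
    """A search transformer: prune the tree below a depth limit."""
    if limit < 0 or isinstance(tree, Fail):
        return Fail()
    if isinstance(tree, Leaf):
        return tree
    return Choice(depth_bounded(tree.left, limit - 1),
                  depth_bounded(tree.right, limit - 1))

# A tiny tree: solution 1 at depth 1, solutions 2 and 3 at depth 2.
tree = Choice(Leaf(1), Choice(Leaf(2), Leaf(3)))
print(list(dfs(tree)))                    # all solutions
print(list(dfs(depth_bounded(tree, 1))))  # only those within the bound
```

Because transformers return ordinary trees, they compose: `depth_bounded(other_transformer(tree), k)` combines two strategies without touching the underlying solver.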
Overwhelming observational and theoretical evidence suggests that the winds of massive stars are highly clumped. We briefly discuss the influence of clumping on model diagnostics and the difficulties of allowing for clumping in model spectra. Because of its simplicity and computational ease, most spectroscopic analyses incorporate clumping using a volume filling factor. The biases introduced by this approach are uncertain. To investigate alternative clumping models, and to help determine the validity of parameters derived using the volume-filling-factor method, we discuss results derived using an alternative model in which the wind is assumed to be composed of optically thick shells.
Morphological analyses based on word-syntax approaches can encounter difficulties with long-distance dependencies. The reason is that in some cases an affix has to have access to the inner structure of the form with which it combines. One solution is the percolation of features from the inner morphemes to the outer morphemes via some process of feature unification. However, the obstacle of percolation constraints or stipulated features has led some linguists to argue in favour of other frameworks such as realizational morphology or parallel approaches like optimality theory. This paper proposes a linguistic analysis of two long-distance dependencies in the morphology of Russian verbs, namely secondary imperfectivization and deverbal nominalization. We show how these processes can be reanalysed as local dependencies. Although finite-state frameworks are not bound by such linguistically motivated considerations, we present an implementation of our analysis, as proposed in [1], that does not complicate the grammar or enlarge the network disproportionately.