Different properties of programs implemented in Constraint Handling Rules (CHR) have already been investigated. Proving these properties in CHR is considerably simpler than proving them in any type of imperative programming language, which triggered the proposal of a methodology for mapping imperative programs into equivalent CHR programs. The equivalence of both programs implies that if a property is satisfied for one, then it is satisfied for the other. The mapping methodology could be put to other beneficial uses. One such use is the automatic generation of global constraints, in an attempt to demonstrate the benefits of having a rule-based implementation for constraint solvers.
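To give a flavor of the rule-based propagation style such solvers use, the following toy Python sketch applies the classic CHR transitivity rule leq(X,Y), leq(Y,Z) ==> leq(X,Z) to a constraint store until a fixpoint is reached. It illustrates rule-based propagation only and is not the mapping methodology of the paper.

```python
# Toy fixpoint propagation in the style of the CHR transitivity rule:
#   leq(X, Y), leq(Y, Z) ==> leq(X, Z)
# Illustrative sketch only, not the paper's mapping methodology.

def propagate_leq(constraints):
    """Close a set of leq pairs (x, y) under transitivity."""
    store = set(constraints)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(store):
            for (y2, z) in list(store):
                if y == y2 and (x, z) not in store:
                    store.add((x, z))
                    changed = True
    return store

closure = propagate_leq({("a", "b"), ("b", "c")})
assert ("a", "c") in closure  # derived by the transitivity rule
```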
The conference "International Conference for the 10th Anniversary of the Institute of Comparative Law" took place on 24 May 2013 in Szeged. More than thirty participants presented their research results at this quadrilingual conference. Zoltán Péteri's essay looks at the discipline from the perspective of the history of science. In their paper, Katalin Kelemen and Balázs Fekete examine the path taken by attempts to classify the legal systems of Eastern Europe in the late phase of the upheavals of the 1980s and 1990s. The historical approach, with reference to legal history and comparative law, is also reflected in other essays, above all in the papers by Szilvia Bató, Magdolna Gedeon and Béla Szabó P., as well as in those by Péter Mezei and Tünde Szűcs. Attila Badó analyses comparative law from the perspectives of law, sociology and political science, drawing on studies of the system of judicial sanctions in the USA. This political-science dimension is also emphasized in the papers by Carine Guemar and Laureline Congnard on current questions of European integration. A further series of papers deals with conventional normative comparative law in the fields of constitutional law (Jordane Arlettaz and Péter Kruzslicz), company law (Kitti Bakos-Kovács), copyright law (Dóra Hajdú) and tax law (Judit Jacsó). In addition, the papers by János Bóka and Erzsébet Csatlós form a further group, examining the use of the comparative method in judicial practice. Comparative law is a dynamically developing discipline. The conference and this volume not only honour the work of the Institute of Comparative Law to date, but also point to new goals.
Its most important principles, however, remain firmly anchored even in a constantly changing legal and intellectual environment. The Institute's motto is "instruere et docere omnes qui edoceri desiderant" – "to teach all who wish to learn." In the coming decades, too, we will be guided by the will to learn and to teach, the freedom of research, and the transmission and further development of Hungarian and global legal culture.
In two experiments, many annotators marked antecedents for discourse deixis as unconstrained regions of text. The experiments show that annotators do converge on the identity of these text regions, though much of what they do can be captured by a simple model. Demonstrative pronouns are more likely than definite descriptions to be marked with discourse antecedents. We suggest that our methodology is suitable for the systematic study of discourse deixis.
Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties, or develops special techniques for temporal logics such as modal transition systems or other dual transition systems. By contrast we apply completely standard techniques for constructing abstract interpretations to the abstraction of a CTL semantic function, without restricting the kind of properties that can be verified. Furthermore we show that this leads directly to implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver.
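As a minimal illustration of abstract model checking by standard abstract interpretation, the sketch below computes abstract reachability for the single-statement program x := x + 2 over a parity domain. The domain, the abstract transformer, and the property checked are chosen for illustration only; the CTL semantic function and the SMT-based constraint domains of the paper are not reproduced here.

```python
# Minimal sketch: abstract reachability over a parity domain for the
# transition x := x + 2. Illustrative only; the paper abstracts a full
# CTL semantic function and uses an SMT solver over constraint domains.

EVEN, ODD, TOP = "even", "odd", "top"

def alpha(n):
    """Abstraction of a concrete integer into the parity domain."""
    return EVEN if n % 2 == 0 else ODD

def post(a):
    """Abstract transformer for x := x + 2 (parity is preserved)."""
    return a if a in (EVEN, ODD) else TOP

def abstract_reach(init):
    """Least fixpoint of abstract successors from the initial state."""
    seen = {init}
    frontier = [init]
    while frontier:
        a = frontier.pop()
        b = post(a)
        if b not in seen:
            seen.add(b)
            frontier.append(b)
    return seen

# From x = 0 every reachable abstract state is 'even', verifying the
# property "x is always even" on this abstraction.
assert abstract_reach(alpha(0)) == {EVEN}
```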
This article describes an HMM-based word-alignment method that can selectively enforce a contiguity constraint. This method has a direct application in the extraction of a bilingual terminological lexicon from a parallel corpus, but can also be used as a preliminary step for the extraction of phrase pairs in a Phrase-Based Statistical Machine Translation system. Contiguous source words composing terms are aligned to contiguous target-language words. The HMM is transformed into a Weighted Finite State Transducer (WFST), and contiguity constraints are enforced by specific multi-tape WFSTs. The proposed method is especially suited to settings where basic linguistic resources (morphological analyzers, part-of-speech taggers and term extractors) are available for the source language only.
This paper describes a two-level formalism where feature structures are used in contextual rules. Whereas usual two-level grammars describe rational sets over symbol pairs, this new formalism uses tree-structured regular expressions, which allow an explicit and precise definition of the scope of feature structures. A given surface form may be described using several feature structures. Feature unification is expressed in contextual rules using variables, as in a unification grammar. Grammars are compiled into finite-state multi-tape transducers.
Verbal or visual? How information is distributed across speech and gesture in spatial dialog
(2006)
In spatial dialog, such as direction giving, humans make frequent use of speech-accompanying gestures. Some gestures convey largely the same information as speech, while others complement speech. This paper reports a study on how speakers distribute meaning across speech and gesture, and on which factors this distribution depends. Utterance meaning and the wider dialog context were tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, while feedback signals by the addressee were found to have no influence.
In the most abstract definition of its operational semantics, the declarative and concurrent programming language CHR is trivially non-terminating for a significant class of programs. Common refinements of this definition, in closing the gap to real-world implementations, compromise on declarativity and/or concurrency. Building on recent work and the notion of persistent constraints, we introduce an operational semantics avoiding trivial non-termination without compromising on its essential features.
We present a new analysis of illocutionary forces in dialogue. We analyze them as complex conversational moves involving two dimensions: what Speaker commits herself to and what she calls on Addressee to perform. We start from the analysis of speech acts such as confirmation requests or whimperatives, and extend the analysis to seemingly simple speech acts, such as statements and queries. Then, we show how to integrate our proposal in the framework of the Grammar for Conversation (Ginzburg, to app.), which is adequate for modelling agents' information states and how they get updated.
Since Harris’ parser in the late 1950s, multiword units have been progressively integrated into parsers. Nevertheless, for the most part they are still restricted to compound words, which are more stable and less numerous. In fact, language is full of semi-fixed expressions that also form basic semantic units: semi-fixed adverbial expressions (e.g. time expressions) and collocations. Like compounds, the identification of these structures limits the combinatorial complexity induced by lexical ambiguity. In this paper, we detail an experiment that largely integrates these notions into a finite-state procedure of segmentation into super-chunks, preliminary to a parser. We show that the chunker, developed for French, reaches 92.9% precision and 98.7% recall. Moreover, multiword units account for 36.6% of the attachments within nominal and prepositional phrases.
Finite state methods for natural language processing often require the construction and the intersection of several automata. In this paper, we investigate the question of determining the best order in which these intersections should be performed. We take as an example lexical disambiguation in polarity grammars. We show that there is no efficient way to minimize the state complexity of these intersections.
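To illustrate why the ordering question matters, here is a hypothetical Python sketch that scores left-to-right intersection orders by the total size of the intermediate products, using raw state-count products as a crude proxy. Real intersections prune unreachable product states, which is precisely what makes the truly optimal order hard to predict, in line with the paper's negative result.

```python
# Score left-to-right intersection orders by the total size of all
# intermediate products, using state counts alone as a crude proxy.
# (Real intersections prune unreachable product states, which is why
# the truly optimal order is hard to predict.)

from itertools import permutations

def cost(order):
    """Sum of prefix products: the size of every intermediate result."""
    total, acc = 0, 1
    for size in order:
        acc *= size
        total += acc
    return total

def best_order(sizes):
    """Exhaustive search; factorial in the number of automata."""
    return min(permutations(sizes), key=cost)

# Under this proxy, intersecting the smallest automata first wins.
assert best_order([5, 2, 9]) == (2, 5, 9)
```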
We have analyzed the spectra of seven Galactic O4 supergiants with the NLTE wind code CMFGEN. For all stars, we have found that clumped wind models match lines from different species well, spanning a wavelength range from the FUV to the optical, and remain consistent with Hα data. We have achieved an excellent match of the P V λλ1118, 1128 resonance doublet and N IV λ1718, as well as He II λ4686, suggesting that our physical description of clumping is adequate. We find very small volume filling factors and that clumping starts deep in the wind, near the sonic point. The most crucial consequence of our analysis is that the mass loss rates of O stars need to be revised downward significantly, by a factor of 3 or more compared to those obtained from smooth-wind models.
Deductive databases need general formulas in rule bodies, not only conjunctions of literals. This has been well known since the work of Lloyd and Topor on extended logic programming. Of course, formulas must be restricted in such a way that they can be effectively evaluated in finite time and produce only a finite number of new tuples (in each iteration of the TP-operator: the fixpoint can still be infinite). It is also necessary to respect binding restrictions of built-in predicates: many of these predicates can be executed only when certain arguments are ground. Whereas for standard logic programming rules, questions of safety, allowedness, and range-restriction are relatively easy and well understood, the situation for general formulas is a bit more complicated. We give a syntactic analysis of formulas that guarantees the necessary properties.
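A minimal sketch of such a syntactic check for the simplest case (bodies that are conjunctions of literals, with range-restriction requiring every head variable to occur in some positive body literal) might look as follows. The rule representation here is hypothetical and far simpler than the general formulas analyzed in the paper.

```python
# Sketch of a range-restriction (safety) check for simple rules:
# every head variable must occur in a positive body literal.
# Hypothetical representation; the paper handles arbitrary formulas.

def variables(args):
    """Variables are written as uppercase strings, e.g. 'X'."""
    return {a for a in args if a[:1].isupper()}

def is_range_restricted(head_args, body):
    """body: list of (is_positive, args) literals."""
    bound = set()
    for positive, args in body:
        if positive:
            bound |= variables(args)
    return variables(head_args) <= bound

# p(X, Y) :- q(X), r(Y).      -- safe
assert is_range_restricted(["X", "Y"], [(True, ["X"]), (True, ["Y"])])
# p(X, Y) :- q(X), not r(Y).  -- unsafe: Y occurs only negatively
assert not is_range_restricted(["X", "Y"], [(True, ["X"]), (False, ["Y"])])
```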
We introduce and discuss a number of issues that arise in the process of building a finite-state morphological analyzer for Urdu, in particular issues with potential ambiguity and non-concatenative morphology. Our approach allows for an underlyingly similar treatment of both Urdu and Hindi via a cascade of finite-state transducers that transliterates the very different scripts into a common ASCII transcription system. As this transliteration system is based on the XFST tools that the Urdu/Hindi common morphological analyzer is also implemented in, no compatibility problems arise.
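The idea of mapping two different scripts into a common ASCII transcription before shared analysis can be sketched as follows. The mapping tables below are tiny hypothetical fragments, not the actual XFST transliteration cascade used for Urdu/Hindi.

```python
# Toy sketch: transliterate two scripts into one common ASCII form so
# that a single downstream analyzer can serve both. The tables are
# tiny hypothetical fragments, not the actual XFST cascade.

DEVANAGARI_TO_ASCII = {"क": "k", "ख": "kh", "ग": "g"}
URDU_TO_ASCII = {"ک": "k", "کھ": "kh", "گ": "g"}

def transliterate(text, table):
    """Greedy longest-match transliteration into the common form."""
    out, i = [], 0
    while i < len(text):
        for length in (2, 1):          # try longer matches first
            chunk = text[i:i + length]
            if chunk in table:
                out.append(table[chunk])
                i += length
                break
        else:
            out.append(text[i])        # pass unknown symbols through
            i += 1
    return "".join(out)

# Both scripts map to the same ASCII string "kkh", so one
# morphological analyzer can serve both languages.
assert transliterate("कख", DEVANAGARI_TO_ASCII) == transliterate("ککھ", URDU_TO_ASCII)
```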
In this paper we consider a simple syntactic extension of Answer Set Programming (ASP) for dealing with (nested) existential quantifiers and double negation in rule bodies, close in spirit to the recent proposal RASPL-1. The semantics for this extension simply resorts to Equilibrium Logic (or, equivalently, to the General Theory of Stable Models), which provides a logic-programming interpretation for any arbitrary theory in the syntax of the Predicate Calculus. We present a translation of this syntactic class into standard logic programs with variables (either disjunctive or normal, depending on the input rule heads), as allowed by current ASP solvers. The translation relies on the introduction of auxiliary predicates, and the main result shows that it preserves strong equivalence modulo the original signature.
We summarize Chandra observations of the emission line profiles from 17 OB stars. The lines tend to be broad and unshifted. The forbidden/intercombination line ratios arising from helium-like ions provide radial distance information for the X-ray emission sources, while the H-like to He-like line ratios provide X-ray temperatures, and thus also source temperature versus radius distributions. OB stars usually show power-law differential emission measure distributions versus temperature. In models of bow shocks, we find a power-law differential emission measure and a wide range of ion stages, and the bow-shock flow around the clumps provides transverse velocities comparable to HWHM values. We find that the bow shock model accounts for the line profile properties, consistent with the observations of X-ray line emission for a broad range of OB star properties.