Deductive databases need general formulas in rule bodies, not only conjunctions of literals. This has been well known since the work of Lloyd and Topor on extended logic programming. Of course, formulas must be restricted in such a way that they can be effectively evaluated in finite time and produce only a finite number of new tuples (in each iteration of the TP-operator; the fixpoint can still be infinite). It is also necessary to respect the binding restrictions of built-in predicates: many of these predicates can be executed only when certain arguments are ground. Whereas for standard logic programming rules, questions of safety, allowedness, and range-restriction are relatively easy and well understood, the situation for general formulas is somewhat more complicated. We give a syntactic analysis of formulas that guarantees the necessary properties.
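To make the flavor of such a restriction concrete, the following Python sketch checks the classical range-restriction condition for plain Datalog-style rules: every head variable must be bound by a positive, non-built-in body literal, and a built-in may only fire once its arguments are bound. This is a minimal illustration of the simple case the paper generalizes; the rule representation and names are assumptions, not the paper's notation.

```python
# Minimal range-restriction/safety check for Datalog-style rules
# (illustrative only; the paper's analysis handles general formulas).

def term_vars(term):
    """Variables are written as strings starting with an uppercase letter."""
    return {term} if term[0].isupper() else set()

def literal_vars(args):
    vars_ = set()
    for a in args:
        vars_ |= term_vars(a)
    return vars_

def range_restricted(head, body, builtins):
    """head: (pred, args); body: list of (pred, args), evaluated left to right."""
    bound = set()
    for pred, args in body:
        if pred in builtins:
            # built-ins such as comparisons need all arguments ground
            if not literal_vars(args) <= bound:
                return False
        else:
            bound |= literal_vars(args)   # database literals bind their variables
    return literal_vars(head[1]) <= bound

# p(X,Y) :- q(X), X < Y.   -- rejected: Y is never bound before the built-in
print(range_restricted(("p", ["X", "Y"]),
                       [("q", ["X"]), ("<", ["X", "Y"])],
                       builtins={"<"}))   # False
```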
Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties or develops special techniques for temporal logics, such as modal transition systems or other dual transition systems. By contrast, we apply completely standard techniques for constructing abstract interpretations to the abstraction of a CTL semantic function, without restricting the kind of properties that can be verified. Furthermore, we show that this leads directly to an implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver.
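As a concrete reference point for the semantic function being abstracted, here is the standard least-fixpoint computation of the CTL property EF p over an explicit finite Kripke structure, sketched in Python. The abstract version described above would replace these state sets by elements of an abstract domain (e.g., constraints discharged to an SMT solver); the explicit encoding below is an assumption for illustration only.

```python
# Standard (concrete) CTL semantics of EF as a least fixpoint:
# EF p = p  or  EX EF p.

def pre(transitions, states):
    """States with at least one successor in `states` (the EX operator)."""
    return {s for (s, t) in transitions if t in states}

def ef(transitions, p_states):
    current = set(p_states)
    while True:
        nxt = current | pre(transitions, current)
        if nxt == current:          # fixpoint reached
            return current
        current = nxt

transitions = {(0, 1), (1, 2), (2, 2), (3, 3)}
print(ef(transitions, {2}))         # {0, 1, 2}: states from which p is reachable
```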
The interest in extensions of the logic programming paradigm beyond the class of normal logic programs is motivated by the need for an adequate representation and processing of knowledge. One of the most difficult problems in this area is to find an adequate declarative semantics for logic programs. In the present paper a general preference criterion is proposed that selects the ‘intended’ partial models of generalized logic programs; it is a conservative extension of the stationary semantics for normal logic programs of [Prz91]. The presented preference criterion defines a partial model of a generalized logic program as intended if it is generated by a stationary chain. It turns out that the stationary generated models coincide with the stationary models on the class of normal logic programs. The general well-founded semantics of such a program is defined as the set-theoretic intersection of its stationary generated models. For normal logic programs the general well-founded semantics equals the well-founded semantics.
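For the special case of normal propositional programs, where the construction collapses to the well-founded semantics, the alternating-fixpoint computation can be sketched in a few lines of Python. The rule representation (head, positive body, negative body) is an assumption for illustration; the paper itself works with generalized programs, which this sketch does not cover.

```python
# Alternating-fixpoint sketch of the well-founded semantics for
# propositional normal programs; rules are (head, pos_body, neg_body).

def gamma(rules, interp):
    """Least model of the reduct of the program w.r.t. `interp`."""
    reduct = [(h, pos) for (h, pos, neg) in rules if not (neg & interp)]
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in reduct:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def well_founded(rules, atoms):
    true = set()                             # least fixpoint of gamma o gamma
    while True:
        nxt = gamma(rules, gamma(rules, true))
        if nxt == true:
            break
        true = nxt
    undefined = gamma(rules, true) - true    # greatest fixpoint minus least
    return true, atoms - true - undefined, undefined

# p :- not q.   q :- not p.   r :- not r.   s.
rules = [("p", set(), {"q"}), ("q", set(), {"p"}),
         ("r", set(), {"r"}), ("s", set(), set())]
print(well_founded(rules, {"p", "q", "r", "s"}))
# true={'s'}, false=set(), undefined={'p','q','r'} (set order may vary)
```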
We propose a paraconsistent declarative semantics of possibly inconsistent generalized logic programs which allows for arbitrary formulas in the body and in the head of a rule (i.e. does not depend on the presence of any specific connective, such as negation(-as-failure), nor on any specific syntax of rules). For consistent generalized logic programs this semantics coincides with the stable generated models introduced in [HW97], and for normal logic programs it yields the stable models in the sense of [GL88].
We present the tool Kato which is, to the best of our knowledge, the first tool for plagiarism detection that is directly tailored to answer-set programming (ASP). Kato aims at finding similarities between (segments of) logic programs to help detect cases of plagiarism. Currently, the tool is realised for DLV programs, but it is designed to handle various logic-programming syntax versions. We review the basic features and the underlying methodology of the tool.
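The abstract does not spell out Kato's similarity measure, so the following Python fragment is only a generic stand-in for one plausible ingredient: comparing two rules after normalizing away whitespace and variable names, so that trivially renamed copies score as identical.

```python
# Hypothetical helper: rule similarity up to whitespace and variable renaming.
# This is NOT Kato's actual method, just a generic illustration.
import re

def normalize(rule):
    """Strip whitespace and rename variables to V0, V1, ... in order of occurrence."""
    rule = re.sub(r"\s+", "", rule)
    mapping = {}
    def rename(match):
        var = match.group(0)
        mapping.setdefault(var, f"V{len(mapping)}")
        return mapping[var]
    return re.sub(r"\b[A-Z_]\w*", rename, rule)

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

r1 = normalize("reach(X,Y) :- edge(X,Y).")
r2 = normalize("reach(A,B) :- edge(A,B).")
print(edit_distance(r1, r2))   # 0: identical up to variable renaming
```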
In this paper we consider a simple syntactic extension of Answer Set Programming (ASP) for dealing with (nested) existential quantifiers and double negation in rule bodies, in a way close to the recent proposal RASPL-1. The semantics for this extension simply resorts to Equilibrium Logic (or, equivalently, to the General Theory of Stable Models), which provides a logic-programming interpretation for any arbitrary theory in the syntax of predicate calculus. We present a translation of this syntactic class into standard logic programs with variables (either disjunctive or normal, depending on the input rule heads), as allowed by current ASP solvers. The translation relies on the introduction of auxiliary predicates, and the main result shows that it preserves strong equivalence modulo the original signature.
We introduce a simple approach extending the input language of Answer Set Programming (ASP) systems by multi-valued propositions. Our approach is implemented as a (prototypical) preprocessor translating logic programs with multi-valued propositions into logic programs with Boolean propositions only. Our translation is modular and benefits heavily from the expressive input language of ASP. The resulting approach, along with its implementation, allows for solving interesting constraint satisfaction problems in ASP and shows good performance.
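On the propositional level, one plausible shape of such a translation can be sketched directly: each multi-valued proposition over a finite domain becomes one Boolean atom per value, plus rules enforcing that exactly one value holds. The generic ASP syntax emitted below is an assumption; the actual preprocessor's output format may differ.

```python
# Sketch: translate a multi-valued proposition into Boolean atoms plus
# "exactly one value" rules (generic ASP syntax, for illustration only).

def translate(prop, values):
    atoms = [f"{prop}_{v}" for v in values]
    rules = [" | ".join(atoms) + "."]                    # at least one value
    rules += [f":- {a}, {b}." for i, a in enumerate(atoms)
              for b in atoms[i + 1:]]                    # at most one value
    return rules

for rule in translate("color", ["red", "green", "blue"]):
    print(rule)
# color_red | color_green | color_blue.
# :- color_red, color_green.
# :- color_red, color_blue.
# :- color_green, color_blue.
```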
A wide range of additional forward-chaining applications could be realized with deductive databases if their rule formalism, their immediate consequence operator, and their fixpoint iteration process were more flexible. Deductive databases normally represent knowledge using stratified Datalog programs with default negation. But many practical applications of forward chaining require an extensible set of user-defined built-in predicates. Moreover, they often need function symbols for building complex data structures, and the stratified fixpoint iteration has to be extended by aggregation operations. We present a new language Datalog*, which extends Datalog by stratified meta-predicates (including default negation), function symbols, and user-defined built-in predicates that are implemented and evaluated top-down in Prolog. All predicates are subject to the same backtracking mechanism. The bottom-up fixpoint iteration can aggregate the derived facts after each iteration based on user-defined Prolog predicates.
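The backbone of such an evaluation is the classical bottom-up fixpoint iteration of the immediate consequence operator. The Python sketch below covers only the positive, function-free fragment; Datalog*'s stratified negation, function symbols, Prolog-implemented built-ins, and per-iteration aggregation are all omitted.

```python
# Naive bottom-up fixpoint iteration for positive Datalog.
# Facts are (pred, args) tuples; variables start with an uppercase letter.

def match(pattern, fact, binding):
    """Extend `binding` so that `pattern` equals `fact`, or return None."""
    if pattern[0] != fact[0] or len(pattern[1]) != len(fact[1]):
        return None
    binding = dict(binding)
    for p, f in zip(pattern[1], fact[1]):
        if p[0].isupper():                       # variable
            if binding.setdefault(p, f) != f:
                return None
        elif p != f:                             # constant mismatch
            return None
    return binding

def fixpoint(rules, facts):
    facts = set(facts)
    while True:
        new = set()
        for head, body in rules:
            bindings = [{}]
            for literal in body:                 # join the body literals
                bindings = [b2 for b in bindings for fact in facts
                            if (b2 := match(literal, fact, b)) is not None]
            for b in bindings:                   # instantiate the head
                new.add((head[0], tuple(b.get(a, a) for a in head[1])))
        if new <= facts:
            return facts
        facts |= new

edges = {("edge", ("a", "b")), ("edge", ("b", "c"))}
rules = [(("path", ("X", "Y")), [("edge", ("X", "Y"))]),
         (("path", ("X", "Z")), [("edge", ("X", "Y")), ("path", ("Y", "Z"))])]
print(sorted(fixpoint(rules, edges)))            # includes ('path', ('a', 'c'))
```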
We describe a framework to support the implementation of web-based systems for manipulating data stored in relational databases. Since the conceptual model of a relational database is often specified as an entity-relationship (ER) model, we propose to use the ER model to generate a complete implementation in the declarative programming language Curry. This implementation contains operations to create and manipulate entities of the data model, and it supports authentication, authorization, session handling, and the composition of individual operations into user processes. Furthermore, and most importantly, the implementation ensures the consistency of the database w.r.t. the data dependencies specified in the ER model, i.e., updates initiated by the user cannot lead to an inconsistent state of the database. In order to generate a high-level declarative implementation that can be easily adapted to individual customer requirements, the framework exploits previous work on declarative database programming and web user interface construction in Curry.
In this talk, I would like to share my experiences gained from participating in four CSP solver competitions and the second ASP solver competition. In particular, I’ll talk about how various programming techniques can make huge differences in solving some of the benchmark problems used in the competitions. These techniques include global constraints, table constraints, and problem-specific propagators and labeling strategies for selecting variables and values. I’ll present these techniques with experimental results from B-Prolog and other CLP(FD) systems.
A constraint programming system combines two essential components: a constraint solver and a search engine. The constraint solver reasons about the satisfiability of conjunctions of constraints, and the search engine controls the search for solutions by iteratively exploring a disjunctive search tree defined by the constraint program. The Monadic Constraint Programming framework gives a monadic definition of constraint programming where the solver is defined as a monad threaded through the monadic search tree. Search and search strategies can then be defined as first-class objects that can themselves be built or extended by composable search transformers. Search transformers give a powerful and unifying approach to viewing search in constraint programming, and the resulting constraint programming system is first class and extremely flexible.
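The original framework is Haskell-based; the following Python paraphrase drops the monadic machinery but keeps the two central ideas as a sketch: the search tree is a first-class value, and a search transformer produces a new engine without touching the tree. All names below are illustrative, not the framework's API.

```python
# First-class search trees plus a composable search transformer (sketch).

class Leaf:      # a solution node
    def __init__(self, value): self.value = value

class Choice:    # a disjunctive node of the search tree
    def __init__(self, left, right): self.left, self.right = left, right

def dfs(node):
    """Basic depth-first search engine, yielding solutions lazily."""
    if isinstance(node, Leaf):
        yield node.value
    else:
        yield from dfs(node.left)
        yield from dfs(node.right)

def depth_bounded(limit):
    """Search transformer: an engine that prunes the tree below `limit`."""
    def engine(node, depth=0):
        if isinstance(node, Leaf):
            yield node.value
        elif depth < limit:
            yield from engine(node.left, depth + 1)
            yield from engine(node.right, depth + 1)
    return engine

tree = Choice(Leaf(1), Choice(Leaf(2), Choice(Leaf(3), Leaf(4))))
print(list(dfs(tree)))                   # [1, 2, 3, 4]
print(list(depth_bounded(2)(tree)))      # [1, 2]: deeper leaves pruned
```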
Preface
(2010)
The workshops on (constraint) logic programming (WLP) are the annual meeting of the Society of Logic Programming (GLP e.V.) and bring together researchers interested in logic programming, constraint programming, and related areas like databases, artificial intelligence, and operations research. In this decade, previous workshops took place in Dresden (2008), Würzburg (2007), Vienna (2006), Ulm (2005), Potsdam (2004), Dresden (2002), Kiel (2001), and Würzburg (2000). Contributions to the workshops deal with all theoretical, experimental, and application aspects of constraint programming (CP) and logic programming (LP), including foundations of constraint/logic programming. Some of the special topics are constraint solving and optimization, extensions of functional logic programming, deductive databases, data mining, nonmonotonic reasoning, interaction of CP/LP with other formalisms like agents, XML, Java, program analysis, program transformation, program verification, meta programming, parallelism and concurrency, answer set programming, implementation and software techniques (e.g., types, modularity, design patterns), applications (e.g., in production, environment, education, internet), constraint/logic programming for semantic web systems and applications, reasoning on the semantic web, data modelling for the web, semistructured data, and web query languages.
In this paper, we present a finite-state approach to constituency and, with it, an analysis of coordination phenomena involving so-called non-constituents. We show that non-constituents can be seen as parts of fully-fledged constituents and can therefore be coordinated in the same way. We have implemented an algorithm based on finite-state automata that generates an LFG grammar assigning valid analyses to non-constituent coordination structures in German.
Generalized Two-Level Grammar (GTWOL) provides a new method for the compilation of parallel replacement rules into transducers. The current paper identifies the role of generalized lenient composition (GLC) in this method. Thanks to the GLC operation, the compilation method becomes bipartite and easily extendible to capture various application modes. In the light of three notions of obligatoriness, a modification to the compilation method is proposed. We argue that the bipartite design makes the implementation of parallel obligatoriness, directionality, and length- and rank-based application modes extremely easy, which is the main result of the paper.
Morphological analyses based on word-syntax approaches can encounter difficulties with long-distance dependencies. The reason is that in some cases an affix has to have access to the inner structure of the form with which it combines. One solution is the percolation of features from the inner morphemes to the outer morphemes with some process of feature unification. However, the obstacle of percolation constraints or stipulated features has led some linguists to argue in favour of other frameworks such as, e.g., realizational morphology or parallel approaches like optimality theory. This paper proposes a linguistic analysis of two long-distance dependencies in the morphology of Russian verbs, namely secondary imperfectivization and deverbal nominalization. We show how these processes can be reanalysed as local dependencies. Although finite-state frameworks are not bound by such linguistically motivated considerations, we present an implementation of our analysis as proposed in [1] that does not complicate the grammar or enlarge the network disproportionately.
The emergence of information extraction (IE) oriented pattern engines has been observed during the last decade. Most of them rely heavily on finite-state devices. This paper introduces ExPRESS, a new extraction pattern engine whose rules are regular expressions over flat feature structures. The underlying pattern language is a blend of two previously introduced IE-oriented pattern formalisms, namely JAPE, used in the widely known GATE system, and the unification-based XTDL formalism used in SProUT. A brief technical overview of ExPRESS, its pattern language, and the pool of its native linguistic components is given. Furthermore, the implementation of the grammar interpreter is also addressed.
In this work an extension of the CSSR algorithm using Maximum Entropy Models is introduced. Preliminary experiments using this new system to perform Named Entity Recognition are presented.
In recent years, statistical machine translation has demonstrated its usefulness within a wide variety of translation applications. Along this line, phrase-based alignment models have become the reference to follow in order to build competitive systems. Finite-state models remain an interesting framework because there are well-known efficient algorithms for their representation and manipulation. This document is a contribution to the evolution of finite-state models towards a phrase-based approach. The inference of stochastic transducers that are based on bilingual phrases is carefully analysed from a finite-state point of view. The algorithmic phenomena that have to be taken into account in order to deal with such phrase-based finite-state models at decoding time are also detailed in depth.
Temporal propositions are mapped to sets of strings that witness (in a precise sense) the propositions over discrete linear Kripke frames. The strings are collected into regular languages to ensure the decidability of entailments given by inclusions between languages. (Various notions of bounded entailment are shown to be expressible as language inclusions.) The languages unwind computations implicit in the logical (and temporal) connectives via a system of finite-state constraints adapted from finite-state morphology. Applications to Hybrid Logic and non-monotonic inertial reasoning are briefly considered.
This paper presents a system for the detection and correction of syntactic errors. It combines a robust morphosyntactic analyser and two groups of finite-state transducers specified using the Xerox Finite State Tool (xfst). One of the groups is used for the description of syntactic error patterns while the second one is used for the correction of the detected errors. The system has been tested on a corpus of real texts, containing both correct and incorrect sentences, with good results.
This paper describes the key aspects of the system SynCoP (Syntactic Constraint Parser) developed at the Berlin-Brandenburgische Akademie der Wissenschaften. The parser combines syntactic tagging and chunking by means of constraint grammar using weighted finite-state transducers (WFSTs). Chunks are interpreted as local dependency structures within syntactic tagging. The linguistic theories are formulated as criteria which are formalized by a semiring; these criteria allow structural preferences and gradual grammaticality. The parser is essentially a cascade of WFSTs. To find the most likely syntactic readings, a best-path search is used.
We present an algorithm that computes a function assigning consecutive integers to the trees recognized by a deterministic, acyclic, finite-state, bottom-up tree automaton. Such a function is called a minimal perfect hashing; it can be used to identify trees recognized by the automaton, and its value may serve as an index into other data structures. We also present an algorithm for inverted hashing.
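The counting idea behind such a hashing scheme is easiest to see in the string case; the sketch below ranks the words accepted by an acyclic string DFA in lexicographic order, which is the one-dimensional analogue of the tree-automaton construction. It is an illustration under that simplifying assumption, not the paper's tree algorithm.

```python
# Minimal perfect hashing over an acyclic string DFA (string analogue of
# the tree construction): rank a word among all accepted words.
from functools import lru_cache

# DFA accepting {"ab", "b", "ba"}; transitions: (state, letter) -> state.
delta = {(0, "a"): 1, (0, "b"): 2, (1, "b"): 3, (2, "a"): 3}
final = {2, 3}

@lru_cache(maxsize=None)
def total(state):
    """Number of accepted words readable from `state` (finite: the DFA is acyclic)."""
    own = 1 if state in final else 0
    return own + sum(total(t) for (s, _), t in delta.items() if s == state)

def rank(word, start=0):
    """Index of `word` among accepted words in lexicographic order."""
    state, index = start, 0
    for ch in word:
        if state in final:
            index += 1                   # the proper prefix itself is a smaller word
        for (s, letter), t in sorted(delta.items()):
            if s == state and letter < ch:
                index += total(t)        # all words continuing with a smaller letter
        state = delta[(state, ch)]
    return index

print([rank(w) for w in ["ab", "b", "ba"]])   # [0, 1, 2]
```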
We introduce and discuss a number of issues that arise in the process of building a finite-state morphological analyzer for Urdu, in particular issues with potential ambiguity and non-concatenative morphology. Our approach allows for an underlyingly similar treatment of both Urdu and Hindi via a cascade of finite-state transducers that transliterates the two very different scripts into a common ASCII transcription system. Since this transliteration cascade is built with the same XFST tools in which the common Urdu/Hindi morphological analyzer is implemented, no compatibility problems arise.
Finite state methods for natural language processing often require the construction and the intersection of several automata. In this paper, we investigate the question of determining the best order in which these intersections should be performed. We take as an example lexical disambiguation in polarity grammars. We show that there is no efficient way to minimize the state complexity of these intersections.
Since Harris’ parser in the late 50s, multiword units have progressively been integrated into parsers. Nevertheless, for the most part, they are still restricted to compound words, which are more stable and less numerous. Actually, language is full of semi-fixed expressions that also form basic semantic units: semi-fixed adverbial expressions (e.g., of time) and collocations. Like compounds, the identification of these structures limits the combinatorial complexity induced by lexical ambiguity. In this paper, we detail an experiment that largely integrates these notions in a finite-state procedure of segmentation into super-chunks, preliminary to a parser. We show that the chunker, developed for French, reaches 92.9% precision and 98.7% recall. Moreover, multiword units realize 36.6% of the attachments within nominal and prepositional phrases.
This paper describes a two-level formalism where feature structures are used in contextual rules. Whereas usual two-level grammars describe rational sets over symbol pairs, this new formalism uses tree-structured regular expressions, which allow an explicit and precise definition of the scope of feature structures. A given surface form may be described using several feature structures. Feature unification is expressed in contextual rules using variables, as in a unification grammar. Grammars are compiled into finite-state multi-tape transducers.
This article describes an HMM-based word-alignment method that can selectively enforce a contiguity constraint. This method has a direct application in the extraction of a bilingual terminological lexicon from a parallel corpus, but can also be used as a preliminary step for the extraction of phrase pairs in a phrase-based statistical machine translation system. Contiguous source words composing terms are aligned to contiguous target-language words. The HMM is transformed into a weighted finite-state transducer (WFST), and contiguity constraints are enforced by specific multi-tape WFSTs. The proposed method is especially suited when basic linguistic resources (morphological analyzer, part-of-speech taggers and term extractors) are available for the source language only.
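As a toy reference for the underlying alignment model (before the WFST compilation and the contiguity-enforcing multi-tape machinery, which this sketch does not reproduce), a Viterbi pass over an HMM whose hidden states are source positions looks as follows in Python; all probabilities are made up for illustration.

```python
# Toy HMM word alignment: hidden states are source positions, observations
# are target words; Viterbi returns one source index per target word.

def viterbi(target, n_source, emit, trans):
    """emit[s][word] and trans[prev][s] are probabilities; returns best path."""
    best = [{s: (emit[s].get(target[0], 1e-9), [s]) for s in range(n_source)}]
    for word in target[1:]:
        layer = {}
        for s in range(n_source):
            p, path = max(((best[-1][prev][0] * trans[prev][s], best[-1][prev][1])
                           for prev in range(n_source)), key=lambda x: x[0])
            layer[s] = (p * emit[s].get(word, 1e-9), path + [s])
        best.append(layer)
    return max(best[-1].values(), key=lambda x: x[0])[1]

# Invented numbers: source = ["the", "house"], target = ["la", "maison"].
emit = {0: {"la": 0.7, "maison": 0.1}, 1: {"maison": 0.8, "la": 0.1}}
trans = {0: {0: 0.2, 1: 0.8}, 1: {0: 0.2, 1: 0.8}}
print(viterbi(["la", "maison"], 2, emit, trans))   # [0, 1]
```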
Nested complementation plays an important role in expressing counter-free, i.e. star-free and first-order definable, languages and their hierarchies. In addition, methods that compile phonological rules into finite-state networks use double-nested complementation or “double negation”. This paper reviews how double-nested complementation extends to a relatively new operation, generalized restriction (GR), coined by the author (Yli-Jyrä and Koskenniemi 2004). This operation encapsulates a double-nested complementation and the elimination of a concatenation marker, the diamond, whose finite occurrences align concatenations in the arguments of the operation. The paper demonstrates that the GR operation has interesting potential in expressing regular languages, various kinds of grammars, bimorphisms and relations. This motivates a further study of optimized implementations of the operator.
The most recent trend in the studies of LF intervention effects makes crucial reference to focusing effects on the interveners, and this paper critically examines the representative analyses of the focus-based approach. While each analysis has its own merits and shortcomings, I argue that a pragmatic analysis that does not make appeal to syntactic configurations is better equipped to deal with many of the complex and delicate facts surrounding intervention effects.
When we pay close attention to the prosody of Wh-questions in Japanese, we discover many novel and interesting empirical puzzles that would require us to devise a much finer syntactic component of grammar. This paper addresses the issues that pose some problems to such an elaborated grammar, and offers solutions, making an appeal to the information structure and sentence processing involved in the interpretation of interrogative and focus constructions.
This paper discusses how focus changes prosodic structure in Tokyo Japanese. It is generally believed that focus blocks the intonational process of downstep and causes a pitch reset. This paper presents experimental evidence against this traditional view by looking at the prosodic behavior of Wh words, which receive focus lexically in Japanese as in other languages. It is demonstrated, specifically, that the focused Wh element does not block downstep although it receives a much higher pitch than its preceding element. This suggests that presence of lexical focus does not trigger pitch reset in Japanese.
The end of culture?
(2000)
Local Orders, Global Chaos
(1999)
The requirements of modern e-learning techniques are changing. Aspects such as community interaction, flexibility, pervasive learning, and increasing mobility in communication habits are becoming more important. To meet these challenges, e-learning platforms must support mobile learning. Most approaches try to adapt centralised and static e-learning mechanisms to mobile devices. However, it is often technically impossible for all kinds of devices to be connected to a central server. Therefore we introduce an application of a mobile e-learning network which operates fully decentralised with the help of an underlying ad hoc network architecture. Furthermore, the concept of an ad hoc messaging network (AMNET) is used as the basic system architecture for our approach to implement a platform for pervasive mobile e-learning.
Monolayers of rod-shaped and disc-shaped liquid crystalline compounds at the air-water interface
(1986)
Calamitic (rod-shaped) and discotic (disc-shaped) thermotropic liquid crystalline (LC) compounds were spread at the air-water interface, and their ability to form monolayers was studied. The calamitic LCs investigated were found to form monolayers which behave analogously to conventional amphiphiles such as fatty acids. The spreading of the discotic LCs produced monolayers as well, but with a behaviour different from that of classical amphiphiles. The areas occupied per molecule are too small to allow the contact of all hydrophilic groups with the water surface and the packing of all hydrophobic chains. Various molecular arrangements of the discotics at the water surface that fit the spreading data are discussed.
Cinnamic acid moieties were incorporated into amphiphilic compounds containing one and two alkyl chains. These lipid-like compounds with photoreactive units undergo self-organization to form monolayers at the gas-water interface and bilayer structures (vesicles) in aqueous solutions. The photoreaction of the cinnamic acid moiety induced by 254 nm UV light was investigated in the crystalline state, in monolayers, in vesicles and in solution in organic solvents. The single-chain amphiphiles undergo dimerization to yield photoproducts with twice the molecular weight of the corresponding monomers in organized systems. The photoreaction of amphiphiles containing two cinnamic acid groups occurs via two mechanisms: The intramolecular dimerization produces bicycles, with retention of the molecular weight of the corresponding monomer. The intermolecular reaction leads to oligomeric and polymeric photoproducts. In contrast to the single-chain amphiphiles, photodimerization processes of lipoids containing two cinnamic acid moieties also occur in solution in organic solvents.
The spectral efficiency of blackness induction was measured in three normal trichromatic observers and in one deuteranomalous observer. The psychophysical task was to adjust the radiance of a monochromatic 60–120′ annulus until a 45′ central broadband field just turned black and its contour became indiscriminable from a dark surrounding gap that separated it from the annulus. The reciprocal of the radiance required to induce blackness with annulus wavelengths between 420 and 680 nm was used to define a spectral-efficiency function for the blackness component of the achromatic process. For each observer, the shape of this blackness-sensitivity function agreed with the spectral-efficiency function based on heterochromatic flicker photometry when measured with the same 60–120′ annulus. Both of these functions matched the Commission Internationale de l'Eclairage Vλ function except at short wavelengths. Ancillary measurements showed that the latter difference in sensitivity can be ascribed to nonuniformities of preretinal absorption, since the annular field excluded the central 60′ of the fovea. Thus our evidence indicates that, at least to a good first approximation, induced blackness is inversely related to the spectral-luminosity function. These findings are consistent with a model that separates the achromatic and the chromatic pathways.
The development of phonetic codes in memory of 141 pairs of normal and disabled readers from 7.8 to 16.8 years of age was tested with a task adapted from L. S. Mark, D. Shankweiler, I. Y. Liberman, and C. A. Fowler (Memory & Cognition, 1977, 5, 623–629) that measured false-positive errors in recognition memory for foil words which rhymed with words in the memory list versus foil words that did not rhyme. Our younger subjects replicated Mark et al., showing a larger difference between rhyming and nonrhyming false-positive errors for the normal readers. The older disabled readers' phonetic effect was comparable to that of the younger normal readers, suggesting a developmental lag in their use of phonetic coding in memory. Surprisingly, the normal readers' phonetic effect declined with age in the recognition task, but they maintained a significant advantage across age in the auditory WISC-R digit span recall test, and a test of phonological nonword decoding. The normals' decline with age in rhyming confusion may be due to an increase in the precision of their phonetic codes.
Linguistic and psycholinguistic accounts based on the study of English may prove unreliable as guides to sentence processing in even closely related languages. The present study illustrates this claim in a test of sentence interpretation by German-, Italian-, and English-speaking adults. Subjects were presented with simple transitive sentences in which contrasts of (1) word order, (2) agreement, (3) animacy, and (4) stress were systematically varied. For each sentence, subjects were asked to state which of the two nouns was the actor. The results indicated that Americans relied overwhelmingly on word order, using a first-noun strategy in NVN and a second-noun strategy in VNN and NNV sentences. Germans relied on both agreement and animacy. Italians showed extreme reliance on agreement cues. In both German and Italian, stress played a role in terms of complex interactions with word order and agreement. The findings were interpreted in terms of the “competition model” of Bates and MacWhinney (in H. Winitz (Ed.), Annals of the New York Academy of Sciences Conference on Native and Foreign Language Acquisition. New York: New York Academy of Sciences, 1982) in which cue validity is considered to be the primary determinant of cue strength. According to this model, cues are said to be high in validity when they are also high in applicability and reliability.
The optical density of human macular pigment was measured for 50 observers ranging in age from 10 to 90 years. The psychophysical method required adjusting the radiance of a 1°, monochromatic light (400–550 nm) to minimize flicker (15 Hz) when presented in counterphase with a 460 nm standard. This test stimulus was presented superimposed on a broad-band, short-wave background. Macular pigment density was determined by comparing sensitivity under these conditions for the fovea, where macular pigment is maximal, and 5° temporally. This difference spectrum, measured for 12 observers, matched Wyszecki and Stiles's standard density spectrum for macular pigment. To study variation in macular pigment density for a larger group of observers, measurements were made at only selected spectral points (460, 500 and 550 nm). The mean optical density at 460 nm for the complete sample of 50 subjects was 0.39. Substantial individual differences in density were found (ca. 0.10–0.80), but this variation was not systematically related to age.
In order to investigate the temporal characteristics of cognitive processing, we apply multivariate phase synchronization analysis to event-related potentials. The experimental design combines a semantic incongruity in a sentence context with a physical mismatch (color change). In the ERP average, these result in an N400 component and a P300-like positivity, respectively. The synchronization analysis shows an effect of global desynchronization in the theta band around 288 ms after stimulus presentation for the semantic incongruity, while the physical mismatch elicits an increase of global synchronization in the alpha band around 204 ms. Both of these effects clearly precede those in the ERP average. Moreover, the delay between synchronization effect and ERP component correlates with the complexity of the cognitive processes.
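A common building block behind such analyses is the phase-locking value across trials, computed from instantaneous phases via the Hilbert transform; the Python sketch below shows only that bivariate ingredient, not the paper's multivariate, global synchronization measure.

```python
# Phase-locking value (PLV) across trials via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(trials_a, trials_b):
    """trials_*: arrays of shape (n_trials, n_samples), band-pass filtered."""
    phase_a = np.angle(hilbert(trials_a, axis=1))
    phase_b = np.angle(hilbert(trials_b, axis=1))
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b)), axis=0))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
trials = np.array([np.sin(2 * np.pi * 6 * t + rng.uniform(0, 0.3))
                   for _ in range(20)])            # jittered 6 Hz "theta" trials
print(phase_locking_value(trials, 0.5 * trials).mean())   # 1.0: identical phases
```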
Fast analysis of different species of molecules in soils by capillary electrophoresis (CE) is investigated. Several CE techniques for the analysis of inorganic ions and carbohydrates have been tested. With regard to the aims of pedologists and the usually large number of soil analyses, a bundle of CE systems is proposed, capable of effecting time-saving soil analyses. Adapted electrolyte systems recently published and new separation systems are described. Examples of the application of these methods to two different soil samples are presented.
The haemolymph of the adult Colorado potato beetle, Leptinotarsa decemlineata Say, contains a high molecular weight (MW > 200,000) JH-III-specific binding protein. The Kd value of the protein for racemic JH-III is 1.3 ± 0.2 × 10⁻⁷ M. It has a lower affinity for racemic JH-I, and it does not bind JH-III-diol or JH-III-acid. The binding protein does discriminate between the enantiomers of synthetic, racemic JH-III, as was determined by stereochemical analysis of the bound and the free JH-III. Incubation of racemic JH-III with crude haemolymph results in preferential formation of (10S)-JH-III-acid, the unnatural configuration. The JH-esterase present in L. decemlineata haemolymph is not enantioselective. It is concluded that the most important function of the binding protein is that of a specific carrier, protecting the natural hormone against degradation by esterases. The carrier does not protect JH-I as efficiently as the lower homologue.
The site of confluence of the artery and the portal vein in the liver still appears to be controversial. Anatomical studies suggested a presinusoidal or an intrasinusoidal confluence in the first, second or even final third of the sinusoids. The objective of this investigation was to study the problem with functional biochemical techniques. Rat livers were perfused through the hepatic artery and simultaneously either in the orthograde direction from the portal vein to the hepatic vein or in the retrograde direction from the hepatic vein to the portal vein. Arterial flow was linearly dependent on arterial pressure between 70 cm H2O and 120 cm H2O at a constant portal or hepatovenous pressure of 18 cm H2O. An arterial pressure of 100 cm H2O was required for the maintenance of a homogeneous orthograde perfusion of the whole parenchyma and of a physiologic ratio of arterial to portal flow of about 1:3. Glucagon was infused either through the artery or through the portal vein and hepatic vein, respectively, to a submaximally effective “calculated” sinusoidal concentration after mixing of 0.1 nmol/L. During orthograde perfusions, arterial and portal glucagon caused the same increases in glucose output. Yet during retrograde perfusions, hepatovenous glucagon elicited metabolic alterations equal to those in orthograde perfusions, whereas the effects of arterial glucagon were strongly reduced, to between 10% and 50%. Arterially infused trypan blue was distributed homogeneously in the parenchyma during orthograde perfusions, whereas it reached clearly smaller areas of parenchyma during retrograde perfusions. Finally, arterially applied acridine orange was taken up by all periportal hepatocytes in the proximal half of the acinus during orthograde perfusions but only by a much smaller portion of periportal cells in the proximal third of the acinus during retrograde perfusions. These findings suggest that in rat liver, the hepatic artery and the portal vein mix before and within the first third of the sinusoids, rather than in the middle or even the last third.
Small livestock is an important resource for rural human populations in dry climates. How strongly will climate change affect the capacity of the rangeland? We used hierarchical modelling to scale quantitatively the growth of shrubs and annual plants, the main food of sheep and goats, to the landscape extent in the eastern Mediterranean region. Without grazing, productivity increased in a sigmoid way with mean annual precipitation. Grazing reduced productivity more strongly the drier the landscape. At a point just under the stocking capacity of the vegetation, productivity declined precipitously with more intense grazing due to a lack of seed production of annuals. We repeated simulations with precipitation patterns projected by two contrasting IPCC scenarios. Compared to results based on historic patterns, productivity and stocking capacity did not differ in most cases. Thus, grazing intensity remains the stronger impact on landscape productivity in this dry region even in the future.
A multitype Dawson-Watanabe process is conditioned, in subcritical and critical cases, on non-extinction in the remote future. On every finite time interval, its distribution is absolutely continuous with respect to the law of the unconditioned process. A martingale problem characterization is also given. Several results on the long-time behavior of the conditioned mass process, the conditioned multitype Feller branching diffusion, are then proved. The general case is considered first, where the mutation matrix, which models the interaction between the types, is irreducible. Several two-type models with decomposable mutation matrices are analyzed as well.