“Israel am Meere”
(2023)
For Jews in Germany, the period following the Nazis’ rise to power in January 1933 was a time of decision-making on many levels: How should they respond to the persecution? If they decided to emigrate, many more decisions had to be made: How does one leave a country, and where should one go? A key moment in the process and in the cultural practice of emigration is the beginning of the sea voyage, when the need for departure and the hope for a new arrival jointly create a period of liminality. Looking at reports from sea voyages of exploration and emigration from the 1930s, this contribution discusses whether, and in what ways, the reflections found in such reports can be read in the context of religious experiences and the search for Jewish identities in times of turmoil.
“Creating a Maritime Future”
(2023)
This article explores the importance of the port city of Hamburg in the evolving discourses on the creation of a maritime future, a vision which became influential in the 1930s, 1940s and 1950s. While some Jewish representatives in the city aimed at preserving and intertwining Hanseatic and Jewish traditions in order to secure a Jewish presence in the port city under the pressure of the Nazi regime and thereafter, others wanted to create new emigration opportunities, especially to Mandatory Palestine, and create a Jewish maritime future in Eretz Israel. Different Zionist organizations supported the newly evolving maritime ideas, such as the “conquest of the sea”, and promoted the image of a Jewish seafaring nation. Despite the difficulties in the 1940s, these concepts gained influence post-1945 and led to the foundation of the fishery kibbutz “Zerubavel” in Blankenese/Hamburg. However, the idea of a Hanseatic Jewish future also remained influential and illustrates how differently a “Jewish maritime future” was imagined and used to link past, present and future.
We introduce a simple approach extending the input language of Answer Set Programming (ASP) systems by multi-valued propositions. Our approach is implemented as a (prototypical) preprocessor translating logic programs with multi-valued propositions into logic programs with Boolean propositions only. Our translation is modular and benefits heavily from the expressive input language of ASP. The resulting approach, along with its implementation, allows interesting constraint satisfaction problems to be solved in ASP with good performance.
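The abstract does not spell out the translation, but its core idea can be sketched: a multi-valued proposition over a finite domain becomes a family of Boolean atoms, constrained so that exactly one of them holds. A minimal sketch in Python emitting clingo-style syntax (the predicate name val and the cardinality-rule encoding are our assumptions, not the paper's):

```python
# Hypothetical translation: a multi-valued proposition 'var' over a finite
# domain becomes Boolean atoms val(var,d), with a cardinality rule enforcing
# that exactly one value is assigned.

def translate(var: str, domain: list) -> str:
    """Emit an ASP choice rule stating 'var takes exactly one value'."""
    choices = "; ".join(f"val({var},{d})" for d in domain)
    return f"1 {{ {choices} }} 1."

# Example: a three-valued proposition becomes three Boolean atoms.
print(translate("light", ["red", "yellow", "green"]))
# -> 1 { val(light,red); val(light,yellow); val(light,green) } 1.
```

Conditions such as light = red in rule bodies would then simply be rewritten to the atom val(light,red), which is what keeps such a translation modular.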
We summarize Chandra observations of the emission line profiles from 17 OB stars. The lines tend to be broad and unshifted. The forbidden/intercombination line ratios arising from helium-like ions provide radial distance information for the X-ray emission sources, while the H-like to He-like line ratios provide X-ray temperatures, and thus also source temperature versus radius distributions. OB stars usually show power-law differential emission measure distributions versus temperature. In models of bow shocks, we find a power-law differential emission measure and a wide range of ion stages, and the bow-shock flow around the clumps provides transverse velocities comparable to the HWHM values. The bow-shock results for the line profile properties are thus consistent with the observations of X-ray line emission for a broad range of OB star properties.
Diversity is a term that is broadly used and challenging for informatics research, development and education. Diversity concerns may relate to unequal participation, knowledge and methodology, curricula, institutional planning, etc. For many of these areas, measures, guidelines and best practices on diversity awareness exist. A systemic, sustainable impact of diversity measures on informatics is nevertheless still largely missing. In this paper I explore what working with diversity and gender concepts in informatics entails and what the main challenges are, and I provide thoughts for improvement. The paper includes definitions of diversity and intersectionality, reflections on the disciplinary basis of informatics, and practical implications of integrating diversity in informatics research and development. In the final part, two concepts from the social sciences and the humanities, the notion of “third space”/hybridity and the notion of a “feminist ethics of care”, serve as a lens to foster more sustainable ways of working with diversity in informatics.
Recent research has shown that the early lexical representations children establish in their second year of life already seem to be phonologically detailed enough to allow differentiation from very similar forms. In contrast to these findings, children with specific language impairment show problems discriminating phonologically similar word forms up to school age. In our study we investigated whether normally developing children and children with low language performance differ in the processing of phonological detail in the second year of life. This was done in a retrospective study in which the processing of phonological detail was tested in a preferential looking experiment when the children were 19 months old. At the age of 30 months the children were tested with a standardized German test of language comprehension and production (SETK2). The preferential looking data at 19 months revealed an opposite reaction pattern for the two groups: while the children scoring normally on the SETK2 increased their fixations of a pictured object only when it was named with the correct word, children with later low language performance did so only when presented with a phonologically slightly deviant mispronunciation. We suggest that this pattern does not point to a specific deficit in processing phonological information in these children but might be related to an instability of early phonological representations and/or a generalized problem of information processing as compared to typically developing children.
Luminous Blue Variables show strong changes in their stellar winds on time scales of typically years to decades as they expand and contract radially at approximately constant luminosity. Micro-variability on shorter time scales and with smaller amplitudes can be observed superimposed on these larger-scale radial changes. I will show long-term time series of high-resolution spectra which we have collected over the past 20 years for many of the well-known LBVs, together with a few time series of weekly sampling (HR Car, R40, R71, R110, R127, S Dor) covering time windows of up to a few months. Wind variability is seen on short and intermediate time scales, with the line profiles changing from P Cygni to inverse P Cygni and double-peaked profiles, sometimes for the same star and spectral line. On longer time scales the ionisation levels of all chemical elements change drastically due to the strong change of the temperature at the stellar surface. While in the long term the characteristic radial changes may have an impact on the overall mass-loss rates, the variability and asymmetries on short and intermediate time scales may cause false estimates of the mass-loss rates when confronting models with the observed line profiles.
The most massive stars are those with the shortest but most active lives. One group of massive stars, the Luminous Blue Variables (LBVs), of which only a few objects are known, is of particular interest concerning the stability of stars. LBVs have high mass-loss rates and are close to being unstable. This is even more likely as rotation becomes an important factor in the stellar evolution of these stars. Through massive stellar winds and sometimes giant eruptions, LBV nebulae are formed. Various aspects of evolution in the LBV phase lead, besides the large-scale morphological and kinematical differences, to a diversity of small structures like clumps, rims, and outflows in these nebulae.
We discuss the results of time-resolved spectroscopy of three presumably single Population I Wolf-Rayet stars in the Small Magellanic Cloud, where the ambient metallicity is $\sim 1/5 Z_\odot$. We were able to detect and follow numerous small-scale wind-embedded inhomogeneities in all observed stars. The general properties of the moving features, such as their velocity dispersions, emissivities and average accelerations, closely match the corresponding characteristics of small-scale inhomogeneities in the winds of Galactic Wolf-Rayet stars.
The influence of the wind on the total continuum of OB supergiants is discussed. For wind velocity distributions with β > 1.0, the wind can have a strong influence on the total continuum emission, even at optical wavelengths. Comparing the continuum emission of clumped and unclumped winds, especially for stars with high β values, yields flux differences of up to 30%, with a maximum in the near-IR. Continuum observations at these wavelengths are therefore an ideal tool to discriminate between clumped and unclumped winds of OB supergiants.
When we pay close attention to the prosody of Wh-questions in Japanese, we discover many novel and interesting empirical puzzles that would require us to devise a much finer syntactic component of grammar. This paper addresses the issues that pose some problems to such an elaborated grammar, and offers solutions, making an appeal to the information structure and sentence processing involved in the interpretation of interrogative and focus constructions.
In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of maths teachers.
In this talk, I would like to share my experiences gained from participating in four CSP solver competitions and the second ASP solver competition. In particular, I’ll talk about how various programming techniques can make huge differences in solving some of the benchmark problems used in the competitions. These techniques include global constraints, table constraints, and problem-specific propagators and labeling strategies for selecting variables and values. I’ll present these techniques with experimental results from B-Prolog and other CLP(FD) systems.
Fronting of a non-finite VP across a finite main verb, akin to German "VP-topicalization", can also be found in Czech and Polish. The paper discusses evidence from large corpora for this process and some of its properties, both syntactic and information-structural. Based on this case, criteria for more user-friendly searching and retrieval of corpus data in syntactic research are developed.
The H.E.S.S. collaboration recently reported the discovery of VHE γ-ray emission coincident with the young stellar cluster Westerlund 2. This system is known to host a population of hot, massive stars and, most notably, the WR binary WR 20a. Particle acceleration to TeV energies in Westerlund 2 can be accomplished in several alternative scenarios; we therefore discuss only energetic constraints based on the total available kinetic energy in the system, the actual mass-loss rates of the respective cluster members, and the implied gamma-ray production from processes such as inverse Compton scattering or neutral pion decay. From the inferred gamma-ray luminosity of the order of $10^{35}$ erg/s, implications for the efficiency of converting available kinetic energy into non-thermal radiation associated with stellar winds in the Westerlund 2 cluster are discussed under consideration of either the presence or absence of wind clumping.
Verbal or visual? How information is distributed across speech and gesture in spatial dialog
(2006)
In spatial dialog, such as direction giving, humans make frequent use of speech-accompanying gestures. Some gestures convey largely the same information as speech while others complement speech. This paper reports a study on how speakers distribute meaning across speech and gesture, and on which factors this distribution depends. The influence of utterance meaning and the wider dialog context was tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, while feedback signals by the addressee had no influence.
The challenge is providing teachers with the resources they need to strengthen their instruction and better prepare students for the jobs of the 21st century. Technology can help meet this challenge. Teachers’ Tryscience is a noncommercial offering, developed by the New York Hall of Science, TeachEngineering, the National Board for Professional Teaching Standards and IBM Citizenship, to provide teachers with such resources. The workshop provides deeper insight into this tool and a discussion of how to support the teaching of informatics in schools.
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, workshop of 9–10 February 2006
Gamma-rays can be produced by the interaction of a relativistic jet with the matter of the stellar wind in the subclass of massive X-ray binaries known as “microquasars”. The relativistic jet is ejected from the surroundings of the compact object and interacts with cold protons from the stellar wind, producing pions that then quickly decay into gamma-rays. Since the resulting gamma-ray emissivity depends on the target density, the detection of rapid variability in microquasars with GLAST and the new generation of Cherenkov imaging arrays could be used to probe the clumped structure of the stellar wind. In particular, we show here that the relative fluctuation in gamma rays may scale with the square root of the ratio of porosity length to binary separation, $\sqrt{h/a}$, implying for example a ca. 10% variation in gamma-ray emission for a quite moderate porosity, h/a ∼ 0.01.
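As a quick numerical check of the quoted scaling (the fluctuation symbol on the left is our shorthand, not the paper's notation): $\Delta F_\gamma / F_\gamma \sim \sqrt{h/a} = \sqrt{0.01} = 0.1$, which reproduces the ca. 10% variation cited in the abstract.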
Unity in diversity
(2005)
This paper describes the creation and preparation of TUSNELDA, a collection of corpus data built for linguistic research. This collection contains a number of linguistically annotated corpora which differ in various aspects such as language, text sorts / data types, encoded annotation levels, and the linguistic theories underlying the annotation. The paper focuses on this variation on the one hand and on the way in which these heterogeneous data are integrated into one resource on the other.
We present an analysis of student language input in a corpus of tutoring dialogue in the domain of symbolic differentiation. Our focus on procedural tutoring makes the dialogue comparable to collaborative problem-solving (CPS). Existing CPS models describe the process of negotiating plans and goals, which also fits procedural tutoring. However, we provide a classification of student utterances and corpus annotation which shows that approximately 28% of non-trivial student language in this corpus is not accounted for by existing models, and addresses other functions, such as evaluating past actions or correcting mistakes. Our analysis can be used as a foundation for improving models of tutoring dialogue.
This paper aims to contribute a different approach to transitional justice, one in which political decisions are rocketed to the forefront of the research. Theory asserts that, after a transition to democracy, it is the constituency who defines the direction a country will take. Therefore, pleasing them should be at the fore of the responses taken by those in power. However, reality distances itself from theory. History provides us with many examples of the contrary, which indicates that the politicization of transitional justice is an ever-present event. The first section will outline current definitions and obstacles faced by transitional justice, focusing on the implicit ties between them and the aforementioned politicization. An original categorization of transitional justice as a method of analysis, which I term Political Opportunism, will also be introduced. The case of Argentina, a country that is usually described as a model for export but that after 35 years is still dealing with the consequences brought by the contradictions of using several methods of justice, will then be reinterpreted through this perspective. At the end of the paper, the inevitable question will be posed: can this new angle be exported and implemented in every transition?
At different times and places, civic engagement in nonviolent resistance (NVR) has repeatedly been shown to be an effective tool in times of conflict to initiate societal change from below. History teaches us that there have been successes (Mahatma Gandhi in India) and failures (the Tiananmen Square protests in China).
Along with the recognition of the duality between transformative potential and stark consequences, the historical development of NVR was accompanied by the emergence of scholarly debate, fractured along disputes about the purpose, character and effectiveness of nonviolent actions taken by civil-society stakeholders engaged in making their voices heard. One of the field’s current points of interest is the examination of the long-term effects of NVR movements that result in societal transformation on the stability and adequacy of a subsequently altered or emerging democracy, suggesting that NVR contributes positively to the sustainable and representative design of an egalitarian governing system.
The conclusion of the Nepalese civil war in 2006 should serve as an unambiguous illustration of this phenomenon, but it simultaneously raises the question of why no transitional process focusing on the needs of the victims was successfully implemented.
While the concept of transitional justice and its range of measures have gained importance on an international level as a way of coming to terms with major crimes of the past, colonial crimes and mass violence committed by Western actors have so far not been addressed by transitional justice. In this chapter, the Herero’s and Nama’s struggle for justice for the genocide committed against their ancestors by Germany from 1904 to 1908, and the challenges arising from it, are set in relation to conceptual debates in the field of transitional justice. Building on current debates in the field, which suggest more structural and transformative conceptualizations of transitional justice and an approach ‘from below’, it is argued that the decolonial activism of formerly colonized communities and transitional justice debates can inform each other in a dialogic and fruitful way to formulate suggestions for a process towards post-colonial justice.
Different properties of programs implemented in Constraint Handling Rules (CHR) have already been investigated. Proving these properties in CHR is considerably simpler than proving them in any type of imperative programming language, which triggered the proposal of a methodology for mapping imperative programs into equivalent CHR programs. The equivalence of both programs implies that if a property is satisfied for one, then it is satisfied for the other. The mapping methodology could be put to other beneficial uses. One such use is the automatic generation of global constraints, in an attempt to demonstrate the benefits of having a rule-based implementation for constraint solvers.
Generalized Two-Level Grammar (GTWOL) provides a new method for compilation of parallel replacement rules into transducers. The current paper identifies the role of generalized lenient composition (GLC) in this method. Thanks to the GLC operation, the compilation method becomes bipartite and easily extendible to capture various application modes. In the light of three notions of obligatoriness, a modification to the compilation method is proposed. We argue that the bipartite design makes implementation of parallel obligatoriness, directionality, length and rank based application modes extremely easy, which is the main result of the paper.
The paper investigates the question of the sustainability of capacity-building initiatives by reporting on the multiplication training within the DIES NMT Programme on quality assurance in Uganda, and on how this training could make use of the social capital within the existing quality assurance network to sustain itself and address challenges during its implementation. The purpose of the article is to explore the nature of the networking (social and institutional) established by the Ugandan Universities Quality Assurance Forum (UUQAF) and to share the strategies used in this training experience for future sustainable capacity-building initiatives in emerging economies. The paper employed a qualitative research method to describe and analyse the training framework based on primary and secondary documents.
The Semantics of Ellipsis
(2005)
There are four phenomena that are particularly troublesome for theories of ellipsis: the existence of sloppy readings when the relevant pronouns cannot possibly be bound; an ellipsis being resolved in such a way that an ellipsis site in the antecedent is not understood in the way it was there; an ellipsis site drawing material from two or more separate antecedents; and ellipsis with no linguistic antecedent. These cases are accounted for by means of a new theory that involves copying syntactically incomplete antecedent material and an analysis of silent VPs and NPs that makes them into higher order definite descriptions that can be bound into.
The traditional purpose of algorithms in education is to prepare students for programming. In our effort to introduce the practically missing computing science into Czech general secondary education, we have revisited this purpose. We propose an approach which is in better accordance with the goals of general secondary education in Czechia. The importance of programming is diminishing, while the recognition of algorithmic procedures and the precise (yet concise) communication of algorithms are gaining importance. This includes expressing algorithms in natural language, which is more useful for most students than programming. We propose criteria to evaluate such descriptions. Finally, an idea about the limitations is required (inefficient algorithms, unsolvable problems, Turing’s test). We describe these adjusted educational goals and an outline of the resulting course. Our experience with carrying out the proposed intentions is satisfactory, although we did not accomplish all the defined goals.
Recent work has shown that English-learning 18-month-olds can detect the relationship between discontinuous morphemes such as is and -ing in Grandma is always running (Gomez, 2002; Santelmann & Jusczyk, 1998) but only at a maximum of 3 intervening syllables. In this article we examine the tracking of discontinuous dependencies in children acquiring German. Due to freer word order, German allows for greater distances between dependent elements and a greater syntactic variety of the intervening elements than English does. The aim of this study was to investigate whether factors other than distance may influence the child’s capacity to recognize discontinuous elements. Our findings provide evidence that children’s recognition capacities are affected not only by distance but also by their ability to linguistically analyze the material intervening between the dependent elements. We speculate that this result supports the existence of processing mechanisms that reduce a discontinuous relation to a local one based on subcategorization relations.
Jacob Brandon Maduro’s Memoirs and Related Observations (Havana, 1953) speak to the lasting yet malleable legacy of Jewish Caribbean/Atlantic mercantile communities that defined early modern settlement in the Americas. A close reading of the Memoirs, alongside relevant archival records and community narratives, lends new perspectives to scholarship on Port Jewries and the Atlantic Diaspora. Specifically concerned with Jacob’s adoption of such leading intellectual and political tropes as the Monroe doctrine, José Martí’s Nuestra America, and a Zionism that evolved from an ideology to a reality, the Memoirs reveal a narrative at once defined by the tremendous upheavals of the first half of the 20th century, and an enduring sense of Jewish diasporic peoplehood defined through a Port Jew paradigm whereby the preservation of Jewish ethnicity is understood as synonymous with the championing of modernity.
Ethical issues surrounding modern computing technologies play an increasingly important role in the public debate. Yet ethics still appears either not at all or only to a very small extent in computer science degree programs. This paper provides an argument for the value of ethics beyond a pure responsibility perspective and describes the positive value of ethical debate for future computer scientists. It also provides a systematic analysis of the module handbooks of 67 German universities and shows that there is indeed a lack of ethics in computer science education. Finally, we present a principled design of a compulsory course for undergraduate students.
Japan launched the new Course of Study in April 2012, and it has been carried out in elementary schools and junior high schools. It will also be implemented in senior high schools from April 2013. This article presents an overview of information studies education in the new Course of Study for K-12. In addition, the authors point out what role experts of informatics and information studies education should play in the general education centered around information studies, which is meant to help the nation's people lead active, capable, and flexible lives.
This chapter deals with the problem that theories of peace building, conflict resolution and reconciliation were predominantly created in the West and, therefore, do not necessarily fit the understanding of peace, conflict, and resolution in non-Western societies and cultures. Within these societies, the acceptance of suffering may also be higher, which leads to different priorities among conflict resolution approaches. Furthermore, this chapter deals with the question of whether the current understanding of wars and the nature of conflict changes the basis of established conflict theories. These theoretical approaches are then applied to Sierra Leone as a non-Western negotiation scenario.
Starting in 2009, the German state of Saxony distributed sports club membership vouchers among all 33,000 third graders in the state. The policy’s objective was to encourage them to develop a long-term habit of exercising. In 2018, we carried out a large register-based survey among several cohorts in Saxony and two neighboring states. Our difference-in-differences estimations show that, even after a decade, awareness of the voucher program was significantly higher in the treatment group. We also find that youth received and redeemed the vouchers. However, we do not find significant short- or long-term effects on sports club membership, physical activity, overweightness, or motor skills.
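For readers unfamiliar with the estimation method named in the abstract, here is a minimal difference-in-differences sketch on simulated data (all variable names and numbers are hypothetical, not the study's data):

```python
# Difference-in-differences: compare the before/after change in the treated
# group (voucher-state cohorts) with the before/after change in a control
# group (neighboring-state cohorts). Simulated with a true effect of zero,
# mirroring the study's null finding on membership outcomes.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
treated = rng.integers(0, 2, n)    # 1 = voucher state, 0 = neighbor state
post = rng.integers(0, 2, n)       # 1 = cohort exposed to the program
effect = 0.0                       # true program effect
y = (0.30 + 0.05 * treated + 0.02 * post
     + effect * treated * post + rng.normal(0, 0.1, n))

def did(y, treated, post):
    m = lambda t, p: y[(treated == t) & (post == p)].mean()
    return (m(1, 1) - m(1, 0)) - (m(0, 1) - m(0, 0))

print(f"DiD estimate: {did(y, treated, post):+.4f}")  # close to zero
```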
We argue that there is a crucial difference between determiner and adverbial quantification. Following Herburger [2000] and von Fintel [1994], we assume that determiner quantifiers quantify over individuals and adverbial quantifiers over eventualities. While it is usually assumed that the semantics of sentences with determiner quantifiers and those with adverbial quantifiers basically come out the same, we will show by way of new data that quantification over events is more restricted than quantification over individuals. This is because eventualities in contrast to individuals have to be located in time which is done using contextual information according to a pragmatic resolution strategy. If the contextual information and the tense information given in the respective sentence contradict each other, the sentence is uninterpretable. We conclude that this is the reason why in these cases adverbial quantification, i.e. quantification over eventualities, is impossible whereas quantification over individuals is fine.
We study the influence of clumping on the predicted wind structure of O-type stars. For this purpose we artificially include clumping into our stationary wind models. When the clumps are assumed to be optically thin, the radiative line force increases compared to corresponding unclumped models, with a similar effect on either the mass-loss rate or the terminal velocity (depending on the onset of clumping). Optically thick clumps, alternatively, might be able to decrease the radiative force.
Mass loss is a very important aspect of the life of massive stars. After briefly reviewing its importance, we discuss the impact of the recently proposed downward revision of mass-loss rates due to clumping (difficulty in forming Wolf-Rayet stars and production of critically rotating stars). Although a small reduction might be allowed, large reduction factors of around ten are disfavoured. We then discuss the possibility of significant mass loss at very low metallicity due to stars reaching break-up velocities and especially due to the metal enrichment of the stellar surface via rotational and convective mixing. This significant mass loss may help the first very massive stars avoid the fate of a pair-creation supernova, the chemical signature of which is not observed in extremely metal-poor stars. The chemical composition of the very low metallicity winds is very similar to that of the most metal-poor star known to date, HE 1327-2326, and offers an interesting explanation for the origin of the metals in this star. We also discuss the importance of mass loss in the context of long and soft gamma-ray bursts and pair-creation supernovae. Finally, we stress that mass loss in the cooler parts of the HR diagram (luminous blue variable and yellow and red supergiant stages) is much more uncertain than in the hot part. More work needs to be done in these areas to better constrain the evolution of the most massive stars.
The end of the cold war division of the Baltic Sea in 1989, and the three Baltic states’ return to independence in 1991 created new opportunities for the decision-makers of the area, as well as new possibilities for fashioning security in the region. This article will examine the security debate affecting the Baltic Sea region in the post-cold war period, and in particular, the relevance of the European Union to that debate. The following section will examine various concepts of security relevant to the Baltic region; the third section looks at the EU and the Baltic area; and the last part deals with the implications that EU membership by the Baltic Sea states may have for the security of the Baltic Sea zone.
The end of culture?
(2000)
We review the effects of clumping on the profiles of resonance doublets. By allowing the ratio of the doublet oscillator strengths to be a free parameter, we demonstrate that doublet profiles contain more information than is normally utilized. In clumped (or porous) winds, this ratio can lie between unity and the ratio of the f-values, and can change as a function of velocity and time, depending on the fraction of the stellar disk that is covered by material moving at a particular velocity at a given moment. Using these insights, we present the results of SEI modeling of a sample of B supergiants, ζ Pup, and a time series for a star whose terminal velocity is low enough to make the components of its Si IV λλ1400 doublet independent. These results are interpreted within the framework of the Oskinova et al. (2007) model, and demonstrate how the doublet profiles can be used to extract information about wind structure.
During the last few years there has been tremendous growth of scientific activity in fields related to both physics and control theory: nonlinear dynamics, micro- and nanotechnologies, self-organization and complexity, etc. New horizons have opened and exciting new applications have emerged. Experts with different backgrounds starting to work together need more opportunities for information exchange to improve mutual understanding and cooperation. The conference "Physics and Control 2007" is the third international conference focusing on the borderland between physics and control, with emphasis on both theory and applications. With its 2007 venue in Potsdam, Germany, the conference is located for the first time outside of Russia. The major goal of the conference is to bring together researchers from different scientific communities and to gain some general and unified perspectives in the study of controlled systems in physics, engineering, chemistry, biology and other natural sciences. We hope that the conference helps experts in control theory to get acquainted with new interesting problems, and helps experts in physics and related fields to learn more about ideas and tools from modern control theory.
The development of competence-oriented curricula is still an important theme in informatics education. Unfortunately, informatics curricula which include the domain of logic programming are still input-oriented or lack detailed competence descriptions. Therefore, the development of a competence model and of descriptions of learning outcomes is essential for the learning process in this domain. Prior research developed both. The next research step is to formulate test items to measure the described learning outcomes. This article describes this procedure and exemplifies test items. It also relates a school test to the items and shows which misconceptions and typical errors are important to discuss in class. The test results can also confirm or disprove the competence model. This school test is therefore important for theoretical research as well as for the concrete planning of lessons. Quantitative analysis in schools is important for the evaluation and improvement of informatics education.
Temporal propositions are mapped to sets of strings that witness (in a precise sense) the propositions over discrete linear Kripke frames. The strings are collected into regular languages to ensure the decidability of entailments given by inclusions between languages. (Various notions of bounded entailment are shown to be expressible as language inclusions.) The languages unwind computations implicit in the logical (and temporal) connectives via a system of finite-state constraints adapted from finite-state morphology. Applications to Hybrid Logic and non-monotonic inertial reasoning are briefly considered.
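To make the witness-string idea concrete, here is a minimal sketch under assumed encodings (a state as a set of atoms, a witness string as a tuple of states; the connective semantics shown are the textbook G/F readings, and the brute-force inclusion test stands in for the paper's regular-language machinery):

```python
# Bounded entailment as language inclusion: phi entails psi up to length n
# iff every string of length <= n that witnesses phi also witnesses psi.
from itertools import chain, combinations, product

ATOMS = ("p", "q")

def all_states():
    """All truth assignments over ATOMS, encoded as frozensets."""
    subsets = chain.from_iterable(
        combinations(ATOMS, r) for r in range(len(ATOMS) + 1))
    return [frozenset(s) for s in subsets]

def strings_up_to(n):
    """All witness strings (tuples of states) of length 1..n."""
    for length in range(1, n + 1):
        yield from product(all_states(), repeat=length)

def always(atom):      # G atom: atom holds in every state of the string
    return lambda s: all(atom in state for state in s)

def eventually(atom):  # F atom: atom holds in some state of the string
    return lambda s: any(atom in state for state in s)

def entails(phi, psi, n):
    return all(psi(s) for s in strings_up_to(n) if phi(s))

print(entails(always("p"), eventually("p"), 4))   # True
print(entails(eventually("p"), always("p"), 4))   # False
```

The regular-language formulation in the paper replaces this exhaustive enumeration with finite-state inclusion checks, which is what makes the entailments decidable without fixing a length bound by hand.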
Team diversity
(2007)
Team diversity refers to differences between team members on any attribute that may lead a member of the group to perceive another member as being different from themselves. These attributes and perceptions cover all dimensions on which people can differ, such as age, gender, ethnicity, religious and functional background, personality, skills, abilities, beliefs, and attitudes.
In this paper, we show how the theory of NP-completeness can be introduced to students in secondary schools. The motivation for this research is that, although there are difficult issues requiring technical background, students are already familiar with demanding computational problems through games such as Sudoku or Tetris. Our intention is to bring together important concepts in the theory of NP-completeness in such a way that students in secondary schools can easily understand them. This is part of our ongoing research on how to teach fundamental issues in computer science in secondary schools. We discuss what needs to be taught in which sequence in order to introduce the ideas behind NP-completeness to students without technical backgrounds.
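The pedagogical core of NP-completeness, that checking a proposed solution is fast while finding one seems to require exhaustive search, can be demonstrated in a few lines of code. A minimal sketch using subset sum (our choice of example; the abstract mentions Sudoku and Tetris, which have the same flavor but need more setup):

```python
# "Easy to verify, hard to find": subset sum as a classroom-sized example.
from itertools import combinations

def verify(numbers, target, candidate):
    """Checking a proposed certificate takes polynomial time."""
    return sum(candidate) == target and all(x in numbers for x in candidate)

def solve(numbers, target):
    """Brute-force search inspects up to 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
certificate = solve(nums, 9)
print(certificate, verify(nums, 9, certificate))  # (4, 5) True
```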
This paper presents a system for the detection and correction of syntactic errors. It combines a robust morphosyntactic analyser and two groups of finite-state transducers specified using the Xerox Finite State Tool (xfst). One of the groups is used for the description of syntactic error patterns while the second one is used for the correction of the detected errors. The system has been tested on a corpus of real texts, containing both correct and incorrect sentences, with good results.
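A toy illustration of the two-group design the abstract describes, one set of patterns for detection and a paired set for correction, using regular expressions in place of xfst transducers (the error pattern shown is a hypothetical English example, not from the system's actual rule set):

```python
# Two-stage design: group 1 detects syntactic error patterns, group 2
# rewrites the detected span with its correction.
import re

ERROR_PATTERNS = {
    # indefinite article "a" directly before a vowel-initial word
    "article_agreement": re.compile(r"\ba\s+(?=[aeiouAEIOU])"),
}
CORRECTIONS = {
    "article_agreement": "an ",
}

def detect_and_correct(text):
    for name, pattern in ERROR_PATTERNS.items():
        if pattern.search(text):
            print(f"detected: {name}")
            text = pattern.sub(CORRECTIONS[name], text)
    return text

print(detect_and_correct("She ate a apple."))  # She ate an apple.
```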
This paper describes the key aspects of the system SynCoP (Syntactic Constraint Parser) developed at the Berlin-Brandenburgische Akademie der Wissenschaften. The parser combines syntactic tagging and chunking by means of constraint grammar using weighted finite-state transducers (WFSTs). Chunks are interpreted as local dependency structures within syntactic tagging. The linguistic theories are formulated as criteria which are formalized by a semiring; these criteria allow structural preferences and gradual grammaticality. The parser is essentially a cascade of WFSTs. To find the most likely syntactic readings, a best-path search is used.
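The best-path idea can be illustrated with a minimal sketch: an ordinary Dijkstra search over a toy graph in the tropical semiring, where weights add along a path and the minimum-weight path wins (the states and weights below are invented, not SynCoP's):

```python
# Best-path search in the tropical semiring: weights accumulate by addition
# along a path, and the preferred reading is the path of minimal total weight.
import heapq

def best_path(graph, start, goal):
    """graph: {node: [(next_node, weight), ...]}; returns (cost, path)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

# Toy reading lattice: weights penalize dispreferred chunk structures.
g = {"start": [("NP", 1.0), ("VP", 2.5)],
     "NP": [("end", 1.0)],
     "VP": [("end", 0.2)]}
print(best_path(g, "start", "end"))  # (2.0, ['start', 'NP', 'end'])
```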
A Hamiltonian system in potential form (formula in the original abstract) subject to smooth constraints on q can be viewed as a Hamiltonian system on a manifold, but numerical computations must be performed in R^n. In this paper, methods which reduce "Hamiltonian differential-algebraic equations" to ODEs in Euclidean space are examined. The authors study the construction of canonical parameterizations or local charts, as well as methods based on the construction of ODE systems in the space in which the constraint manifold is embedded which preserve the constraint manifold as an invariant manifold. In each case, a Hamiltonian system of ordinary differential equations is produced. The stability of the constraint invariants and the behavior of the original Hamiltonian along solutions are investigated both numerically and analytically.
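One family of such methods can be sketched in a few lines: integrate the unconstrained system in Euclidean space and project each step back onto the constraint manifold and its tangent space. A minimal pendulum example (our illustration of the general idea, not the authors' specific schemes):

```python
# Projection approach for a constrained Hamiltonian system: a unit-length
# planar pendulum with constraint |q| = 1, integrated in R^2 and projected.
import numpy as np

def step(q, p, dt, g=9.81):
    p = p + dt * np.array([0.0, -g])   # kick: gravity acts on the free mass
    q = q + dt * p                     # drift: unconstrained update
    q = q / np.linalg.norm(q)          # project position onto the manifold
    p = p - np.dot(p, q) * q           # project momentum onto its tangent space
    return q, p

q, p = np.array([1.0, 0.0]), np.array([0.0, 0.0])
for _ in range(5000):
    q, p = step(q, p, 1e-3)
print(np.linalg.norm(q))  # the constraint |q| = 1 is preserved exactly
```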
Face-to-face communication is multimodal. In unscripted spoken discourse we can observe the interaction of several "semiotic layers", modalities of information such as syntax, discourse structure, gesture, and intonation. We explore the role of gesture and intonation in structuring and aligning information in spoken discourse through a study of the co-occurrence of pitch accents and gestural apices. Metaphorical spatialization through gesture also plays a role in conveying the contextual relationships between the speaker, the government and other external forces in a naturally-occurring political speech setting.
We exploit time-series $FUSE$ spectroscopy to {\it uniquely} probe spatial structure and clumping in the fast wind of the central star of the H-rich planetary nebula NGC~6543 (HD~164963). Episodic and recurrent optical depth enhancements are discovered in the P{\sc v} absorption troughs, with some evidence for a $\sim$ 0.17-day modulation time-scale. The characteristics of these features are essentially identical to the ‘discrete absorption components’ (DACs) commonly seen in the UV lines of massive OB stars, suggesting that the temporal structures seen in NGC~6543 likely have a physical origin similar to that operating in massive, luminous stars. The mechanism for forming coherent perturbations in the outflows is therefore apparently operating equally in radiation-pressure-driven winds of widely differing momenta ($\dot{M} v_\infty R_\star^{0.5}$) and flow times, as represented by OB stars and CSPN.
An approach to the development of fluorescent probes to follow polymerizations in situ using fluorinated cross-conjugated enediynes (Y-enynes) is reported. Different substitution patterns in the Y-enynes result in distinct solvatochromic behavior. β,β-Bis(phenylethynyl)pentafluorostyrene 7, which bears no donor substituents and only fluorine at the styrene moiety, shows no solvatochromism. Donor-substituted β,β-bis(3,4,5-trimethoxyphenylethynyl)pentafluorostyrene 8 and β,β-bis(4-butyl-2,3,5,6-tetrafluorophenylethynyl)-3,4,5-trimethoxystyrene 9 exhibit solvatochromism upon change of solvent polarity. Y-enyne 8 showed the largest solvatochromic shift (94 nm bathochromic shift) upon changing the solvent from cyclohexane to acetonitrile. A smaller solvatochromic response (44 nm bathochromic shift) was observed for 9. Lippert–Mataga treatment of 8 and 9 yields slopes of -10,800 and -6,400 cm⁻¹, respectively. This corresponds to a change in dipole moment of 9.6 and 6.9 D, respectively. The solvatochromic behavior of 8 and 9 supports the formation of an intramolecular charge transfer (ICT) state. The low fluorescence quantum yields are caused by competitive double-bond rotation. The fluorescence decay time of 9 in methyltetrahydrofuran decreases from 2.1 ns at 77 K to 0.11 ns at 200 K. Efficient single-bond rotation in 9 was frozen at -50 °C in a configuration in which the trimethoxyphenyl ring is perpendicular to the fluorinated rings. 7–9 are photostable compounds. The X-ray structure of 7 shows that it is not planar and that its conjugation is distorted. Y-enyne 7 stacks in the solid state, showing Coulombic, acetylene–arene, and fluorine–π interactions.
Transitional justice is conventionally theorized as how a society deals with past injustices after regime change and alongside democratization. Nonetheless, scholars have not reached a consensus on what is to be included or excluded. Recent ideas of transformative justice seek to expand the understanding of transitional justice to include systemic restructuring and socioeconomic considerations. In the context of Nicaragua — where two transitions occurred within an 11-year span — very little transitional justice took place, in terms of the conventional concept of top-down legalistic mechanisms; however, distinct structural changes and socioeconomic policies can be found with each regime change. By analyzing the transformative justice elements of Nicaragua’s dual transition, this chapter seeks to expand the understanding of transitional justice to include how these factors influence goals of transitions such as sustainable peace and reconciliation for past injustices. The results argue for increased attention to transformative justice theories and a more nuanced conception of justice.
Significant seasonal variation in size at settlement has been observed in newly settled larvae of Dreissena polymorpha in Lake Constance. Diet quality, which varies temporally and spatially in freshwater habitats, has been suggested as a significant factor influencing the life history and development of freshwater invertebrates. Accordingly, experiments were conducted with field-collected larvae to test the hypothesis that diet quality can determine planktonic larval growth rates, size at settlement and subsequent post-metamorphic growth rates. Larvae were fed one of two diets or starved. One diet was composed of cyanobacterial cells, which are deficient in polyunsaturated fatty acids (PUFAs), and the other was a mixed diet rich in PUFAs. Freshly metamorphosed animals from the starvation treatment had a carbon content per individual 70% lower than that of larvae fed the mixed diet. This apparent exhaustion of larval internal reserves resulted in a 50% reduction of the post-metamorphic growth rates. Growth was also reduced in animals previously fed the cyanobacterial diet. Hence, low food quantity or low food quality during the larval stage of D. polymorpha leads to irreversible effects for post-metamorphic animals and is related to inferior competitive abilities.
I perform and analyse the first calculations of rotating stellar iron core collapse in 3+1 general relativity that start from presupernova models obtained in stellar evolutionary calculations and include a microphysical finite-temperature nuclear equation of state, an approximate scheme for electron capture during collapse, and neutrino pressure effects. Based on the results of these calculations, I obtain the most realistic estimates to date for the gravitational wave signal from the collapse, bounce and early postbounce phases of core-collapse supernovae. I supplement my 3+1 GR hydrodynamic simulations with 2D Newtonian neutrino radiation-hydrodynamic supernova calculations focussing on (1) the late postbounce gravitational wave emission owing to convective overturn, anisotropic neutrino emission and protoneutron star pulsations, and (2) the gravitational wave signature of the accretion-induced collapse of white dwarfs to neutron stars.
The interest in extensions of the logic programming paradigm beyond the class of normal logic programs is motivated by the need for an adequate representation and processing of knowledge. One of the most difficult problems in this area is to find an adequate declarative semantics for logic programs. In the present paper a general preference criterion is proposed that selects the ‘intended’ partial models of generalized logic programs; it is a conservative extension of the stationary semantics for normal logic programs of [Prz91]. The preference criterion defines a partial model of a generalized logic program as intended if it is generated by a stationary chain. It turns out that the stationary generated models coincide with the stationary models on the class of normal logic programs. The general wellfounded semantics of such a program is defined as the set-theoretical intersection of its stationary generated models. For normal logic programs the general wellfounded semantics equals the wellfounded semantics.
Many methods have been proposed for the stabilization of higher index differential-algebraic equations (DAEs). Such methods often involve constraint differentiation and problem stabilization, thus obtaining a stabilized index reduction. A popular method is Baumgarte stabilization, but the choice of parameters to make it robust is unclear in practice. Here we explain why the Baumgarte method may run into trouble. We then show how to improve it. We further develop a unifying theory for stabilization methods which includes many of the various techniques proposed in the literature. Our approach is to (i) consider stabilization of ODEs with invariants, (ii) discretize the stabilizing term in a simple way, generally different from the ODE discretization, and (iii) use orthogonal projections whenever possible. The best methods thus obtained are related to methods of coordinate projection. We discuss them and make concrete algorithmic suggestions.
Many methods have been proposed for the simulation of constrained mechanical systems. The most obvious of these have mild instabilities and drift problems. Consequently, stabilization techniques have been proposed. A popular stabilization method is Baumgarte's technique, but the choice of parameters to make it robust has been unclear in practice. Some of the simulation methods that have been proposed and used in computations are reviewed here from a stability point of view. This involves concepts of differential-algebraic equation (DAE) and ordinary differential equation (ODE) invariants. An explanation of the difficulties that may be encountered using Baumgarte's method is given, and a discussion of why a further quest for better parameter values for this method will always remain frustrating is presented. It is then shown how Baumgarte's method can be improved. An efficient stabilization technique is proposed, which may employ explicit ODE solvers in the case of nonstiff or highly oscillatory problems and which relates to coordinate projection methods. Examples of a two-link planar robotic arm and a squeezing mechanism illustrate the effectiveness of this new stabilization method.
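For readers unfamiliar with the technique discussed in the two preceding abstracts, here is a minimal sketch of classic Baumgarte stabilization for a planar pendulum DAE (the feedback constants alpha and beta are the arbitrary parameters whose tuning both abstracts criticize; this is the textbook scheme, not the improved methods the papers propose):

```python
# Baumgarte stabilization: replace the acceleration-level constraint
# c'' = 0 by c'' + 2*alpha*c' + beta^2*c = 0, so constraint drift decays.
# Unit-mass, unit-length pendulum with constraint c = (x^2 + y^2 - 1)/2.
import numpy as np

def rhs(state, alpha=5.0, beta=5.0, g=9.81):
    x, y, vx, vy = state
    c = (x * x + y * y - 1.0) / 2.0          # position-level constraint
    cdot = x * vx + y * vy                   # velocity-level constraint
    # Lagrange multiplier solved from the stabilized constraint equation
    lam = (vx * vx + vy * vy - g * y
           + 2 * alpha * cdot + beta * beta * c) / (x * x + y * y)
    return np.array([vx, vy, -lam * x, -g - lam * y])

state = np.array([1.0, 0.0, 0.0, 0.0])       # start horizontal, at rest
dt = 1e-3
for _ in range(5000):
    state = state + dt * rhs(state)          # forward Euler, for brevity
x, y = state[:2]
print(abs(x * x + y * y - 1.0))              # constraint drift stays small
```

With alpha = beta = 0 the same integrator drifts off the constraint; the difficulty the papers address is that no single (alpha, beta) choice works robustly across step sizes and problems.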
The spectral efficiency of blackness induction was measured in three normal trichromatic observers and in one deuteranomalous observer. The psychophysical task was to adjust the radiance of a monochromatic 60–120′ annulus until a 45′ central broadband field just turned black and its contour became indiscriminable from a dark surrounding gap that separated it from the annulus. The reciprocal of the radiance required to induce blackness with annulus wavelengths between 420 and 680 nm was used to define a spectral-efficiency function for the blackness component of the achromatic process. For each observer, the shape of this blackness-sensitivity function agreed with the spectral-efficiency function based on heterochromatic flicker photometry when measured with the same 60–120′ annulus. Both of these functions matched the Commission Internationale de l'Eclairage V(λ) function except at short wavelengths. Ancillary measurements showed that the latter difference in sensitivity can be ascribed to nonuniformities of preretinal absorption, since the annular field excluded the central 60′ of the fovea. Thus our evidence indicates that, at least to a good first approximation, induced blackness is inversely related to the spectral-luminosity function. These findings are consistent with a model that separates the achromatic and the chromatic pathways.
The topography of first-order catchments in a region of western Amazonia was found to exhibit distinctive, recurrent features: a steep, straight lower side slope, a flat or nearly flat terrace at an intermediate elevation between valley floor and interfluve, and an upper side slope connecting interfluve and intermediate terrace. A detailed survey of soil-saturated hydraulic conductivity (K_sat)–depth relationships, involving 740 undisturbed soil cores, was conducted in a 0.75-ha first-order catchment. The sampling approach was stratified with respect to the above slope units. Exploratory data analysis suggested fourth-root transformation of batches from the 0–0.1 m depth interval, log transformation of batches from the subsequent 0.1 m depth increments, and the use of robust estimators of location and scale. The K_sat of the steep lower side slope decreased from 46 to 0.1 mm/h over the overall sampling depth of 0.4 m. The corresponding decrease was from 46 to 0.1 mm/h on the intermediate terrace, from 335 to 0.01 mm/h on the upper side slope, and from 550 to 0.015 mm/h on the interfluve. A depthwise comparison of these slope units led to the formulation of several hypotheses concerning the link between K_sat and topography.
The effect of moderate rates of nitrogen deposition on ground-floor vegetation is poorly predicted by uncontrolled surveys or by fertilization experiments using high rates of nitrogen (N) addition. We compared the temporal trends of ground-floor vegetation in permanent plots with moderate (7–13 kg ha⁻¹ year⁻¹) and lower bulk N deposition (4–6 kg ha⁻¹ year⁻¹) in southern Sweden during 1982–1998. We examined whether trends differed between growth forms (vascular plants and bryophytes) and vegetation types (three types of coniferous forest, deciduous forest, and bog). Trends of site-standardized cover and richness varied among growth forms, vegetation types, and deposition regions. Cover of vascular plants in spruce forests decreased at the same rate with both moderate and low deposition; in pine forests cover decreased faster with moderate deposition, and in bogs cover decreased faster with low deposition. Cover of bryophytes in spruce forests increased at the same rate with both moderate and low deposition; in pine forests it decreased faster with moderate deposition, and in bogs and deciduous forests there was a strong non-linear increase with moderate deposition. The number of vascular plant species showed a constant trend with moderate deposition and decreased with low deposition. We found no trend in the number of bryophyte species. We propose that the decrease of cover and species number with low deposition was related to normal ecosystem development (increased shading), suggesting that N deposition maintained or increased the competitiveness of some species in the moderate-deposition region. Deposition had no consistent negative effect on vegetation, suggesting that it is less important than normal successional processes.
Since Harris’ parser in the late 50s, multiword units have been progressively integrated into parsers. Nevertheless, for the most part they are still restricted to compound words, which are more stable and less numerous. In fact, language is full of semi-fixed expressions that also form basic semantic units: semi-fixed adverbial expressions (e.g. of time) and collocations. Like compounds, the identification of these structures limits the combinatorial complexity induced by lexical ambiguity. In this paper, we detail an experiment that largely integrates these notions in a finite-state procedure of segmentation into super-chunks, preliminary to a parser. We show that the chunker, developed for French, reaches 92.9% precision and 98.7% recall. Moreover, multiword units realize 36.6% of the attachments within nominal and prepositional phrases.
Classical SDRT (Asher and Lascarides, 2003) discussed essential features of dialogue like adjacency pairs, corrections, and updating. Recent work in SDRT (Asher, 2002, 2005) aims at the description of natural dialogue. We use this work to model situated communication, i.e. dialogue in which sub-sentential utterances and gestures (pointing and grasping) are used as conventional modes of communication. We show that, in addition to cognitive modelling in SDRT capturing mental states and speech-act related goals, special postulates are needed to extract meaning out of contexts. Gestural meaning anchors discourse referents in contextually given domains. Both sorts of meaning are fused with the meaning of fragments to arrive at fully developed dialogue moves. This task accomplished, the standard SDRT machinery (tagged SDRSs, rhetorical relations, the update mechanism, and the Maximize Discourse Coherence constraint) generates coherent structures. In sum, meanings from different verbal and non-verbal sources are assembled using extended SDRT to form coherent wholes.
Received views of utterance context in pragmatic theory characterize the occurrent subjective states of interlocutors using notions like common knowledge or mutual belief. We argue that these views are not compatible with the uncertainty and robustness of context-dependence in human-human dialogue. We present an alternative characterization of utterance context as objective and normative. This view reconciles the need for uncertainty with received intuitions about coordination and meaning in context, and can directly inform computational approaches to dialogue.
In semi-arid savannas, unsustainable land use can lead to the degradation of entire landscapes, e.g. in the form of shrub encroachment. This leads to habitat loss and is assumed to reduce species diversity. In BIOTA phase 1, we investigated the effects of land use on population dynamics at the farm scale. In phase 2 we scale up to consider the whole regional landscape, consisting of a diverse mosaic of farms with different historic and present land-use intensities. This mosaic creates a heterogeneous, dynamic pattern of structural diversity at a large spatial scale. Understanding how the region-wide dynamic land-use pattern affects the abundance of animal and plant species requires the integration of processes on large as well as on small spatial scales. In our multidisciplinary approach, we integrate information from remote sensing, genetic and ecological field studies, as well as small-scale process models, into a dynamic region-wide simulation tool. Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, workshop of 9–10 February 2006.
The P V λλ1118, 1128 resonance doublet is an extraordinarily useful diagnostic of O-star winds, because it bypasses the traditional problems associated with determining mass-loss rates from UV resonance lines. We discuss critically the assumptions and uncertainties involved in using P V to diagnose mass-loss rates, and conclude that the large discrepancies between mass-loss rates determined from P V and the rates determined from “density squared” emission processes pose a significant challenge to the “standard model” of hot-star winds. The disparate measurements can be reconciled if the winds of O-type stars are strongly clumped on small spatial scales, which in turn implies that mass-loss rates based on Hα or radio emission are too large by up to an order of magnitude.
We present XMM-Newton Reflection Grating Spectrometer observations of pairs of X-ray emission line profiles from the O star ζ Pup that originate from the same He-like ion. The two profiles in each pair have different shapes and cannot both be consistently fit by models assuming the same wind parameters. We show that the differences in profile shape can be accounted for in a model including the effects of resonance scattering, which affects the resonance line in the pair but not the intercombination line. This implies that resonance scattering is also important in single resonance lines, where its effect is difficult to distinguish from a low effective continuum optical depth in the wind. Thus, resonance scattering may help reconcile X-ray line profile shapes with literature mass-loss rates.
Small livestock is an important resource for rural human populations in dry climates. How strongly will climate change affect the stocking capacity of the rangeland? We used hierarchical modelling to quantitatively scale the growth of shrubs and annual plants, the main food of sheep and goats, up to the landscape extent in the eastern Mediterranean region. Without grazing, productivity increased in a sigmoid way with mean annual precipitation. Grazing reduced productivity more strongly the drier the landscape. At a point just under the stocking capacity of the vegetation, productivity declined precipitously with more intense grazing due to a lack of seed production of annuals. We repeated the simulations with precipitation patterns projected by two contrasting IPCC scenarios. Compared to results based on historic patterns, productivity and stocking capacity did not differ in most cases. Thus, grazing intensity remains the stronger influence on landscape productivity in this dry region, even in the future.
Just and Carpenter (1980) presented a theory of reading based on eye fixations wherein their "psycholinguistic" variables accounted for 72% of the variance in word gaze durations. This comment raises some statistical and theoretical problems with their use of simultaneous regression analysis of gaze duration measures and with the resulting theory of reading. A major problem was the confounding of perceptual with psycholinguistic factors. New eye fixation data are presented to support these criticisms. Analysis of fixations within words revealed that most gaze duration variance was contributed by number of fixations rather than by fixation duration.
We study the time variability of emission lines in three WNE stars: WR 2 (WN2), WR 3 (WN3ha) and WR 152 (WN3). While WR 2 shows no variability above the noise level, the other two stars do show variations, which resemble those of other WR stars in WR 152 but are very fast in WR 3. From these motions, we deduce a value of β ∼ 1 for WR 3, similar to that seen in O stars, and β ∼ 2–3 for WR 152, which is intermediate between other WR stars and WR 3.
Deductive databases need general formulas in rule bodies, not only conjunctions of literals. This has been well known since the work of Lloyd and Topor on extended logic programming. Of course, formulas must be restricted in such a way that they can be effectively evaluated in finite time and produce only a finite number of new tuples (in each iteration of the T_P operator; the fixpoint can still be infinite). It is also necessary to respect the binding restrictions of built-in predicates: many of these predicates can be executed only when certain arguments are ground. Whereas for standard logic programming rules, questions of safety, allowedness, and range-restriction are relatively easy and well understood, the situation for general formulas is a bit more complicated. We give a syntactic analysis of formulas that guarantees the necessary properties.
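To make the simplest of these notions concrete, here is a minimal sketch of a range-restriction check for plain Datalog rules (the easy case mentioned above; the paper's analysis of general formulas is more involved). The rule representation and variable convention are assumptions of this sketch.

```python
# Sketch of a range-restriction check for plain Datalog rules.
# A literal is (predicate, args, positive); variables follow the
# Prolog convention of starting with an uppercase letter.

def variables(literal):
    _, args, _ = literal
    return {a for a in args if a[0].isupper()}

def range_restricted(head, body):
    """Every variable of the head and of each negated body literal
    must occur in at least one positive body literal."""
    bound = set().union(*(variables(l) for l in body if l[2]))
    if not variables(head) <= bound:
        return False
    return all(variables(l) <= bound for l in body if not l[2])

# p(X) :- q(X, Y), not r(Y).   -- range-restricted: X, Y bound by q.
head = ("p", ("X",), True)
body = [("q", ("X", "Y"), True), ("r", ("Y",), False)]
print(range_restricted(head, body))  # True
```

Built-in predicates with binding restrictions would need an extra per-argument groundness check on top of this, which is where the analysis becomes harder.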
Mass accretion onto compact objects through accretion disks is a common phenomenon in the universe. It is seen in all energy domains, from active galactic nuclei through cataclysmic variables (CVs) to young stellar objects. Because CVs are fairly easy to observe, they provide an ideal opportunity to study accretion disks in great detail and thus help us to understand accretion in other energy ranges as well. Mass accretion in these objects is often accompanied by mass outflow from the disks. This accretion disk wind, at least in CVs, is thought to be radiatively driven, similar to O-star winds. We present WOMPAT, a 3-D Monte Carlo radiative transfer code for accretion disk winds of CVs.
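As a hedged illustration of the Monte Carlo radiative transfer technique in general (this is not WOMPAT's code or geometry), the core loop tracks individual photons by sampling optical-depth path lengths from an exponential law and drawing a new direction at each scattering; the 1-D slab below stands in for the actual 3-D disk-wind geometry.

```python
# Generic Monte Carlo radiative transfer step, purely illustrative:
# a photon travels an optical depth drawn from an exponential law,
# then scatters isotropically, until it escapes the medium.
import math
import random

def propagate(tau_max, albedo=1.0, max_events=10_000):
    """Return the number of scatterings before escape from a slab of
    total optical depth tau_max, or None if the photon is absorbed."""
    tau_pos, mu = 0.0, 1.0  # optical-depth coordinate, direction cosine
    for n in range(max_events):
        tau = -math.log(1.0 - random.random())  # exponential free path
        tau_pos += tau * mu
        if tau_pos >= tau_max or tau_pos < 0.0:
            return n                             # photon escaped
        if random.random() > albedo:
            return None                          # photon absorbed
        mu = 2.0 * random.random() - 1.0         # isotropic re-emission
    return None

print(propagate(tau_max=5.0))
```

Averaging many such photon histories yields emergent spectra and line profiles; a production code adds frequency redistribution, velocity fields, and the wind geometry.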
By quantitatively fitting simple emission line profile models that include both atomic opacity and porosity to the Chandra X-ray spectrum of ζ Pup, we are able to explore the trade-offs between reduced mass-loss rates and wind porosity. We find that reducing the mass-loss rate of ζ Pup by roughly a factor of four, to 1.5 × 10⁻⁶ M⊙ yr⁻¹, enables simple non-porous wind models to provide good fits to the data. If, on the other hand, we take the literature mass-loss rate of 6 × 10⁻⁶ M⊙ yr⁻¹, then to produce X-ray line profiles that fit the data, extreme porosity lengths – of h∞ ≈ 3 R∗ – are required. Moreover, these porous models do not provide better fits to the data than the non-porous, low optical depth models. Additionally, such huge porosity lengths do not seem realistic in light of 2-D numerical simulations of the wind instability.
Higher education institutions in Guinea face many challenges, including reporting responsibilities, globalisation, and massification. Institutional evaluations of higher education and research institutions in 2013 failed to initiate change processes within the institutions. Recently, however, various initiatives have been started to change this situation, with the purpose of raising awareness of, and building capabilities for, quality assurance structures in Guinean HEIs. So far, the emphasis has been on quality enhancement in higher education, especially on teaching evaluation and curriculum development, as well as on establishing quality assurance structures. This article gives an overview of the state of play, takes stock of the activities that have been initiated to set up quality assurance mechanisms in higher education and research institutions, and presents perspectives for the further development of the quality approach in Guinea. The project ‘Quality Assurance Multiplication 2017-2018’ serves as an example to describe approaches and activities in setting up stable quality assurance structures and in strengthening and raising awareness for a ‘quality culture’.
A fine-grained slope that exhibits slow movement rates was investigated to understand how geohydrological processes contribute to a consecutive development of mass movements in the Vorarlberg Alps, Austria. For that purpose, intensive hydrometeorological, hydrogeological and geotechnical observations as well as surveying of surface movement rates were conducted during 1998–2001. Subsurface water dynamics at the creeping slope turned out to be dominated by a three-dimensional pressure system. The pressure reaction is triggered by fast infiltration of surface water and subsequent lateral water flow in the south-western part of the hillslope. The related pressure signal was shown to propagate further downhill, causing fast reactions of the piezometric head at 5.5 m depth on a daily time scale. The observed pressure reactions might belong to a temporary hillslope water body that extends further downhill. The related buoyancy forces could be one of the driving forces for the mass movement. A physically based hydrological model was adopted to model simultaneously surface and subsurface water dynamics, including evapotranspiration and runoff production. It was possible to reproduce surface runoff and the observed pressure reactions in principle. However, as the soil hydraulic functions were only estimated from pedotransfer functions, a quantitative comparison between observed and simulated subsurface dynamics is not feasible. Nevertheless, the results suggest that it is possible to reconstruct important spatial structures based on sparse observations in the field, which allow reasonable simulations with a physically based hydrological model. Key words: rainfall-induced landslides; soil creep; hydrological modelling; Vorarlberg; Austria; pressure propagation.
Problem solving is one of the central activities performed by computer scientists as well as by computer science learners. Whereas the teaching of algorithms and programming languages is usually well structured within a curriculum, the development of learners’ problem-solving skills is largely implicit and less structured. Students at all levels often face difficulties in problem analysis and solution construction. The basic assumption of the workshop is that without some formal instruction on effective strategies, even the most inventive learner may resort to unproductive trial-and-error problem-solving processes. Hence, it is important to teach problem-solving strategies and to guide teachers on how to teach their pupils this cognitive tool. Computer science educators should be aware of the difficulties and acquire appropriate pedagogical tools to help their learners gain and experience problem-solving skills.
INTEGRAL tripled the number of super-giant high-mass X-ray binaries (sgHMXB) known in the Galaxy by revealing absorbed and fast transient (SFXT) systems. Quantitative constraints on the wind clumping of massive stars can be obtained from the study of the hard X-ray variability of SFXTs. A large fraction of the hard X-ray emission comes in the form of flares with a typical duration of 3 ks, a recurrence time of 7 days, and a luminosity of 10³⁶ erg/s. Such flares are most probably produced by the interaction of a compact object orbiting at ∼10 R∗ with wind clumps (10²²–10²³ g) representing a large fraction of the stellar mass-loss rate. The density ratio between the clumps and the inter-clump medium is 10²–10⁴. The parameters of the clumps and of the inter-clump medium, derived from the SFXT flaring behaviour, are in good agreement with the macro-clumping scenario and with line-driven instability simulations. SFXTs are likely to have larger orbital radii than classical sgHMXB.
The site of confluence of the artery and the portal vein in the liver still appears to be controversial. Anatomical studies suggested a presinusoidal or an intrasinusoidal confluence in the first, second or even final third of the sinusoids. The objective of this investigation was to study the problem with functional biochemical techniques. Rat livers were perfused through the hepatic artery and simultaneously either in the orthograde direction from the portal vein to the hepatic vein or in the retrograde direction from the hepatic vein to the portal vein. Arterial flow was linearly dependent on arterial pressure between 70 cm H2O and 120 cm H2O at a constant portal or hepatovenous pressure of 18 cm H2O. An arterial pressure of 100 cm H2O was required for the maintenance of a homogeneous orthograde perfusion of the whole parenchyma and of a physiologic ratio of arterial to portal flow of about 1:3. Glucagon was infused either through the artery or the portal vein and hepatic vein, respectively, to a submaximally effective “calculated” sinusoidal concentration after mixing of 0.1 nmol/L. During orthograde perfusions, arterial and portal glucagon caused the same increases in glucose output. Yet during retrograde perfusions, hepatovenous glucagon elicited metabolic alterations equal to those in orthograde perfusions, whereas arterial glucagon effected changes strongly reduced to between 10% and 50%. Arterially infused trypan blue was distributed homogeneously in the parenchyma during orthograde perfusions, whereas it reached clearly smaller areas of parenchyma during retrograde perfusions. Finally, arterially applied acridine orange was taken up by all periportal hepatocytes in the proximal half of the acinus during orthograde perfusions but only by a much smaller portion of periportal cells in the proximal third of the acinus during retrograde perfusions. These findings suggest that in rat liver, the hepatic artery and the portal vein mix before and within the first third of the sinusoids, rather than in the middle or even last third.
Preface
(2010)
The workshops on (constraint) logic programming (WLP) are the annual meeting of the Society of Logic Programming (GLP e.V.) and bring together researchers interested in logic programming, constraint programming, and related areas like databases, artificial intelligence and operations research. In this decade, previous workshops took place in Dresden (2008), Würzburg (2007), Vienna (2006), Ulm (2005), Potsdam (2004), Dresden (2002), Kiel (2001), and Würzburg (2000). Contributions to the workshops deal with all theoretical, experimental, and application aspects of constraint programming (CP) and logic programming (LP), including foundations of constraint/logic programming. Some of the special topics are constraint solving and optimization, extensions of functional logic programming, deductive databases, data mining, nonmonotonic reasoning, interaction of CP/LP with other formalisms like agents, XML, JAVA, program analysis, program transformation, program verification, meta programming, parallelism and concurrency, answer set programming, implementation and software techniques (e.g., types, modularity, design patterns), applications (e.g., in production, environment, education, internet), constraint/logic programming for semantic web systems and applications, reasoning on the semantic web, data modelling for the web, semistructured data, and web query languages.
A wide range of additional forward-chaining applications could be realized with deductive databases if their rule formalism, their immediate consequence operator, and their fixpoint iteration process were more flexible. Deductive databases normally represent knowledge using stratified Datalog programs with default negation. But many practical applications of forward chaining require an extensible set of user-defined built-in predicates. Moreover, they often need function symbols for building complex data structures, and the stratified fixpoint iteration has to be extended by aggregation operations. We present a new language, Datalog*, which extends Datalog by stratified meta-predicates (including default negation), function symbols, and user-defined built-in predicates, which are implemented and evaluated top-down in Prolog. All predicates are subject to the same backtracking mechanism. The bottom-up fixpoint iteration can aggregate the derived facts after each iteration based on user-defined Prolog predicates.
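A minimal sketch of the bottom-up fixpoint idea with a per-round aggregation hook (a naive immediate-consequence iteration, not the Datalog* implementation itself); the transitive-closure rule and facts are hypothetical examples.

```python
# Sketch of a naive bottom-up fixpoint (T_P iteration). Rules are
# Python functions mapping the current fact set to newly derived facts;
# the `aggregate` hook may rewrite the fact set after each round,
# analogous to the per-iteration aggregation the abstract describes.

def tc_rule(facts):
    """path(X, Z) :- edge(X, Y), path(Y, Z)."""
    new = set()
    for (p1, x, y) in facts:
        if p1 != "edge":
            continue
        for (p2, y2, z) in facts:
            if p2 == "path" and y2 == y:
                new.add(("path", x, z))
    return new

def fixpoint(facts, rules, aggregate=lambda fs: fs):
    while True:
        derived = set().union(*(r(facts) for r in rules))
        updated = aggregate(facts | derived)
        if updated == facts:
            return facts  # nothing changed: fixpoint reached
        facts = updated

edges = {("edge", 1, 2), ("edge", 2, 3)}
seed = edges | {("path", x, y) for (_, x, y) in edges}
print(sorted(fixpoint(seed, [tc_rule])))
# [('edge', 1, 2), ('edge', 2, 3), ('path', 1, 2), ('path', 1, 3), ('path', 2, 3)]
```

Stratification would be handled by running one such fixpoint per stratum, feeding each stratum's result into the next.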
The factors that determine the efficiency of energy transfer in aquatic food webs have been investigated for many decades. The plant-animal interface is the most variable and least predictable of all levels in the food web. In order to study determinants of food quality in a large lake and to test the recently proposed central importance of the long-chained eicosapentaenoic acid (EPA) at the pelagic producer-grazer interface, we tested the importance of polyunsaturated fatty acids (PUFAs) at the pelagic producer-consumer interface by correlating sestonic food parameters with somatic growth rates of a clone of Daphnia galeata. Daphnia growth rates were obtained from standardized laboratory experiments spanning one season with Daphnia feeding on natural seston from Lake Constance, a large pre-alpine lake. Somatic growth rates were fitted to sestonic parameters by using a saturation function. A moderate amount of variation was explained when the model included the elemental parameters carbon (r² = 0.6) and nitrogen (r² = 0.71). A tighter fit was obtained when sestonic phosphorus was incorporated (r² = 0.86). The nonlinear regression with EPA was relatively weak (r² = 0.77), whereas the highest degree of variance was explained by three C18-PUFAs. The best (r² = 0.95), and only significant, correlation of Daphnia's growth was found with the C18-PUFA α-linolenic acid (α-LA; C18:3n-3). This correlation was weakest in late August when C:P values increased to 300, suggesting that mineral and PUFA limitation of Daphnia's growth changed seasonally. Sestonic phosphorus and some PUFAs showed not only tight correlations with growth, but also with sestonic α-LA content. We computed Monte Carlo simulations to test whether the observed effects of α-LA on growth could be accounted for by EPA, phosphorus, or one of the two other C18-PUFAs, stearidonic acid (C18:4n-3) and linoleic acid (C18:2n-6). With >99% probability, the correlation of growth with α-LA could not be explained by any of these parameters. In order to test for EPA limitation of Daphnia's growth, in parallel with experiments on pure seston, growth was determined on seston supplemented with chemostat-grown, P-limited Stephanodiscus hantzschii, which is rich in EPA. Although supplementation increased the EPA content 80–800×, no significant changes in the nonlinear regression of the growth rates with α-LA were found, indicating that growth of Daphnia on pure seston was not EPA limited. This indicates that the two fatty acids, EPA and α-LA, were not mutually substitutable biochemical resources and points to different physiological functions of these two PUFAs. These results support the PUFA-limitation hypothesis for sestonic C:P < 300 but are contrary to the hypothesis of a general importance of EPA, since no evidence for EPA limitation was found. It is suggested that the resource ratios of EPA and α-LA rather than the absolute concentrations determine which of the two resources is limiting growth.
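A hedged sketch of the kind of saturation-function fit described above (a Michaelis-Menten-type curve relating growth rate to a single food parameter); the data points below are made up for illustration and are not the Lake Constance measurements.

```python
# Sketch: fit a saturation function g = g_max * x / (k + x) of a sestonic
# food parameter x to somatic growth rates g, and report r^2.
# The data are hypothetical stand-ins, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def saturation(x, g_max, k):
    return g_max * x / (k + x)

x = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])     # e.g. alpha-LA content
g = np.array([0.12, 0.21, 0.30, 0.38, 0.43, 0.46])  # growth rate (1/day)

(g_max, k), _ = curve_fit(saturation, x, g, p0=(0.5, 0.2))
residuals = g - saturation(x, g_max, k)
r2 = 1 - np.sum(residuals**2) / np.sum((g - g.mean())**2)
print(f"g_max = {g_max:.2f}, k = {k:.2f}, r^2 = {r2:.2f}")
```

Comparing such r² values across candidate food parameters is essentially how the abstract ranks carbon, nitrogen, phosphorus, EPA, and the C18-PUFAs.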
Amphiphilic derivatives of octadiene and docosadiene were investigated in monolayers and Langmuir-Blodgett multilayers, with respect to their self-organization and their polymerization behavior. All amphiphiles investigated form monolayers. However, only acid and alcohol derivatives were able to build up multilayers. Those multilayers are rapidly photopolymerized in the layers via a two-step process: Irradiation with long-wavelength UV light yields soluble polymers, whereas additional irradiation with short-wavelength UV light produces insoluble and presumably cross-linked polymers. The reaction mechanism is discussed according to the polymer characterization by UV spectroscopy, small-angle X-ray scattering, NMR spectroscopy, and gel permeation chromatography. All multilayers undergo structural changes during the polymerization; substantial changes result in defects in the polymerized layers as observed by scanning electron microscopy. In contrast to the acids and alcohols, the deposition of monolayers of the aldehyde derivatives did not yield well-ordered multilayers, but rather amorphous films. In this different film structure, the photopolymerization process differs from the one observed in multilayers.
The use of nano zerovalent iron (nZVI) for environmental remediation is a promising new technique for in situ remediation. Due to its high surface area and high reactivity, nZVI is able to dechlorinate organic contaminants and render them harmless. However, fast aggregation and sedimentation limit the mobility of nZVI and thus its capability for source and plume remediation. Carbo-Iron is a newly developed material consisting of activated carbon particles (d50 = 0.8 µm) that are plated with nZVI particles. These particles combine the mobility of activated carbon with the reactivity of nZVI. This paper presents the first results of the transport experiments.
The birth of the Yishuv’s national shipping company, ZIM, was preceded by private enterprise; the sea had not traditionally been a focus of the Zionist movement. In the 1930s, a five-year span of private commercial shipping saw three companies in the Jewish community in Palestine – Palestine Shipping Company, Palestine Maritime Lloyd, and Atid – before shipping was cut short by the outbreak of the Second World War. Despite their brief lifespans and their negligible contribution to general shipping, these companies constituted an important milestone. Their existence helped shift the Yishuv leadership’s attitudes about shipping’s importance for the community and the need for it to be supported by national institutions.
In recent years, statistical machine translation has demonstrated its usefulness within a wide variety of translation applications. Along these lines, phrase-based alignment models have become the reference to follow in order to build competitive systems. Finite state models are always an interesting framework because there are well-known efficient algorithms for their representation and manipulation. This document is a contribution to the evolution of finite state models towards a phrase-based approach. The inference of stochastic transducers that are based on bilingual phrases is carefully analysed from a finite state point of view. In addition, the algorithmic phenomena that have to be taken into account in order to deal with such phrase-based finite state models at decoding time are detailed in depth.
Cinnamic acid moieties were incorporated into amphiphilic compounds containing one and two alkyl chains. These lipid-like compounds with photoreactive units undergo self-organization to form monolayers at the gas-water interface and bilayer structures (vesicles) in aqueous solutions. The photoreaction of the cinnamic acid moiety induced by 254 nm UV light was investigated in the crystalline state, in monolayers, in vesicles and in solution in organic solvents. The single-chain amphiphiles undergo dimerization to yield photoproducts with twice the molecular weight of the corresponding monomers in organized systems. The photoreaction of amphiphiles containing two cinnamic acid groups occurs via two mechanisms: The intramolecular dimerization produces bicycles, with retention of the molecular weight of the corresponding monomer. The intermolecular reaction leads to oligomeric and polymeric photoproducts. In contrast to the single-chain amphiphiles, photodimerization processes of lipoids containing two cinnamic acid moieties also occur in solution in organic solvents.
In the most abstract definition of its operational semantics, the declarative and concurrent programming language CHR is trivially non-terminating for a significant class of programs. Common refinements of this definition, in closing the gap to real-world implementations, compromise on declarativity and/or concurrency. Building on recent work and the notion of persistent constraints, we introduce an operational semantics avoiding trivial non-termination without compromising on its essential features.
We present an algorithm that computes a function assigning consecutive integers to the trees recognized by a deterministic, acyclic, finite-state, bottom-up tree automaton. Such a function is called a minimal perfect hashing; it can be used to identify the trees recognized by the automaton, and its value may serve as an index into other data structures. We also present an algorithm for inverted hashing.
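A hedged sketch of the underlying ranking idea in its simpler string-automaton form (the paper's contribution is the generalization to bottom-up tree automata, which this sketch does not attempt): precompute how many words each state accepts; the hash of a word is then the number of accepted words ordered before it. The small automaton below is a hypothetical example.

```python
# String-automaton analogue of perfect hashing by ranking: in an acyclic
# deterministic automaton, count(state) = number of words accepted from
# that state; the hash of an accepted word is the number of accepted
# words that precede it. The automaton below is a made-up example.
from functools import lru_cache

DELTA = {0: {"a": 1, "b": 2}, 1: {"a": 2, "b": 2}, 2: {}}  # transitions
FINAL = {2}                                                # final states

@lru_cache(None)
def count(state):
    """Words accepted from `state` (finite, since the automaton is acyclic)."""
    return (state in FINAL) + sum(count(t) for t in DELTA[state].values())

def perfect_hash(word):
    """Rank of an accepted `word` among all accepted words."""
    state, h = 0, 0
    for ch in word:
        if state in FINAL:
            h += 1  # the word ending here precedes ours
        for c in sorted(DELTA[state]):
            if c < ch:
                h += count(DELTA[state][c])  # whole earlier subtrees
        state = DELTA[state][ch]
    assert state in FINAL, "word not accepted"
    return h

print({w: perfect_hash(w) for w in ["aa", "ab", "b"]})
# {'aa': 0, 'ab': 1, 'b': 2}
```

Inverted hashing runs the same counts in reverse: given an integer, descend the automaton, subtracting subtree counts until the word (or tree) is reconstructed.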
Parsing costs as predictors of reading difficulty: An evaluation using the Potsdam Sentence Corpus
(2008)
The surprisal of a word under a probabilistic grammar constitutes a promising complexity metric for human sentence comprehension difficulty. Using two different grammar types, surprisal is shown to have an effect on fixation durations and regression probabilities in a sample of German readers’ eye movements, the Potsdam Sentence Corpus. A linear mixed-effects model was used to quantify the effect of surprisal while taking into account unigram and bigram frequency, word length, and empirically derived word predictability; the so-called “early” and “late” measures of processing difficulty both showed an effect of surprisal. Surprisal is also shown to have a small but statistically non-significant effect on empirically derived predictability itself. This work thus demonstrates the importance of including parsing costs as a predictor of comprehension difficulty in models of reading, and suggests that a simple identification of early measures with syntactic parsing costs, and of late measures with the durations of post-syntactic events, may be difficult to uphold.
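For concreteness, the metric itself (the standard definition, not anything specific to this paper's grammars): the surprisal of a word is its negative log-probability given the preceding words, which under a probabilistic grammar is computed as a ratio of prefix probabilities summed over all analyses the grammar allows.

```latex
% Surprisal of the i-th word given its preceding context;
% high surprisal means the word is unexpected under the grammar.
\[
  S(w_i) \;=\; -\log P(w_i \mid w_1 \dots w_{i-1})
         \;=\; -\log \frac{P(w_1 \dots w_i)}{P(w_1 \dots w_{i-1})}
\]
```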
Eye fixation durations during normal reading correlate with processing difficulty, but the specific cognitive mechanisms reflected in these measures are not well understood. This study finds support in German readers’ eye fixations for two distinct difficulty metrics: surprisal, which reflects the change in probabilities across syntactic analyses as new words are integrated, and retrieval, which quantifies comprehension difficulty in terms of working memory constraints. We examine the predictions of both metrics using a family of dependency parsers indexed by an upper limit on the number of candidate syntactic analyses they retain at successive words. Surprisal models all fixation measures and regression probability. By contrast, retrieval does not model any measure in serial processing. As more candidate analyses are considered in parallel at each word, retrieval can account for the same measures as surprisal. This pattern suggests an important role for ranked parallelism in theories of sentence comprehension.
The boundary paradigm (Rayner, 1975) with a novel preview manipulation was used to examine the extent of parafoveal processing of words to the right of fixation. Words n+1 and n+2 had either correct or incorrect previews prior to fixation (prior to crossing the boundary location). In addition, the manipulation utilized either a high or low frequency word in word n+1 location on the assumption that it would be more likely that n+2 preview effects could be obtained when word n+1 was high frequency. The primary findings were that there was no evidence for a preview benefit for word n+2 and no evidence for parafoveal-on-foveal effects when word n+1 is at least four letters long. We discuss implications for models of eye-movement control in reading.
Parafoveal Load of Word N+1 Modulates Preprocessing Effectiveness of Word N+2 in Chinese Reading
(2010)
Preview benefits (PBs) from two words to the right of the fixated one (i.e., word N+2) and associated parafoveal-on-foveal effects are critical for proposals of distributed lexical processing during reading. This experiment examined parafoveal processing during reading of Chinese sentences, using a boundary manipulation of N+2-word preview with low- and high-frequency words N+1. The main findings were (a) an identity PB for word N+2 that was (b) primarily observed when word N+1 was of high frequency (i.e., an interaction between frequency of word N+1 and PB for word N+2), and (c) a parafoveal-on-foveal frequency effect of word N+1 for fixation durations on word N. We discuss implications for theories of serial attention shifts and parallel distributed processing of words during reading.