Proceedings of KogWis 2010 : 10th Biannual Meeting of the German Society for Cognitive Science
(2010)
As the latest biannual meeting of the German Society for Cognitive Science (Gesellschaft für Kognitionswissenschaft, GK), KogWis 2010 at Potsdam University reflects the current trends in a fascinating domain of research concerned with human and artificial cognition and the interaction of mind and brain. The plenary talks provide a venue for questions of numerical capacities and human arithmetic (Brian Butterworth), of the theoretical development of cognitive architectures and intelligent virtual agents (Pat Langley), of categorizations induced by linguistic constructions (Claudia Maienborn), and of a cross-level account of the “Self as a complex system” (Paul Thagard). KogWis 2010 integrates a wealth of experimental research, cognitive modelling, and conceptual analysis in 5 invited symposia, over 150 individual talks, 6 symposia, and more than 40 poster contributions. Some of the invited symposia reflect local and regional strengths of research in the Berlin-Brandenburg area: the two largest research fields of the university's Cognitive Sciences Area of Excellence in Potsdam are represented by an invited symposium on “Information Structure” by the Special Research Area 632 (“Sonderforschungsbereich”, SFB) of the same name at Potsdam University and Humboldt-University Berlin, and by a satellite conference of the research group “Mind and Brain Dynamics”. The Berlin School of Mind and Brain at Humboldt-University Berlin takes part with an invited symposium on “Decision Making” from the perspective of cognitive neuroscience and philosophy, and the DFG Cluster of Excellence “Languages of Emotion” of Free University presents interdisciplinary research results in an invited symposium on “Symbolising Emotions”.
Unique column combinations of a relational database table are sets of columns that contain only unique values. Discovering such combinations is a fundamental research problem and has many different data management and knowledge discovery applications. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and "Apriori-based" algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-Gordian, combines the advantages of GORDIAN and our new algorithm HCA, and it significantly outperforms all previous work in many situations.
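The uniqueness test underlying this discovery problem can be illustrated with a naive brute-force sketch in Python (a hypothetical baseline for illustration only, not the optimized GORDIAN or HCA algorithms the abstract describes): a column combination is unique exactly when projecting the table onto those columns yields no duplicate value tuples.

```python
from itertools import combinations

def is_unique(rows, cols):
    """A column combination is unique iff projecting the rows
    onto those columns yields no duplicate value tuples."""
    projected = [tuple(row[c] for c in cols) for row in rows]
    return len(set(projected)) == len(projected)

def brute_force_uccs(rows, num_cols):
    """Enumerate all minimal unique column combinations by brute
    force, smallest combinations first."""
    uccs = []
    for size in range(1, num_cols + 1):
        for cols in combinations(range(num_cols), size):
            # skip supersets of an already-found UCC (minimality)
            if any(set(u) <= set(cols) for u in uccs):
                continue
            if is_unique(rows, cols):
                uccs.append(cols)
    return uccs

# hypothetical example table with three columns
table = [
    ("alice", "berlin",  1),
    ("bob",   "berlin",  2),
    ("alice", "potsdam", 3),
]
print(brute_force_uccs(table, 3))  # [(2,), (0, 1)]
```

The exponential candidate space of this baseline is precisely what pruning strategies such as those sketched in the abstract must tame.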
Technical report
(2019)
The design and implementation of service-oriented architectures raise a host of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering, which proposes to design complex software solutions as the collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School gives each member the opportunity to present the current state of his or her research and to outline a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and ETL transformations. Hard-to-understand attribute labels cause frustration and cost time in the development and understanding of ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
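The expansion of a label such as UNP_PEN_INT can be illustrated with a minimal sketch, assuming a hypothetical expansion dictionary harvested from already-mapped attribute labels (the recommender-like approach in the abstract is considerably more involved; this only shows the token-wise substitution idea):

```python
def decrypt_label(label, expansions):
    """Expand each abbreviation token of an underscore-separated
    attribute label using known abbreviation -> word mappings;
    unknown tokens are kept as-is."""
    tokens = label.split("_")
    return "_".join(expansions.get(t, t) for t in tokens)

# hypothetical dictionary learned from mapped attribute label pairs
expansions = {"UNP": "UNPAID", "PEN": "PENALTY", "INT": "INTEREST"}
print(decrypt_label("UNP_PEN_INT", expansions))  # UNPAID_PENALTY_INTEREST
```

In practice the same abbreviation may expand differently depending on context, which is why a ranking of candidate decryptions, rather than a single lookup, is needed.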
Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet its representation is not sufficiently supported by programming languages. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In general, COP could be used to manage any context-specific behavior. However, its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectural styles, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, implementations using JCop are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications; however, a complete language specification has not been presented so far. This report presents the entire JCop language, including the syntax and semantics of its new language constructs.
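The core COP idea of dynamically activated behavioral variations can be sketched in Python (a hypothetical illustration of layer activation in general, not JCop's actual Java semantics or syntax): a layer bundles method variations that replace base behavior only while the layer is active.

```python
import contextlib

class Layer:
    """Minimal sketch of a COP layer: a set of behavioral variations
    that override base methods only while the layer is active."""
    def __init__(self):
        self.variations = {}  # (class, method name) -> replacement

    def refine(self, cls, name, fn):
        self.variations[(cls, name)] = fn

    @contextlib.contextmanager
    def active(self):
        # install variations, restore base behavior on exit
        saved = {}
        for (cls, name), fn in self.variations.items():
            saved[(cls, name)] = getattr(cls, name)
            setattr(cls, name, fn)
        try:
            yield
        finally:
            for (cls, name), fn in saved.items():
                setattr(cls, name, fn)

class Greeter:
    def greet(self):
        return "hello"

# hypothetical "mobile" context with a behavioral variation
mobile = Layer()
mobile.refine(Greeter, "greet", lambda self: "hi (mobile)")

g = Greeter()
print(g.greet())         # hello
with mobile.active():
    print(g.greet())     # hi (mobile)
print(g.greet())         # hello
```

The block-scoped `with` activation mirrors the dynamically scoped layer activation common to COP languages; JCop's contribution, per the abstract, is declarative control over when such adaptations occur.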
Experimental and quantitative research in the field of human language processing and production strongly depends on the quality of the underlying language material: besides its size, representativeness, variety and balance have been discussed as important factors that influence the design, analysis and interpretation of experiments and their results. This volume brings together creators and users of both general purpose and specialized lexical resources that are used in psychology, psycholinguistics, neurolinguistics and cognitive research. It aims to be a forum to report experiences and results, review problems and discuss perspectives of any linguistic data used in the field.
After promising beginnings towards transformation in 1991, the Bulgarian economy fell into deep crisis in the period from 1995 to 1997. Social policy, already overstrained by the demands of transition, was unable to cope effectively with the rapidly spreading state of emergency. The following essay analyses the development of social indicators and instruments of social security in the years 1990 to 1998. In addition to unemployment and unemployment insurance, the issues of pensions and poverty will also be examined.
Privatisation in Central and Eastern Europe can be defined as the transfer of property rights from the State to private owners. The transfers are carried out so as to vest the new private owners with the full property rights of use and disposal over their property, these rights being guaranteed by the legal framework established by the rule of law. In Bulgaria, one can distinguish between three main stages in the process of privatisation. Each was shaped by the conflicting resolutions of frequently changing governments and meant to serve different political goals. The first stage (1990-1993) is characterised by the blockade of legal privatisation, as ‘spontaneous privatisation’ was accorded high priority. As in other former socialist countries, great emphasis was placed on the so-called commercialisation of state-owned enterprises. This did not involve the actual transfer of State property into private hands, but rather the independent transformation of state-owned enterprises into joint-stock companies, as well as the establishment of subsidiary companies. The goals of introducing more efficient structures and applying modern methods of production by transferring property to a more suitable management were not achieved. The second stage (1993-1995) is a cash privatisation, which laid the foundation for an employee/management buy-out, aided by the legal provisions granting concessions in the payment of instalments. The most important factor in the third stage of the process of privatisation in Bulgaria was the adoption of the mass privatisation model as an alternative method of procedure. In 1996, legal regulations for mass privatisation were introduced and a privatisation fund was established. In the meantime, the process has evolved into its fourth stage, during which a strategy of privatisation has been formulated under the supervision of a monetary council, and various agreements with the IMF and the World Bank are being adhered to.
Privatisation is the decisive factor in the structural reforms of East European countries. The problem of converting State property into more effective forms of property management has been exacerbated by the additional demand of carrying out the far-reaching structural changes as swiftly as possible. The expectation that a large part of State property would be privatised within a short time in Bulgaria has not been met, for a number of reasons. When the reforms began, the private sector was too weakly developed to become a catalyst for structural changes. Until 1995 there were no laws regulating the stock exchange or securities and bonds - the capital market was practically non-existent. Moreover, the various political parties could not agree upon the various models and objectives of privatisation. The population itself had no capital. The restitution of private ownership, which will not be discussed in further detail here, was limited to the smallest businesses, traders and workshops. Furthermore, the Privatisation Agency and the State authorities employed to initiate the privatisation process lacked experience. Another problem hindering privatisation was that the laws passed lacked precision and were constantly subject to change.
Contents: Basic ideas on the development of guiding visions (Leitbilder) - Guiding visions in the context of a city marketing concept - A model for the development of guiding visions - The guiding vision as an element in the development of a city marketing concept - Functions of guiding visions - Requirements for guiding visions - Examples of guiding-vision development for the cities of Hennigsdorf and Potsdam
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, who want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause, as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they represent, as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would benefit greatly from interactions to explore the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
Modular and incremental global model management with extended generalized discrimination networks
(2023)
Complex projects developed under the model-driven engineering paradigm nowadays often involve several interrelated models, which are automatically processed via a multitude of model operations. Modular and incremental construction and execution of such networks of models and model operations are required to accommodate efficient development with potentially large-scale models. The underlying problem is also called Global Model Management.
In this report, we propose an approach to modular and incremental Global Model Management via an extension to the existing technique of Generalized Discrimination Networks (GDNs). In addition to further generalizing the notion of query operations employed in GDNs, we adapt the previously query-only mechanism to operations with side effects to integrate model transformation and model synchronization. We provide incremental algorithms for the execution of the resulting extended Generalized Discrimination Networks (eGDNs), as well as a prototypical implementation for a number of example eGDN operations.
Based on this prototypical implementation, we experiment with an application scenario from the software development domain to empirically evaluate our approach with respect to scalability and conceptually demonstrate its applicability in a typical scenario. Initial results confirm that the presented approach can indeed be employed to realize efficient Global Model Management in the considered scenario.
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, which importantly must allow living with temporary inconsistencies. In the case of model-driven software engineering, the employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
In recent years, computer vision algorithms based on machine learning have seen rapid development. In the past, research mostly focused on solving computer vision problems such as image classification or object detection on images displaying natural scenes. Nowadays, other fields, such as cultural heritage, where an abundance of data is available, are also coming into the focus of research. In line with current research endeavours, we collaborated with the Getty Research Institute, which provided us with a challenging dataset containing images of paintings and drawings. In this technical report, we present the results of the seminar "Deep Learning for Computer Vision". In this seminar, students of the Hasso Plattner Institute evaluated state-of-the-art approaches for image classification, object detection and image recognition on the dataset of the Getty Research Institute. The main challenge when applying modern computer vision methods to the available data is the availability of annotated training data, as the dataset provided by the Getty Research Institute does not contain a sufficient number of annotated samples for the training of deep neural networks. However, throughout the report we show that it is possible to achieve satisfying to very good results when using further publicly available datasets, such as the WikiArt dataset, for the training of machine learning models.
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes; only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Further, we define quality measures for conditions, inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
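The precision-and-recall-inspired idea behind such quality measures can be sketched as follows (a hypothetical simplification for illustration, not the paper's exact definitions): a condition selects a subset of tuples, and its quality is judged by how well that subset aligns with the tuples actually satisfying the inclusion.

```python
def condition_quality(tuples, condition, included):
    """Precision/recall-style quality of a condition for a CIND
    (hypothetical simplification):
    - covering: share of condition-selected tuples satisfying the inclusion
    - completeness: share of inclusion-satisfying tuples the condition selects
    """
    selected = [t for t in tuples if condition(t)]
    satisfying = [t for t in tuples if included(t)]
    both = [t for t in selected if included(t)]
    covering = len(both) / len(selected) if selected else 0.0
    completeness = len(both) / len(satisfying) if satisfying else 0.0
    return covering, completeness

# hypothetical example: condition "type == 'A'"; the 'included' flag marks
# whether each tuple's value is contained in the referenced relation
rows = [
    {"type": "A", "included": True},
    {"type": "A", "included": False},
    {"type": "B", "included": True},
]
print(condition_quality(rows,
                        lambda t: t["type"] == "A",
                        lambda t: t["included"]))  # (0.5, 0.5)
```

Thresholding both measures, as the abstract describes, then separates covering conditions (high first value) from completeness conditions (high second value).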
Data obtained from foreign data sources often comes with only superficial structural information, such as relation names and attribute names. Other types of metadata that are important for effective integration and meaningful querying of such data sets are missing. In particular, relationships among attributes, such as foreign keys, are crucial metadata for understanding the structure of an unknown database. The discovery of such relationships is difficult, because in principle each pair of data values must be compared for each pair of attributes in the database. A precondition for a foreign key is an inclusion dependency (IND) between the key and the foreign key attributes. We present Spider, an algorithm that efficiently finds all INDs in a given relational database. It leverages the sorting facilities of DBMSs but performs the actual comparisons outside of the database to save computation. Spider analyzes very large databases up to an order of magnitude faster than previous approaches. We also evaluate in detail the effectiveness of several heuristics to reduce the number of necessary comparisons. Furthermore, we generalize Spider to find composite INDs covering multiple attributes, and partial INDs, which are true INDs for all but a certain number of values. This last type is particularly relevant when integrating dirty data, as is often the case in the life sciences domain - our driving motivation.
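The basic IND test that such algorithms accelerate can be illustrated with a naive sketch (hypothetical example data; Spider itself avoids materializing and comparing full value sets per attribute pair):

```python
def satisfies_ind(dep_values, ref_values):
    """An inclusion dependency A [= B holds iff every value of the
    dependent attribute A also occurs in the referenced attribute B."""
    return set(dep_values) <= set(ref_values)

def partial_ind_violations(dep_values, ref_values):
    """Number of distinct dependent values missing on the referenced
    side; a partial IND tolerates up to some threshold of these."""
    return len(set(dep_values) - set(ref_values))

# hypothetical columns: orders.customer_id should reference customers.id,
# but one dirty value (99) breaks the exact IND
orders_customer_id = [1, 2, 2, 3, 99]
customers_id = [1, 2, 3, 4]
print(satisfies_ind(orders_customer_id, customers_id))           # False
print(partial_ind_violations(orders_customer_id, customers_id))  # 1
```

The dirty value 99 illustrates why partial INDs matter for integration: the exact IND fails, yet the foreign-key relationship is clearly present.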
Cyber-physical systems achieve sophisticated system behavior by exploiting the tight interconnection of the physical coupling present in classical engineering systems and information-technology-based coupling. A particularly challenging case are systems where these cyber-physical systems are formed ad hoc according to the specific local topology, the available networking capabilities, and the goals and constraints of the subsystems captured by the information processing part. In this paper we present a formalism that permits modeling the sketched class of cyber-physical systems. The ad hoc formation of tightly coupled subsystems of arbitrary size is specified using a UML-based graph transformation system approach. Differential equations are employed to define the resulting tightly coupled behavior. Together, both form hybrid graph transformation systems, where the graph transformation rules define the discrete steps in which the topology or modes may change, while the differential equations capture the continuous behavior between such discrete changes. In addition, we demonstrate that automated analysis techniques known for timed graph transformation systems for inductive invariants can be extended to also cover the hybrid case for an expressive class of hybrid models in which the formed tightly coupled subsystems are restricted to smaller local networks.
Service-oriented modeling employs collaborations to capture the coordination of multiple roles in the form of service contracts. In the case of dynamic collaborations, roles may join and leave the collaboration at runtime, and therefore complex structural dynamics can result, which makes it very hard to ensure their correct and safe operation. In this paper we present our approach to modeling and verifying such dynamic collaborations. Modeling is supported using a well-defined subset of UML class diagrams, behavioral rules for the structural dynamics, and UML state machines for the role behavior. To also be able to verify the resulting service-oriented systems, we extended our former results on the automated verification of systems with structural dynamics [7, 8] and developed a compositional reasoning scheme that enables the reuse of verification results. We outline our approach using the example of autonomous vehicles that use such dynamic collaborations via ad-hoc networking to coordinate and optimize their joint behavior.
Creating fonts is a complex task that requires expert knowledge in a variety of domains. Often, this knowledge is not held by a single person, but spread across a number of domain experts. A central concept needed for designing fonts is the glyph, an elemental symbol representing a readable character. Required domains include designing glyph shapes, engineering rules to combine glyphs for complex scripts and checking legibility. This process is most often iterative and requires communication in all directions. This report outlines a platform that aims to enhance the means of communication, describes our prototyping process, discusses complex font rendering and editing in a live environment and an approach to generate code based on a user’s live-edits.
Industrial policy and social strategy at the corporate level in Poland : questionnaire results
(1999)
This paper presents results from a survey of industrial policy of the state and the social security system at the corporate level in Poland. Previous reports in this area indicated preferable directions of research to be taken in order to prove various hypotheses of the purposefulness of an integral approach to industrial policy and social security in the analysis of economic processes in transition (see Weikard 1997). This paper summarises the results and draws conclusions from a questionnaire study on subsidies, social benefits and economic policy in Polish firms during the process of transformation. Our results and conclusions show the scope and character of the processes in the area of industrial and social policy in the period 1994 to 1997. The paper is divided into five parts. The first part concerns the aims and methodology of the questionnaire; it also gives a brief description of the sample. The second part shows how enterprises dealt with the issues of employment and wages in this period. The third part characterises industrial policy at the corporate level, while the next presents results from the survey of various social schemes pursued. The final part aims at an integral approach in the analysis of various processes taking place in Polish enterprises. The survey was conducted in the period April to June 1998. Its aim was to observe certain phenomena occurring at the corporate level. The questionnaire was distributed among the managers, directors and presidents of large-size enterprises, which had been selected to satisfy the following three criteria. Firstly, the number of employees had to be considerable (over 300 workers). This criterion was applied following the consideration that certain social phenomena are more conspicuous in enterprises with large manpower. Secondly, only operating enterprises were selected, the enterprises which closed down were disregarded. 
Finally, for the purposes of the survey the units differed as regards their legal situation and form of ownership. Out of over 1,800 enterprises, 370 units were drawn, to which we sent the questionnaire. Unfortunately, as many as 51.9% of the respondents refused co-operation, which to a certain extent puts the representativeness of the sample in question. In the end, 178 questionnaires were completed and returned for analysis. However, not all of these questionnaires included full answers to all of the 75 questions; therefore, while discussing the results of the survey we have indicated the number of relevant answers received.
Graph queries have lately gained increased interest due to application areas such as social networks, biological networks, and model queries. For the relational database case, the relational algebra and generalized discrimination networks have been studied to find appropriate decompositions into subqueries and orderings of these subqueries for query evaluation or incremental updates of query results. For graph database queries, however, there is no formal underpinning yet that allows us to find such suitable operationalizations. Consequently, we suggest a simple operational concept for the decomposition of arbitrarily complex queries into simpler subqueries and the ordering of these subqueries in the form of generalized discrimination networks for graph queries, inspired by the relational case. The approach employs graph transformation rules for the nodes of the network, and thus we can employ the underlying theory. We further show that the proposed generalized discrimination networks have the same expressive power as nested graph conditions.
Graph databases provide a natural way of storing and querying graph data. In contrast to relational databases, queries over graph databases enable to refer directly to the graph structure of such graph data. For example, graph pattern matching can be employed to formulate queries over graph data.
However, as for relational databases, running complex queries can be very time-consuming and ruin the interactivity with the database. One possible approach to deal with this performance issue is to employ database views, which consist of pre-computed answers to common and frequently stated queries. But to ensure that database views yield query results consistent with the data from which they are derived, the views must be updated before queries make use of them. Such maintenance of database views must be performed efficiently; otherwise, the effort to create and maintain views may not pay off in comparison to processing the queries directly on the underlying data.
At the time of writing, graph databases do not support database views and are limited to graph indexes, which index nodes and edges of the graph data for fast query evaluation but do not enable the maintenance of pre-computed answers to complex queries over graph data. Moreover, the maintenance of database views in graph databases becomes even more challenging when negation and recursion have to be supported, as in deductive relational databases.
In this technical report, we present an approach for efficient and scalable incremental graph view maintenance for deductive graph databases. The main concept of our approach is a generalized discrimination network that can model nested graph conditions, including negative application conditions and recursion, which specify the content of graph views derived from graph data stored in graph databases. The discrimination network makes it possible to automatically derive generic maintenance rules, using graph transformations, for maintaining graph views when the graph data from which they are derived changes. We evaluate our approach in a case study using multiple data sets derived from open source projects.
This paper presents a critical overview of the state of research on multiple wh-constructions in Slavic. The aim is to show how unclear the data situation is and how contradictory the theories built on such "unclear" data are. Contents: Historical background (Wachowicz 1974) - Some older approaches - Climax: the momentous paper by Rudin (1988) - Problems: the problem of the reliability of data; the problem of the relevance of data - "Hard" facts: strict superiority effects in Bulgarian; obligatory wh-raising in Slavic - Newer approaches: "qualitative" approaches; "quantitative" approaches; alternative approaches
TripleA is a workshop series founded by linguists from the University of Tübingen and the University of Potsdam. Its aim is to provide a forum for semanticists doing fieldwork on understudied languages, and its focus is on languages from Africa, Asia, Australia and Oceania. The second TripleA workshop was held at the University of Potsdam, June 3-5, 2015.
Since 2002, keywords like service-oriented engineering, service-oriented computing, and service-oriented architecture have been widely used in research, education, and enterprises. These and related terms are often misunderstood or used incorrectly. To correct these misunderstandings, this paper provides a deeper knowledge of the concepts and their historical background as well as an overview of service-oriented architectures.
Public debate about energy relations between the EU and Russia is distorted. These distortions present considerable obstacles to the development of a true partnership. At the core of the conflict is a struggle for resource rents between energy-producing, energy-consuming and transit countries. Supposedly secondary aspects, however, are also of great importance: geopolitics, market access, economic development and state sovereignty. The European Union, having engaged in energy market liberalisation, faces a widening gap between declining domestic resources and continuously growing energy demand. Diverse interests inside the EU prevent the definition of a coherent and respected energy policy. Russia, for its part, is no longer willing to subsidise its neighbouring economies by cheap energy exports. The Russian government engages in assertive policies pursuing Russian interests. In this respect, it opts for a different globalisation approach, refusing the role of a mere energy exporter. In view of the intensifying struggle for global resources, Russia, with its large energy potential, appears to be a very favourable option for European energy supplies, if not the best one. However, several outcomes of the strategic game between the two partners can be imagined. Engaging in non-cooperative strategies will in the end leave all stakeholders worse off. The European Union should therefore concentrate on securing its partnership with Russia instead of damaging it. Stable cooperation requires accepting that the partner may pursue its own goals, which might differ from one's own interests. The question is how a sustainable compromise can be found. This thesis finds that a mix of continued dialogue, a tit-for-tat approach bolstered by an international institutional framework, and increased integration efforts appears to be the preferable solution.
Developing rich Web applications can be a complex job, especially when it comes to mobile device support. Web-based environments such as Lively Webwerkstatt can help developers implement such applications by making the development process more direct and interactive. Furthermore, software development is a collaborative process, so the development environment needs to offer collaboration facilities. This report describes extensions of the web-based development environment Lively Webwerkstatt that make it usable in a mobile environment. The extensions cover collaboration mechanisms and user interface adaptations, as well as event processing and performance measurement on mobile devices.
New survey data for a panel of Polish firms is used to estimate employment and wage adjustments under various forms of ownership (insider vs. outsider) and asymmetric response to exogenous shocks. In contrast to earlier studies, dynamic panel data estimators (GMM) allow for endogeneity of observed variables and partial adjustment to shocks. Results differ from other findings in the transition literature: wages have little effect on dynamic labor demand and the firm-size wage effect is confirmed. Firms that expand employment have to pay significantly larger wage increases, and rising sales add little to employment, suggesting labor hoarding. Declining sales, however, significantly reduce employment, and privatization (or the anticipation thereof) has the expected benefits.
Privatisation and ownership : the impact on firms in transition survey evidence from Bulgaria
(1999)
Previous papers in this Special Series have described in detail the theoretical background and development patterns, along with some empirical results, for the privatisation processes in Bulgaria and Poland. A range of issues have been raised which demand closer empirical investigation. For this purpose, the research group has developed questionnaire studies for Bulgaria and Poland. In Bulgaria, the National Statistical Institute (NSI) carried out the case studies between February and April 1998. The problems of the questionnaire set-up were identified in a pre-test study, but unlike the Polish case, they led to only minor differentiation. Since financial limitations prevented a larger sample, 61 mid-sized and large Bulgarian enterprises were selected. Failure to respond was not a serious problem, unlike with the Polish questionnaire; this is because the NSI has maintained good links to the enterprise sector, and management were prepared to give detailed answers, even on questions of their firms' financial status. However, as the Polish experience suggests, it has become obvious that the privatisation process is also associated with management's increasing reluctance to answer comparatively 'intimate' questions. Thus, future questionnaire studies must take a much higher rate of refusals into consideration. The pre-selection procedure in Bulgaria was determined by the project target, which sought to analyse the effects of the privatisation process on firms' behaviour during the transition process, and hence only firms which had already existed before the changes were included. For small and medium-size enterprises (SME's), most of which were founded after the changes, partly due to the legal processes of spontaneous privatisation, some empirical as well as analytical studies were carried out. Thus, the research group limited the scope of investigation to enterprises with more than 250 employees.
The underlying hypothesis is that employment problems are concentrated in larger firms, in particular amongst those still (partly) state-owned. Because of the former ownership structures and the relatively slower capacity for management change, the assumption is that state-owned enterprises (SOE's) which have only recently been privatised might still have traditional links to government even after privatisation. On the one hand, the SME's are obviously more prone to, and linked with, market processes. As a result, they do not have the financial potential and incentives to follow job-hoarding strategies. On the other hand, there are almost no SME's which are still state-owned. Hence, the prevailing opinion in the literature is that 'larger industrial firms were apt to be least efficient, most often producing inadequate and non-competitive products, with a high degree of under-utilisation of labour and most inflexible to change' (Jones & Nikolov 1997, p. 252). Thus, as mentioned above, though there may be some limitations with regard to firm representation, our sample characterises a number of enterprises that offer fertile ground for the analysis of firms' adjustment to the newly established market realities in a transition economy. Our study is unique in the sense that existing empirical studies on privatisation and enterprise restructuring generally cover the time period just before and after the initial stages of transition, e.g. 1988/89 to 1992. In those studies, samples of firms in the Czech Republic, Poland, Hungary and Bulgaria recognise that behavioural adaptations at the enterprise level had taken place just before the actual privatisation process materialised. Therefore, almost all of the firms under examination were still state-owned. The firms were usually divided according to their performance as 'good', 'average' and 'bad' enterprises.
The main findings of those early studies have shown that the macroeconomic adaptations (i.e., macro-level changes which induced micro-level adjustment by the firms), as well as emerging market structures, have created enormous pressures which in turn have influenced firms' economic behaviour, reallocation of resources and consequent restructuring. This evidence supports the hypothesis that the SOE's started restructuring and adjusting their behaviour and performance, in response to the harsh realities of more open markets, before privatisation actually started. In this paper, we seek to present some results on these developments in Bulgaria, at the later stages of transition and privatisation (1992-1996). The aim of our questionnaire study is therefore to show the effects of the privatisation process and ownership on the behavioural adaptations of firms which had once been state-owned or continue to be owned by the state. The period under investigation is 1992 to 1996. For 1990 and 1991, the number of missing values is relatively high and, where relevant, we partly exclude these observations from our analysis. The paper contains seven sections. Section II outlines the macroeconomic environment in which our sample firms operate, provides some specifics of the Bulgarian privatisation process, and discusses data quality. Section III concentrates on the analysis of privatisation, the specific forms of ownership that resulted from it, and firm size. In Section IV, we describe the trends of the main economic variables within firms (such as employment, wages, labour productivity, etc.), and a number of proxies of firm viability, while Section V presents some regression results to corroborate the discussion of the previous section. Section VI gives an overview of survey results on the impact of enterprise-determined wage policy, trade union activity and membership, government control, and social benefits on enterprise restructuring. Section VII is a summary of our findings.
In socialist economies, firms have provided various social benefits, such as child care, health care, food subsidies and housing. Using panel data from Bulgarian and Polish firms, this paper attempts to explain firm-specific provision of social benefits in the process of transition. With the help of qualitative response models, we investigate empirically how ownership type and structure, firm size, profitability, change in management, foreign direct investment, wage and employment policies, union involvement and employee power have impacted the provision of non-wage benefits.
This article examines the successive governments of independent Estonia since 1992 with respect to their stability. Confronted with the immense problems of democratic transition, the multi-party governments of Estonia change comparatively often. Following the elections of March 2003, the ninth government since 1992 was formed. A detailed examination of government stability using the example of Estonia is accordingly warranted, given that the country is seen as the most successful Central Eastern European transition country in spite of its frequent changes of government. Furthermore, this article questions whether or not internal government stability can exist within a situation where the government changes frequently. What does stability of government mean and what are the varying multi-faceted depths of the term? Before analysing the term, it has to be clarified and defined. It is presumed that government stability is composed of multiple variables influencing one another. Data about the average tenure of a government are not very conclusive. Rather, the deeper political causes for governmental change need to be examined. Therefore, this article discusses the conceptual and theoretical basics of governmental stability first. Secondly, it discusses the Estonian situation in detail up to the elections of 2003, including a short review of the 9th government since independence. In the conclusion, the author explains whether or not the governments of Estonia are stable. In the appendix, the reader finds all election results and also a list of all previous ministers of Estonian governments (all data are as of July 2002).
In reading, word frequency is commonly regarded as the major bottom-up determinant for the speed of lexical access. Moreover, language processing depends on top-down information, such as the predictability of a word from a previous context. Yet the exact role of top-down predictions in visual word recognition is poorly understood: They may rapidly affect lexical processes, or alternatively, influence only late post-lexical stages. To add evidence about the nature of top-down processes and their relation to bottom-up information in the timeline of word recognition, we examined influences of frequency and predictability on event-related potentials (ERPs) in several sentence reading studies. The results were related to eye movements from natural reading as well as to models of word recognition. As a first and major finding, interactions of frequency and predictability on ERP amplitudes consistently revealed top-down influences on lexical levels of word processing (Chapters 2 and 4). Second, frequency and predictability mediated relations between N400 amplitudes and fixation durations, pointing to their sensitivity to a common stage of word recognition; further, larger N400 amplitudes entailed longer fixation durations on the next word, a result providing evidence for ongoing processing beyond a fixation (Chapter 3). Third, influences of presentation rate on ERP frequency and predictability effects demonstrated that the time available for word processing critically co-determines the course of bottom-up and top-down influences (Chapter 4). Fourth, at a near-normal reading speed, an early predictability effect suggested the rapid comparison of top-down hypotheses with the actual visual input (Chapter 5). The present results are compatible with interactive models of word recognition assuming that early lexical processes depend on the concerted impact of bottom-up and top-down information.
We offered a framework that reconciles the findings on a timeline of word recognition taking into account influences of frequency, predictability, and presentation rate (Chapter 4).
This paper analyses the macroeconomic developments which have taken place in the Bulgarian economy in the period 1993-1997. The paper also looks at the institutional arrangements and the process of economic policy-making in the country. In this context the problems the Bulgarian economy has experienced in the transition process towards a market-oriented economy are also studied. The paper proceeds as follows: Section 2 looks at the institutional arrangements and the process of economic policy-making through 1995. Section 3 studies the deep economic crisis in 1996 and points out what went wrong in that period. Section 4 continues studying the economic crisis of the Bulgarian economy as well as the problems in the transition process during the first half of 1997. Section 5 looks at the economic developments during the second half of 1997 and points to the prospects for growth in 1998. Section 6 deals with the Bulgarian financial institutions and the existing institutional arrangements. Finally, Section 7 concludes the paper.
We analyse different Gibbsian properties of interactive Brownian diffusions $X$ indexed by the lattice $\mathbb{Z}^d$: $X = (X_i(t),\ i \in \mathbb{Z}^d,\ t \in [0,T],\ 0 < T < +\infty)$. In a first part, these processes are characterized as Gibbs states on path spaces of the form $C([0,T],\mathbb{R})^{\mathbb{Z}^d}$. In a second part, we study the Gibbsian character on $\mathbb{R}^{\mathbb{Z}^d}$ of $\nu^t$, the law at time $t$ of the infinite-dimensional diffusion $X(t)$, when the initial law $\nu = \nu^0$ is Gibbsian.
Duplicate detection is the task of identifying all groups of records within a data set that each represent the same real-world entity. This task is difficult because (i) representations might differ slightly, so some similarity measure must be defined to compare pairs of records, and (ii) data sets might have a high volume, making a pair-wise comparison of all records infeasible. To tackle the second problem, many algorithms have been suggested that partition the data set and compare all record pairs only within each partition. One well-known such approach is the Sorted Neighborhood Method (SNM), which sorts the data according to some key and then advances a window over the data, comparing only records that appear within the same window. We propose several variations of SNM that have in common a varying window size and advancement. The general intuition behind such adaptive windows is that there might be regions of high similarity, suggesting a larger window size, and regions of lower similarity, suggesting a smaller window size. We propose and thoroughly evaluate several adaptation strategies, some of which are provably better than the original SNM in terms of efficiency (same results with fewer comparisons).
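A fixed-window baseline of the SNM scheme described above can be sketched as follows (an illustrative toy, not the adaptive variants proposed here; the similarity measure and the 0.8 threshold are arbitrary choices for the example):

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.8):
    """A stand-in similarity measure on record strings."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

def sorted_neighborhood(records, key, window=3):
    """Sort by a key, then compare only records inside a sliding window."""
    ordered = sorted(records, key=key)
    pairs = []
    for i, rec in enumerate(ordered):
        # Only neighbors within the window are compared, not all pairs.
        for j in range(i + 1, min(i + window, len(ordered))):
            if similar(rec, ordered[j]):
                pairs.append((rec, ordered[j]))
    return pairs

names = ["Jon Smith", "John Smith", "John Smyth", "Mary Jones", "Marie Jones"]
dupes = sorted_neighborhood(names, key=str.lower, window=3)
```

An adaptive variant would grow or shrink `window` depending on how similar neighboring records are; the sketch keeps it fixed to show the baseline against which the adaptive strategies are compared.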
This book deals with the inner life of the capitalist firm. There we find numerous conflicts, the most important of which concerns the individual employment relationship which is understood as a principal-agent problem between the manager, the principal, who issues orders that are to be followed by the employee, the agent. Whereas economic theory traditionally analyses this relationship from a (normative) perspective of the firm in order to support the manager in finding ways to influence the behavior of the employees, such that the latter – ideally – act on behalf of their superior, this book takes a neutral stance. It focusses on explaining individual behavioral patterns and the resulting interactions between the actors in the firm by taking sociological, institutional, and above all, psychological research into consideration. In doing so, insights are gained which challenge many assertions economists take for granted.
While offering significant expressive power, graph transformation systems often come with rather limited capabilities for automated analysis, particularly if systems with many possible initial graphs and large or infinite state spaces are concerned. One approach that tries to overcome these limitations is inductive invariant checking. However, the verification of inductive invariants often requires extensive knowledge about the system in question and faces the approach-inherent challenges of locality and lack of context.
To address this, the report discusses k-inductive invariant checking for graph transformation systems as a generalization of inductive invariant checking. The additional context acquired by taking multiple (k) steps into account is the key difference from inductive invariant checking and is often enough to establish the desired invariants without requiring the iterative development of additional properties.
To analyze possibly infinite systems in a finite fashion, we introduce a symbolic encoding for transformation traces using a restricted form of nested application conditions. As its central contribution, this report then presents a formal approach and algorithm to verify graph constraints as k-inductive invariants. We prove the approach's correctness and demonstrate its applicability by means of several examples evaluated with a prototypical implementation of our algorithm.
Graph transformation systems are a powerful formal model to capture model transformations or systems with infinite state space, among others. However, this expressive power comes at the cost of rather limited automated analysis capabilities. The general case of unboundedly many initial graphs or infinite state spaces is only supported by approaches with rather limited scalability or expressiveness. In this report, we improve an existing approach for the automated verification of inductive invariants for graph transformation systems. By employing partial negative application conditions to represent and check many alternative conditions in a more compact manner, we can check examples with rules and constraints of substantially higher complexity. We also substantially extend the expressive power by supporting more complex negative application conditions and provide higher accuracy by employing advanced implication checks. The improvements are evaluated and compared with another applicable tool by considering three case studies.
The correctness of model transformations is a crucial element of model-driven engineering of high-quality software. In particular, behavior preservation is the most important correctness property, avoiding the introduction of semantic errors during the model-driven engineering process. Behavior preservation verification techniques either show that specific properties are preserved or, more generally and at greater complexity, they show some kind of behavioral equivalence or refinement between the source and target model of the transformation. Both kinds of behavior preservation verification goals have been presented with automatic tool support for the instance level, i.e. for a given source and target model specified by the model transformation. However, up until now no automatic verification approach has been available at the transformation level, i.e. for all source and target models specified by the model transformation.
In this report, we extend our results presented in [27] and outline a new sophisticated approach for the automatic verification of behavior preservation, captured by bisimulation or simulation respectively, for model transformations specified by triple graph grammars with semantic definitions given by graph transformation rules. In particular, we show that the behavior preservation problem can be reduced to invariant checking for graph transformations and that the resulting checking problem can be addressed by our own invariant checker, even for a complex example in which a sequence chart is transformed into communicating automata. We further discuss today's limitations of invariant checking for graph transformations and motivate further lines of future work in this direction.
For the interactive construction of CSG models, understanding the layout of a model is essential for its efficient manipulation. To understand the position and orientation of the aggregated components of a CSG model, we need to perceive its visible and occluded parts as a whole. Hence, transparency and enhanced outlines are key techniques to assist comprehension. We present a novel real-time rendering technique for visualizing the design and spatial assembly of CSG models. As enabling technology, we combine an image-space CSG rendering algorithm with blueprint rendering. Blueprint rendering applies depth peeling to extract layers of ordered depth from polygonal models and then composes them in sorted order, facilitating a clear insight into the models. We develop a solution for implementing depth peeling for CSG models considering their depth complexity. Capturing the surface colors of each layer and later combining the results allows for generating order-independent transparency as one major rendering technique for CSG models. We further define visually important edges for CSG models and integrate an image-space edge-enhancement technique for detecting them in each layer. In this way, we extract visually important edges that are directly or indirectly visible to outline a model's layout. Combining edges with transparency rendering finally generates edge-enhanced depictions of image-based CSG models and allows us to convey their complex spatial assembly.
Learning from failure
(2022)
Regression testing is a widespread practice in today's software industry to ensure software product quality. Developers derive a set of test cases, and execute them frequently to ensure that their change did not adversely affect existing functionality. As the software product and its test suite grow, the time to feedback during regression test sessions increases, and impedes programmer productivity: developers wait longer for tests to complete, and delays in fault detection render fault removal increasingly difficult.
Test case prioritization addresses the problem of long feedback loops by reordering test cases such that test cases with a high failure probability run first and test case failures become actionable early in the testing process. We ask: given test execution schedules reconstructed from publicly available data, to what extent can their fault detection efficiency be improved, and which technique yields the most efficient test schedules with respect to APFD?
To this end, we recover 6,200 regression test sessions from the build log files of Travis CI, a popular continuous integration service, and gather 62,000 accompanying changelists. We evaluate the efficiency of current test schedules, and examine the prioritization results of state-of-the-art lightweight, history-based heuristics. We propose and evaluate a novel set of prioritization algorithms, which connect software changes and test failures in a matrix-like data structure.
Our studies indicate that the optimization potential is substantial, because the existing test plans score only 30% APFD. The predictive power of past test failures proves to be outstanding: simple heuristics, such as repeating tests with failures in recent sessions, result in efficiency scores of 95% APFD. The best-performing matrix-based heuristic achieves a similar score of 92.5% APFD. In contrast to prior approaches, we argue that matrix-based techniques are useful beyond the scope of effective prioritization, and enable a number of use cases involving software maintenance.
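For illustration, here is a sketch of the APFD metric and the "recent failures first" heuristic mentioned above (the data and function names are invented for the example; this is not the thesis' implementation):

```python
def apfd(schedule, faults_of):
    """APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n)."""
    n = len(schedule)
    m = len({f for t in schedule for f in faults_of.get(t, ())})
    first_pos = {}
    for pos, test in enumerate(schedule, start=1):
        for fault in faults_of.get(test, ()):
            first_pos.setdefault(fault, pos)   # earliest detecting position
    return 1 - sum(first_pos.values()) / (n * m) + 1 / (2 * n)

def prioritize_by_recent_failures(tests, recently_failed):
    # Stable sort: recently failing tests first, everything else keeps its order.
    return sorted(tests, key=lambda t: t not in recently_failed)

tests = ["t1", "t2", "t3", "t4"]
faults_of = {"t3": {"f1"}, "t4": {"f2"}}       # which test detects which fault

before = apfd(tests, faults_of)                # failing tests run last
after = apfd(prioritize_by_recent_failures(tests, {"t3", "t4"}), faults_of)
```

In this toy example the reordering raises APFD from 0.25 to 0.75, mirroring on a tiny scale the gap between the recovered schedules and the history-based heuristics reported above.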
We validate our findings from continuous integration processes by extending a continuous testing tool within development environments with means of test prioritization, and pose further research questions. We think that our findings are suited to propel the adoption of (continuous) testing practices, and that programmers' toolboxes should contain test prioritization as an essential productivity tool.
Despite its many challenges and limitations, the concept of in situ upgrading of informal settlements has become one of the most favoured approaches to the housing crisis in the 'Global South'. Due to its inherent principles of incremental in situ development, prevention of relocations, protection of local livelihoods and democratic participation and cooperation, this approach is often perceived to be more sustainable than other housing approaches that often rely on quantitative housing delivery and top-down planning methodologies. While this study does not question the benefits of the in situ upgrading approach, it seeks to identify problems of its practical implementation within a specific national and local context. The study discusses the origin and importance of this approach on the basis of a review of international housing policy development and analyses the broader political and social context of the incorporation of this approach into South African housing policy. It further uses insights from a recent case study in Cape Town to determine complications and conflicts that can arise when applying in situ upgrading of informal settlements in a complex local context. On that basis, benefits and limitations of the in situ upgrading approach are specified and prerequisites for its successful implementation are formulated.
Language developers who design domain-specific languages or new language features need a way to make fast changes to language definitions. Those fast changes require immediate feedback. It should also be possible to parse the developed languages quickly in order to handle extensive code bases.
Parsing expression grammars provide an easy-to-understand method for language definitions. Packrat parsing is a method to parse grammars of this kind, but it is unable to handle left recursion properly. Existing solutions either rewrite left-recursive rules only partially and forbid the rest, or use complex extensions to packrat parsing that are hard to understand and cost-intensive. We investigated methods to make parsing as fast as possible, using easy-to-follow algorithms, while not losing the ability to make fast changes to grammars.
We focused our efforts on two approaches.
One is to start from an existing technique for limited left-recursion rewriting and enhance it to work for general left-recursive grammars. The second approach is to design a grammar compilation process that finds left recursion before parsing and, in this way, reduces computational costs wherever possible and generates ready-to-use parser classes.
Rewriting parsing expression grammars is a task that, if done in a general way, unveils such a large number of cases that any rewriting algorithm surpasses the complexity of other left-recursive parsing algorithms. Lookahead operators introduce this complexity. However, most languages have only small portions that are left-recursive and, in virtually all cases, no indirect or hidden left recursion. This means that separating the left-recursive parts of grammars from the non-left-recursive components holds great improvement potential for existing parsers.
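The separation argued for above can be sketched for the simple case of direct left recursion with no lookahead operators (a deliberate simplification of the report's rewriting; representing alternatives as token tuples is an assumption made for the example):

```python
def eliminate_direct_left_recursion(name, alternatives):
    """Rewrite A <- A s1 / ... / b1 / ... into A <- (b1/...) tail, with a
    right-recursive tail rule, so a packrat parser can handle it."""
    recursive = [alt[1:] for alt in alternatives if alt and alt[0] == name]
    base = [alt for alt in alternatives if not alt or alt[0] != name]
    if not recursive:
        return {name: alternatives}      # nothing to rewrite
    tail = name + "_tail"
    return {
        name: [alt + (tail,) for alt in base],
        tail: [suffix + (tail,) for suffix in recursive] + [()],   # () = epsilon
    }

# expr <- expr "+" term / term   becomes a non-left-recursive pair of rules:
rules = eliminate_direct_left_recursion("expr", [("expr", "+", "term"), ("term",)])
```

A full solution must additionally restructure the resulting syntax tree to restore left associativity, which is why tree restructuring appears as a separate step alongside the rewriting itself.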
In this report, we list all the steps required for grammar rewriting to handle left recursion, including grammar analysis, the grammar rewriting itself, and syntax tree restructuring. We also describe the implementation of a parsing expression grammar framework in Squeak/Smalltalk and its possible interactions with the already existing parser Ohm/S. We quantitatively benchmarked this framework, focusing on parsing time and on its usability in a live programming context. Compared with Ohm, we achieved massive parsing-time improvements while preserving the ability to use our parser as a live programming tool.
This work is significant because, for one, we outline the difficulties and complexity that come with grammar rewriting. Moreover, we remove the existing limitations associated with left recursion by eliminating it before parsing.
Business processes are instrumental to managing work in organisations. To study the interdependencies between business processes, Business Process Architectures have been introduced. These express trigger and message flow relations between business processes. When we investigate real-world Business Process Architectures, we find complex interdependencies involving multiple process instances. These aspects have not been studied in detail so far, especially concerning correctness properties. In this paper, we propose a modular transformation of BPAs to open nets for the analysis of behavior involving multiple business processes with multiplicities. For this purpose, we introduce intermediary nets to portray the semantics of multiplicity specifications. We evaluate our approach on a use case from the public sector.
This paper presents, in its first section, a methodological introduction to the statistics of consumer prices in Georgia. The second section gives a general idea of the development of consumer prices from January 1994 to September 1999. A detailed regional analysis is added in Section 3. The fourth section analyses the development of consumer prices for the eight main groups included in the total CPI. Section 5 compares the changes in the Georgian CPI with the movements of foreign exchange rates of the Georgian Lari. The paper ends with a summary including a short outlook for the coming years.
Constraints allow developers to specify desired properties of systems in a number of domains, and have those properties be maintained automatically. This results in compact, declarative code, avoiding scattered code to check and imperatively re-satisfy invariants. Despite these advantages, constraint programming is not yet widespread, with standard imperative programming still the norm. There is a long history of research on integrating constraint programming with the imperative paradigm. However, this integration typically does not unify the constructs for encapsulation and abstraction from both paradigms. This impedes re-use of modules, as client code written in one paradigm can only use modules written to support that paradigm. Modules require redundant definitions if they are to be used in both paradigms. We present a language – Babelsberg – that unifies the constructs for encapsulation and abstraction by using only object-oriented method definitions for both declarative and imperative code. Our prototype – Babelsberg/R – is an extension to Ruby, and continues to support Ruby's object-oriented semantics. It allows programmers to add constraints to existing Ruby programs in incremental steps by placing them on the results of normal object-oriented message sends. It is implemented by modifying a state-of-the-art Ruby virtual machine. The performance of standard object-oriented code without constraints is only modestly impacted, with typically less than 10% overhead compared with the unmodified virtual machine. Furthermore, our architecture for adding multiple constraint solvers allows Babelsberg to deal with constraints in a variety of domains. We argue that our approach provides a useful step toward making constraint solving a generic tool for object-oriented programmers.
We also provide example applications, written in our Ruby-based implementation, which use constraints in a variety of application domains, including interactive graphics, circuit simulations, data streaming with both hard and soft constraints on performance, and configuration file management.
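To give a flavor of the constraint style described above, here is a rough sketch in plain Python (this uses ordinary property setters and is in no way Babelsberg's actual mechanism or API; it only illustrates re-satisfying an invariant automatically after an imperative assignment):

```python
class Temperature:
    """Keeps the invariant fahrenheit == celsius * 9/5 + 32 satisfied."""

    def __init__(self, celsius=0.0):
        self._celsius = celsius
        self._satisfy()

    def _satisfy(self):
        # The 'constraint': fahrenheit is always derived from celsius.
        self.fahrenheit = self._celsius * 9 / 5 + 32

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        self._celsius = value
        self._satisfy()   # re-satisfied on every write, no scattered fix-up code

t = Temperature(100.0)
t.celsius = 0.0           # the invariant is maintained automatically
```

A real constraint system in the sense described above is far more general: constraints are multi-directional, declared by client code, and handled by pluggable solvers rather than hand-written update methods.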