The World Wide Web is becoming increasingly important as an application platform. However, the development of Web applications is often more complex than that of desktop applications. Web-based development environments like Lively Webwerkstatt can mitigate this problem by making the development process more interactive and direct. By moving the development environment into the Web, applications can be developed collaboratively in a Wiki-like manner. This report documents the results of the project seminar on Web-based Development Environments 2010. In this seminar, participants extended the Web-based development environment Lively Webwerkstatt. They worked in small teams on current research topics from the field of Web development and tool support for programmers and implemented their results in the Webwerkstatt environment.
Process models specify behavioral execution constraints between activities as well as between activities and data objects. A data object is characterized by its states and state transitions, represented as an object life cycle. For process execution, all behavioral execution constraints must be correct. Correctness can be verified via soundness checking, which currently considers only control flow information. For data correctness, conformance between a process model and its object life cycles is checked. Current approaches abstract from dependencies between multiple data objects and require fully specified process models, although real-world process repositories often contain underspecified models. To cope with these issues, we introduce the concept of synchronized object life cycles and define a mapping of the data constraints of a process model to Petri nets, extending an existing mapping. Further, we apply the notion of weak conformance to process models to tell whether, each time an activity needs to access a data object in a particular state, it is guaranteed that the data object is in or can reach the expected state. Then, we introduce an algorithm for an integrated verification of control flow correctness and weak data conformance using soundness checking.
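The following Python sketch is only an illustration of the intuition behind weak conformance and not the report's approach (which maps data constraints to Petri nets and uses soundness checking): an object life cycle is modelled as a small transition system, and an activity expecting a data object in state s is acceptable if the object is in s or can still reach s. The "Order" life cycle and its states are hypothetical.

    # Illustrative sketch (not the report's Petri-net mapping): an object life
    # cycle as a labelled transition system, plus a reachability test that
    # captures the intuition behind weak conformance.
    from collections import deque

    class ObjectLifeCycle:
        def __init__(self, transitions):
            # transitions: iterable of (source_state, target_state) pairs
            self.succ = {}
            for src, tgt in transitions:
                self.succ.setdefault(src, set()).add(tgt)

        def can_reach(self, current, wanted):
            """True if `wanted` equals `current` or is reachable from it."""
            seen, queue = {current}, deque([current])
            while queue:
                state = queue.popleft()
                if state == wanted:
                    return True
                for nxt in self.succ.get(state, ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return False

    # Hypothetical life cycle of an "Order" data object.
    order = ObjectLifeCycle(
        [("created", "confirmed"), ("confirmed", "shipped"),
         ("confirmed", "cancelled"), ("shipped", "delivered")])

    assert order.can_reach("created", "delivered")       # still reachable
    assert not order.can_reach("cancelled", "shipped")   # violated expectation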
Dynamics in urban environments encompasses complex processes and phenomena such as those related to movement (e.g., traffic, people) and development (e.g., construction, settlement). This paper presents novel methods for creating human-centric illustrative maps for visualizing movement dynamics in virtual 3D environments. The methods allow a viewer to gain rapid insight into traffic density and flow. The illustrative maps represent vehicle behavior as light threads. Light threads are a familiar visual metaphor caused by moving light sources producing streaks in a long-exposure photograph. A vehicle’s front and rear lights produce light threads that convey its direction of motion as well as its velocity and acceleration. The accumulation of light threads allows a viewer to quickly perceive traffic flow and density. The light-thread technique is a key element of effective visualization systems for analytic reasoning, exploration, and monitoring of geospatial processes.
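As a rough illustration of the accumulation idea only (the paper renders light threads in virtual 3D environments, which is not reproduced here), the following Python/numpy sketch additively splats sampled trajectories into a 2D intensity buffer, so that dense or slowly moving traffic produces brighter streaks; the trajectories are synthetic.

    # Minimal 2D sketch of the light-thread idea: sample vehicle trajectories
    # and additively accumulate them into an intensity buffer.
    import numpy as np

    def accumulate_light_threads(trajectories, width=256, height=256):
        """trajectories: list of (N, 2) arrays with x/y positions in [0, 1]."""
        buffer = np.zeros((height, width), dtype=np.float32)
        for path in trajectories:
            xs = np.clip((path[:, 0] * (width - 1)).astype(int), 0, width - 1)
            ys = np.clip((path[:, 1] * (height - 1)).astype(int), 0, height - 1)
            np.add.at(buffer, (ys, xs), 1.0)      # additive splatting per sample
        return buffer / max(buffer.max(), 1e-9)   # normalise for display

    # Two synthetic trajectories; denser sampling mimics a slower vehicle.
    t = np.linspace(0.0, 1.0, 400)
    slow = np.stack([t, 0.5 + 0.1 * np.sin(6 * t)], axis=1)
    fast = np.stack([t[::4], np.full(t[::4].shape, 0.3)], axis=1)
    image = accumulate_light_threads([slow, fast])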
For the interactive construction of CSG models, understanding the layout of a model is essential for its efficient manipulation. To understand the position and orientation of the aggregated components of a CSG model, we need to perceive its visible and occluded parts as a whole. Hence, transparency and enhanced outlines are key techniques to assist comprehension. We present a novel real-time rendering technique for visualizing the design and spatial assembly of CSG models. As enabling technology, we combine an image-space CSG rendering algorithm with blueprint rendering. Blueprint rendering applies depth peeling for extracting layers of ordered depth from polygonal models and then composes them in sorted order, facilitating clear insight into the models. We develop a solution for implementing depth peeling for CSG models considering their depth complexity. Capturing the surface colors of each layer and later combining the results allows for generating order-independent transparency as one major rendering technique for CSG models. We further define visually important edges for CSG models and integrate an image-space edge-enhancement technique for detecting them in each layer. In this way, we extract visually important edges that are directly and indirectly visible to outline a model’s layout. Combining edges with transparency rendering finally generates edge-enhanced depictions of image-based CSG models and allows us to grasp their complex spatial assembly.
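To make the role of ordered depth layers concrete, here is a small CPU-side Python sketch, not the paper's image-space GPU algorithm: per-pixel fragments are visited in depth order and composited front to back, which is the ordering that depth peeling extracts layer by layer and that yields order-independent transparency.

    # CPU-side illustration of what depth peeling makes possible: per-pixel
    # fragments composited front to back in depth order.
    def composite_front_to_back(fragments):
        """fragments: list of (depth, (r, g, b), alpha) for one pixel."""
        color = [0.0, 0.0, 0.0]
        transmittance = 1.0                       # how much light still passes
        for depth, rgb, alpha in sorted(fragments, key=lambda f: f[0]):
            for i in range(3):
                color[i] += transmittance * alpha * rgb[i]
            transmittance *= (1.0 - alpha)
            if transmittance < 1e-3:              # early out: pixel is opaque
                break
        return tuple(color), 1.0 - transmittance

    # A pixel covered by a semi-transparent near surface and an opaque far one.
    pixel = [(0.7, (0.0, 0.0, 1.0), 1.0),         # far, opaque blue
             (0.2, (1.0, 0.0, 0.0), 0.4)]         # near, 40% red
    print(composite_front_to_back(pixel))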
Contents:
1. Capitalist societies as market-bargaining societies on the basis of resources of action: The ideal-typical bargain between capital and labour; an alternative to Marx's theory of exploitation - Discussion of the model
2. A general typology of paths of societies in history and a characterisation of state socialism - People's capitalisms as perspective of development - What remains from Marx's ideas?
3. Variations of welfare capitalism after the decline of state socialism
3.1 National differences of welfare capitalism
3.2 Overall inequality of income and overall class consciousness
3.3 Explaining income inequality and variation in class consciousness by class and gender
3.3.1 A test of different class models in the FRG
3.3.2 Developing an international model of gendered occupational and employment status as bundles of resources of action
4. Summary
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems require much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and ETL transformations. Hard-to-understand attribute labels lead to frustration and time spent developing and understanding ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way, we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we expand to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
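The following Python sketch only gestures at the recommender-like idea and is not the technique evaluated in the paper: it learns abbreviation-to-word expansions from attribute-label pairs already mapped in existing workflows and applies the most frequent expansion to unseen labels. The training pairs and the one-to-one token alignment are simplifying assumptions.

    # Hedged sketch of the schema-decryption idea: learn abbreviation -> word
    # expansions from already-mapped attribute labels, then suggest decryptions.
    from collections import Counter, defaultdict

    def learn_expansions(mapped_pairs):
        """mapped_pairs: list of (cryptic_label, decrypted_label), '_'-separated."""
        votes = defaultdict(Counter)
        for cryptic, clear in mapped_pairs:
            abbrs, words = cryptic.split("_"), clear.split("_")
            if len(abbrs) != len(words):
                continue                  # skip pairs we cannot align 1:1
            for abbr, word in zip(abbrs, words):
                votes[abbr][word] += 1
        return {abbr: counts.most_common(1)[0][0] for abbr, counts in votes.items()}

    def decrypt(label, expansions):
        return "_".join(expansions.get(part, part) for part in label.split("_"))

    training = [("UNP_AMT", "UNPAID_AMOUNT"),
                ("PEN_AMT", "PENALTY_AMOUNT"),
                ("INT_RATE", "INTEREST_RATE")]
    expansions = learn_expansions(training)
    print(decrypt("UNP_PEN_INT", expansions))   # -> UNPAID_PENALTY_INTEREST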
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, which importantly includes the ability to live with temporary inconsistencies. In the case of model-driven software engineering, the employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
Transmorphic
(2016)
Defining Graphical User Interfaces (GUIs) through functional abstractions can reduce the complexity that arises from mutable abstractions. Recent examples, such as Facebook's React GUI framework, have shown how modelling the view as a functional projection from the application state to a visual representation can reduce the number of interacting objects and thus help to improve the reliability of the system. This, however, comes at the price of a more rigid, functional framework where programmers are forced to express visual entities with functional abstractions, detached from the way one intuitively thinks about the physical world.
In contrast to that, the GUI framework Morphic allows interactions in the graphical domain, such as grabbing, dragging, or resizing of elements, to evolve an application at runtime, providing liveness and directness in the development workflow. Modelling each visual entity through mutable abstractions, however, makes it difficult to ensure correctness when GUIs start to grow more complex. Furthermore, by evolving morphs at runtime through direct manipulation, we diverge more and more from the symbolic description that corresponds to the morph. Given that both of these approaches have their merits and problems, is there a way to combine them in a meaningful way that preserves their respective benefits?
As a solution to this problem, we propose to lift Morphic's concept of direct manipulation from the mutation of state to the transformation of source code. In particular, we will explore the design, implementation, and integration of a bidirectional mapping between the graphical representation and a functional, declarative symbolic description of a graphical user interface within a self-hosted development environment. We will present Transmorphic, a functional take on the Morphic GUI framework, where the visual and structural properties of morphs are defined in a purely functional, declarative fashion. In Transmorphic, the developer is able to assemble different morphs at runtime through direct manipulation, which is automatically translated into changes in the code of the application. In this way, the comprehensibility and predictability of direct manipulation can be used in the context of a purely functional GUI, while the effects of the manipulation are reflected in a medium that is always within reach for the programmer and can even be used to incorporate the source transformations into the source files of the application.
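As a hedged illustration of the "view as a functional projection from application state" half of this trade-off (this is not Transmorphic's implementation; the morph descriptions and the diff are hypothetical), a view can be a pure function from state to a declarative description whose successive results are diffed instead of mutating visual objects in place:

    # Illustrative sketch: a view as a pure projection from application state
    # to a declarative morph description, plus a naive structural diff.
    def view(state):
        """Pure projection: application state -> nested morph description."""
        return {"type": "rectangle", "extent": (200, 100),
                "submorphs": [{"type": "text", "string": f"count: {state['count']}"}]}

    def diff(old, new, path=""):
        """Collect the properties that changed between two descriptions."""
        changes = []
        for key in set(old) | set(new):
            if key == "submorphs":
                for i, (o, n) in enumerate(zip(old.get(key, []), new.get(key, []))):
                    changes += diff(o, n, f"{path}/submorphs[{i}]")
            elif old.get(key) != new.get(key):
                changes.append((f"{path}/{key}", old.get(key), new.get(key)))
        return changes

    before = view({"count": 1})
    after = view({"count": 2})
    print(diff(before, after))   # only the text string differs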
Transitional Justice
(2022)
This publication deals with the topic of transitional justice. In six case studies, the authors link theoretical and practical implications in order to develop some innovative approaches. Their proposals might help to deal more effectively with the transition of societies, legal orders and political systems.
Young academics from various backgrounds provide fresh insights and demonstrate the relevance of the topic. The chapters analyse transitions and conflicts in Sierra Leone, Argentina, Nicaragua, Nepal, and South Sudan as well as Germany’s colonial genocide in Namibia. Thus, the book provides the reader with new insights and contributes to the ongoing debate about transitional justice.
When realizing a programming language as a VM, implementing behavior as part of the VM, as a primitive, usually results in reduced execution times. But supporting and developing primitive functions requires more effort than maintaining and using code in the hosted language, since debugging is harder and the turn-around times for VM parts are higher. Furthermore, source artifacts of primitive functions are seldom reused in new implementations of the same language. And if they are reused, the existing API usually is emulated, reducing the performance gains. Because of recent results in tracing dynamic compilation, the trade-off between performance and ease of implementation, reuse, and changeability might now be decided differently.
In this work, we investigate the trade-offs when creating primitives, and in particular how large a difference remains between primitive and hosted function run times in VMs with a tracing just-in-time compiler. To that end, we implemented the algorithmic primitive BitBlt three times for RSqueak/VM. RSqueak/VM is a Smalltalk VM utilizing the PyPy RPython toolchain. We compare primitive implementations in C, RPython, and Smalltalk, showing that, due to the tracing just-in-time compiler, the performance gap has narrowed by one order of magnitude, to within one order of magnitude.
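For readers unfamiliar with the primitive in question, the following Python/numpy sketch shows roughly what BitBlt (bit block transfer) computes, namely copying a rectangular pixel block from a source form to a destination form under a combination rule; it is only an illustration and not one of the three RSqueak/VM implementations compared in the report.

    # Sketch of the BitBlt operation: copy a rectangular pixel block from a
    # source form into a destination form, optionally under a combination rule.
    import numpy as np

    def bitblt(dest, src, dest_origin, src_origin, extent, rule="over"):
        dy, dx = dest_origin
        sy, sx = src_origin
        h, w = extent
        block = src[sy:sy + h, sx:sx + w]
        if rule == "over":                        # plain copy
            dest[dy:dy + h, dx:dx + w] = block
        elif rule == "and":                       # bitwise combination rule
            dest[dy:dy + h, dx:dx + w] &= block
        return dest

    source = np.full((8, 8), 0xFF, dtype=np.uint8)
    target = np.zeros((16, 16), dtype=np.uint8)
    bitblt(target, source, dest_origin=(4, 4), src_origin=(0, 0), extent=(8, 8))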
Since the end of Apartheid, international tourism in South Africa has increasingly gained importance for the national economy. At the centre of this PKS issue’s attention is a particular form of tourism: township tourism, i.e. guided tours to the residential areas of the black population. About 300,000 tourists per year visit the townships of Cape Town. The tours are also called Cultural, Social, or Reality Tours. The different aspects of township tourism in Cape Town were the subject of a geographic field study, which was undertaken during a student research project of Potsdam University in 2007. The text at hand presents the empirical results of the field study and demonstrates how townships are constructed as spaces of tourism.
Version control is a widely used practice among software developers. It reduces the risk of changing their software and allows them to manage different configurations and to collaborate with others more efficiently. This is amplified by code sharing platforms such as GitHub or Bitbucket. Most version control systems track files (e.g., Git, Mercurial, and Subversion do), but some programming environments do not operate on files, but on objects instead (many Smalltalk implementations do). Users of such environments want to use version control for their objects anyway. Specialized version control systems, such as the ones available for Smalltalk systems (e.g., ENVY/Developer and Monticello), focus on a small subset of objects that can be versioned. Most of these systems concentrate on the tracking of methods, classes, and configurations of these. Other user-defined and user-built objects are either not eligible for version control at all, can be tracked only with complicated workarounds, or are captured in a fixed, domain-unspecific serialization format that does not suit all kinds of objects equally. Moreover, these version control systems that are specific to a programming environment require their own code sharing platforms; popular, well-established platforms for file-based version control systems cannot be used, or adapter solutions need to be implemented and maintained.
To improve the situation for version control of arbitrary objects, a framework for tracking, converting, and storing objects is presented in this report. It allows editions of objects to be stored in an exchangeable, existing backend version control system. The platforms of the backend version control system can thus be reused. Users and objects have control over how objects are captured for the purpose of version control. Domain-specific requirements can be implemented. The storage format (i.e., the file format, when file-based backend version control systems are used) can also vary from one object to another. Different editions of objects can be compared and sets of changes can be applied to graphs of objects. A generic way of capturing and restoring that supports most kinds of objects is described. It models each object as a collection of slots. Thus, users can begin to track their objects without first having to implement version control supplements for their own kinds of objects. The proposed architecture is evaluated using a prototype implementation that can be used to track objects in Squeak/Smalltalk with Git. The prototype improves the suboptimal standing of user objects with respect to version control described above and also simplifies some version control tasks for classes and methods. It also raises new problems, which are discussed in this report as well.
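A minimal sketch of the generic capture/restore idea, in Python rather than Squeak/Smalltalk and with hypothetical names: an edition of an object is modelled as a collection of slots that can be written to a file-based backend (here simply a JSON file standing in for a Git-backed store), restored, and compared. Object graphs, cycles, and domain-specific formats are out of scope.

    # Sketch: capture an object as a collection of slots, store the edition,
    # restore it, and compare two editions slot by slot.
    import json

    def capture(obj):
        """Snapshot an object's slots into a serializable edition."""
        return {"class": type(obj).__name__, "slots": dict(vars(obj))}

    def restore(edition, cls):
        obj = cls.__new__(cls)                   # bypass __init__ on purpose
        obj.__dict__.update(edition["slots"])
        return obj

    def diff_editions(old, new):
        """Names of slots whose values differ between two editions."""
        keys = set(old["slots"]) | set(new["slots"])
        return [k for k in keys if old["slots"].get(k) != new["slots"].get(k)]

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

    edition = capture(Point(1, 2))
    with open("point.json", "w") as f:           # the "backend" is just a file here
        json.dump(edition, f)
    copy = restore(edition, Point)
    print(diff_editions(edition, capture(Point(1, 5))))   # -> ['y']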
The correctness of model transformations is a crucial element for the model-driven engineering of high-quality software. A prerequisite to verifying model transformations at the level of the model transformation specification is that an unambiguous formal semantics exists and that the employed implementation of the model transformation language adheres to this semantics. However, for existing relational model transformation approaches it is usually unclear under which constraints particular implementations actually conform to the formal semantics. In this paper, we bridge this gap for the formal semantics of triple graph grammars (TGG) and an existing efficient implementation. Whereas the formal semantics assumes backtracking and ignores non-determinism, practical implementations do not support backtracking, require rule sets that ensure determinism, and include further optimizations. Therefore, we capture how the considered TGG implementation realizes the transformation by means of operational rules, define the required criteria, and show conformance to the formal semantics if these criteria are fulfilled. We further outline how static analysis can be employed to guarantee these criteria.
Touring Katutura!
(2016)
Guided sightseeing tours of the former township of Katutura have been offered in Windhoek since the mid-1990s. City tourism in the Namibian capital had thus become, at quite an early point in time, part of the trend towards utilising poor urban areas for purposes of tourism – a trend that set in at the beginning of the same decade. Frequently referred to as “slum tourism” or “poverty tourism”, the phenomenon of guided tours around places of poverty has not only been causing some media sensation and much public outrage since its emergence; in the past few years, it has developed into a vital field of scientific research, too. “Global Slumming” provides the grounds for a rethinking of the relationship between poverty and tourism in world society.
This book is the outcome of a study project of the Institute of Geography at the School of Cultural Studies and Social Science of the University of Osnabrueck, Germany. It represents the first empirical case study on township tourism in Namibia. It focuses on four aspects:
1. Emergence, development and (market) structure of township tourism in Windhoek
2. Expectations/imaginations, representations as well as perceptions of the township and its inhabitants from the tourist’s perspective
3. Perception and assessment of township tourism from the residents’ perspective
4. Local economic effects and the poverty-alleviating impact of township tourism
The aim is to make an empirical contribution to the discussion around the tourism-poverty nexus and to an understanding of the global phenomenon of urban poverty tourism.
Scrollytellings are an innovative form of web content. Combining the benefits of books, images, movies, and video games, they are a tool to tell compelling stories and provide excellent learning opportunities. Due to their multi-modality, creating high-quality scrollytellings is not an easy task. Different professions, such as content designers, graphics designers, and developers, need to collaborate to get the best out of the possibilities the scrollytelling format provides. Collaboration unlocks great potential. However, content designers cannot create scrollytellings directly and always need to consult with developers to implement their vision. This can result in misunderstandings. Often, the resulting scrollytelling will not match the designer’s vision sufficiently, causing unnecessary iterations. Our project partner Typeshift specializes in the creation of individualized scrollytellings for their clients. The existing solutions for authoring interactive content that we examined are not optimally suited for creating highly customized scrollytellings while still allowing all their elements to be manipulated programmatically. Based on Typeshift's experience and expertise, we developed an editor to author scrollytellings in the lively.next live-programming environment. In this environment, a graphical user interface for content design is combined with powerful possibilities for programming behavior with the morphic system. The editor allows content designers to take on large parts of the creation process of scrollytellings on their own, such as creating the visible elements, animating content, and fine-tuning the scrollytelling. Hence, developers can focus on interactive elements such as simulations and games. Together with Typeshift, we evaluated the tool by recreating an existing scrollytelling and identified possible future enhancements. Our editor streamlines the creation process of scrollytellings. Content designers and developers can now both work on the same scrollytelling. Because the editor lives inside the lively.next environment, both can work with a set of tools familiar to them. Thus, we mitigate unnecessary iterations and misunderstandings by enabling content designers to realize large parts of their vision of a scrollytelling on their own, while developers can add advanced and individual behavior. Developers and content designers thereby benefit from a clearer distribution of tasks while keeping the benefits of collaboration.
In this study we examine the tonal organization of a series of recordings of liturgical chants, sung in 1966 by the Georgian master singer Artem Erkomaishvili. This dataset is the oldest corpus of Georgian chants from which the time synchronous F0-trajectories for all three voices have been reliably determined (Müller et al. 2017). It is therefore of outstanding importance for the understanding of the tuning principles of traditional Georgian vocal music.
The aim of the present study is to use various computational methods to analyze what these recordings can contribute to the ongoing scientific dispute about traditional Georgian tuning systems. The starting point for the present analysis is the re-release of the original audio data together with estimated fundamental frequency (F0) trajectories for each of the three voices, beat annotations, and digital scores (Rosenzweig et al. 2020). We present synoptic models for the pitch and the harmonic interval distributions, which are the first such models for which the complete Erkomaishvili dataset was used. We show that these distributions can be expressed very compactly as Gaussian mixture models, anchored on discrete sets of pitch or interval values for the pitch and interval distributions, respectively. As part of our study we demonstrate that these pitch values, which we refer to as scale pitches and which are determined as the mean values of the Gaussian mixture elements, define the scale degrees of the melodic sound scales which form the skeleton of Artem Erkomaishvili’s intonation. The observation of consistent pitch bending of notes in melodic phrases, which appear in identical form in a group of chants, as well as the observation of harmonically driven intonation adjustments, which are clearly documented for all pure harmonic intervals, demonstrate that Artem Erkomaishvili intentionally deviates from the scale pitch skeleton quite freely. As a central result of our study, we show that this melodic freedom is always constrained by the attracting influence of the scale pitches. Deviations of the F0 values of individual note events from the scale pitches at one instant in time are compensated for in the subsequent melodic steps. This suggests a deviation-compensation mechanism at the core of Artem Erkomaishvili’s melody generation, which clearly honors the scales but still allows for a large degree of melodic flexibility. This model, which summarizes all partial aspects of our analysis, is consistent with the melodic scale models derived from the observed pitch distributions, as well as with the melodic and harmonic interval distributions. Beyond these tangible results, we believe that our work has general implications for the determination of tuning models from audio data, in particular for non-tempered music.
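The modelling step described above can be sketched as follows; this is only an illustration with synthetic F0 values in cents and scikit-learn's GaussianMixture, not the study's actual procedure on the Erkomaishvili dataset. The component means play the role of the scale pitches.

    # Sketch: fit a Gaussian mixture model to F0 values (synthetic, in cents)
    # and read the component means as scale pitches.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    true_scale = np.array([0, 180, 350, 530, 700])           # cents above a reference
    f0_cents = np.concatenate(
        [rng.normal(loc=p, scale=25, size=300) for p in true_scale])

    gmm = GaussianMixture(n_components=len(true_scale), random_state=0)
    gmm.fit(f0_cents.reshape(-1, 1))

    scale_pitches = np.sort(gmm.means_.ravel())
    print("estimated scale pitches (cents):", np.round(scale_pitches, 1))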
This paper studies the persistence of daily returns of 21 German stocks from 1960 to 2008. We apply a widely used test based upon the modified R/S method by Lo [1991]. As an extension to Lux [1996] and Carbone et al. [2004], and in analogy to moving averages or moving volatility, the statistic is calculated for moving windows of length 4, 8, and 16 years for every time series. Periods of persistence or long memory in returns can be found in some but not all time series. The robustness of the results is verified by investigating stationarity and short memory effects.
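A hedged sketch of Lo's modified R/S statistic evaluated on moving windows, with numpy and synthetic returns; it follows the textbook formulation with Newey-West weighted autocovariances and makes no claim to reproduce the study's exact implementation or critical values.

    # Lo's (1991) modified R/S statistic on moving windows (illustrative only).
    import numpy as np

    def modified_rs(x, q):
        x = np.asarray(x, dtype=float)
        n = len(x)
        d = x - x.mean()
        partial = np.cumsum(d)
        r = partial.max() - partial.min()             # range of partial sums
        sigma2 = np.mean(d * d)
        for j in range(1, q + 1):                     # Newey-West weighted autocovariances
            gamma_j = np.mean(d[j:] * d[:-j])
            sigma2 += 2.0 * (1.0 - j / (q + 1.0)) * gamma_j
        return r / (np.sqrt(sigma2) * np.sqrt(n))     # Lo's V_n(q)

    def moving_window_rs(returns, window, q, step=1):
        return [modified_rs(returns[s:s + window], q)
                for s in range(0, len(returns) - window + 1, step)]

    rng = np.random.default_rng(1)
    returns = rng.normal(0.0, 0.01, size=4000)        # stand-in for daily returns
    print(moving_window_rs(returns, window=1000, q=10, step=500))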
Time series analysis
(2004)
This volume offers new arguments and perspectives in the ongoing debate about the optimal analysis of verb movement, mainly, but not exclusively, in German. Fanselow and Meinunger deal with verb second (V2) movement in German main clauses. Fanselow argues that head movement of the substitution type follows the standard minimalist conceptions of Merge and Move and is therefore not subject to the same objections as head movement of the head-adjunction type, which violates Chomsky's minimalist extension condition, operates countercyclically, and fails to let the moved head c-command its trace. Fanselow argues for V2 movement as head movement of the substitution type. Meinunger discusses a restriction on V2 movement imposed by phrases like "mehr als" ('more than'), as in "Der Wert hat sich weit mehr als verdreifacht" ('the value has far more than tripled') where V2 movement is ruled out (cf. *"Der Wert verdreifachte sich mehr als"). Meinunger claims that this restriction is best analysed in phonological terms: the preposition/complementiser "als" acts as a prefixal clitic to its host, the finite verb, which therefore may not move without it. With respect to the V2 debate, Meinunger argues for an interface perspective. He shows that V2 is restricted from both the conceptual and the phonological interface. Vogel, finally, discusses the syntax of clause-final verbal complexes and their dialectal variation in German. He compares three different syntactic analyses, a minimalist head movement analysis, a minimalist XP movement analysis, and an Optimality-theoretic PF movement analysis. The three accounts are evaluated relative to the additional assumptions they have to make, the complications they face and how they fit the observations. Vogel argues in favour of the phonologically oriented OT analysis because of its ability to create a direct link between the coming about of a particular word order pattern and its basically phonological trigger. Each of the three papers recognises the relevance of surface forms in the analysis of German verb movement. They differ, however, in the extent to which phonological aspects take part in the explanations they offer.
“The UN Peacebuilding Commission – Lessons from Sierra Leone” by political scientist Andrea Iro is an assessment of the United Nations Peacebuilding Commission (PBC) and the United Nations Peacebuilding Fund (PBF) by analysing their performance over the last two years in Sierra Leone, one of the first PBC focus countries. The paper explores the key question of how the PBC/PBF’s mandate has been translated into operational practice in the field. It concludes that though the overall impact has been mainly positive and welcomed by the country, translating the general mandate into concrete activities remains a real challenge at the country level.
Contents:
Artem Polyvyanyy, Sergey Smirnov, and Mathias Weske: The Triconnected Abstraction of Process Models
1 Introduction
2 Business Process Model Abstraction
3 Preliminaries
4 Triconnected Decomposition
4.1 Basic Approach for Process Component Discovery
4.2 SPQR-Tree Decomposition
4.3 SPQR-Tree Fragments in the Context of Process Models
5 Triconnected Abstraction
5.1 Abstraction Rules
5.2 Abstraction Algorithm
6 Related Work and Conclusions
As in all countries in transition, the tax as well as the transfer system have been under serious reform pressure. The socialist systems were not able to fulfil the necessary functions of providing a certain degree of redistribution and social security, which are indispensable for socially oriented market economies. Increasing income and wage differentiation is one of the most important prerequisites for a market-oriented, ability-to-pay tax system. But in the transformation period, numerous quasi-legal or even illegal property transactions have taken place, leading to wealth concentrations on the one hand, while, as a consequence of the bankruptcy of socialism, enormous poverty problems have arisen on the other. For the political acceptance of the transformation process it is of utmost importance that an efficient and fair tax system is implemented and that social security is organised by the state on a level which secures at least the physical minimum of subsistence or – if economically possible – even a socio-cultural minimum. Whether the state should go further in providing compulsory social insurance systems has been a hotly debated topic for decades, even in the welfare and social states of the Western type.

Whereas basic security systems have to be financed by general tax revenue, for a compulsory social insurance system – due to its insurance character – special earmarked social security contributions are considered necessary. Both public goods and services as well as at least basic security have to be financed by total tax revenue. For the acceptance and fairness of the whole system, the total redistributive effect of both sides of the budget – the tax system as well as the expenditure system – is decisive. In this paper we concentrate on the revenue side, i.e. on taxes as well as on social security contributions.

Adam Smith had already formulated some very simple tax norms which have been transformed in modern tax theory. The equivalence as well as the ability-to-pay principle are basic yardsticks for every tax system in a democratically oriented market system, not to forget tax fairness. In the historical development process, equity-oriented measures have often produced an enormous complexity of the single taxes as well as of the whole tax system. Therefore, reconsidering the Smithian principles of simplicity and of minimum compliance costs for the taxpayer would press even many Western European tax systems to undergo serious reform processes, which are often delayed because of intense interest group influence. Hence, a modern tax system is a simple one which consists only of a few single taxes that are easy to administer. Such a system consists of two main taxes, the income tax and the value added tax. Consequently, in all countries of transition both taxes have been implemented, and their implementation was fostered by the fact that both also constitute typical components of the EU member states' tax systems. Therefore, such a harmonising tax reform is the most important prerequisite for becoming a membership candidate. Bulgaria also tried to follow this general pattern, reforming the income tax system starting in 1992 and replacing the old socialist turnover tax and excise duty system by the value added tax (VAT) in 1994. Especially with regard to the income tax system, the demand for simplicity has not yet been met.
Complex rules for defining the tax base as well as a steeply progressive tax schedule have led to behavioral adaptations, which are further strengthened by the effects of a high social contribution burden that is predominantly laid on the employers. In the following, some concise descriptions of the tax and social contribution system are given; the paper closes with a summary in which the impacts of the system are evaluated and some policy recommendations for further reforms are presented.
Between 2002 and 2006, the Colombian government of Álvaro Uribe enjoyed great international support in handling a demobilization process of right-wing paramilitary groups, along with the implementation of transitional justice policies such as penal prosecutions and the creation of a National Commission for Reparation and Reconciliation (NCRR) to address justice, truth and reparation for victims of paramilitary violence. The demobilization process began when, in 2002, the United Self-Defence Forces of Colombia (Autodefensas Unidas de Colombia, AUC) agreed to participate in a government-sponsored demobilization process. Paramilitary groups were responsible for the vast majority of human rights violations over a period of more than 30 years. The government designed a special legal framework that envisaged great leniency for paramilitaries who committed serious crimes and reparations for victims of paramilitary violence. More than 30,000 paramilitaries demobilized under this process between January 2003 and August 2006. Law 975, also known as the “Justice and Peace Law”, and Decree 128 have served as the legal framework for the demobilization and prosecution of paramilitaries. This framework has offered the prospect of reduced sentences to demobilized paramilitaries who committed crimes against humanity in exchange for full confessions of crimes, restitution of illegally obtained assets, the release of child soldiers and the release of kidnapped victims, and it has also provided reparations for victims of paramilitary violence. The Colombian demobilization process presents an atypical case of transitional justice. Many observers have even questioned whether Colombia can be considered a case of transitional justice at all. Transitional justice measures are often taken up after the change of an authoritarian regime or at a post-conflict stage. The particularity of the Colombian case, however, is that transitional justice policies were introduced while the conflict still raged. In this sense, the Colombian case expresses one of the key elements to be addressed, namely the tension between offering incentives to perpetrators to disarm and demobilize in order to prevent future crimes and providing an adequate response to the human rights violations perpetrated throughout the course of an internal conflict. In particular, disarmament, demobilization and reintegration processes require a fine balance between the immunity guarantees offered to ex-combatants and the pursuit of accountability for their crimes. International law provides the legal framework defining the rights to justice, truth and reparations for victims and the corresponding obligations of the State, but peace negotiations and conflicted political structures do not always allow for the fulfillment of those rights. Thus, the aim of this article is to analyze what kind of transition may be occurring in Colombia by focusing on the role that transitional justice mechanisms may play in political negotiations between the Colombian government and paramilitary groups. In particular, it seeks to address to what extent such processes contribute to or hinder the achievement of a balance between peacebuilding and accountability, and thus facilitate a real transitional process.
Of Rawls's two principles of justice, only the second has received attention from economists. The second principle is concerned with the social and economic conditions in a just society. The first principle, however, has largely been neglected. It claims that all people in society should have equal basic liberties. In this paper, Rawls's first principle is characterised in a freedom-of-choice framework. The analysis reveals conceptual problems of the Rawlsian approach to justice.
This master’s thesis examined internet content regulation in Germany from the perspective of Public-Private Partnerships (PPPs). In the European Union, there has been a recent trend of initiatives aiming to combat illegal content online under a self-regulatory regime. Concerns about this trend were that transparency cannot be ensured properly enough to safeguard the freedom of expression, and that private intermediaries are not able to carry out effective regulation under a non-binding regulatory process. Due to these issues, Germany legislated the Network Enforcement Act (NetzDG) in 2017. This thesis used mixed methods within a case study research design in order to identify the PPP type of the NetzDG and to understand its link to transparency and effectiveness, as well as the relationship between these two dimensions. Taking an exploratory sequential design, German internet content regulation under the NetzDG was explored to understand its co-regulatory regime and to develop an instrument to measure aspects of transparency and effectiveness. Then, the three big social media platforms, YouTube, Twitter, and Facebook, were examined according to the developed indicators. This thesis concluded as follows: First, the enactment of the NetzDG brought a shift of the regulatory paradigm from self-regulation to co-regulation. Yet, the actor-inclusive institutional arrangement of the NetzDG did not successfully result in the actual inclusion of actors in decision-making, but only improved result transparency in the disclosure of take-down actions. Second, the level of effective regulation was not consistent across the three social media platforms under this regime. Despite these limitations, this study showed that the transparency and the effectiveness of the social media platforms’ implementation gradually improved together, instead of being negatively correlated with one another.
The polit-economic situation in Germany : chances for changes in resource and energy economics
(2002)
Contents:
Regional Management, Land Use and Energy Production
- Biophysical View
- First Hypothesis
- International and Interregional Cooperation
- Second Hypothesis
- Partnership with Nature
Sustainability and the Agricultural Sector
- Traditional Farming
- Mono-cultural Bio-industry
- Liquid Manure Problems
- Clean Drinking Water
- Integrated Agro-industrial System
- Ecological Farming
- Ecotones and Bio-manipulation
Regional Economic and Agricultural Policy
- New Roles for the Agricultural Sector
Contents:
Introduction (The Editors)
Basic Notions of Information Structure (Manfred Krifka)
Notions of Focus Anaphoricity (Mats Rooth)
Topic and Focus: Two Structural Positions Associated with Logical Functions in the Left Periphery of the Hungarian Sentence (Katalin É. Kiss)
Direct and Indirect Aboutness Topics (Cornelia Endriss & Stefan Hinterwimmer)
Information Structure as Information-based Partition (Satoshi Tomioka)
Focus Presuppositions (Dorit Abusch)
Contrastive Focus, Givenness and the Unmarked Status of “Discourse-new” (Elisabeth O. Selkirk)
Contrastive Focus (Malte Zimmermann)
The Fallacy of Invariant Phonological Correlates of Information Structural Notions (Caroline Féry)
Notions and Subnotions of Information Structure (Carlos Gussenhoven)
The Restricted Access of Information Structure to Syntax – A Minority Report (Gisbert Fanselow)
Focus and Tone (Katharina Hartmann)
This paper presents in the first section a methodological introduction concerning statistics of consumer prices in Georgia. The second section gives a general idea of the development of consumer prices from January 1994 to September 1999. A detailed regional analysis is added in section 3. The fourth section analyses the development of consumer prices for the eight main groups included in the total CPI. Section 5 compares the changes in the Georgian CPI with the movements of foreign exchange rates in Georgian Lari. The paper ends with a summary including a short outlook on the coming years.
Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet its representation is not sufficiently supported by programming languages. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In general, COP could be used to manage any context-specific behavior. However, its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectures, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, implementations redesigned with JCop are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications. However, a complete language specification has not been presented so far. This report presents the entire JCop language, including the syntax and semantics of its new language constructs.
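The following Python sketch illustrates the core COP mechanism of dynamically activated layers; it is not JCop (which extends Java with dedicated language constructs and a declarative aspect language), and the layer, function, and example names are made up.

    # Illustration of context-oriented programming: behavioral variations are
    # grouped into layers that can be activated for a dynamic scope.
    from contextlib import contextmanager

    _active_layers = []

    @contextmanager
    def with_layer(name):
        _active_layers.append(name)
        try:
            yield
        finally:
            _active_layers.pop()

    def layered(variations):
        """Dispatch to the variation of the innermost active layer, if any."""
        def decorator(base):
            def wrapper(*args, **kwargs):
                for layer in reversed(_active_layers):
                    if layer in variations:
                        return variations[layer](*args, **kwargs)
                return base(*args, **kwargs)
            return wrapper
        return decorator

    @layered({"offline": lambda city: f"{city}: cached forecast"})
    def forecast(city):
        return f"{city}: live forecast from the network"

    print(forecast("Potsdam"))                    # base behavior
    with with_layer("offline"):                   # context-dependent variation
        print(forecast("Potsdam"))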
This paper reviews theoretical and empirical evidence on the impact of asset price movements on real economic activity. A key channel is the wealth effect on consumption. Fluctuations in stock prices and housing prices influence households' wealth and could have important impacts on household consumption. In addition, stock prices may affect corporate sector investment, and property prices may affect building activity. Here, the method of cointegration is used to estimate the wealth effect and the investment effect in aggregate time series for Germany after the reunification in 1990. Moreover, we discuss the role of asset prices in the monetary policy strategy of the ECB.
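An Engle-Granger style check of a wealth effect can be sketched as follows with statsmodels on synthetic data; this is only an illustration of the method of cointegration mentioned above, not the paper's specification for the German aggregates.

    # Sketch: test consumption and wealth for cointegration and estimate the
    # long-run relationship by OLS (synthetic series, illustrative only).
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import coint

    rng = np.random.default_rng(2)
    n = 200
    wealth = np.cumsum(rng.normal(0.0, 1.0, n))             # random-walk wealth proxy
    consumption = 0.05 * wealth + rng.normal(0.0, 0.2, n)   # tied to wealth in the long run

    t_stat, p_value, _ = coint(consumption, wealth)
    print(f"cointegration test p-value: {p_value:.3f}")

    long_run = sm.OLS(consumption, sm.add_constant(wealth)).fit()
    print("estimated long-run wealth effect:", round(long_run.params[1], 3))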
West of Potsdam’s city center lies the Golm Campus, the largest campus of the University of Potsdam. Its different buildings tell of the numerous institutions that were established at this site over the years: From the mid-1930s, the Walther Wever Barracks were located here. From 1943, the site housed the Air Intelligence Division of the Supreme Commander of the German Air Force. In 1951, a training institution of the Ministry of State Security moved in, which existed until 1989 under different names. In July 1991, the newly founded University of Potsdam took over the premises, which are now part of the Potsdam-Golm Science Park.
The book takes you on a historic journey of the place and invites you to take a walk across today’s campus. The book includes over 110 photos and a detailed map.
Creating fonts is a complex task that requires expert knowledge in a variety of domains. Often, this knowledge is not held by a single person but spread across a number of domain experts. A central concept needed for designing fonts is the glyph, an elemental symbol representing a readable character. The required domains include designing glyph shapes, engineering rules to combine glyphs for complex scripts, and checking legibility. This process is most often iterative and requires communication in all directions. This report outlines a platform that aims to enhance the means of communication, describes our prototyping process, discusses complex font rendering and editing in a live environment, and presents an approach to generating code based on a user’s live edits.
Content:
I. The nature and form of international law
1. The acceptance of the existence of an international legal order
2. The legal position of the individual in international law
II. Obligations of states in the protection of international human rights
1. Treaty-based human rights obligations
2. The nature of treaty-based human rights obligations
3. The ”absolute” and ”objective” character of human rights treaty obligations
4. Human rights conventions as self-contained regimes
5. The problem of characterisation of human rights obligations of states
III. Human rights obligations arising from general principles of international law
1. Obligations erga omnes and human rights norms
2. The outlawing of genocide as obligation erga omnes
3. Protection from slavery as obligation erga omnes
4. The outlawing of acts of aggression as obligation erga omnes
5. Protection from racial discrimination as obligation erga omnes
6. The basic rights of the human person as obligation erga omnes
7. Jus Cogens and the search for peremptory norms of human rights
8. International crimes and human rights norms
9. The relationship between the concepts: erga omnes, jus cogens, international crime and human rights
IV. International instruments for the coercive enforcement of state obligations to ‘respect and ensure’ human rights
1. Countermeasures as consequences of breach of treaties in international law
2. Application of reprisals for the enforcement of treaty-based human rights obligations
3. Intervention for the protection of human rights in international law
4. Intervention by the Security Council for the protection of human rights: the situation before the East-West détente
5. Humanitarian intervention after the end of the Cold War
6. The legal nature of ECOWAS intervention in the Liberian Civil War
7. The legality of NATO’s intervention in Kosovo
8. Some instances of intervention with mixed motives
V. Non-forceful measures for the enforcement of states’ human rights obligations
1. Economic and financial pressure as means of enforcing states’ obligation to respect and observe human rights
2. The application of the clausula rebus sic stantibus for the protection of human rights
3. The enforcement of human rights through the World Bank
4. The enforcement of human rights through the ILO
5. Diplomatic recognition as an instrument for securing a state's respect and promotion of human rights
6. Refusal to comply with an extradition agreement as a means of enforcing a state’s human rights obligations
7. Denial of immunity as a means of enforcing a state’s human rights obligations
8. Publicity as an instrument for the enforcement of human rights
VI. Judicial enforcement of state obligations to ‘respect and ensure’ human rights
1. Enforcement of human rights through International Criminal Tribunals
2. The International Criminal Tribunal for Yugoslavia
3. The International Criminal Tribunal for Rwanda
4. The International Special Court of Sierra Leone
Résumé
After promising beginnings towards transformation in 1991, the Bulgarian economy fell into deep crisis in the period from 1995 to 1997. Social policy, already overstrained due to the demands of transition, was unable to cope effectively with the rapidly spreading state of emergency. The following essay analyses the development of the social indicators and instruments of social security in the years 1990 to 1998. In addition to unemployment and unemployment insurance, the issues of pensions and poverty will also be examined.
In current practice, business process modeling is done by trained method experts. Domain experts are interviewed to elicit their process information but are not involved in modeling. We created a haptic toolkit for process modeling that can be used in process elicitation sessions with domain experts. We hypothesize that this leads to more effective process elicitation. This paper breaks down "effective elicitation" into 14 operationalized hypotheses. They are assessed in a controlled experiment using questionnaires, process model feedback tests, and video analysis. The experiment compares our approach to structured interviews in a repeated measurement design. We executed the experiment with 17 student clerks from a trade school. They represent potential users of the tool. Six out of fourteen hypotheses showed a significant difference due to the method applied. Subjects reported more fun and more insights into process modeling with tangible media. Video analysis showed significantly more reviews and corrections applied during process elicitation. Moreover, people took more time to talk and think about their processes. We conclude that tangible media creates a different working mode for people in process elicitation, with fun, new insights, and instant feedback on preliminary results.
The aim of the work was to present the results of analyses of the economic standing of the partnership companies which lease agricultural real estate from the Agricultural Property Agency of the State Treasury (APA) in 1996 and 1997. The analyses revealed the poor economic condition of the firms under investigation and especially their low level of stabilisation (the index of total debt was equal to 0.88 in 1996 and 0.96 in 1997) and their low level of solvency.
The development of the Polish telecommunications administration in the years 1989/90 to 2003 is marked by the processes of liberalisation and privatisation that the telecommunications sector underwent during that period. The gradual liberalisation of the Polish telecommunications sector started as early as 1992. In the beginning, national strategies were pursued. The most important of these was the creation of a bipolar market structure in the local area networks. In the second half of the 1990s, the approaching EU membership accelerated the process of liberalisation and consequently the development of a framework of regulations. EU standards are directed more towards setting out a legal framework for regulation than prescribing concrete details of administrative organisation. Nevertheless, the independent regulatory agencies typical for Western Europe served as a model for the introduction of a new regulatory body responsible for the telecommunications sector in Poland. The growing influence of EU legislation changed telecommunications policy as well as administrative practices. There has been a shift of responsibilities from the ministry to the regulatory agency, but the question remains whether the agency has gained enough power to fulfil its regulatory function. In the following, the legislative framework created by the EU in telecommunications policy will be described and the model of independent regulatory agencies, as it is typical for most EU countries, will be introduced. Some categories for the analysis of the Polish regulatory system will be deduced from the discussion on the regulation of telecommunications in the established EU nations (see Böllhoff 2002 and 2003, Thatcher 2002a and 2002b, Thatcher/Stone Sweet 2002). Subsequently, the basic features of Polish telecommunications policy in the 1990s and its effects on the telecommunications sector will be outlined. In the third chapter, the development of organisational structures on the ministerial level and within the regulatory agency will be examined. In the fourth chapter, I will look at the distribution of power and the coordination of the various authorities responsible for telecommunications regulation. The focus of this chapter is on the Polish regulatory agency and its relationships with the ministry, the anti-monopoly office, and the Broadcasting and Television Council. In the conclusion, the main findings will be summed up.
Despite its many challenges and limitations, the concept of in situ upgrading of informal settlements has become one of the most favoured approaches to the housing crisis in the ‘Global South’. Due to its inherent principles of incremental in situ development, prevention of relocations, protection of local livelihoods, and democratic participation and cooperation, this approach is often perceived to be more sustainable than other housing approaches that often rely on quantitative housing delivery and top-down planning methodologies. While this study does not question the benefits of the in situ upgrading approach, it seeks to identify problems of its practical implementation within a specific national and local context. The study discusses the origin and importance of this approach on the basis of a review of international housing policy development and analyses the broader political and social context of the incorporation of this approach into South African housing policy. It further uses insights from a recent case study in Cape Town to determine complications and conflicts that can arise when applying in situ upgrading of informal settlements in a complex local context. On that basis, benefits and limitations of the in situ upgrading approach are specified and prerequisites for its successful implementation are formulated.
This collection contains 13 papers presented in the workshop on "The Celtic Languages in Contact" organised by Hildegard L. C. Tristram at the XIII International Celtic Congress in Bonn (Germany), July 23rd - 27th, 2007. The authors of two papers from another section also contributed their papers to this volume, as they deal with closely related issues. The time-span covered ranges from potential pre-historic contacts of Celtic with Altaic languages or Nostratic cognates in Celtic, through the hypothesis of Afro-Asiatic as a possible substrate for Celtic, Latin and Gaulish contacts in Gaul, and the impact of Vulgar Latin on the formation of the Insular Celtic languages as a linguistic area (Sprachbund), to various contact scenarios involving the modern Insular Celtic languages as well as English and French. The final paper reflects on the political status of the modern Insular Celtic languages in the Europe of the 27 EU countries.
What is "Celtic"and what is universal in the "Celtic Englishes"? This was the central concern of the fourth and final Colloquium of studies on language contact between English and the Celtic languages at the University of Potsdam in September 2004. The contributions to this volume discuss the "Celtic" peculiarities of Standard English in England and in Ireland (North and South). They also examine the perceived "Celticity" of personal names in the "Celtic" countries (Ireland, Wales, Cornwall, Brittany). Moreover, they put emphasis on specific grammatical features such as the expression of perfectivity, relativity, intensification and the typological shift of verbal word formation from syntheticity to analycity as well as the emergence of universal contact trends shared by Celtic, African and Indian Englishes. Thus, the choice of contributors and the scope of their articles makes Celtic Englishes IV an invaluable handbook for scholarly work in the field of the English - Celtic relations.
The attractiveness of foreign direct investment in Russia and Ukraine : a statistical analysis
(1999)
In this paper, a comparative exploration of the potential for foreign investment in and real inflows to Russia and Ukraine is presented. The analysis showed that initially both countries enjoyed significant comparative advantages in attracting foreign capital. After the foundation of independent states in 1992, however, their attractiveness began to diverge dramatically. This difference is clearly explained by the determination of the Russian government to reform the economy earlier than the Ukrainian government. The transition to a market economy is closely connected with the development of a favorable investment climate in both countries. This includes the foundation of a stable system of property rights and a conducive legal environment.
The Apache Modeling Project
(2004)
This document presents an introduction to the Apache HTTP Server, covering both an overview and implementation details. It presents the results of the Apache Modelling Project, carried out by research assistants and students of the Hasso-Plattner-Institut in 2001, 2002 and 2003. The Apache HTTP Server was used to introduce students to the application of the modeling technique FMC, a method that supports transporting knowledge about complex systems in the domain of information processing (software as well as hardware). After an introduction to HTTP servers in general, we will focus on protocols and web technology. Then we will discuss Apache, its operational environment and its extension capabilities – the module API. Finally, we will guide the reader through parts of the Apache source code and explain the most important pieces.
Technical report
(2019)
The design and implementation of service-oriented architectures pose a huge number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
The European Values Education (EVE) project is a large-scale, cross-national, and longitudinal survey research program on basic human values. The main topic of its first stage was "work" in Europe. Student teachers of several universities in Europe worked together in multicultural exchange groups. Their results are presented in this issue.
The European Values Education (EVE) project is a large-scale, cross-national, and longitudinal survey research programme on basic human values. The main topic of its second stage was religion in Europe. Student teachers of several universities in Europe worked together in multicultural exchange groups. Their results are presented in this issue.
The European Values Education (EVE) project is a large-scale, cross-national, and longitudinal survey research programme on basic human values. The main topic of its second stage was family values in Europe. Student teachers of several universities in Europe worked together in multicultural exchange groups. Their results are presented in this issue.
Graphs are ubiquitous in Computer Science. For this reason, in many areas, it is very important to have the means to express and reason about graph properties. In particular, we want to be able to check automatically if a given graph property is satisfiable. Actually, in most application scenarios it is desirable to be able to explore graphs satisfying the graph property if they exist or even to get a complete and compact overview of the graphs satisfying the graph property.
We show that the tableau-based reasoning method for graph properties as introduced by Lambers and Orejas paves the way for a symbolic model generation algorithm for graph properties. Graph properties are formulated in a dedicated logic making use of graphs and graph morphisms, which is equivalent to first-order logic on graphs as introduced by Courcelle. Our parallelizable algorithm gradually generates a finite set of so-called symbolic models, where each symbolic model describes a set of finite graphs (i.e., finite models) satisfying the graph property. The set of symbolic models jointly describes all finite models for the graph property (complete) and does not describe any finite graph violating the graph property (sound). Moreover, no symbolic model is already covered by another one (compact). Finally, the algorithm is able to generate a minimal finite model from each symbolic model immediately and allows for an exploration of further finite models. The algorithm is implemented in the new tool AutoGraph.
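As a deliberately naive contrast to the symbolic approach described above, the following sketch enumerates all directed graphs over a small fixed node set and collects those that satisfy a simple graph property (here: every node has an outgoing edge). The node count, the property, and the brute-force enumeration are illustrative assumptions only; the sketch does not reflect the tableau-based algorithm or the AutoGraph tool, it merely makes the notions of finite models and their exploration concrete.

    from itertools import chain, combinations

    NODES = range(3)  # illustrative: directed graphs over three nodes
    EDGES = [(u, v) for u in NODES for v in NODES if u != v]

    def satisfies(edge_set):
        # Example property: every node has at least one outgoing edge.
        return all(any((u, v) in edge_set for v in NODES if v != u) for u in NODES)

    def powerset(items):
        return chain.from_iterable(combinations(items, k) for k in range(len(items) + 1))

    # Brute-force enumeration of all finite models over the fixed node set.
    models = [set(es) for es in powerset(EDGES) if satisfies(set(es))]
    minimal = min(models, key=len)
    print(len(models), "satisfying graphs; a minimal one:", sorted(minimal))

In contrast to such exhaustive enumeration, each symbolic model produced by the algorithm stands for a whole set of such finite graphs at once.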
The value concept of traditional resource economics is welfare. Therefore, sustainability of welfare is often taken to characterise our obligations to future generations. This paper argues that this view is inappropriate because it leaves no room for future generations' autonomy. Future generations should be free to make their own decisions. Consequently, freedom of choice is the appropriate value concept on which resource economics should be based. The concept of sustainability receives a new interpretation: sustainability is a principle of intertemporal distributive justice which requires equitable opportunities across generations.
It is predicted that Service-oriented Architectures (SOA) will have a high impact on future electronic business and markets. Services provide a self-contained and standardised interface towards business and are considered the future platform for business-to-business and business-to-consumer trades. Given the complexity of real-world business scenarios, there is a strong need for easy, flexible and automated creation and enactment of service compositions. This survey explores the relationship of service composition with workflow management, a technology and concept already in use in many business environments. The similarities between the two and the key differences between them are elaborated. Furthermore, methods for the composition of services, ranging from manual via semi-automated to fully automated composition, are sketched. This survey concludes that current tools for service composition are in an immature state and that there is still much research to do before service composition can be used easily and conveniently in real-world scenarios. However, since automated service composition is a key enabler for the full potential of Service-oriented Architectures, further research in this field is imperative. This survey closes with a formal sample scenario, presented in appendix A, to give the reader an impression of how fully automated service composition works.
IT systems for healthcare are a complex and exciting field. On the one hand, there is a vast number of improvements and work alleviations that computers can bring to everyday healthcare; some forms of treatment, diagnosis and organisational tasks were even made possible by computer usage in the first place. On the other hand, there are many factors that encumber computer usage and make the development of IT systems for healthcare a challenging, sometimes even frustrating task. These factors are not solely technology-related, but equally social or economic. This report describes some of the idiosyncrasies of IT systems in the healthcare domain, with a special focus on legal regulations, standards and security.
STG decomposition is a promising approach to tackle the complexity problems arising in the logic synthesis of speed-independent circuits, a robust asynchronous (i.e. clockless) circuit type. Unfortunately, STG decomposition can result in components that in isolation have irreducible CSC conflicts. Generalising earlier work, it is shown how to resolve such conflicts by introducing internal communication between the components via structural techniques only.
This paper investigates the formation of the ownership structure and the corporate governance system of Ukraine as a country in transition. Numerous studies consider that privatization results in the establishment of a proprietors' motivation mechanism; on the other hand, it causes ownership concentration in the hands of a few shareholders and managers. The goal of economic reform in transition and, largely, its pace, is measured by the degree to which shareholders participate in short- and long-term corporate value creation. Shareholder access to such created value depends on the ability of corporate “insiders”, especially executives and management, to claim a disproportionate share of corporate value (the “insider effect”). An econometric analysis of the correlation between privatization and macroeconomic factors studies the degree of effectiveness of economic reform in Ukrainian regions.
The paper studies regional integration as a unique process which depends on the degree of cooperation and interchange among regions. Existing approaches to regional integration are generalised and classified by criteria. Data on the main economic indicators are analysed. The economic analysis reveals differences in production endowments, an asymmetry in fixed capital investment, and a disproportional distribution of income and foreign direct investment across Ukrainian regions in 2001-2005. Econometric modelling depicts the division between industrial regions with high urbanisation and backward agrarian regions in Ukraine, the industrial development disparities among regions, the insufficient infrastructure (telecommunications, roads, hotels, services, etc.), the low labour productivity in the industrial sector, and insufficient regional trade.
Business process models are abstractions of concrete operational procedures that occur in the daily business of organizations. To cope with the complexity of these models, business process model abstraction has been introduced recently. Its goal is to derive from a detailed process model several abstract models that provide a high-level understanding of the process. While techniques for constructing abstract models are reported in the literature, little is known about the relationships between process instances and abstract models. In this paper we show how the state of an abstract activity can be calculated from the states of related, detailed process activities as they happen. The approach uses activity state propagation. With state uniqueness and state transition correctness we introduce formal properties that improve the understanding of state propagation. Algorithms to check these properties are devised. Finally, we use behavioral profiles to identify and classify behavioral inconsistencies in abstract process models that might occur, once activity state propagation is used.
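To make the idea of activity state propagation tangible, the following minimal sketch derives the state of an abstract activity from the states of its refining, detailed activities. The state names and the propagation rule (running as soon as one detailed activity has started, terminated only when all have finished) are simplifying assumptions for illustration; they are not the formal propagation rules or properties defined in the paper.

    # Illustrative activity states.
    INIT, RUNNING, TERMINATED = "init", "running", "terminated"

    def abstract_state(detailed_states):
        """Derive the state of an abstract activity from its detailed activities."""
        if all(s == INIT for s in detailed_states):
            return INIT
        if all(s == TERMINATED for s in detailed_states):
            return TERMINATED
        return RUNNING  # at least one detailed activity has started, not all have finished

    print(abstract_state([INIT, INIT]))                  # init
    print(abstract_state([TERMINATED, RUNNING, INIT]))   # running
    print(abstract_state([TERMINATED, TERMINATED]))      # terminated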
Squimera
(2017)
Software development tools that work and behave consistently across different programming languages are helpful for developers, because they do not have to familiarize themselves with new tooling whenever they decide to use a new language. Also, being able to combine multiple programming languages in a program increases reusability, as developers do not have to recreate software frameworks and libraries in the language they develop in and can reuse existing software instead.
However, developers often have a broad choice with regard to tools, some of which are designed for only one specific programming language. Various Integrated Development Environments have support for multiple languages, but are usually unable to provide a consistent programming experience due to differing features of the language runtimes. Furthermore, common mechanisms that allow reuse of software written in other languages usually use the operating system or a network connection as the abstraction layer. Tools, however, often cannot support such indirections well and are therefore less useful, for example, in debugging scenarios.
In this report, we present a novel approach that aims to improve the programming experience with regard to working with multiple high-level programming languages. As part of this approach, we reuse the tools of a Smalltalk programming environment for other languages and build a multi-language virtual execution environment which is able to provide the same runtime capabilities for all languages.
The prototype system Squimera is an implementation of our approach and demonstrates that it is possible to reuse development tools, so that they behave in the same way across all supported programming languages. In addition, it provides convenient means to reuse and even mix software libraries and frameworks written in different languages without breaking the debugging experience.
Duplicate detection consists in determining different representations of real-world objects in a database. Recent research has considered the use of relationships among object representations to improve duplicate detection. In the general case where relationships form a graph, research has mainly focused on duplicate detection quality/effectiveness. Scalability has been neglected so far, even though it is crucial for large real-world duplicate detection tasks. In this paper we scale up duplicate detection in graph data (DDG) to large amounts of data and pairwise comparisons, using the support of a relational database system. To this end, we first generalize the process of DDG. We then present how to scale algorithms for DDG in space (amount of data processed with limited main memory) and in time. Finally, we explore how complex similarity computation can be performed efficiently. Experiments on data an order of magnitude larger than data considered so far in DDG clearly show that our methods scale to large amounts of data not residing in main memory.
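As a rough, self-contained illustration of letting a relational database do part of the work instead of holding all data and comparisons in main memory, the sketch below stores object descriptions in SQLite and uses SQL to enumerate candidate pairs within a blocking key before computing similarities. The table layout, the blocking key, the similarity measure, and the threshold are assumptions made purely for illustration; they do not reproduce the DDG algorithms or schema of the paper.

    import sqlite3
    from difflib import SequenceMatcher

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE objects (id INTEGER PRIMARY KEY, name TEXT, block TEXT)")
    rows = [(1, "Jon Smith", "JS"), (2, "John Smith", "JS"),
            (3, "Jane Smyth", "JS"), (4, "Ann Lee", "AL")]
    conn.executemany("INSERT INTO objects VALUES (?, ?, ?)", rows)

    # Let the database enumerate candidate pairs per block instead of all pairs in memory.
    pairs = conn.execute(
        "SELECT a.id, a.name, b.id, b.name "
        "FROM objects a JOIN objects b ON a.block = b.block AND a.id < b.id")

    for id_a, name_a, id_b, name_b in pairs:
        similarity = SequenceMatcher(None, name_a, name_b).ratio()
        if similarity > 0.8:
            print("duplicate candidates:", id_a, id_b, round(similarity, 2))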
In socialist economies, firms provided various social benefits, such as child care, health care, food subsidies, housing, etc. Using panel data from Bulgarian and Polish firms, this paper attempts to explain firm-specific provision of social benefits in the process of transition. We investigate empirically, with the help of qualitative response models, how ownership type and structure, firm size, profitability, change in management, foreign direct investment, wage and employment policies, union involvement and employee power have impacted the state of non-wage benefits provision.
This study analyses the transformation of the Slovak administration in the telecommunication sector between 1989 and 2004. The dynamic telecom sector is a good example of the transition problems of post-socialist administration, with special regard to the change of the regulation regime. After briefly describing the role of the telecom sector within the economy, the Slovak sectoral policy is analysed. The focus lies on telecom legislation (including the regulatory framework), the liberalization of the telecom market and the privatisation of the former state-owned telecom operator. The transformation of the organizational structure of the "Slovak telecommunication administration" is analysed in particular at the level of the ministry and the regulatory agency.
Computer games may be defined as artifacts that connect the input devices of a computer (such as keyboard, mouse or controller) with its output devices (in most cases a screen and speakers) in such a way that a challenge is displayed on the screen. On the screen we see pictorial elements that have to be manipulated to master a game, that is, to win a competition, to solve a riddle or to acquire a skill. Therefore the characteristics of the representational function of computer games have to be contrasted phenomenologically with conventional games on the one hand and cinematic depictions on the other. It shows that computer games separate the player from the playing field and translate bodily felt concrete actions into situationally abstract cinematic depictions. These features add up to the situationally abstract presentation of self-action experience. In this framework, computer games reveal a potential as a new means of shared cognition that might unfold in the 21st century and change the being-in-the-world in a similar way as cinematic depiction did in the 20th century.
In centrally planned economies, state subsidies were the main instrument of supporting the economic sector. Most of them also had social functions (e.g. through subsidising the consumption of households). In the period of transition, with the withdrawal of the state from the economic decisions of enterprises, new social problems appeared. The paper analyses the process of granting state support to economic units, its scope and forms, in the 1990s.
Contents: Introduction: -Some Introductory Examples -Consumer-relevant Utility Dimensions -Communication Flow between the Relevant Actors -Risk Communication Dimensions -Complete Model -Aims of the Study Method: -Participants -Procedure -Content Analysis Results: -Sample Category 1: Food Safety -Sample Category 2: Product Quality -Sample Category 3: Freedom of Choice -Sample Category 4: Decision Power over Foodstuffs -Strategy 1: Scientific Information Approach -Strategy 2: Balanced Information Approach -Strategy 3: Product Information Approach -Strategy 4: Classical Advertising -Strategy 5: Trust me, I'm no Baddie -Strategy 6: Induction of Fear
This paper describes the administrative powers of local jurisdictions in Georgia, with emphasis on tax competences and the ability to mobilize other sources of income. Having listed and explained the types of revenues and incomes, the article continues to show their distribution among administrative levels according to the current tax code. Following a brief overview of the main laws underlying tax regulation, the existing problems of the status quo before 2007 and some perspectives for the immediate future are outlined.
This paper applies common methods to estimate unbiased coefficients for the return to schooling in Germany for the year 2004. Based on the simple Mincer-type wage equation, the return to schooling is around 9.5% per year. There is no sheepskin effect. As expected, the return in the private sector is higher than in the public sector. Females have a higher return than males, but there are no differences between East and West Germans. An Instrumental Variables and a 3-Stage-Least-Squares approach give very high returns. To correct for sample selection, the Heckman Two-Step Procedure and the Heckman Maximum Likelihood Approach are used. For both methods, the coefficients are very similar, but higher than without the correction.
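For reference, a minimal form of the Mincer-type wage equation underlying such estimates can be written as

    \ln w_i = \beta_0 + \beta_1 S_i + \beta_2 X_i + \beta_3 X_i^2 + \varepsilon_i,

where $w_i$ is the wage, $S_i$ the years of schooling and $X_i$ the labour market experience of individual $i$; a return to schooling of about 9.5% per year then corresponds to $\hat{\beta}_1 \approx 0.095$. The variable names follow the textbook convention and are not necessarily the exact specification estimated in the paper.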
Companies strive to improve their business processes in order to remain competitive. Process mining aims to infer meaningful insights from process-related data and has attracted the attention of practitioners, tool vendors, and researchers in recent years. Traditionally, event logs are assumed to describe the as-is situation. But this is not necessarily the case in environments where logging may be compromised due to manual logging. For example, hospital staff may need to manually enter information regarding the patient's treatment. As a result, events or timestamps may be missing or incorrect. In this paper, we make use of process knowledge captured in process models and provide a method to repair missing events in the logs. This way, we facilitate the analysis of incomplete logs. We realize the repair by combining stochastic Petri nets, alignments, and Bayesian networks. We evaluate the results using both synthetic data and real event data from a Dutch hospital.
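A heavily simplified illustration of the repair idea: given the order of activities expected by a process model and an incomplete log trace, the sketch below inserts the missing event and fills its timestamp by interpolating between its observed neighbours. The naive positional alignment and midpoint interpolation are stand-ins chosen only for illustration; the paper's actual method combines stochastic Petri nets, alignments, and Bayesian networks and is not reproduced here.

    from datetime import datetime

    model_trace = ["register", "examine", "decide"]        # illustrative model path
    log_trace = {"register": datetime(2015, 3, 1, 9, 0),   # observed events with timestamps
                 "decide":   datetime(2015, 3, 1, 11, 0)}  # "examine" is missing

    repaired = dict(log_trace)
    for i, activity in enumerate(model_trace):
        if activity in repaired:
            continue
        before, after = model_trace[i - 1], model_trace[i + 1]
        # Naive repair: insert the missing event halfway between its observed neighbours.
        repaired[activity] = repaired[before] + (repaired[after] - repaired[before]) / 2

    for activity in model_trace:
        print(activity, repaired[activity])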
A doppelalgebra is an algebra defined on a vector space with two binary linear associative operations. Doppelalgebras play a prominent role in algebraic K-theory. We consider doppelsemigroups, that is, sets with two binary associative operations satisfying the axioms of a doppelalgebra. Doppelsemigroups are a generalization of semigroups and they have relationships with such algebraic structures as interassociative semigroups, restrictive bisemigroups, dimonoids, and trioids.
In the lecture notes numerous examples of doppelsemigroups and of strong doppelsemigroups are given. The independence of axioms of a strong doppelsemigroup is established. A free product in the variety of doppelsemigroups is presented. We also construct a free (strong) doppelsemigroup, a free commutative (strong) doppelsemigroup, a free n-nilpotent (strong) doppelsemigroup, a free n-dinilpotent (strong) doppelsemigroup, and a free left n-dinilpotent doppelsemigroup. Moreover, the least commutative congruence, the least n-nilpotent congruence, the least n-dinilpotent congruence on a free (strong) doppelsemigroup and the least left n-dinilpotent congruence on a free doppelsemigroup are characterized.
The book addresses graduate students, post-graduate students, researchers in algebra and interested readers.
RailChain
(2023)
The RailChain project designed, implemented, and experimentally evaluated a juridical recorder that is based on a distributed consensus protocol. This juridical blockchain recorder has been realized as a distributed ledger on board the advanced TrainLab (ICE-TD 605 017) of Deutsche Bahn.
For the project, a consortium consisting of DB Systel, Siemens, Siemens Mobility, the Hasso Plattner Institute for Digital Engineering, Technische Universität Braunschweig, TÜV Rheinland InterTraffic, and Spherity has been formed. These partners not only concentrated competencies in railway operation, computer science, regulation, and approval, but also combined experiences from industry, research from academia, and enthusiasm from startups.
Distributed ledger technologies (DLTs) define distributed databases and express a digital protocol for transactions between business partners without the need for a trusted intermediary. The implementation of a blockchain with real-time requirements for the local network of a railway system (e.g., interlocking or train) makes it possible to log data in the distributed system verifiably and in real time. For this, railway-specific assumptions can be leveraged to make modifications to standard blockchain protocols.
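To make the notion of verifiable logging concrete, the following generic sketch chains log entries with cryptographic hashes so that any later modification of an entry invalidates all subsequent hashes. This is only a textbook hash-chain illustration with assumed data fields; it does not reproduce RailChain's consensus protocol, data format, or real-time guarantees.

    import hashlib, json, time

    def append_entry(chain, payload):
        """Append a log entry whose hash covers the payload and the previous entry's hash."""
        previous_hash = chain[-1]["hash"] if chain else "0" * 64
        entry = {"timestamp": time.time(), "payload": payload, "previous_hash": previous_hash}
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        chain.append(entry)
        return entry

    def verify(chain):
        """Recompute every hash; any tampered entry breaks the chain."""
        previous_hash = "0" * 64
        for entry in chain:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["previous_hash"] != previous_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            previous_hash = entry["hash"]
        return True

    log = []
    append_entry(log, {"component": "interlocking", "state": "signal A1 green"})
    append_entry(log, {"component": "train", "speed_kmh": 137})
    print(verify(log))                            # True
    log[0]["payload"]["state"] = "signal A1 red"
    print(verify(log))                            # False: tampering is detected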
EULYNX and OCORA (Open CCS On-board Reference Architecture) are parts of a future European reference architecture for control command and signalling (CCS, Reference CCS Architecture – RCA). Both architectural concepts outline heterogeneous IT systems with components from multiple manufacturers. Such systems introduce novel challenges for the approved and safety-relevant CCS of railways, which so far have been considered neither for road-side nor for on-board systems. Logging implementations, such as the common juridical recorder on vehicles, can no longer be realized as a central component of a single manufacturer. All centralized approaches are in question.
The research project RailChain is funded by the mFUND program and gives practical evidence that distributed consensus protocols are a proper means to immutably (for legal purposes) store state information of many system components from multiple manufacturers. The results of RailChain have been published, prototypically implemented, and experimentally evaluated in large-scale field tests on the advanced TrainLab. At the same time, the project showed how RailChain can be integrated into the road-side and on-board architecture given by OCORA and EULYNX.
Logged data can now be analysed sooner, and its trustworthiness is increased. This enables, for example, auditable predictive maintenance, because it is ensured that data is authentic and unmodified at any point in time.
Contents: Chapter 1. Introduction 1 Information Structure 2 Grammatical Correlates of Information Structure 3 Structure of the Questionnaire 4 Experimental Tasks 5 Technicalities 6 Archiving 7 Acknowledgments Chapter 2. General Questions 1 General Information 2 Phonology 3 Morphology and Syntax Chapter 3. Experimental tasks 1 Changes (Given/New in Intransitives and Transitives) 2 Giving (Given/New in Ditransitives) 3 Visibility (Given/New, Animacy and Type/Token Reference) 4 Locations (Given/New in Locative Expressions) 5 Sequences (Given/New/Contrast in Transitives) 6 Dynamic Localization (Given/New in Dynamic Loc. Descriptions) 7 Birthday Party (Weight and Discourse Status) 8 Static Localization (Macro-Planning and Given/New in Locatives) 9 Guiding (Presentational Utterances) 10 Event Cards (All New) 11 Anima (Focus types and Animacy) 12 Contrast (Contrast in pairing events) 13 Animal Game (Broad/Narrow Focus in NP) 14 Properties (Focus on Property and Possessor) 15 Eventives (Thetic and Categorical Utterances) 16 Tell a Story (Contrast in Text) 17 Focus Cards (Selective, Restrictive, Additive, Rejective Focus) 18 Who does What (Answers to Multiple Constituent Questions) 19 Fairy Tale (Topic and Focus in Coherent Discourse) 20 Map Task (Contrastive and Selective Focus in Spontaneous Dialogue) 21 Drama (Contrastive Focus in Argumentation) 22 Events in Places (Spatial, Temporal and Complex Topics) 23 Path Descriptions (Topic Change in Narrative) 24 Groups (Partial Topic) 25 Connections (Bridging Topic) 26 Indirect (Implicational Topic) 27 Surprises (Subject-Topic Interrelation) 28 Doing (Action Given, Action Topic) 29 Influences (Question Priming) Chapter 4. Translation tasks 1 Basic Intonational Properties 2 Focus Translation 3 Topic Translation 4 Quantifiers Chapter 5. Information structure summary survey 1 Preliminaries 2 Syntax 3 Morphology 4 Prosody 5 Summary: Information structure Chapter 6. Performance of Experimental Tasks in the Field 1 Field sessions 2 Field Session Metadata 3 Informants’ Agreement
This is the 15th issue of the working paper series Interdisciplinary Studies on Information Structure (ISIS) of the Sonderforschungsbereich (SFB) 632. This online version contains the Questionnaire on Focus Semantics contributed by Agata Renans, Malte Zimmermann and Markus Greif, members of Project D2 investigating information structural phenomena from a typological perspective. The present issue provides a tool for collecting and analyzing natural data with respect to relevant linguistic questions concerning focus types, focus sensitive particles, and the effects of quantificational adverbs and presupposition on focus semantics. This volume is a supplementation to the Reference manual of the Questionnaire on Information Structure, issued by Project D2 in ISIS 4 (2006).
One of the key challenges in service-oriented systems engineering is the prediction and assurance of non-functional properties, such as the reliability and the availability of composite interorganizational services. Such systems are often characterized by a variety of inherent uncertainties, which must be addressed in the modeling and the analysis approach. The different relevant types of uncertainties can be categorized into (1) epistemic uncertainties due to incomplete knowledge and (2) randomization as explicitly used in protocols or as a result of physical processes. In this report, we study a probabilistic timed model which allows us to quantitatively reason about non-functional properties for a restricted class of service-oriented real-time systems using formal methods. To properly motivate the choice of the used approach, we devise a requirements catalogue for the modeling and the analysis of probabilistic real-time systems with uncertainties and provide evidence that the uncertainties of type (1) and (2) in the targeted systems have a major impact on the used models and require distinguished analysis approaches. The formal model we use in this report is Interval Probabilistic Timed Automata (IPTA). Based on the outlined requirements, we give evidence that this model provides both enough expressiveness for a realistic and modular specification of the targeted class of systems, and suitable formal methods for analyzing properties, such as safety and reliability properties, in a quantitative manner. As technical means for the quantitative analysis, we build on probabilistic model checking, specifically on probabilistic time-bounded reachability analysis and the computation of expected reachability rewards and costs. To carry out the quantitative analysis using probabilistic model checking, we developed an extension of the Prism tool for modeling and analyzing IPTA. Our extension of Prism introduces a means for modeling probabilistic uncertainty in the form of probability intervals, as required for IPTA. For analyzing IPTA, our Prism extension moreover adds support for probabilistic reachability checking and the computation of expected rewards and costs. We discuss the performance of our extended version of Prism and compare the interval-based IPTA approach to models with fixed probabilities.
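The core intuition of probabilistic time-bounded reachability under interval-valued uncertainty can be conveyed with a deliberately tiny example: a two-state chain whose per-step failure probability is only known to lie within an interval, so the time-bounded reachability probability itself becomes an interval. The model, the numbers, and the closed-form computation are illustrative assumptions only; they reflect neither the semantics of IPTA nor the Prism extension described in the report.

    def reach_fail_within(p_fail, steps):
        # Probability of reaching the absorbing 'fail' state within the given number of steps
        # for a two-state chain that fails with probability p_fail in each step.
        return 1.0 - (1.0 - p_fail) ** steps

    # Epistemic uncertainty: the per-step failure probability lies somewhere in an interval.
    p_low, p_high = 0.01, 0.05
    bound = 10  # time bound in discrete steps

    print("P(fail within %d steps) lies in [%.4f, %.4f]"
          % (bound, reach_fail_within(p_low, bound), reach_fail_within(p_high, bound)))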
The study presents estimates and analyses of social expenditure in Poland. The changes which occurred during the transformation period reflect both consciously launched political transformations and decisions taken as a result of current needs and political pressures. This has an impact on the volume and structure of expenditures, which are under consolidation. The debate devoted to budget issues, which gets more intense every autumn, testifies to increasing problems with correcting the guidelines for the distribution of expenditures. Even slight changes mean depriving a specific group of transfers, which in democratic conditions produces strong protests. A similar negative attitude to changes became evident with regard to taxation. Recommendations presented in 1998 by the Polish government [see Ministry of Finance, 1998a, 1998b] introduce substantial modifications to the current tax system (withdrawal from tax exemptions and introduction of a tax-free minimum income) and thus met with massive reluctance from major political factions. This study provides readers with information on the volume of public expenditures, the source of public revenue, that is taxes, and a thorough study of expenditures allocated to social goals. The analysis was carried out on the basis of the authors' own estimates, which employ data acquired from the Ministry of Finance and the Ministry of Labour and Social Policy.
The paper is an enquiry into dynamic social contract theory. The social contract defines the rules of resource use. An intergenerational social contract in an economy with a single exhaustible resource is examined within a framework of an overlapping generations model. It is assumed that new generations do not accept the old social contract, and access to resources will be renegotiated between any incumbent generation and their successors. It turns out that later generations will be in an unfortunate position regardless of their bargaining power.
The XI international conference Stochastic and Analytic Methods in Mathematical Physics was held in Yerevan, 2–7 September 2019, and was dedicated to the memory of the great mathematician Robert Adol’fovich Minlos, who passed away in January 2018.
The present volume collects a large majority of the contributions presented at the conference on the following domains of contemporary interest: classical and quantum statistical physics, mathematical methods in quantum mechanics, stochastic analysis, applications of point processes in statistical mechanics. The authors are specialists from Armenia, Czech Republic, Denmark, France, Germany, Italy, Japan, Lithuania, Russia, UK and Uzbekistan.
A particular aim of this volume is to offer young scientists basic material in order to inspire their future research in the wide fields presented here.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic “Operating the Cloud”. Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. Hence, HPI’s Future SOC Lab is an adequate environment to host this event, which is also supported by BITKOM.
On the occasion of this workshop we called for submissions of research papers and practitioners’ reports. “Operating the Cloud” aims to be a platform for productive discussions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
These workshop proceedings publish the results of the third HPI cloud symposium “Operating the Cloud” 2015. We thank the authors for exciting presentations and insights into their current work and research. Moreover, we look forward to more interesting submissions for the upcoming symposium in 2016.
TripleA is a workshop series founded by linguists from the University of Tübingen and the University of Potsdam. Its aim is to provide a forum for semanticists doing fieldwork on understudied languages, and its focus is on languages from Africa, Asia, Australia and Oceania. The second TripleA workshop was held at the University of Potsdam, June 3-5, 2015.
Traditionally, business process management systems only execute and monitor business process instances based on events that originate from the process engine itself or from connected client applications. However, environmental events may also influence business process execution. Recent research shows how the technological improvements in both areas, business process management and complex event processing, can be combined and harmonized. The series of technical reports included in this collection provides insights into that combination with respect to technical feasibility and improvements based on real-world use cases originating from the EU-funded GET Service project, a project targeting transport optimization and greenhouse gas reduction in the logistics domain. Each report is complemented by a working prototype.
This collection introduces six use cases from the logistics domain. Multiple transports, each being a single process instance, may be affected by the same events at the same point in time because they (partly) use the same transportation route, transportation vehicle or transportation mode (e.g. containers from multiple process instances on the same ship), such that these instances can be (partly) treated as a batch. Thus, the first use case shows the influence of events on process instances processed in a batch. Sharing the entire route may, for instance, be due to originating from the same business process (e.g. transporting three containers, where each is treated as a single process instance because it is transported on a separate truck), resulting in multi-instance process executions. The second use case shows how to handle monitoring and progress calculation in this context. Crucial to transportation processes are frequent changes of deadlines. The third use case shows how to deal with such frequent process changes in terms of propagating the changes along and beyond the process scope to identify probable deadline violations. While monitoring transport processes, disruptions may be detected which introduce some delay. Use case four shows how to propagate such delay in a non-linear fashion along the process instance to predict the end time of the instance; a sketch of this idea follows below. Non-linearity is crucial in logistics because of buffer times and missed connections on intermodal transports (a one-hour delay may result in a missed ship which does not depart every hour). Finally, use cases five and six show the utilization of location-based process monitoring. Use case five enriches transport processes with real-time route and traffic event information to improve monitoring and planning capabilities. Use case six shows the inclusion of spatio-temporal events using the example of unexpected weather events.
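The non-linear delay propagation of use case four can be sketched as follows: each transport leg absorbs part of an incoming delay through its buffer time, while legs with a fixed departure schedule (such as a daily ship) round any remaining delay up to the next departure. The legs, buffer times, and headways below are invented for illustration and do not reproduce the prediction method of the use case.

    import math

    # Each leg: (name, buffer time in minutes, departure headway in minutes or None if flexible).
    legs = [("truck to port", 15, None),
            ("ship", 0, 24 * 60),            # one departure per day
            ("truck to warehouse", 30, None)]

    def propagate(initial_delay, legs):
        delay = initial_delay
        for name, buffer_minutes, headway in legs:
            delay = max(0, delay - buffer_minutes)            # buffers absorb part of the delay
            if headway is not None and delay > 0:
                delay = math.ceil(delay / headway) * headway  # missed departure: wait for the next one
            print(f"after {name}: {delay} min late")
        return delay

    propagate(60, legs)  # a one-hour delay ends up close to a full day because the daily ship is missed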
Proceedings of the HPI Research School on Service-oriented Systems Engineering 2020 Fall Retreat
(2021)
Design and implementation of service-oriented architectures pose numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as integration of enterprise applications.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic "Operating the Cloud". Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. Co-located with the event is the HPI's Future SOC Lab day, which offers an additional attractive and conducive environment for scientific and industry-related discussions. "Operating the Cloud" aims to be a platform for productive interactions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
On the occasion of this symposium we called for submissions of research papers and practitioners' reports. A compilation of the research papers realized during the fourth HPI cloud symposium "Operating the Cloud" 2016 is published in these proceedings. We thank the authors for exciting presentations and insights into their current work and research. Moreover, we look forward to more interesting submissions for the upcoming symposium later in the year.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic Operating the Cloud. Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. Co-located with the event is the HPI’s Future SOC Lab day, which offers an additional attractive and conducive environment for scientific and industry related discussions. Operating the Cloud aims to be a platform for productive interactions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
In these proceedings, the results of the fifth HPI cloud symposium Operating the Cloud 2017 are published. We thank the authors for exciting presentations and insights into their current work and research. Moreover, we look forward to more interesting submissions for the upcoming symposium in 2018.
In cooperation with industry partners, the Hasso Plattner Institute (HPI) is establishing an "HPI Future SOC Lab" that provides a complete infrastructure of highly complex on-demand systems on the latest massively parallel (multi-/many-core) hardware, not yet available on the market, with enormous main memory capacities and software designed for it. The HPI Future SOC Lab features prototypical 4- and 8-way Intel 64-bit server systems from Fujitsu and Hewlett-Packard with 32 and 64 cores, respectively, and 1-2 TB of main memory. In addition, high-performance storage systems from EMC² and virtualization solutions from VMware are used. SAP provides its latest Business by Design (ByD) software, and complex real-world enterprise data is also available and can be accessed for research purposes. Interested researchers from university and non-university research institutions can use the HPI Future SOC Lab to investigate future highly complex IT systems, develop new ideas, data structures and algorithms, and pursue them up to practical testing. This technical report presents first results of the research projects started in the context of the opening of the Future SOC Lab in June 2010. Selected projects presented their results on 27 October 2010 at the Future SOC Lab Day event.
Contents: 1. Design and Composition of 3D Geoinformation Services (Benjamin Hagedorn); 2. Operating System Abstractions for Service-Based Systems (Michael Schöbel); 3. A Task-oriented Approach to User-centered Design of Service-Based Enterprise Applications (Matthias Uflacker); 4. A Framework for Adaptive Transport in Service-Oriented Systems based on Performance Prediction (Flavius Copaciu); 5. Asynchronicity and Loose Coupling in Service-Oriented Architectures (Nikola Milanovic)
Aspect-oriented programming, component models, and design patterns are modern and actively evolving techniques for improving the modularization of complex software. In particular, these techniques hold great promise for the development of "systems infrastructure" software, e.g., application servers, middleware, virtual machines, compilers, operating systems, and other software that provides general services for higher-level applications. The developers of infrastructure software are faced with increasing demands from application programmers needing higher-level support for application development. Meeting these demands requires careful use of software modularization techniques, since infrastructural concerns are notoriously hard to modularize. Aspects, components, and patterns provide very different means to deal with infrastructure software, but despite their differences, they have much in common. For instance, component models try to free the developer from the need to deal directly with services like security or transactions. These are primary examples of crosscutting concerns, and modularizing such concerns is the main target of aspect-oriented languages. Similarly, design patterns like Visitor and Interceptor facilitate the clean modularization of otherwise tangled concerns. Building on the ACP4IS meetings at AOSD 2002-2009, this workshop aims to provide a highly interactive forum for researchers and developers to discuss the application of and relationships between aspects, components, and patterns within modern infrastructure software. The goal is to put aspects, components, and patterns into a common reference frame and to build connections between the software engineering and systems communities.
Design and implementation of service-oriented architectures pose numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Service-oriented Systems Engineering represents a symbiosis of best practices in object orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns. Service-oriented Systems Engineering denotes a current research topic in the field of IT-Systems Engineering with high potential in academic research and industrial application.
The annual Ph.D. Retreat of the Research School provides all members the opportunity to present the current state of their research and to give an outline of prospective Ph.D. projects. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Design and implementation of service-oriented architectures pose numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Self-Adaptive Service-Oriented Systems, Operating System Support for Service-Oriented Systems, Architecture and Modeling of Service-Oriented Systems, Adaptive Process Management, Services Composition and Workflow Planning, Security Engineering of Service-Based IT Systems, Quantitative Analysis and Optimization of Service-Oriented Systems, Service-Oriented Systems in 3D Computer Graphics, and Service-Oriented Geoinformatics.
Design and implementation of service-oriented architectures pose numerous research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as integration of enterprise applications. Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services. Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns. The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of research topics. These include but are not limited to: Self-Adaptive Service-Oriented Systems, Operating System Support for Service-Oriented Systems, Architecture and Modeling of Service-Oriented Systems, Adaptive Process Management, Services Composition and Workflow Planning, Security Engineering of Service-Based IT Systems, Quantitative Analysis and Optimization of Service-Oriented Systems, Service-Oriented Systems in 3D Computer Graphics, and Service-Oriented Geoinformatics.
In continuation of a successful series of events, the 4th Many-core Applications Research Community (MARC) symposium took place at the HPI in Potsdam on December 8th and 9th 2011. Over 60 researchers from different fields presented their work on many-core hardware architectures, their programming models, and the resulting research questions for the upcoming generation of heterogeneous parallel systems.