Faced with an accelerating climate crisis caused by burning fossil fuels, we have to change the way the economy works. We can no longer go on with a system that just maximises private profit without consideration for its effects. Instead, we have to consciously plan how to change to a fossil-fuel-free society.
The need is urgent.
The transformation will be vast.
Nothing similar has been done in the West since the days of wartime mobilisation.
This book explains the basic science of climate change before looking at the transformations needed in our energy and basic industries. It looks at the earlier, successful history of deliberate planning practised in the UK from 1939 to the 1960s, and at how, using modern computing techniques, it will be possible to organise resources so as to effect the change.
Learning from failure
(2022)
Regression testing is a widespread practice in today's software industry to ensure software product quality. Developers derive a set of test cases, and execute them frequently to ensure that their change did not adversely affect existing functionality. As the software product and its test suite grow, the time to feedback during regression test sessions increases, and impedes programmer productivity: developers wait longer for tests to complete, and delays in fault detection render fault removal increasingly difficult.
Test case prioritization addresses the problem of long feedback loops by reordering test cases such that test cases with a high failure probability run first, and test case failures become actionable early in the testing process. We ask: given test execution schedules reconstructed from publicly available data, to what extent can their fault detection efficiency be improved, and which technique yields the most efficient test schedules with respect to APFD?
To this end, we recover 6,200 regression test sessions from the build log files of Travis CI, a popular continuous integration service, and gather 62,000 accompanying changelists. We evaluate the efficiency of current test schedules, and examine the prioritization results of state-of-the-art lightweight, history-based heuristics. We propose and evaluate a novel set of prioritization algorithms, which connect software changes and test failures in a matrix-like data structure.
Our studies indicate that the optimization potential is substantial, because the existing test plans score only 30% APFD. The predictive power of past test failures proves to be outstanding: simple heuristics, such as repeating tests with failures in recent sessions, result in efficiency scores of 95% APFD. The best-performing matrix-based heuristic achieves a similar score of 92.5% APFD. In contrast to prior approaches, we argue that matrix-based techniques are useful beyond the scope of effective prioritization, and enable a number of use cases involving software maintenance.
We validate our findings from continuous integration processes by extending a continuous testing tool within development environments with means of test prioritization, and pose further research questions. We think that our findings are suited to propel adoption of (continuous) testing practices, and that programmers' toolboxes should contain test prioritization as an essential productivity tool.
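To make the metric and the heuristic family concrete, here is a minimal sketch of APFD (Average Percentage of Faults Detected, computed with the standard formula) together with a recency-based prioritization heuristic of the kind described above. All data structures and names are illustrative, not those of the study.

```python
def apfd(schedule, faults):
    """APFD of one test schedule. `schedule` is the ordered list of executed tests;
    `faults` maps each fault to the set of tests that detect it (all of which are
    assumed to appear in the schedule)."""
    n, m = len(schedule), len(faults)
    rank = {test: i + 1 for i, test in enumerate(schedule)}   # 1-based positions
    # TF_i: position of the first test that detects fault i
    first_detection = [min(rank[t] for t in detecting) for detecting in faults.values()]
    return 1 - sum(first_detection) / (n * m) + 1 / (2 * n)

def prioritize_by_recent_failures(tests, last_failed_session):
    """Run tests that failed most recently first; never-failed tests keep their order."""
    return sorted(tests, key=lambda t: -last_failed_session.get(t, -1))

schedule = prioritize_by_recent_failures(["t1", "t2", "t3"], {"t3": 7, "t2": 4})
print(schedule, apfd(schedule, {"bug-1": {"t3"}}))   # ['t3', 't2', 't1'] 0.833...
```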
Language developers who design domain-specific languages or new language features need a way to make fast changes to language definitions. Those fast changes require immediate feedback. Also, it should be possible to parse the developed languages quickly to handle extensive sets of code.
Parsing expression grammars provide an easy-to-understand method for language definitions. Packrat parsing is a method to parse grammars of this kind, but it cannot handle left-recursion properly. Existing solutions either rewrite some left-recursive rules and forbid the rest, or use complex extensions to packrat parsing that are hard to understand and computationally expensive. We investigated methods to make parsing as fast as possible, using easy-to-follow algorithms, while not losing the ability to make fast changes to grammars.
We focused our efforts on two approaches.
One is to start from an existing technique for limited left-recursion rewriting and enhance it to work for general left-recursive grammars. The second is to design a grammar compilation process that finds left-recursion before parsing and, in this way, reduces computational costs wherever possible and generates ready-to-use parser classes.
Rewriting parsing expression grammars in a general way uncovers so many cases that any rewriting algorithm exceeds the complexity of other left-recursive parsing algorithms; lookahead operators introduce this complexity. However, most languages have only small portions that are left-recursive and, in virtually all cases, no indirect or hidden left-recursion. Separating the left-recursive parts of a grammar from the non-left-recursive ones therefore holds great improvement potential for existing parsers.
In this report, we list all the steps required for grammar rewriting to handle left-recursion, including grammar analysis, the grammar rewriting itself, and syntax tree restructuring. We also describe the implementation of a parsing expression grammar framework in Squeak/Smalltalk and its possible interactions with the existing parser Ohm/S. We benchmarked this framework quantitatively, focusing on parsing time and its usability in a live programming context. Compared with Ohm, we achieved massive parsing time improvements while preserving the ability to use our parser as a live programming tool.
The work is essential because, for one, we outline the difficulties and complexity that come with grammar rewriting, and, for another, we remove the limitations previously imposed by left-recursion by eliminating it before parsing.
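As an illustration of the parsing technique under discussion, the following is a minimal packrat-style sketch (memoized recursive descent) for a tiny, hypothetical right-recursive grammar. The comments note where direct left-recursion would break a plain packrat parser, which is exactly the limitation the report addresses; the grammar and class names are illustrative only.

```python
import re
from functools import lru_cache

# Tiny, hypothetical grammar:
#   expr <- num '+' expr / num        (right-recursive, packrat-friendly)
#   num  <- [0-9]+
# The left-recursive variant  expr <- expr '+' num / num  would send a plain
# packrat parser into infinite recursion; rewriting such rules before parsing
# is the limitation addressed in the report.
NUM = re.compile(r"[0-9]+")

class PackratParser:
    def __init__(self, text):
        self.text = text

    @lru_cache(maxsize=None)          # memoizing results per position = "packrat"
    def num(self, pos):
        m = NUM.match(self.text, pos)
        return (int(m.group()), m.end()) if m else None

    @lru_cache(maxsize=None)
    def expr(self, pos):
        left = self.num(pos)
        if left and self.text[left[1]:left[1] + 1] == "+":
            rest = self.expr(left[1] + 1)        # ordered choice: try num '+' expr first
            if rest:
                return (left[0] + rest[0], rest[1])
        return left                               # fall back to the second alternative: num

print(PackratParser("1+2+40").expr(0))            # (43, 6): parsed value and end position
```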
Modeling and Formal Analysis of Meta-Ecosystems with Dynamic Structure using Graph Transformation
(2022)
The dynamics of ecosystems is of crucial importance. Various model-based approaches exist to understand and analyze their internal effects. In this paper, we model the space structure dynamics and ecological dynamics of meta-ecosystems using the formal technique of Graph Transformation (GT for short). We build GT models to describe how a meta-ecosystem (modeled as a graph) can evolve over time (modeled by GT rules) and to analyze these GT models with respect to qualitative properties such as the existence of structural stabilities. As a case study, we build three GT models describing the space structure dynamics and ecological dynamics of three different savanna meta-ecosystems. The first GT model considers a savanna meta-ecosystem that is limited in space to two ecosystem patches, whereas the other two GT models consider two savanna meta-ecosystems that are unlimited in the number of ecosystem patches and differ only in one GT rule describing how the space structure of the meta-ecosystem grows. In the first two GT models, the space structure dynamics and ecological dynamics of the meta-ecosystem show two main structural stabilities: the first based on grassland-savanna-woodland transitions and the second based on grassland-desert transitions. The transition between these two structural stabilities is driven by high-intensity fires affecting the tree components. In the third GT model, the GT rule for savanna regeneration induces desertification and therefore a collapse of the meta-ecosystem. We believe that GT models provide a complementary avenue to existing approaches for rigorously studying ecological phenomena.
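As a rough illustration of the modeling idea only (not of the paper's actual GT rules), the sketch below represents a meta-ecosystem as a graph of patches and applies two hand-written rule functions, one changing the space structure and one the vegetation states. A real GT model would express these as declarative rules with formal match-and-rewrite semantics; all states and rules here are simplified stand-ins.

```python
import networkx as nx

# Illustrative sketch: the meta-ecosystem is a graph whose nodes are ecosystem
# patches carrying a vegetation state; rules rewrite states and structure.
g = nx.Graph()
g.add_node(0, state="grassland")
g.add_node(1, state="savanna")
g.add_edge(0, 1)                                 # spatial adjacency of patches

def rule_succession(g):
    """A grassland patch next to a savanna patch becomes savanna (state rewrite)."""
    for n, d in g.nodes(data=True):
        if d["state"] == "grassland" and any(
            g.nodes[m]["state"] == "savanna" for m in g.neighbors(n)
        ):
            d["state"] = "savanna"
            return True
    return False

def rule_grow(g):
    """The space structure grows: attach a fresh grassland patch to an existing patch."""
    new = max(g.nodes) + 1
    g.add_node(new, state="grassland")
    g.add_edge(new, next(iter(g.nodes)))
    return True

rule_grow(g)                                      # structure dynamics
while rule_succession(g):                         # ecological dynamics, applied until no match
    pass
print(nx.get_node_attributes(g, "state"))         # {0: 'savanna', 1: 'savanna', 2: 'savanna'}
```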
Pictures are a medium that helps make the past tangible and preserve memories. Without context, they are not able to do so. Pictures are brought to life by their associated stories. However, the older pictures become, the fewer contemporary witnesses can tell these stories.
Especially for large, analog picture archives, knowledge and memories are spread over many people. This creates several challenges: First, the pictures must be digitized to save them from decaying and make them available to the public. Since a simple listing of all the pictures is confusing, the pictures should be structured accessibly. Second, known information that makes the stories vivid needs to be added to the pictures. Users should get the opportunity to contribute their knowledge and memories. To make this usable for all interested parties, even for older, less technophile generations, the interface should be intuitive and error-tolerant.
The resulting requirements are not covered in their entirety by any existing software solution without losing the intuitive interface or the scalability of the system.
Therefore, we have developed our digital picture archive within the scope of a bachelor project in cooperation with the Bad Harzburg-Stiftung. For the implementation of this web application, we use the UI framework React in the frontend, which communicates via a GraphQL interface with the Content Management System Strapi in the backend. The use of this system enables our project partner to create an efficient process from scanning analog pictures to presenting them to visitors in an organized and annotated way. To customize the solution for both picture delivery and information contribution for our target group, we designed prototypes and evaluated them with people from Bad Harzburg. This helped us gain valuable insights into our system’s usability and future challenges as well as requirements.
Our web application is already being used daily by our project partner. During the project, we still came up with numerous ideas for additional features to further support the exchange of knowledge.
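To sketch how a client can talk to the described Strapi backend via its GraphQL interface: the endpoint URL, the pictures collection, and all field names below are hypothetical placeholders, since the project's actual schema is not part of this description.

```python
import requests

# Hypothetical sketch: fetching pictures from a Strapi GraphQL endpoint. The URL,
# the `pictures` collection, and the field names are placeholders.
QUERY = """
query {
  pictures {
    data { id attributes { title description } }
  }
}
"""

response = requests.post("https://archive.example.org/graphql",
                         json={"query": QUERY}, timeout=10)
response.raise_for_status()
for picture in response.json()["data"]["pictures"]["data"]:
    print(picture["id"], picture["attributes"]["title"])
```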
These days design thinking is no longer a "new approach". Among practitioners as well as academics, interest in the topic has gathered pace over the last two decades. However, opinions are divided over the longevity of the phenomenon: whether design thinking is merely "old wine in new bottles," a passing trend, or still evolving as it spreads to an increasing number of organizations and industries. Despite the growing relevance and diffusion of design thinking, knowledge about its actual status quo in organizations remains scarce. With a new study, the research team of Prof. Uebernickel and Stefanie Gerken investigates temporal developments and changes in design thinking practices in organizations over the past six years, comparing the results of the 2015 "Parts without a whole" study with current practices and future developments. Companies of all sizes and from different parts of the world participated in the survey. The findings from qualitative interviews with experts, i.e., people who have years of experience with design thinking, were cross-checked with the results of an exploratory analysis of the survey data. This analysis uncovers significant variances and similarities in how design thinking is interpreted and applied in businesses.
The business problem of inefficient processes, imprecise process analyses and simulations, and non-transparent artificial neuronal network models can be overcome by an easy-to-use modeling concept. With the aim of developing a flexible and efficient approach to modeling, simulating, and optimizing processes, this paper proposes a flexible Concept of Neuronal Modeling (CoNM). The modeling concept, which is described by the designed modeling language and its mathematical formulation and which is connected to a technical substantiation, is based on a collection of novel sub-artifacts. As these have been implemented as a computational model, the set of CoNM tools carries out novel kinds of Neuronal Process Modeling (NPM), Neuronal Process Simulations (NPS) and Neuronal Process Optimizations (NPO). The efficacy of the designed artifacts was demonstrated rigorously by means of six experiments and a simulator of real industrial production processes.
Scrollytellings are an innovative form of web content. Combining the benefits of books, images, movies, and video games, they are a tool to tell compelling stories and provide excellent learning opportunities. Due to their multi-modality, creating high-quality scrollytellings is not an easy task. Different professions, such as content designers, graphics designers, and developers, need to collaborate to get the best out of the possibilities the scrollytelling format provides. Collaboration unlocks great potential. However, content designers cannot create scrollytellings directly and always need to consult with developers to implement their vision. This can result in misunderstandings. Often, the resulting scrollytelling will not match the designer's vision sufficiently, causing unnecessary iterations. Our project partner Typeshift specializes in the creation of individualized scrollytellings for their clients. The existing solutions for authoring interactive content that we examined are not optimally suited for creating highly customized scrollytellings while still allowing all of their elements to be manipulated programmatically. Based on Typeshift's experience and expertise, we developed an editor to author scrollytellings in the lively.next live-programming environment. In this environment, a graphical user interface for content design is combined with powerful possibilities for programming behavior with the morphic system. The editor allows content designers to take on large parts of the creation process of scrollytellings on their own, such as creating the visible elements, animating content, and fine-tuning the scrollytelling. Hence, developers can focus on interactive elements such as simulations and games. Together with Typeshift, we evaluated the tool by recreating an existing scrollytelling and identified possible future enhancements. Our editor streamlines the creation process of scrollytellings. Content designers and developers can now both work on the same scrollytelling. Because the editor lives inside the lively.next environment, both can work with a set of tools familiar to them and suited to their respective professions. Thus, we mitigate unnecessary iterations and misunderstandings by enabling content designers to realize large parts of their vision of a scrollytelling on their own, while developers can add advanced and individual behavior. In this way, developers and content designers benefit from a clearer distribution of tasks while keeping the benefits of collaboration.
This book compares local self-government in Europe. It examines local institutional structures, autonomy, and capacities in six selected countries - France, Italy, Sweden, Hungary, Poland, and the United Kingdom - each of which represents a typical model of European local government. Within Europe, an overall trend towards more local government capacities and autonomy can be identified, but there are also some counter tendencies to this trend and major differences regarding local politico-administrative settings, functional responsibilities, and resources. The book demonstrates that a certain degree of local financial autonomy and fiscal discretion is necessary for effective service provision. Furthermore, a robust local organization, viable territorial structures, a professional public service, strong local leadership, and well-functioning tools of democratic participation are key aspects for local governments to effectively fulfill their tasks and ensure political accountability. The book will appeal to students and scholars of Public Administration and Public Management, as well as practitioners and policy-makers at different levels of government, in public enterprises, and in NGOs.
openHPI
(2022)
On the occasion of the 10th openHPI anniversary, this technical report provides information about the HPI MOOC platform, including its core features, technology, and architecture.
In an introduction, the platform family with all partner platforms is presented; these now amount to nine platforms, including openHPI. This section introduces openHPI as an advisor and research partner in various projects.
In the second chapter, the functionalities and common course formats of the platform are presented. The functionalities are divided into learner and admin features. The learner features section provides detailed information about performance records, courses, and the learning materials of which a course is composed: videos, texts, and quizzes. In addition, the learning materials can be enriched by adding external exercise tools that communicate with the HPI MOOC platform via the Learning Tools Interoperability (LTI) standard. Furthermore, the concept of peer assessments completes the range of possible learning materials.
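To illustrate what such an LTI integration involves at the protocol level, here is a hedged sketch of an LTI 1.1-style launch, in which the platform signs the launch parameters with OAuth 1.0 and POSTs them to the external tool as form data. The key, secret, URL, and parameter values are placeholders, and the report does not state which LTI version or parameters openHPI actually uses.

```python
from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

# Hypothetical LTI 1.1-style launch: sign the form parameters with OAuth 1.0.
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "exercise-42",       # placeholder values throughout
    "user_id": "learner-123",
    "roles": "Learner",
}
client = Client("consumer-key", client_secret="shared-secret",
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    "https://tool.example.org/lti/launch",
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(body)   # signed form body including oauth_signature, ready to POST to the tool
```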
The section then proceeds with further information on the discussion forum, a fundamental concept of MOOCs compared to traditional e-learning offerings. The section concludes with a description of the quiz recap, learning objectives, mobile applications, gameful learning, and the help desk.
The next part of this chapter deals with the admin features. The description is restricted to the news and announcements, dashboards and statistics, reporting capabilities, research options with A/B testing, the course feed, and the TransPipe tool, which supports the process of creating automated or manual subtitles. The platform supports a large variety of additional features, but a detailed description of them is beyond the scope of this report.
The chapter then elaborates on common course formats and openHPI teaching activities at the HPI. The chapter concludes with some best practices for course design and delivery.
The third chapter provides insights into the technology and architecture behind openHPI. A special characteristic of the openHPI project is the conscious decision to operate the complete application, from bare metal to platform development. Hence, the chapter starts with a section about the openHPI Cloud, including detailed information about the data center and devices, the cloud software in use (OpenStack and Ceph), as well as the openHPI Cloud Service provided for the HPI.
Afterward, a section on the application technology stack and development tooling describes the application infrastructure components, the automation in use, the deployment pipeline, and the tools used for monitoring and alerting. The chapter concludes with detailed information about the technology stack and concrete platform implementation details. The section describes the service-oriented Ruby on Rails application, inter-service communication, and public APIs. It also provides more information on the design system and components used in the application. The section concludes with a discussion of the original microservice architecture, where we share our insights and reasoning for migrating back to a monolithic application.
The last chapter provides a summary and an outlook on the future of digital education.
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings address researchers particularly, but not exclusively, from the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of research projects executed in 2018. Selected projects presented their results on April 17th and November 14th, 2017 at the Future SOC Lab Day events.
This open access book is about Mozambicans and Angolans who migrated in state-sponsored schemes to East Germany in the late 1970s and throughout the 1980s. They went to work and to be trained as a vanguard labor force for the intended African industrial revolutions. While they were there, they contributed their labor power to the East German economy. This book draws on more than 260 life history interviews and uncovers complex and contradictory experiences and transnational encounters. What emerges is a series of dualities that exist side by side in the memories of the former migrants: the state and the individual, work and consumption, integration and exclusion, loss and gain, and the past in the past and the past in the present and future. By uncovering these dualities, the book explores the lives of African migrants moving between the Third and Second worlds. Devoted to the memories of worker-trainees, this transnational study comes at a time when historians are uncovering the many varied, complicated, and important connections within the global socialist world.
The analysis of behavioral models such as Graph Transformation Systems (GTSs) is of central importance in model-driven engineering. However, GTSs often result in intractably large or even infinite state spaces and may be equipped with multiple or even infinitely many start graphs. To mitigate these problems, static analysis techniques based on finite symbolic representations of sets of states or paths thereof have been devised. We focus on the technique of k-induction for establishing invariants specified using graph conditions. To this end, k-induction generates symbolic paths backwards from a symbolic state representing a violation of a candidate invariant to gather information on how that violation could have been reached, possibly obtaining contradictions to assumed invariants. However, GTSs in which multiple agents regularly perform actions independently of each other cannot yet be analyzed using this technique, as the independence among backward steps may prevent the gathering of relevant knowledge altogether.
In this paper, we extend k-induction to GTSs with multiple agents, thereby supporting a wide range of additional GTSs. As a running example, we consider an unbounded number of shuttles driving on a large-scale track topology, which adjust their velocity to speed limits to avoid derailing. As a central contribution, we develop pruning techniques based on causality and independence among backward steps, and we verify that k-induction remains sound under this adaptation and terminates in cases where it did not terminate before.
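For readers unfamiliar with the proof principle, the following toy sketch shows the general shape of k-induction on an explicit, finite transition system: the base case checks the property on all states reachable within k steps, and the step case checks that k consecutive property-satisfying states can only be followed by another one. The paper's technique works symbolically and backwards on graph transformation systems, which this sketch does not attempt to capture; all names and the toy system are illustrative.

```python
def k_induction(states, init, succ, prop, k):
    """Toy k-induction on an explicit transition system: succ(s) yields the
    successors of s, prop(s) is the candidate invariant. Returns True if proved;
    False means the proof attempt failed, not necessarily that prop is violated."""
    # Base case: prop holds on every state reachable within k steps of an initial state.
    frontier = set(init)
    for _ in range(k + 1):
        if not all(prop(s) for s in frontier):
            return False
        frontier = {t for s in frontier for t in succ(s)}
    # Step case: any k consecutive prop-states are followed only by prop-states.
    paths = [(s,) for s in states if prop(s)]
    for _ in range(k - 1):
        paths = [p + (t,) for p in paths for t in succ(p[-1]) if prop(t)]
    return all(prop(t) for p in paths for t in succ(p[-1]))

# Toy system: a counter stepping by 2 modulo 6; "the state is even" is an invariant.
print(k_induction(range(6), [0], lambda s: [(s + 2) % 6], lambda s: s % 2 == 0, k=2))  # True
```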
Cyber-physical systems often encompass complex concurrent behavior with timing constraints and probabilistic failures on demand. The analysis of whether such systems with probabilistic timed behavior adhere to a given specification is essential. When the states of the system can be represented by graphs, the rule-based formalism of Probabilistic Timed Graph Transformation Systems (PTGTSs) can be used to suitably capture structure dynamics as well as probabilistic and timed behavior of the system. Model checking support for PTGTSs with respect to properties specified using Probabilistic Timed Computation Tree Logic (PTCTL) has already been presented. Moreover, for timed graph-based runtime monitoring, Metric Temporal Graph Logic (MTGL) has been developed for stating metric temporal properties on identified subgraphs and their structural changes over time.
In this paper, we (a) extend MTGL to the Probabilistic Metric Temporal Graph Logic (PMTGL) by allowing for the specification of probabilistic properties, (b) adapt our MTGL satisfaction checking approach to PTGTSs, and (c) combine the approaches for PTCTL model checking and MTGL satisfaction checking to obtain a Bounded Model Checking (BMC) approach for PMTGL. In our evaluation, we apply an implementation of our BMC approach in AutoGraph to a running example.
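As a minimal illustration of the "bounded" part of bounded model checking (and nothing more: probabilities, clocks, and the graph structure of PTGTSs and PMTGL are well beyond this toy), the sketch below searches breadth-first for a property violation reachable within a given number of steps. All names and the toy system are illustrative assumptions.

```python
from collections import deque

def bmc(init, succ, prop, bound):
    """Return a counterexample path with at most `bound` transitions that violates
    prop, or None if no such path exists within the bound."""
    queue = deque((s, (s,)) for s in init)
    while queue:
        state, path = queue.popleft()
        if not prop(state):
            return path                       # violation found within the bound
        if len(path) <= bound:
            queue.extend((t, path + (t,)) for t in succ(state))
    return None                               # no violation up to the given bound

# Toy: a counter incrementing from 0; "counter stays below 3" is violated after 3 steps.
print(bmc([0], lambda s: [s + 1], lambda s: s < 3, bound=5))   # (0, 1, 2, 3)
```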
Global Legitimacy Crises
(2022)
Global Legitimacy Crises addresses the consequences of legitimacy in global governance, asking in particular when and how legitimacy crises affect international organizations (IOs) and their capacity to rule. The book starts with a new conceptualization of legitimacy crisis that looks at public challenges from a variety of actors. Based on this conceptualization, it applies a mixed-methods approach to identify and examine legitimacy crises, starting with a quantitative analysis of mass media data on challenges to a sample of 32 IOs. It shows that some, but not all, organizations have experienced legitimacy crises, spread over several decades from 1985 to 2020. Following this, the book presents a qualitative study to further examine the legitimacy crises of two selected case studies: the WTO and the UNFCCC. Whereas earlier research assumed that legitimacy crises have negative consequences, the book introduces a theoretical framework that privileges the activation inherent in a legitimacy crisis. It holds that this activation may not only harm an IO but could also strengthen it, in terms of its material, institutional, and decision-making capacity. The following statistical analysis shows that whether a crisis has predominantly negative or positive effects depends on a variety of factors. These include the specific audience whose challenges define a certain crisis, and several institutional properties of the targeted organization. The ensuing in-depth analysis of the WTO and the UNFCCC further reveals how legitimacy crises and both their positive and negative consequences are interlinked, and that the effects of crises are sometimes visible even beyond organizational borders.
Python is used in a wide range of geoscientific applications, such as in processing images for remote sensing, in generating and processing digital elevation models, and in analyzing time series. This book introduces methods of data analysis in the geosciences using Python that include basic statistics for univariate, bivariate, and multivariate data sets, time series analysis, and signal processing; the analysis of spatial and directional data; and image analysis. The text includes numerous examples that demonstrate how Python can be used on data sets from the earth sciences. The supplementary electronic material (available online through Springer Link) contains the example data as well as recipes that include all the Python commands featured in the book.
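In the spirit of the book's recipes, here is a small self-contained sketch, using synthetic data rather than the book's supplementary data sets, that computes basic univariate statistics and the dominant period of a noisy time series.

```python
import numpy as np
from scipy import signal, stats

# Synthetic example data: a 5-year cycle sampled 10 times per year plus noise.
rng = np.random.default_rng(42)
t = np.arange(0, 100, 0.1)                        # e.g. time in years
series = np.sin(2 * np.pi * t / 5) + rng.normal(0, 0.5, t.size)

# Basic univariate statistics of the series.
print("mean:", np.mean(series), "std:", np.std(series, ddof=1))
print("skewness:", stats.skew(series), "kurtosis:", stats.kurtosis(series))

# Simple spectral analysis: the periodogram recovers the dominant period.
freqs, power = signal.periodogram(series, fs=10)  # sampling rate: 10 samples per year
print("dominant period (years):", 1 / freqs[np.argmax(power)])  # close to 5
```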
Transitional Justice
(2022)
This publication deals with the topic of transitional justice. In six case studies, the authors link theoretical and practical implications in order to develop some innovative approaches. Their proposals might help to deal more effectively with the transition of societies, legal orders and political systems.
Young academics from various backgrounds provide fresh insights and demonstrate the relevance of the topic. The chapters analyse transitions and conflicts in Sierra Leone, Argentina, Nicaragua, Nepal, and South Sudan as well as Germany’s colonial genocide in Namibia. Thus, the book provides the reader with new insights and contributes to the ongoing debate about transitional justice.
Linguistic hybridity
(2022)
This volume deals with different linguistic phenomena representing grammaticalization and lexicalization processes and combines different approaches of contact linguistics and variational linguistics. It contains papers on clitic placement in Angolan Portuguese, on the use of subject pronouns in French, Brazilian Portuguese and Caribbean Spanish, on evidential marking in Paraguayan Spanish, on Paraguayan Guaraní, on evidentiality in different varieties of Spanish, on discourse markers in Latin America, on modal particles in Italian and their translation into German, on bilingual communities in Southern Brazil, on Spanish-German-Russian language contact and on Romance aspectual periphrases in contact with English progressives.
Vienna
(2022)
This book explores and debates the urban transformations that have taken place in Vienna over the past 30 years and their consequences in policy fields such as labour and housing, political and social participation, and the environment. Historically, European cities have been characterised by a strong association between social cohesion, quality of life, economic ambition and a robust State. Vienna is an excellent example of this. In more recent years, however, cities have been pressured to change policy principles and mechanisms in the context of demographic shifts, post-industrial transformations and welfare recalibration, which have led to worsened social conditions in many cities. Each chapter in this volume discusses Vienna's responses to these pressures in key policy arenas, looking at outcomes from the context-specific local arrangements. Against a theoretical framework debating the European city as a model of inclusion and social justice, the authors explore the local capacity to innovate urban policies and to address new social risks, while paying attention to potential trade-offs.
The book questions and assesses the city's resilience using time series and an institutional analysis of four key dimensions that characterise the European city model within the context of post-industrial transition: redistribution, recognition, representation and sustainability. It offers a multiscalar perspective of urban governance through labour, housing, participatory and environmental policies, bringing together different levels and public policy types.
Culture and law
(2022)
Answering the legal questions connected with the end of life is subject to strong cultural and religious influences. Medical, philosophical, and historical aspects must also be taken into account. Because of the close connection between law and culture, countries with different cultural and religious backgrounds were selected for a comparative study on end-of-life questions. In France, Germany, and Switzerland with their continental legal systems, in Great Britain with its common law system, and in India and Japan, the various religions and cultures exert an important influence on the modernization of the relevant legislation. The book covers the most recent legislative changes and developments in the countries included in the study.