The analysis of behavioral models is of high importance for cyber-physical systems, as such systems often encompass complex behavior based on, e.g., concurrent components with mutual exclusion or probabilistic failures on demand. The rule-based formalism of probabilistic timed graph transformation systems (PTGTSs) is a suitable choice when the models representing states of the system can be understood as graphs and when timed and probabilistic behavior is important. However, model checking of PTGTSs is limited to systems with rather small state spaces.
We present an approach for the analysis of large-scale systems modeled as probabilistic timed graph transformation systems by systematically decomposing their state spaces into manageable fragments. To obtain qualitative and quantitative analysis results for a large-scale system, we verify that the results obtained for its fragments serve as overapproximations of the corresponding results for the full system. Hence, our approach allows for the detection of violations of qualitative and quantitative safety properties for the large-scale system under analysis. We consider a running example in which we model shuttles driving on the tracks of a large-scale topology and for which we verify that shuttles never collide and are unlikely to execute emergency brakes. In our evaluation, we apply an implementation of our approach to the running example.
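The core overapproximation idea can be illustrated with a small, purely hypothetical sketch (the function name and data layout are ours, not from the work itself): since each fragment's verification result overapproximates the corresponding result for the full system, any fragment whose worst-case probability exceeds a quantitative safety threshold flags a potential violation of the large-scale system.

```python
# Toy illustration (not the actual algorithm): fragment results
# overapproximate the full system's results, so any fragment exceeding
# a quantitative safety threshold indicates a potential violation.
def violating_fragments(fragment_results, threshold):
    """fragment_results maps a fragment name to its overapproximated
    worst-case probability of an unsafe event (e.g. an emergency brake).
    Returns the fragments for which a violation cannot be ruled out."""
    return [name for name, p in fragment_results.items() if p > threshold]
```

Conversely, if no fragment exceeds the threshold, the overapproximation guarantees that the full system respects it as well.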
Transitional Justice
(2022)
This publication deals with the topic of transitional justice. In six case studies, the authors link theoretical and practical implications in order to develop some innovative approaches. Their proposals might help to deal more effectively with the transition of societies, legal orders and political systems.
Young academics from various backgrounds provide fresh insights and demonstrate the relevance of the topic. The chapters analyse transitions and conflicts in Sierra Leone, Argentina, Nicaragua, Nepal, and South Sudan as well as Germany’s colonial genocide in Namibia. Thus, the book provides the reader with new insights and contributes to the ongoing debate about transitional justice.
Formal modeling and analysis are of crucial importance for software development processes that follow the model-based approach. We present the formalism of Interval Probabilistic Timed Graph Transformation Systems (IPTGTSs) as a high-level modeling language. This language supports structure dynamics (based on graph transformation), timed behavior (based on clocks, guards, resets, and invariants as in Timed Automata (TA)), and interval probabilistic behavior (based on Discrete Interval Probability Distributions). That is, for the probabilistic behavior, the modeler using IPTGTSs does not need to provide precise probabilities, which are often impossible to obtain, but rather a probability range from which a precise probability is chosen nondeterministically. This way of capturing probabilistic behavior distinguishes IPTGTSs from the Probabilistic Timed Graph Transformation Systems (PTGTSs) presented earlier.
Following earlier work on Interval Probabilistic Timed Automata (IPTA) and PTGTSs, we also provide an analysis tool chain for IPTGTSs based on inter-formalism transformations. In particular, we provide in our tool AutoGraph a translation of IPTGTSs to IPTA and rely on a mapping of IPTA to Probabilistic Timed Automata (PTA) to allow the use of the Prism model checker. Prism can then be used to analyze the resulting PTA with respect to probabilistic real-time queries asking for worst-case and best-case probabilities of reaching a certain set of target states in a given amount of time.
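To give an intuition for discrete interval probability distributions, the following minimal Python sketch (our own illustration, not part of the AutoGraph tool chain) resolves a set of probability intervals into one precise distribution, mimicking the nondeterministic choice described above:

```python
import random

# Hypothetical sketch of a discrete interval probability distribution:
# each outcome has a probability interval [lo, hi]; a precise distribution
# is chosen nondeterministically (here: at random) within those bounds,
# subject to the probabilities summing to 1.
def resolve(intervals):
    """intervals: dict outcome -> (lo, hi). Returns dict outcome -> p."""
    lo_sum = sum(lo for lo, _ in intervals.values())
    hi_sum = sum(hi for _, hi in intervals.values())
    assert lo_sum <= 1.0 <= hi_sum, "no valid distribution exists"
    outcomes = list(intervals)
    probs = {}
    remaining = 1.0
    for i, o in enumerate(outcomes):
        lo, hi = intervals[o]
        # bounds induced by the intervals of the outcomes still to come
        rest_lo = sum(intervals[x][0] for x in outcomes[i + 1:])
        rest_hi = sum(intervals[x][1] for x in outcomes[i + 1:])
        p_min = max(lo, remaining - rest_hi)
        p_max = min(hi, remaining - rest_lo)
        probs[o] = random.uniform(p_min, p_max)
        remaining -= probs[o]
    return probs
```

For example, `resolve({"ok": (0.9, 0.99), "fail": (0.01, 0.1)})` yields one concrete distribution compatible with the given intervals; in the formalism, an adversary makes this choice.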
In recent years, computer vision algorithms based on machine learning have seen rapid development. In the past, research mostly focused on solving computer vision problems such as image classification or object detection on images displaying natural scenes. Nowadays, other fields, such as cultural heritage, where an abundance of data is available, are also coming into the focus of research. In line with current research endeavours, we collaborated with the Getty Research Institute, which provided us with a challenging dataset containing images of paintings and drawings. In this technical report, we present the results of the seminar "Deep Learning for Computer Vision". In this seminar, students of the Hasso Plattner Institute evaluated state-of-the-art approaches for image classification, object detection, and image recognition on the dataset of the Getty Research Institute. The main challenge when applying modern computer vision methods to this data is the scarcity of annotated training data, as the dataset provided by the Getty Research Institute does not contain a sufficient number of annotated samples for training deep neural networks. However, throughout the report we show that it is possible to achieve satisfying to very good results when using further publicly available datasets, such as the WikiArt dataset, for training the machine learning models.
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, which want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause, as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they represent, as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would benefit greatly from interactions for exploring the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows for a consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that allow the efficient rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration; for our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach with respect to the domain problems and the visualization concepts we developed.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
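As a minimal illustration of the dot-per-individual concept and a bidirectional mapping (names and structure are our own sketch, not code from the report or from Lively4):

```python
# Illustrative sketch: a bidirectional mapping between data records and
# their dot representations, so that clicking a dot yields the individual
# behind it, and filtering on the data side updates the dots.
class DotMap:
    def __init__(self, records):
        self.record_to_dot = {}
        self.dot_to_record = {}
        for i, record in enumerate(records):
            dot = {"id": i, "highlighted": False}
            self.record_to_dot[id(record)] = dot
            self.dot_to_record[dot["id"]] = record

    def on_click(self, dot_id):
        """Direct interaction: a click on a dot returns its record."""
        return self.dot_to_record[dot_id]

    def highlight_where(self, predicate):
        """Filtering on the data side propagates back to the dots."""
        for record in self.dot_to_record.values():
            self.record_to_dot[id(record)]["highlighted"] = predicate(record)
```

Keeping both directions of the mapping explicit is what makes selection, filtering, and highlighting composable across different visualizations of the same data.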
Crochet is a popular handcraft all over the world. While other techniques such as knitting or weaving have received technical support over the years through machines, crochet is still a purely manual craft. Not only the act of crocheting itself is manual, but also the process of creating instructions for new crochet patterns, which is barely supported by domain-specific digital solutions. This leads to unstructured and often also ambiguous and erroneous pattern instructions. In this report, we propose a concept to digitally represent crochet patterns. This format incorporates crochet techniques, which allows for domain-specific support of crochet pattern designers during the pattern creation and instruction writing process. As contributions, we present a thorough domain analysis, the concept of a graph structure that serves as a domain-specific language for specifying crochet patterns, and a prototype of a projectional editor that uses the graph as the representation format for patterns, together with a diagramming system to visualize them in 2D and 3D. By analyzing the domain, we learned about crochet techniques and the pain points of designers in their pattern creation workflow. These insights are the basis on which we defined the pattern representation. To evaluate our concept, we built a prototype that demonstrates the feasibility of the concept, and we tested the software with professional crochet designers, who approved of the concept.
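To make the idea of a graph-based pattern representation concrete, here is a deliberately simplified sketch (stitch names and structure are our own illustration, not the actual format from the report): stitches are nodes, and each node records which earlier stitch it is worked into, so a pattern becomes an unambiguous graph rather than free-form text.

```python
# Hypothetical sketch of a graph-based crochet pattern: each stitch is a
# node, and the worked_into reference forms the edges of the graph.
class Stitch:
    def __init__(self, kind, worked_into=None):
        self.kind = kind                # e.g. "ring", "chain", "single"
        self.worked_into = worked_into  # parent stitch, or None at the start

def first_round(n_single):
    """A starting ring followed by n single crochets worked into it."""
    ring = Stitch("ring")
    return [ring] + [Stitch("single", worked_into=ring) for _ in range(n_single)]
```

Because every stitch names its anchor explicitly, such a graph can be checked for consistency and rendered as a 2D or 3D diagram, which plain-text instructions cannot.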
This master’s thesis examined internet content regulation in Germany from the perspective of Public-Private Partnerships (PPPs). In the European Union, there has been a recent trend of initiatives aiming to combat illegal content online under a self-regulatory regime. Concerns about this trend were that transparency cannot be ensured properly enough to safeguard the freedom of expression, and that private intermediaries are not able to carry out effective regulation under a non-binding regulatory process. In response to these issues, Germany legislated the Network Enforcement Act (NetzDG) in 2017. This thesis used Mixed Methods within a Case Study Research design to identify the PPP type of the NetzDG and to understand its link to transparency and effectiveness, as well as the relationship between these two dimensions. Taking an Exploratory Sequential Design, German internet content regulation under the NetzDG was explored to understand its co-regulatory regime and to develop an instrument for measuring aspects of transparency and effectiveness. Then, the three big social media platforms YouTube, Twitter, and Facebook were examined according to the developed indicators. This thesis concluded as follows: First, the enactment of the NetzDG shifted the regulatory paradigm from self-regulation to co-regulation. Yet, the actor-inclusive institutional arrangement of the NetzDG did not result in the actual inclusion of actors in decision-making, but only improved result transparency in the disclosure of take-down actions. Second, the level of effective regulation was not consistent across the three social media platforms under this regime. Despite these limitations, this study showed that the transparency and the effectiveness of the social media platforms’ implementation gradually improved together, instead of being negatively correlated with one another.
Multipliers of Change
(2020)
Higher Education Leadership and Management have become increasingly important throughout the years due to the complexities that have to be addressed by universities worldwide. This can be seen not only in professionalisation in fields such as faculty management or in areas of quality assurance and internationalisation, but also in the need for exchange and training in academic leadership, such as that of deans or study deans, or of university leadership in general.
The Dialogue on Innovative Higher Education Strategies (DIES) is addressing this need in emerging countries by building platforms of exchange and offering training courses. Not only is the programme supporting capacity building of human resources, but it is also specifically focusing on inducing change within the universities, such as introducing new instruments or tools in the area of quality assurance and internationalisation, and addressing specific challenges or setting up new structures in the form of projects in the frame of the training. The ‘National Multiplication Trainings’ Programme under DIES is further addressing the sustainability and multiplication of the DIES Programme, that is, alumni are enabled to implement capacity building in higher education leadership and management in their national context.
The articles within this volume of the “Potsdamer Beiträge zur Hochschulforschung” (Potsdam Contributions to Higher Education Research) analyse and share the experiences of such training programmes held in Colombia, Democratic Republic of Congo, Guinea, Malaysia, Kenya, and Uganda. They all revolve around the best ways to address the needs and challenges in higher education leadership and management, and in building capacities in these areas.
In this study we examine the tonal organization of a series of recordings of liturgical chants, sung in 1966 by the Georgian master singer Artem Erkomaishvili. This dataset is the oldest corpus of Georgian chants for which the time-synchronous F0-trajectories of all three voices have been reliably determined (Müller et al. 2017). It is therefore of outstanding importance for the understanding of the tuning principles of traditional Georgian vocal music.
The aim of the present study is to use various computational methods to analyze what these recordings can contribute to the ongoing scientific dispute about traditional Georgian tuning systems. The starting point for the present analysis is the re-release of the original audio data together with estimated fundamental frequency (F0) trajectories for each of the three voices, beat annotations, and digital scores (Rosenzweig et al. 2020). We present synoptic models for the pitch and the harmonic interval distributions, which are the first such models for which the complete Erkomaishvili dataset was used. We show that these distributions can be expressed very compactly as Gaussian mixture models, anchored on discrete sets of pitch or interval values for the pitch and interval distributions, respectively. As part of our study, we demonstrate that these pitch values, which we refer to as scale pitches and which are determined as the mean values of the Gaussian mixture elements, define the scale degrees of the melodic sound scales that form the skeleton of Artem Erkomaishvili’s intonation. The observation of consistent pitch bending of notes in melodic phrases that appear in identical form in a group of chants, as well as the observation of harmonically driven intonation adjustments, which are clearly documented for all pure harmonic intervals, demonstrates that Artem Erkomaishvili intentionally deviates from the scale pitch skeleton quite freely. As a central result of our study, we prove that this melodic freedom is always constrained by the attracting influence of the scale pitches. Deviations of the F0-values of individual note events from the scale pitches at one instant are compensated for in the subsequent melodic steps. This suggests a deviation-compensation mechanism at the core of Artem Erkomaishvili’s melody generation, which clearly honors the scales but still allows for a large degree of melodic flexibility.
This model, which summarizes all partial aspects of our analysis, is consistent with the melodic scale models derived from the observed pitch distributions, as well as with the melodic and harmonic interval distributions. Beyond these tangible results, we believe that our work has general implications for the determination of tuning models from audio data, in particular for non-tempered music.
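As a rough illustration of how scale pitch candidates can be read off a pitch distribution, the following sketch uses simple histogram peaks (our own simplification; the study itself fits Gaussian mixture models to the distribution and takes the component means as scale pitches):

```python
import math
from collections import Counter

# Simplified illustration (not the study's pipeline): convert F0 values to
# cents relative to a reference, build a coarse pitch histogram, and take
# its local maxima as candidate scale pitches.
def candidate_scale_pitches(f0_hz, ref_hz=110.0, bin_cents=25):
    cents = [1200 * math.log2(f / ref_hz) for f in f0_hz]
    hist = Counter(round(c / bin_cents) for c in cents)
    peaks = []
    for b, count in hist.items():
        # a bin is a peak if no neighboring bin is more populated
        if count >= hist.get(b - 1, 0) and count > hist.get(b + 1, 0):
            peaks.append(b * bin_cents)
    return sorted(peaks)
```

On real F0 trajectories, replacing the histogram peaks with the means of a fitted Gaussian mixture additionally yields the spread of each scale degree, which is what makes the attraction of sung pitches toward the scale pitches measurable.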
This book features four essays that illuminate the relationship between American and Soviet film cultures in the 20th century.
The first essay emphasizes the structural similarities and dissimilarities of the two cultures. Both wanted to reach the masses. However, the goal in Hollywood was to entertain (and educate a little) and in Moscow to educate (and entertain a little).
Some films in the Soviet Union as well as in the United States were conceived as clear competition to one another – as the second essay demonstrates – and the ideological opponent was not shown from its most advantageous side.
The third essay shows how, in the 1980s, the different film cultures made it difficult for the Soviet director Andrei Konchalovsky to establish himself in the US, but nevertheless allowed him to succeed.
In the 1960s, a genre became popular that tells the story of the Russian Civil War using stylistic features of the Western: The Eastern. Its rise and decline are analyzed in the fourth essay.