Does personality matter? Is an individual who is open to experience more or less likely to become an entrepreneur? Is it better to score low or high in agreeableness for surviving as an entrepreneur? To the extent that personality captures one part of entrepreneurial abilities, which are usually unobservable, the analysis of traits and personality characteristics helps to better understand such abilities. This chapter reviews research on the relationship between personality and entrepreneurship since 2000 and shows that possessing certain personality characteristics makes it more likely that an individual will start their own business and hire staff. More specifically, with respect to the entry decision, research finds that nearly all so-called Big Five factors as well as several specific personality characteristics influence the probability of entry into entrepreneurship. Further, entrepreneurs are more likely to hire the higher they score in risk tolerance, trust, openness to experience, and conscientiousness. However, different factors, such as low scores in agreeableness, the only Big Five factor that does not affect entrepreneurial entry, influence entrepreneurial survival. And for some of the characteristics that influence entrepreneurial entry, such as high scores in openness to experience or in risk tolerance, “revolving door effects” are found, explaining why some entrepreneurs subsequently exit the market again.
Fighting false information
(2023)
The digital transformation poses challenges for public sector organizations (PSOs) such as the dissemination of false information in social media which can cause uncertainty among citizens and decrease trust in the public sector. Some PSOs already successfully deploy conversational agents (CAs) to communicate with citizens and support digital service delivery. In this paper, we used design science research (DSR) to examine how CAs could be designed to assist PSOs in fighting false information online. We conducted a workshop with the municipality of Kristiansand, Norway to define objectives that a CA would have to meet for addressing the identified false information challenges. A prototypical CA was developed and evaluated in two iterations with the municipality and students from Norway. This research-in-progress paper presents findings and next steps of the DSR process. This research contributes to advancing the digital transformation of the public sector in combating false information problems.
Living in a world of plenty?
(2020)
Inequality in the distribution of economic wealth within populations has been rising steadily over the past century, having reached unprecedented highs in many Western societies. However, this development is not reflected in people’s perceptions of wealth inequality, as the public tends to underestimate it. Research suggests that inequality estimates are derived from personal reference groups, which, as we propose, are expanded by social network site (SNS) use. As content on SNSs frequently revolves around events of consumption, signaling enhanced overall population wealth, this study tests the hypothesis that SNS use distorts inequality perceptions downward, i.e., increases the perception of societal equality. Responses of 534 survey participants in the United States confirm that SNS use negatively predicts perceived inequality. The relationship is stronger the more SNS users perceive the content they encounter online as real, supporting the assumption that observing other people’s behavior online lowers estimates of nationwide wealth inequality. These findings provide novel insights on inequality misperceptions by suggesting individuals’ SNS use as a new predictor of perceived wealth inequality.
This technical report presents the results of student projects which were prepared during the lecture “Operating Systems II” offered by the “Operating Systems and Middleware” group at HPI in the Summer term of 2020. The lecture covered advanced aspects of operating system implementation and architecture on topics such as Virtualization, File Systems and Input/Output Systems. In addition to attending the lecture, the participating students were encouraged to gather practical experience by completing a project on a closely related topic over the course of the semester. The results of 10 selected exceptional projects are covered in this report.
The students have completed hands-on projects on the topics of Operating System Design Concepts and Implementation, Hardware/Software Co-Design, Reverse Engineering, Quantum Computing, Static Source-Code Analysis, Operating Systems History, Application Binary Formats and more. It should be recognized that over the course of the semester all of these projects have achieved outstanding results which went far beyond the scope and the expectations of the lecture, and we would like to thank all participating students for their commitment and their effort in completing their respective projects, as well as their work on compiling this report.
In the last two centuries BC, with the Republic limping towards its end, the cultivated ruling elite began to lose its moral and political authority.1 Its members not only held themselves responsible for the so-called crisis of tradition, but at the same time also conveyed the impression of a loss of memory, as if all Romans were suffering from some kind of amnesia or identity crisis.2 In particular, institutional figures such as pontiffs and augurs, who had preserved Rome’s memory throughout its history, were accused of neglecting their duties and, by extension, of allowing ancient practices and values to slowly disappear.3 Accordingly, Cicero and Varro, both perfect representatives of this elite, employed recurrent terms such as neglect (neglegentia/neglegere), involuntary abandon (amittere), oblivion (oblivio), vanishing of institutions (evanescere), and ignorance (ignoratio/ignorare) to describe this critical loss of information; they depicted the citizenry of Rome (civitas) as disoriented and estranged, incapable of sharing any common knowledge or values.
Digital technology offers significant political, economic, and societal opportunities. At the same time, the notion of digital sovereignty has become a leitmotif in German discourse: the state’s capacity to assume its responsibilities and safeguard society’s – and individuals’ – ability to shape the digital transformation in a self-determined way. The education sector is exemplary for the challenge faced by Germany, and indeed Europe, of harnessing the benefits of digital technology while navigating concerns around sovereignty. It encompasses education as a core public good, a rapidly growing field of business, and growing pools of highly sensitive personal data. The report describes pathways to mitigating the tension between digitalization and sovereignty at three different levels – state, economy, and individual – through the lens of concrete technical projects in the education sector: the HPI Schul-Cloud (state sovereignty), the MERLOT data spaces (economic sovereignty), and the openHPI platform (individual sovereignty).
How do social changes, new technologies or new management trends affect communication work? A team of researchers at Leipzig University and the University of Potsdam (Germany) observed new developments in related disciplines. As a result, the five most important trends for corporate communications are identified annually and published in the Communications Trend Radar. Thus, communications managers can identify challenges and opportunities at an early stage, take a position, address issues and make decisions. For 2023, the Communications Trend Radar identifies five key trends for corporate communications: State Revival, Scarcity Management, Unimagination, Parallel Worlds, Augmented Workflows.
During the outbreak of the COVID-19 pandemic, many people shared their symptoms across Online Social Networks (OSNs) like Twitter, hoping for others’ advice or moral support. Prior studies have shown that those who disclose health-related information across OSNs often tend to regret it and delete their publications afterwards. Hence, deleted posts containing sensitive data can be seen as manifestations of online regrets. In this work, we present an analysis of deleted content on Twitter during the outbreak of the COVID-19 pandemic. For this, we collected more than 3.67 million tweets describing COVID-19 symptoms (e.g., fever, cough, and fatigue) posted between January and April 2020. We observed that around 24% of the tweets containing personal pronouns were deleted either by their authors or by the platform after one year.
As a practical application of the resulting dataset, we explored its suitability for the automatic classification of regrettable content on Twitter.
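The deletion-rate measure described in the abstract can be sketched as a small filter over re-crawled tweets. This is a minimal illustration, not the authors' actual pipeline: the record keys (`text`, `deleted`) and the pronoun list are assumptions made for the example.

```python
import re

# illustrative first-person pronoun pattern (assumed, not the study's lexicon)
PRONOUNS = re.compile(r"\b(i|me|my|we|our|us)\b", re.IGNORECASE)

def deletion_rate(tweets):
    """Share of tweets containing first-person pronouns that were later
    deleted. Each tweet is a dict with hypothetical keys 'text' and
    'deleted' (True if the tweet was gone at re-crawl time)."""
    personal = [t for t in tweets if PRONOUNS.search(t["text"])]
    if not personal:
        return 0.0
    return sum(t["deleted"] for t in personal) / len(personal)

# toy sample of symptom tweets
sample = [
    {"text": "I have a fever and a dry cough", "deleted": True},
    {"text": "My fatigue is getting worse", "deleted": False},
    {"text": "COVID-19 case counts rise again", "deleted": True},  # no pronoun, ignored
    {"text": "We both lost our sense of smell", "deleted": False},
]
print(deletion_rate(sample))  # 1 of 3 personal tweets deleted
```

The third tweet is excluded from the denominator because it contains no personal pronoun, mirroring the paper's restriction to tweets with personal pronouns.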
As part of digitization, the role of artificial systems as new actors in knowledge-intensive processes requires recognizing them as a new form of knowledge bearer alongside traditional knowledge bearers such as individuals, groups, and organizations. To date, artificial intelligence (AI) methods have been used in knowledge management (KM) for knowledge discovery and for the reinterpretation of information, and recent works focus on studying the implementation of different AI technologies for knowledge management, such as big data, ontology-based methods, and intelligent agents [1]. However, there is a lack of a holistic management approach that considers artificial systems as knowledge bearers. The paper therefore designs a new kind of KM approach that integrates the technical level of knowledge and manifests as Neuronal KM (NKM). Superimposing traditional KM approaches with the NKM, Symbiotic Knowledge Management (SKM) is furthermore conceptualized, so that human as well as artificial kinds of knowledge bearers can be managed in symbiosis. First use cases demonstrate the new KM, NKM and SKM approaches in a proof-of-concept and exemplify their differences.
Developing a new paradigm
(2020)
Internet users commonly agree that it is important for them to protect their personal data. However, the same users readily disclose their data when requested by an online service. The dichotomy between privacy attitude and actual behaviour is commonly referred to as the “privacy paradox”. Over twenty years of research have not been able to provide one comprehensive explanation for the paradox, and seem even further from providing actual means to overcome it. We argue that the privacy paradox is not just an instantiation of the attitude-behaviour gap. Instead, we introduce a new paradigm explaining the paradox as the result of attitude-intention and intention-behaviour gaps. Historically, motivational goal-setting psychologists addressed the issue of intention-behaviour gaps in terms of the Rubicon Model of Action Phases and argued that commitment and volitional strength are an essential mechanism that fuels intentions and translates them into action. Thus, in this study we address the privacy paradox from a motivational psychological perspective by developing two interventions on Facebook and assessing whether the 287 participants of our online experiment actually change their privacy behaviour. The results demonstrate the presence of an intention-behaviour gap and the efficacy of our interventions in reducing the privacy paradox.
In this chapter, we conduct bibliometric performance analyses and a co-citation analysis on all articles relating to family firms indexed in Scopus and Web of Science and all articles published in the Family Business Review, Journal of Family Business Management, and the Journal of Family Business Strategy. Based on the literature sample of 4,056 articles published between 1960 and 2020 by 3,600 authors in 783 journals and their 175,163 references, we identify the most productive and most cited journals, the most cited authors, and the 25 most cited articles. Our science mapping reveals the agency theory, definitions, entrepreneurship, internationalization, ownership, resources, socioemotional wealth, and succession as the predominant research themes in family firm research. Whereas entrepreneurship explicitly appears in one of the clusters, innovation does not yet. Based on our findings, we propose a research framework and point to several research gaps to be addressed by future research.
A growing number of business processes can be characterized as knowledge-intensive. The ability to speed up the transfer of knowledge between any kind of knowledge carriers in business processes with AR techniques can lead to a huge competitive advantage, for instance in manufacturing. This includes the transfer of person-bound knowledge as well as externalized knowledge of physical and virtual objects. The contribution builds on a time-dependent knowledge transfer model and conceptualizes an adaptable, AR-based application. With the intention of accelerating the speed of knowledge transfers between a manufacturer and an information system, empirical results of an experiment show the validity of this approach. For the first time, it will be possible to discover how to improve the transfer among knowledge carriers of an organization with knowledge-driven information systems (KDIS). Within an experimental setting, the paper shows how an adaptable KDIS improves the quantitative effects regarding the quality of, and the amount of time needed for, the realization of an example manufacturing process.
Data sharing requires researchers to publish their (primary) data and any supporting research materials. With increased attention on reproducibility and more transparent research requiring sharing of data, the issues surrounding data sharing are moving beyond whether data sharing is beneficial, to what kind of research data should be shared and how. However, despite its benefits, data sharing still is not common practice in Information Systems (IS) research. The panel seeks to discuss the controversies related to data sharing in research, specifically focusing on the IS discipline. It remains unclear how the positive effects of data sharing that are often framed as extending beyond the individual researcher (e.g., openness for innovation) can be utilized while reducing the downsides often associated with negative consequences for the individual researcher (e.g., losing a competitive advantage). To foster data sharing practices in IS, the panel will address this dilemma by drawing on the panelists’ expertise.
Missing out on life
(2020)
Mobile devices have become an integral part of everyday life due to their portability. As literature shows, technology use is not only beneficial but also has dark sides, such as addiction. Parents face the need to balance perceived benefits and risks of children’s exposure to mobile technologies. However, no study has uncovered what kind of benefits and concerns parents consider when implementing technology-related rules. We built on qualitative responses of 300 parents of children aged two to thirteen to explore concerns about, and perceived benefits of, children’s smartphone and tablet usage, as well as the rules parents have developed regarding technology use. Findings point to concerns regarding children’s development, as well as benefits for both children and parents, and ultimately to new insights about mobile technology mediation. These results provide practical guidance for parents, physicians and mobile industry stakeholders, trying to ensure that children are acting responsibly with mobile technology.
How messy is your news feed
(2020)
Social Networking Sites (SNSs) are pervasive in our daily lives. However, emerging reports suggest that people are increasingly dissatisfied with their experience of SNS News Feeds. Motivated by cognitive load theory, the paper postulates that arrangement and presentation of information are important constituents of one’s Facebook News Feed experience. Integrating these factors into the novel concept of ‘perceived disorder’, this paper hypothesizes that the perception of disorder elicited by the Facebook News Feed plays an important role in causing discontinuance intentions. Drawing on the Stressor-Strain-Outcome Model, we suggest that perceived disorder leads to SNS discontinuance intention and is partially mediated by SNS fatigue. The paper uses the responses of 268 Facebook users to investigate these relationships and introduces perceived disorder as a novel stressor. Besides adding to the existing body of literature, these insights are of relevance to internet service providers, policy makers and SNS users.
Perfectionism is a personality disposition characterized by setting extremely high performance-standards coupled with critical self-evaluations. Often conceived as positive, perfectionism can yield not only beneficial but also deleterious outcomes ranging from anxiety to burnout. In this proposal, we set out to investigate the role of the technology and, particularly, social media in individuals’ strivings for perfection. We lay down theoretical bases for the possibility that social media plays a role in the development of perfectionism. To empirically test the hypothesized relationship, we propose a comprehensive study design based on the experience sampling method. Lastly, we provide an overview of the planned analysis and future steps.
Coming back for more
(2022)
Recent spikes in social networking site (SNS) usage times have launched investigations into reasons for excessive SNS usage. Extending research on social factors (i.e., fear of missing out), this study considers the News Feed setup. More specifically, we suggest that the order of the News Feed (chronological vs. algorithmically assembled posts) affects usage behaviors. Against the background of the variable reward schedule, this study hypothesizes that the different orders afford serendipity to different degrees. Serendipity, defined as unexpected lucky encounters with information, resembles variable rewards. Studies have evidenced a relation between variable rewards and excessive behaviors. Similarly, we hypothesize that order-induced serendipitous encounters affect SNS usage times and explore this link in a two-wave survey with an experimental setup (users using either chronological or algorithmic News Feeds). While theoretically extending explanations for increased SNS usage times by considering the News Feed order, practically the study will offer recommendations for relevant stakeholders.
Visual Social Networking Sites (SNSs) enable users to present themselves favorably to gain likes and the attention of others. Especially, Instagram is known for its focus on beauty, fitness, fashion, and dietary topics. Although a large body of research reports negative weight-related outcomes of SNS usage (e.g., body dissatisfaction, body image concerns), studies examining how SNS usage relates to these outcomes are scarce. Based on the visual normalization theory, we argue that SNS content facilitates normalization of so-called thin- and fit-ideals, thereby leading to biased perceptions of the average body weight in society. Therefore, this study tests whether Instagram use is associated with perceiving that the average person weighs less. Responses of 181 survey participants confirm that Instagram use is negatively related to average weight perception of both women and men. These findings contribute to the growing body of research on how SNS use relates to negative weight-related outcomes.
Despite the merits of public and social media in private and professional spaces, citizens and professionals are increasingly exposed to cyberabuse, such as cyberbullying and hate speech. Thus, Law Enforcement Agencies (LEAs) are deployed in many countries and organisations to enhance the preventive and reactive capabilities against cyberabuse. However, their tasks are getting more complex due to the increasing amount and varying quality of information disseminated into public channels. Adopting the perspectives of Crisis Informatics and safety-critical Human-Computer Interaction (HCI) and based on both a narrative literature review and group discussions, this paper first outlines the research agenda of the CYLENCE project, which seeks to design strategies and tools for cross-media reporting, detection, and treatment of cyberbullying and hate speech in investigative and law enforcement agencies. Second, it identifies and elaborates seven research challenges with regard to the monitoring, analysis and communication of cyberabuse in LEAs, which serve as a starting point for in-depth research within the project.
Strategic management is the deliberate engagement of an administration with the challenges of fulfilling its mission and ensuring and improving its ability to act by clarifying measures of success, an understanding of how to influence patterns of action, and organizational learning. In this respect, it is not just about planning, but about an understanding of the emerging strategies of the administration in fulfilling its tasks and the use of opportunities for performance improvement, taking into account stakeholder expectations, resource base and organizational capabilities.
'Tools' in public management
(2022)
Tools are methods or procedures, and thus operational patterns of action, applied in public administrations to solve standard problems. It is also possible to consider them as structured communication according to professional standards aiming at complexity reduction. Regularly, tools in management rest on a deductive-synoptic rationale offering a seemingly ‘objective’ decision basis. They have a strong formative influence on the organization, regularly also beyond the intended effects. The prominence of tools is sometimes confused with management as such, e.g. introducing tools is mistaken as equivalent to managing for a particular purpose. However, tools have to be closely and carefully managed regarding the objectives and purposes they should serve.
Manufacturing companies still have relatively few points of contact with the circular economy. In particular, extending the lifetime of whole products or parts via remanufacturing is a promising approach to reduce waste. However, the necessary cost-efficient assessment of the condition of the individual parts is challenging, and assessment procedures are technically complex (e.g., scanning and testing procedures). Furthermore, these assessment procedures are usually only available after the disassembly process has been completed. This is where conceptualization, data acquisition and simulation of remanufacturing processes can help. One major constraining aspect of remanufacturing is reducing logistic efforts, since these also have negative external effects on the environment. Thus, regionalization is an additional but ultimately consequential challenge for remanufacturing. This article aims to fill a gap by providing a regional remanufacturing approach, in particular the design of local remanufacturing chains. A further focus lies on modeling and simulating alternative courses of action, including a feasibility study and economic assessment.
Since more and more production tasks are enabled by Industry 4.0 techniques, the number of knowledge-intensive production tasks increases, as trivial tasks can be automated and only non-trivial tasks demand human-machine interaction. With this, challenges occur regarding the competence of production workers, the complexity of tasks and the stickiness of the required knowledge [1]. Furthermore, workers experience time pressure, which can lead to a decrease in output quality. Cyber-Physical Systems (CPS) have the potential to assist workers in knowledge-intensive work grounded on quantitative insights about knowledge transfer activities [2]. By providing contextual and situational awareness as well as complex classification and selection algorithms, CPS are able to ease knowledge transfer in a way that production time and quality are improved significantly. So far, CPS have only been used for direct production and process optimization, while knowledge transfers have only been regarded in assistance systems with little contextual awareness. Combining production and knowledge transfer optimization thus shows potential for further improvements. This contribution outlines the requirements and a framework to design such systems, accounting for the relevant factors.
The authors propose that while tacit knowledge is a valuable resource for developing new business models, its externalization presents several challenges. One major challenge is that individuals often do not recognize their tacit knowledge resources, while another is the reluctance to share one’s knowledge with others. Addressing these challenges, the authors present an application-oriented serious game-based haptic modeling approach for externalizing tacit knowledge, which can be used to develop the first versions of business models based on tacit knowledge. Both conceptual and practical design fundamentals are presented based on elaborated theoretical approaches, which were developed with the help of a design science approach. The development of the research process is presented step by step, whereby we focused on the high accessibility of the presented research. Practitioners are presented with guidelines for implementing their serious game projects. Scientists benefit from starting points for their research topics of externalization, internalization, and socialization of tacit knowledge, development of business models, and serious games or gamification. The paper concludes with open research desiderata and questions from the presented research process.
Business processes are regularly modified either to capture requirements from the organization’s environment or due to internal optimization and restructuring. Implementing the changes into the individual work routines is aided by change management tools. These tools aim at the acceptance of the process by and empowerment of the process executor. They cover a wide range of general factors and seldom accurately address the changes in task execution and sequence. Furthermore, change is only framed as a learning activity, while most obstacles to change arise from the inability to unlearn or forget behavioural patterns one is acquainted with. Therefore, this paper aims to develop and demonstrate a notation to capture changes in business processes and identify elements that are likely to present obstacles during change. It connects existing research from changes in work routines and psychological insights from unlearning and intentional forgetting to the BPM domain. The results contribute to more transparency in business process models regarding knowledge changes. They provide better means to understand the dynamics and barriers of change processes.
As AI technology is increasingly used in production systems, different approaches have emerged, from highly decentralized small-scale AI at the edge level to centralized, cloud-based services used for higher-order optimizations. Each direction has disadvantages, ranging from the lack of computational power at the edge level to the reliance on stable network connections with the centralized approach. Thus, a hybrid approach with centralized and decentralized components that possess specific abilities and interact is preferred. However, the distribution of AI capabilities leads to problems in self-adapting learning systems, as knowledge bases can diverge when no central coordination is present. Edge components will specialize in distinctive patterns (overlearn), which hampers their adaptability for different cases. Therefore, this paper aims to present a concept for a distributed interchangeable knowledge base in CPPS. The approach is based on various AI components and concepts for each participating node. A service-oriented infrastructure allows a decentralized, loosely coupled architecture of the CPPS. By exchanging knowledge bases between nodes, the overall system should become more adaptive, as each node can “forget” its present specialization.
Introduction
(2021)
Law of raw data
(2021)
Law of Raw Data gives an overview of the legal situation across major countries and how such data is contractually handled in practice in the respective countries. In recent years, digital technologies have transformed business and society, impacting all sectors of the economy and a wide variety of areas of life. Digitization is leading to rapidly growing volumes of data with great economic potential. Data, in its raw or unstructured form, has become an important and valuable economic asset, and protection of raw data has become a crucial subject for the intellectual property community. As legislators struggle to develop a settled legal regime in this complex area, this invaluable handbook will offer a careful and dedicated analysis of the legal instruments and remedies, both existing and potential, that provide such protection across a wide variety of national legal systems.
What’s in this book:
Produced under the auspices of the International Association for the Protection of Intellectual Property (AIPPI), more than forty active specialists of the association from twenty-three countries worldwide contribute national chapters on the relevant law in their respective jurisdictions. The contributions thoroughly explain how each country approaches such crucial matters as the following:
- whether there is any intellectual property right available to protect raw data;
- the nature of such intellectual property rights that exist in unstructured data;
- contracts on data and which legal boundaries stand in the way of contract drafting;
- liability for data products or services; and
- questions of international private law and cross-border portability.
Each country’s rules concerning specific forms of data – such as data embedded in household appliances and consumer goods, criminal offence data, data relating to human genetics, tax and bank secrecy, medical records, and clinical trial data – are described, drawing on legislation, regulation, and case law.
How this will help you:
A matchless legal resource on one of the most important raw materials of the twenty-first century, this book provides corporate counsel, practitioners and policymakers working in the field of intellectual property rights, and concerned academics with both a broad-based global overview on emerging legal strategies in the protection of unstructured data and the latest information on existing legislation and regulation in the area.
Decubitus is one of the most relevant diseases in nursing and the most expensive to treat. It is caused by sustained pressure on tissue, so it particularly affects bed-bound patients. This work lays a foundation for pressure mattress-based decubitus prophylaxis by implementing a solution to the single-frame 2D Human Pose Estimation problem.
For this, methods of Deep Learning are employed. Two approaches are examined, a coarse-to-fine Convolutional Neural Network for direct regression of joint coordinates and a U-Net for the derivation of probability distribution heatmaps.
We conclude that training our models on a combined dataset of the publicly available Bodies at Rest and SLP data yields the best results. Furthermore, various preprocessing techniques are investigated, and a hyperparameter optimization is performed to discover an improved model architecture.
Another finding indicates that the heatmap-based approach outperforms direct regression.
This model achieves a mean per-joint position error of 9.11 cm for the Bodies at Rest data and 7.43 cm for the SLP data.
We find that it generalizes well on data from mattresses other than those seen during training but has difficulties detecting the arms correctly.
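To make the heatmap-based approach and its evaluation metric concrete, the following sketch decodes one joint position per heatmap via argmax and computes the mean per-joint position error. This is an illustrative simplification under our own assumptions (plain nested lists, a uniform grid-cell size, hypothetical function names), not the thesis implementation:

```python
import math

def decode_heatmaps(heatmaps, cell_size_cm):
    """Decode one (x, y) position per joint by taking the argmax cell
    of each per-joint heatmap (hypothetical grid-based simplification)."""
    joints = []
    for hm in heatmaps:
        rows, cols = len(hm), len(hm[0])
        r_best, c_best = max(
            ((r, c) for r in range(rows) for c in range(cols)),
            key=lambda rc: hm[rc[0]][rc[1]],
        )
        joints.append((c_best * cell_size_cm, r_best * cell_size_cm))
    return joints

def mean_per_joint_position_error(predicted, ground_truth):
    """Mean Euclidean distance (in cm) over all predicted/true joint pairs."""
    distances = [math.dist(p, t) for p, t in zip(predicted, ground_truth)]
    return sum(distances) / len(distances)
```

Averaging this error over a test set yields scores comparable to the 9.11 cm and 7.43 cm figures reported above.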
Additionally, we give a brief overview of annoto, the medical data annotation tool we developed in the bachelor project, and conclude that the Scrum framework and agile practices enhanced our development workflow.
This chapter consists of three parts. In the first part, I will give a short overview of the integration of the protection of the environment into German constitutional law. This section will start with a presentation of the relevant provision, Art. 20a BL, after which I will elaborate on its legal character. In the second part, I will make some brief remarks on the practical implications of Art. 20a BL. Finally, I will present some preliminary conclusions.
Article 15ter Exercise of jurisdiction over the crime of aggression (Security Council referral)
(2022)
Article 15bis. Exercise of jurisdiction over the crime of aggression (State referral, proprio motu)
(2022)
As part of the current overall process of de-formalization in international law, States increasingly choose informal, non-legally binding agreements or ‘Memoranda of Understanding’ (‘MOUs’) to organize their international affairs. For all its flexibility, however, the increasing conclusion of such legally non-binding instruments also leads to uncertainties in international relations. Against this background, this article deals with the possible indirect legal consequences produced by MOUs. It discusses the different legal mechanisms and avenues that may give rise to secondary legal effects of MOUs through a process of interaction with, and interpretation in line with, other (formal) sources of international law. The article further considers various strategies for avoiding such possibly unintended or unexpected indirect legal effects of MOUs, both when drafting such instruments and when dealing with them after their respective ‘adoption’.
Would the world be a better place if one were to adopt a European approach to state immunity?
(2021)
This chapter argues not only that there is no European Sonderweg (or ‘special way’) when it comes to the law of state immunity but that there ought not to be one. Debates within The Hague Conference on Private International Law in the late 1990s and those leading to the adoption of the 2002 UN Convention on Jurisdictional Immunities of States, as well as the development of the EU Brussels Regulation on Jurisdiction and Enforcement, as amended in 2015, all demonstrate that state immunity was not meant to be limited by such treaties but ‘safeguarded’. Likewise, there is no proof that regional European customary law limits state immunity when it comes to ius cogens violations, as Italy and (partly) Greece are the only European states denying state immunity in such cases while the European Court of Human Rights has, time and again, upheld a broad concept of state immunity. It therefore seems unlikely that in the foreseeable future a specific European customary law norm on state immunity will develop, especially given the lack of participation in such practice by those states most concerned by the matter, including Germany. This chapter considers the possible legal implications of the jurisprudence of the Italian Constitutional Court for European military operations (if such operations went beyond peacekeeping). These implications would mainly depend on the question of attribution: if one were to assume that acts undertaken within the framework of military operations led by the EU were to be, at least also, attributable to the troop-contributing member states, the respective troop-contributing state would be entitled to enjoy state immunity exactly to the same degree as in any kind of unilateral military operations.
Additionally, some possible perspectives beyond Sentenza 238/2014 are examined, in particular concerning the redress awarded by domestic courts ‘as long as’ neither the German nor the international system grant equivalent protection to the victims of serious violations of international humanitarian law committed during World War II. In the author’s opinion, strengthening the jurisdiction of international courts and tribunals, bringing interstate cases for damages before the International Court of Justice, as well as providing for claims commissions where individual compensation might be sought for violations of international humanitarian law would be more useful and appropriate mechanisms than denying state immunity.
This paper consists of two parts. In the first part, some of the challenges with which the International Criminal Court is currently confronted are presented. First of all, the article describes the current state of the International Criminal Court and the Rome Statute. Afterwards, the article analyses the Court’s efforts to deal with cases against third-country nationals and the challenges it is facing in that regard. In addition, the Court’s case law is analysed in order to trace an increasing ‘emancipation’ of the case law of the International Criminal Court from international humanitarian law. The second part of the paper briefly discusses the role of domestic international criminal law and domestic courts in the further development and enforcement of international criminal law. As an example of the role that domestic courts may play in clarifying classic issues in international law, the judgment of the German Federal Court of Justice of January 28, 2021 (3 StR 564/19), which deals with the status of customary international law on the functional immunity of State officials before domestic courts, is assessed.
RailChain
(2023)
The RailChain project designed, implemented, and experimentally evaluated a juridical recorder based on a distributed consensus protocol. This juridical blockchain recorder has been realized as a distributed ledger on board the advanced TrainLab (ICE-TD 605 017) of Deutsche Bahn.
For the project, a consortium consisting of DB Systel, Siemens, Siemens Mobility, the Hasso Plattner Institute for Digital Engineering, Technische Universität Braunschweig, TÜV Rheinland InterTraffic, and Spherity has been formed. These partners not only concentrated competencies in railway operation, computer science, regulation, and approval, but also combined experiences from industry, research from academia, and enthusiasm from startups.
Distributed ledger technologies (DLTs) define distributed databases and express a digital protocol for transactions between business partners without the need for a trusted intermediary. Implementing a blockchain with real-time requirements for the local network of a railway system (e.g., an interlocking or a train) makes it possible to log data in the distributed system verifiably and in real time. For this, railway-specific assumptions can be leveraged to make modifications to standard blockchain protocols.
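The tamper-evidence idea behind such a juridical recorder can be illustrated with a minimal hash-chained log. This is a sketch of the general principle only, with hypothetical function names: each entry's hash covers the previous entry, so modifying an earlier record invalidates every later link. A real distributed ledger additionally replicates the log and runs a consensus protocol among the nodes:

```python
import hashlib
import json

def append_entry(chain, payload):
    """Append a log entry whose hash covers the previous entry's hash,
    making undetected modification of earlier entries impossible."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash link; True only if the whole chain is intact."""
    prev = "0" * 64
    for entry in chain:
        body = {"payload": entry["payload"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```

Here, changing any logged payload after the fact causes `verify` to fail, which is the property a juridical recorder needs for legal purposes.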
EULYNX and OCORA (Open CCS On-board Reference Architecture) are parts of a future European reference architecture for control command and signalling (CCS, Reference CCS Architecture – RCA). Both architectural concepts outline heterogeneous IT systems with components from multiple manufacturers. Such systems introduce novel challenges for the approved and safety-relevant CCS of railways, which so far have been considered neither for trackside nor for on-board systems. Logging implementations, such as the common juridical recorder on vehicles, can no longer be realized as a central component of a single manufacturer. All centralized approaches are in question.
The research project RailChain is funded by the mFUND program and gives practical evidence that distributed consensus protocols are a proper means to immutably (for legal purposes) store state information of many system components from multiple manufacturers. The results of RailChain have been published, prototypically implemented, and experimentally evaluated in large-scale field tests on the advanced TrainLab. At the same time, the project showed how RailChain can be integrated into the trackside and on-board architecture given by OCORA and EULYNX.
Logged data can now be analysed sooner, and its trustworthiness is increased. This enables, for example, auditable predictive maintenance, because the data is guaranteed to be authentic and unmodified at any point in time.
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings are aimed particularly, but not exclusively, at researchers from the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of research projects executed in 2018. Selected projects presented their results on April 17th and November 14th, 2018, at the Future SOC Lab Day events.
Pictures are a medium that helps make the past tangible and preserve memories. Without context, they are not able to do so. Pictures are brought to life by their associated stories. However, the older pictures become, the fewer contemporary witnesses can tell these stories.
Especially for large, analog picture archives, knowledge and memories are spread over many people. This creates several challenges: First, the pictures must be digitized to save them from decaying and make them available to the public. Since a simple listing of all the pictures is confusing, the pictures should be structured accessibly. Second, known information that makes the stories vivid needs to be added to the pictures. Users should get the opportunity to contribute their knowledge and memories. To make this usable for all interested parties, even for older, less technophile generations, the interface should be intuitive and error-tolerant.
The resulting requirements are not covered in their entirety by any existing software solution without losing the intuitive interface or the scalability of the system.
Therefore, we have developed our digital picture archive within the scope of a bachelor project in cooperation with the Bad Harzburg-Stiftung. For the implementation of this web application, we use the UI framework React in the frontend, which communicates via a GraphQL interface with the Content Management System Strapi in the backend. The use of this system enables our project partner to create an efficient process from scanning analog pictures to presenting them to visitors in an organized and annotated way. To customize the solution for both picture delivery and information contribution for our target group, we designed prototypes and evaluated them with people from Bad Harzburg. This helped us gain valuable insights into our system’s usability and future challenges as well as requirements.
Our web application is already being used daily by our project partner. During the project, we still came up with numerous ideas for additional features to further support the exchange of knowledge.
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts and, importantly, must allow living with temporary inconsistencies. In model-driven software engineering, the employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
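The core idea of the multi-version-model encoding can be sketched as follows. This is an illustrative simplification under our own assumptions, not the report's formalism: instead of storing one model copy per version, each model element records the set of versions in which it is present, so all versions share a single compact structure and any single version can be projected out on demand:

```python
class MultiVersionModel:
    """Compact encoding of a model's version history: every element is
    stored once, annotated with the versions that contain it."""

    def __init__(self):
        self.elements = {}  # element id -> set of versions containing it

    def add(self, element, version):
        """Record that `element` exists in `version`."""
        self.elements.setdefault(element, set()).add(version)

    def project(self, version):
        """Recover the plain (single-version) model for one version."""
        return {e for e, vs in self.elements.items() if version in vs}
```

Operations such as transformations can then work on this shared structure once instead of once per version, which is the source of the execution-time benefit mentioned above.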
Modular and incremental global model management with extended generalized discrimination networks
(2023)
Complex projects developed under the model-driven engineering paradigm nowadays often involve several interrelated models, which are automatically processed via a multitude of model operations. Modular and incremental construction and execution of such networks of models and model operations are required to accommodate efficient development with potentially large-scale models. The underlying problem is also called Global Model Management.
In this report, we propose an approach to modular and incremental Global Model Management via an extension to the existing technique of Generalized Discrimination Networks (GDNs). In addition to further generalizing the notion of query operations employed in GDNs, we adapt the previously query-only mechanism to operations with side effects to integrate model transformation and model synchronization. We provide incremental algorithms for the execution of the resulting extended Generalized Discrimination Networks (eGDNs), as well as a prototypical implementation for a number of example eGDN operations.
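The incremental execution principle behind such a network of model operations can be sketched as follows. All class and function names here are illustrative assumptions, not the eGDN implementation: operations form a dependency network, and when an input model changes, only the operations downstream of that change are re-executed:

```python
class Node:
    """One operation in the network; re-executes when any input changes."""

    def __init__(self, op=None, inputs=()):
        self.op, self.inputs, self.dependents = op, list(inputs), []
        for i in self.inputs:
            i.dependents.append(self)  # register for change propagation
        self.value = None

    def recompute(self):
        self.value = self.op(*(i.value for i in self.inputs))
        for d in self.dependents:
            d.recompute()  # propagate only downstream

def set_source(node, value):
    """Change a source model; untouched branches are never re-executed."""
    node.value = value
    for d in node.dependents:
        d.recompute()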
Based on this prototypical implementation, we experiment with an application scenario from the software development domain to empirically evaluate our approach with respect to scalability and conceptually demonstrate its applicability in a typical scenario. Initial results confirm that the presented approach can indeed be employed to realize efficient Global Model Management in the considered scenario.
Learning from failure
(2022)
Regression testing is a widespread practice in today's software industry to ensure software product quality. Developers derive a set of test cases, and execute them frequently to ensure that their change did not adversely affect existing functionality. As the software product and its test suite grow, the time to feedback during regression test sessions increases, and impedes programmer productivity: developers wait longer for tests to complete, and delays in fault detection render fault removal increasingly difficult.
Test case prioritization addresses the problem of long feedback loops by reordering test cases such that test cases with a high failure probability run first, and test case failures become actionable early in the testing process. We ask: given test execution schedules reconstructed from publicly available data, to what extent can their fault detection efficiency be improved, and which technique yields the most efficient test schedules with respect to the APFD (Average Percentage of Faults Detected) metric?
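For reference, APFD rewards schedules that detect faults early in the run. The standard computation can be sketched as follows (the inputs are illustrative, not the paper's data or code):

```python
def apfd(schedule, fault_to_tests):
    """Average Percentage of Faults Detected for an ordered test schedule.
    `schedule` is the ordered list of test names; `fault_to_tests` maps
    each fault to the set of tests that reveal it."""
    n = len(schedule)                      # number of tests
    m = len(fault_to_tests)                # number of faults
    positions = {t: i + 1 for i, t in enumerate(schedule)}
    # position of the first test in the schedule that reveals each fault
    first_detect = [
        min(positions[t] for t in tests if t in positions)
        for tests in fault_to_tests.values()
    ]
    return 1 - sum(first_detect) / (n * m) + 1 / (2 * n)
```

A schedule that surfaces every fault in its first few tests scores close to 1, while deferring failures to the end drives the score toward 0, which is why the 30% versus 95% figures below indicate substantial optimization potential.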
To this end, we recover 6200 regression test sessions from the build log files of Travis CI, a popular continuous integration service, and gather 62000 accompanying changelists. We evaluate the efficiency of current test schedules, and examine the prioritization results of state-of-the-art lightweight, history-based heuristics. We propose and evaluate a novel set of prioritization algorithms, which connect software changes and test failures in a matrix-like data structure.
Our studies indicate that the optimization potential is substantial, because the existing test plans score only 30% APFD. The predictive power of past test failures proves to be outstanding: simple heuristics, such as repeating tests with failures in recent sessions, result in efficiency scores of 95% APFD. The best-performing matrix-based heuristic achieves a similar score of 92.5% APFD. In contrast to prior approaches, we argue that matrix-based techniques are useful beyond the scope of effective prioritization, and enable a number of use cases involving software maintenance.
We validate our findings from continuous integration processes by extending a continuous testing tool for development environments with test prioritization capabilities, and pose further research questions. We think that our findings are suited to propel the adoption of (continuous) testing practices, and that programmers' toolboxes should contain test prioritization as an essential productivity tool.