As AI technology is increasingly used in production systems, different approaches have emerged, from highly decentralized small-scale AI at the edge level to centralized, cloud-based services used for higher-order optimizations. Each direction has disadvantages, ranging from the lack of computational power at the edge level to the reliance on stable network connections in the centralized approach. Thus, a hybrid approach with centralized and decentralized components that possess specific abilities and interact is preferred. However, the distribution of AI capabilities leads to problems in self-adapting learning systems, as knowledge bases can diverge when no central coordination is present. Edge components will specialize in distinctive patterns (overlearn), which hampers their adaptability to different cases. Therefore, this paper aims to present a concept for a distributed interchangeable knowledge base in CPPS. The approach is based on various AI components and concepts for each participating node. A service-oriented infrastructure allows a decentralized, loosely coupled architecture of the CPPS. By exchanging knowledge bases between nodes, the overall system should become more adaptive, as each node can “forget” its present specialization.
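The knowledge-base exchange described above can be illustrated with a minimal sketch. All class and pattern names here are hypothetical; the sketch only assumes that each node's knowledge can be represented as pattern weights, and that merging two nodes' bases counteracts the specialization ("overlearning") of each:

```python
class EdgeNode:
    """Hypothetical CPPS node holding a local, specialised knowledge base."""

    def __init__(self, name):
        self.name = name
        self.knowledge = {}  # pattern -> learned weight

    def learn(self, pattern, weight):
        # Repeated local learning drives specialisation ("overlearning").
        self.knowledge[pattern] = self.knowledge.get(pattern, 0.0) + weight

    def exchange(self, other):
        # Merging knowledge bases lets each node partially "forget" its
        # specialisation by averaging its weights with a peer's.
        patterns = set(self.knowledge) | set(other.knowledge)
        merged = {p: (self.knowledge.get(p, 0.0) + other.knowledge.get(p, 0.0)) / 2
                  for p in patterns}
        self.knowledge = dict(merged)
        other.knowledge = dict(merged)

a, b = EdgeNode("press"), EdgeNode("mill")
a.learn("vibration-fault", 1.0)
b.learn("thermal-fault", 0.8)
a.exchange(b)
print(a.knowledge)  # both nodes now cover both fault patterns
```

A real CPPS would exchange serialized models over the service-oriented infrastructure rather than in-memory dictionaries, but the averaging step captures the "forgetting" effect the abstract describes.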
Technological advancements are giving rise to the fourth industrial revolution - Industry 4.0 - characterized by the mass employment of smart objects in highly reconfigurable and thoroughly connected industrial product-service systems. The purpose of this paper is to propose a theory-based knowledge dynamics model in the smart grid scenario that would provide a holistic view on the knowledge-based interactions among smart objects, humans, and other actors as an underlying mechanism of value co-creation in Industry 4.0. A multi-loop and three-layer - physical, virtual, and interface - model of knowledge dynamics is developed by building on the concept of ba - an enabling space for interactions and the emergence of knowledge. The model depicts how big data analytics are just one component in unlocking the value of big data, whereas the tacit engagement of humans-in-the-loop - their sense-making and decision-making - is needed for insights to be evoked from analytics reports and customer needs to be met.
Faced with the increasing needs of companies, the optimal dimensioning of IT hardware is becoming challenging for decision makers. In terms of analytical infrastructures, a highly evolutionary environment causes volatile, time-dependent workloads in its components, and intelligent, flexible task distribution between local systems and cloud services is attractive. With the aim of developing a flexible and efficient design for analytical infrastructures, this paper proposes a flexible architecture model that allocates tasks following a machine-specific decision heuristic. A simulation benchmarks this system against existing strategies and identifies the new decision maxim as superior in a first scenario-based simulation.
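The task-allocation idea can be sketched as a simple capacity-based rule. The function and its parameters are illustrative assumptions, not the paper's actual heuristic: the point is only that each machine decides locally whether a task stays on-premise or is offloaded to a cloud service:

```python
def allocate(task_load, local_capacity, cloud_cost_per_unit, local_queue):
    """Hypothetical machine-specific decision heuristic: run a task locally
    while capacity remains, otherwise offload it to a cloud service.

    Returns the chosen target and the (illustrative) cloud cost incurred.
    """
    if local_queue + task_load <= local_capacity:
        return "local", 0.0
    return "cloud", task_load * cloud_cost_per_unit

# A task of load 4 no longer fits next to a queue of 8 on a capacity-10 machine,
# so it is offloaded at a per-unit cloud cost of 0.5.
target, cost = allocate(task_load=4, local_capacity=10,
                        cloud_cost_per_unit=0.5, local_queue=8)
print(target, cost)
```

The paper's heuristic presumably weighs more factors (e.g., workload volatility over time); this sketch shows only the dispatch structure.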
In today's production, fluctuations in demand, shortening product life-cycles, and highly configurable products require an adaptive and robust control approach to maintain competitiveness. This approach must not only optimise desired production objectives but also cope with unforeseen machine failures, rush orders, and changes in short-term demand. Previous control approaches were often implemented using a single operations layer and a standalone deep learning approach, which may not adequately address the complex organisational demands of modern manufacturing systems. To address this challenge, we propose a hyper-heuristics control model within a semi-heterarchical production system, in which multiple manufacturing and distribution agents are spread across pre-defined modules. The agents employ a deep reinforcement learning algorithm to learn a policy for selecting low-level heuristics in a situation-specific manner, thereby improving system performance and adaptability. We tested our approach in simulation and transferred it to a hybrid production environment. In doing so, we were able to demonstrate its multi-objective optimisation capabilities compared to conventional approaches in terms of mean throughput time, tardiness, and processing of prioritised orders in a multi-layered production system. The modular design is promising in reducing the overall system complexity and facilitates a quick and seamless integration into other scenarios.
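The hyper-heuristic idea - an agent that learns which low-level dispatching heuristic to apply in which situation - can be sketched with a toy tabular learner. The heuristic set (FIFO, SPT, EDD), the state labels, and the reward are assumptions for illustration; the paper uses deep reinforcement learning rather than a Q-table:

```python
import random

# An assumed set of low-level dispatching heuristics the agent chooses from.
def fifo(queue): return queue[0]
def spt(queue):  return min(queue, key=lambda job: job["proc_time"])
def edd(queue):  return min(queue, key=lambda job: job["due_date"])

HEURISTICS = [fifo, spt, edd]

class HyperHeuristicAgent:
    """Toy tabular stand-in for a deep RL policy: it learns which
    low-level heuristic to apply in which (discretised) situation."""

    def __init__(self, epsilon=0.1, alpha=0.5):
        self.q = {}              # (state, heuristic_index) -> value estimate
        self.epsilon = epsilon   # exploration rate
        self.alpha = alpha       # learning rate

    def select(self, state):
        if random.random() < self.epsilon:
            return random.randrange(len(HEURISTICS))
        return max(range(len(HEURISTICS)),
                   key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward):
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward - old)

agent = HyperHeuristicAgent(epsilon=0.0)       # greedy, for a deterministic demo
agent.update("high_load", 1, reward=1.0)       # SPT worked well under high load
queue = [{"proc_time": 5, "due_date": 9}, {"proc_time": 2, "due_date": 20}]
chosen = HEURISTICS[agent.select("high_load")]
print(chosen(queue))                           # SPT selects the shortest job
```

Replacing the Q-table with a neural network approximator over a continuous state yields the deep RL variant the abstract describes.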
Cyber-physical systems (CPS) have shaped the discussion about Industry 4.0 (I4.0) for some time. To ensure the competitiveness of manufacturing enterprises, the vision for the future identifies cyber-physical production systems (CPPS) as a core component of a modern factory. Adaptability and coping with complexity are (among others) potentials of this new generation of production management. The successful transformation of this theoretical construct into practical implementation can only take place with regard to the conditions characterizing the context of a factory. The subject of this contribution is a concept that addresses the brownfield character of such contexts and describes a solution for extending existing (legacy) systems with CPS capabilities.
The digital transformation sets new requirements for all classes of enterprise systems in companies. ERP systems in particular, which represent the dominant class of enterprise systems, are struggling to meet the new requirements at all levels of the architecture. Therefore, there is an urgent need to reconsider the overall architecture of these systems and address the root of the related issues. Given that many of the restrictions ERP systems pose on their adaptability are related to the standardization of data, the database layer of ERP systems is addressed. Since databases serve as the foundation for data storage and retrieval, they limit the flexibility of enterprise systems and their ability to adapt to new requirements accordingly. To date, relational databases have been widely used. Using a systematic literature approach, recent requirements for ERP systems were identified. Prominent database approaches were assessed against the 23 requirements identified. The results reveal the strengths and weaknesses of recent database approaches. To this end, the results highlight the demand to combine multiple database approaches to fulfill recent business requirements. From a conceptual point of view, this paper supports the idea of federated databases that are interoperable in order to fulfill future requirements and support business operation. This research forms the basis for a renewal of the current generation of ERP systems and proposes that ERP vendors use different database concepts in the future.
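The federated-database idea can be sketched as a thin dispatch layer over multiple specialised backends. All names here are illustrative assumptions; a real federation layer would also translate schemas and join results across stores, which this sketch deliberately omits:

```python
class FederatedStore:
    """Minimal sketch of a federated database: route each query to the
    backend best suited to its data model (names are illustrative)."""

    def __init__(self):
        self.backends = {}  # data model -> query-handling callable

    def register(self, data_model, backend):
        self.backends[data_model] = backend

    def query(self, data_model, request):
        # A full federation layer would also handle cross-backend joins
        # and schema translation; here we only dispatch.
        return self.backends[data_model](request)

store = FederatedStore()
store.register("relational", lambda q: f"SQL result for {q}")
store.register("document", lambda q: f"JSON result for {q}")
print(store.query("document", "customer-42"))
```

The point of the sketch is architectural: the ERP layer talks to one interoperable facade while each workload lands on the database approach that fits it best.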
Manufacturing companies still have relatively few points of contact with the circular economy. In particular, extending the lifetime of whole products or parts via remanufacturing is a promising approach to reduce waste. However, the necessary cost-efficient assessment of the condition of individual parts is challenging, and assessment procedures are technically complex (e.g., scanning and testing procedures). Furthermore, these assessment procedures are usually only available after the disassembly process has been completed. This is where conceptualization, data acquisition, and simulation of remanufacturing processes can help. One major constraint in remanufacturing is reducing logistic efforts, since these also have negative external effects on the environment. Thus, regionalization is an additional but ultimately consequential challenge for remanufacturing. This article aims to fill a gap by providing a regional remanufacturing approach, in particular the design of local remanufacturing chains. A further focus lies on modeling and simulating alternative courses of action, including a feasibility study and economic assessment.
The adaptability of information systems has become a key competitive factor. The currently insufficient methodological support for implementing adaptability often leaves companies with untapped potential for a high-performance structure enabled by the information technology in use. The goal of the research project CHANGE is to develop methods and procedure models that support the lasting adaptability of information systems. To this end, this contribution presents a procedure that meets the demand for determining the necessary adaptability while taking the corporate environment into account. As a key result, a system of indicators is developed that first describes the environmental situation as an indicator of the pressure on a company to change. In the next step, criteria are applied to determine the change potential of the IT in use. Finally, both dimensions are brought together and interpreted with regard to their significance for a company's IT strategy.
Accelerating knowledge (2019)
As knowledge-intensive processes are often carried out in teams and demand knowledge transfers among various knowledge carriers, any optimization regarding the acceleration of knowledge transfers holds great economic potential. Exemplified by product development projects, knowledge transfers focus on knowledge acquired in former situations and product generations. An adjustment in the manifestation of a knowledge transfer in its concrete situation, here called an intervention, can therefore be directly connected to the adequate speed optimization of knowledge-intensive process steps. This contribution presents the specification of seven concrete interventions following an intervention template. Further, it describes the design and results of a workshop with experts as a descriptive study. The workshop was used to assess the practical relevance of the interventions designed as well as to identify practical success factors and barriers to their implementation.
A growing number of business processes can be characterized as knowledge-intensive. The ability to speed up the transfer of knowledge between any kind of knowledge carriers in business processes with AR techniques can lead to a huge competitive advantage, for instance in manufacturing. This includes the transfer of person-bound knowledge as well as externalized knowledge of physical and virtual objects. The contribution builds on a time-dependent knowledge transfer model and conceptualizes an adaptable, AR-based application. With the intention of accelerating the speed of knowledge transfers between a manufacturer and an information system, empirical results of an experiment show the validity of this approach. For the first time, it will be possible to discover how to improve the transfer among knowledge carriers of an organization with knowledge-driven information systems (KDIS). Within an experiment setting, the paper shows how an adaptable KDIS improves the quantitative effects regarding the quality and the amount of time needed for an example manufacturing process realization.
The continuous improvement process (German: KVP) is highly important for the competitiveness of companies. With regard to the quality and quantity of employees' contributions to the KVP, however, companies, especially SMEs, face a variety of challenges. Companies can address these problems with the KVP-Tool, which is being developed in the project „Adaptive Spielifizierung im KVP“ (adaptive gamification in the KVP). By digitalizing and gamifying the process in the KVP-Tool, continuous participation is sustainably encouraged through intrinsic incentives. The novelty of the project lies in the adaptivity of the gamification, i.e., its interaction with the users. Two aspects are in focus: different player types and market dynamics.
Process mining (PM) has established itself in recent years as a main method for visualizing and analyzing processes. However, the identification of knowledge has not been addressed adequately, because PM aims solely at the data-driven discovery, monitoring, and improvement of real-world processes from event logs available in various information systems. The following paper therefore outlines a novel systematic analysis view on tools for data-driven and machine learning (ML)-based identification of knowledge-intensive target processes. To support the effectiveness of the identification process, the main contributions of this study are (1) to design a procedure for a systematic review and analysis for the selection of relevant dimensions, (2) to identify different categories of dimensions as evaluation metrics to select source systems, algorithms, and tools for PM and ML, and to include them in a multi-dimensional grid box model, (3) to select and assess the most relevant dimensions of the model, (4) to identify and assess source systems, algorithms, and tools in order to find evidence for the selected dimensions, and (5) to assess the relevance and applicability of the conceptualization and design procedure for tool selection in data-driven and ML-based process mining research.
Faced with the triad of time, cost, and quality, the realization of production tasks under economic conditions is not trivial. Since the number of Artificial Intelligence (AI)-based applications in business processes is steadily increasing, the efficient design of AI cases for production processes as well as their target-oriented improvement is essential, so that production outcomes satisfy high quality criteria and economic requirements. Both challenge production management and data scientists, who aim to assign ideal manifestations of artificial neural networks (ANNs) to a certain task. Building on recent attempts at ANN-based production process improvement [8], this paper continues research on the optimal creation, provision, and utilization of ANNs. Moreover, it presents a mechanism for AI case-based reasoning for ANNs. Experiments empirically demonstrate that this mechanism continuously improves ANN knowledge bases. Its proof of concept is demonstrated using the example of four production simulation scenarios, which cover the most relevant use cases and will form the basis for examining AI cases at a quantitative level.
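The retrieval step of case-based reasoning for ANNs can be sketched as a similarity search over stored AI cases. The task descriptors, the similarity measure, and the ANN configurations below are assumptions for illustration, not the paper's actual model:

```python
def similarity(task_a, task_b):
    """Hypothetical similarity over numeric task descriptors scaled to [0, 1]:
    1.0 means identical profiles, lower values mean larger differences."""
    keys = task_a.keys() & task_b.keys()
    return 1.0 - sum(abs(task_a[k] - task_b[k]) for k in keys) / max(len(keys), 1)

def retrieve(case_base, new_task):
    # Retrieve the stored AI case whose task profile is most similar; its
    # ANN configuration serves as the starting point for the new task.
    return max(case_base, key=lambda case: similarity(case["task"], new_task))

case_base = [
    {"task": {"variance": 0.2, "volume": 0.8}, "ann": {"layers": 3, "units": 64}},
    {"task": {"variance": 0.9, "volume": 0.3}, "ann": {"layers": 5, "units": 128}},
]
best = retrieve(case_base, {"variance": 0.85, "volume": 0.35})
print(best["ann"])  # -> {'layers': 5, 'units': 128}
```

Continuously appending evaluated cases to `case_base` after each production run is what lets the knowledge base improve over time, mirroring the mechanism the abstract describes.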
Since more and more production tasks are enabled by Industry 4.0 techniques, the number of knowledge-intensive production tasks increases, as trivial tasks can be automated and only non-trivial tasks demand human-machine interactions. With this, challenges regarding the competence of production workers, the complexity of tasks, and the stickiness of required knowledge occur [1]. Furthermore, workers experience time pressure, which can lead to a decrease in output quality. Cyber-Physical Systems (CPS) have the potential to assist workers in knowledge-intensive work, grounded on quantitative insights about knowledge transfer activities [2]. By providing contextual and situational awareness as well as complex classification and selection algorithms, CPS are able to ease knowledge transfer in a way that significantly improves production time and quality. So far, CPS have only been used for direct production and process optimization, while knowledge transfers have only been considered in assistance systems with little contextual awareness. Combining production and knowledge transfer optimization thus shows potential for further improvements. This contribution outlines the requirements and a framework to design such systems, accounting for the relevant factors.
The concept of adaptability has been widely recognised as a research field in recent years. Business information systems play a key part in business performance. The adaptability of information systems is therefore a primary goal of vendors and end-users. However, concepts that help to determine the adaptability of information systems are still missing. Based on research results of the project CHANGE, this contribution presents an integrated process model addressing the problem and a possible solution.
This paper presents an exploratory study investigating the influence of the factors (1) intermediary participation, (2) decision-making authority, (3) position in the enterprise, and (4) experience in open innovation on the perception and assessment of the benefits and risks expected from participating in open innovation projects. For this purpose, an online survey was conducted in Germany, Austria, and Switzerland. The result of this paper is empirical evidence showing whether and how these factors affect the perception of potential benefits and risks expected within the context of open innovation project participation. Furthermore, the identified effects are discussed against the theory. Existing theory regarding the benefits and risks of open innovation is expanded by (1) the finding that they are perceived mostly independently of the factors, (2) confirming the practical relevance of benefits and risks, and (3) enabling a finer distinction between their degrees of relevance according to respective contextual specifics.