Industry 4.0, based on increasingly progressive digitalization, is a global phenomenon that affects every part of our work. The Internet of Things (IoT) is pushing the process of automation, culminating in the total autonomy of cyber-physical systems. This process is accompanied by a massive amount of data, information, and new dimensions of flexibility. As the amount of available data increases, their specific timeliness decreases. Mastering Industry 4.0 requires humans to master the new dimensions of information and to adapt to relevant ongoing changes. Intentional forgetting can make a difference in this context, as it discards nonprevailing information and actions in favor of prevailing ones. Intentional forgetting is the basis of any adaptation to change, as it ensures that nonprevailing memory items are not retrieved while prevailing ones are retained. This study presents a novel experimental approach that was introduced in a learning factory (the Research and Application Center Industry 4.0) to investigate intentional forgetting as it applies to production routines. In the first experiment (N = 18), in which the participants collectively performed 3046 routine-related actions (t1 = 1402, t2 = 1644), the results showed that highly proceduralized actions were more difficult to forget than actions that were less well-learned. Additionally, we found that the quality of cues that trigger the execution of routine actions had no effect on the extent of intentional forgetting.
Willentliches Vergessen
(2019)
This contribution in the journal Gruppe. Interaktion. Organisation. describes how intentional forgetting improves the adaptation to necessary changes for individuals, groups, and organizations, and how intentional forgetting can be shaped consciously and purposefully.
For behavior to be adapted in response to a necessary change, it is not enough that people know what to do and are willing and able to change their behavior. A change only succeeds when only the new behavior is applied and no longer the old one, that is, when the old behavior is forgotten. The necessary process of intentional forgetting can be shaped by removing cues that trigger the recall of what is to be forgotten and by placing cues that trigger the activation of the new behavior.
This contribution presents the beneficial effect of cues on intentional forgetting, substantiates it by reporting an experimental study, and provides practical implications for how intentional forgetting can be shaped for individuals, groups, and organizations.
The authors present an approach by which experiential knowledge about production control can be processed and structured with artificial neural networks in such a way that case-based reasoning can select the suitable, correspondingly pre-configured neural networks for new production situations. Building on the results obtained in this way, a case-based system is developed.
This contribution shows how the use of knowledge management tools can make an important contribution to running a successful ERP operation. Since this task involves a high proportion of knowledge-intensive processes, a method is needed that captures these processes in a structured way, analyzes them through modeling, and proposes suitable knowledge management measures. By introducing a knowledge layer, the KMDL® method makes it possible to specify when knowledge of which content and in which form is needed or created in a process. The contribution also addresses how the optimal training needs of ERP users can be determined by applying KMDL®.
Optimierung werksübergreifender Geschäftsprozesse am Beispiel der Automobilzuliefererindustrie
(2005)
The importance of suppliers to automobile manufacturers is growing steadily, because suppliers are becoming ever more deeply integrated into the manufacturer's entire value-creation process, so planning, quality, and logistics processes must be continuously optimized. Business applications such as Enterprise Resource Planning (ERP) or production planning and control (PPS) systems are needed to guarantee the undisturbed and smooth execution of business processes and thus continuous delivery capability towards the automobile manufacturer [1]. Based on a market overview, this contribution presents innovative approaches, options, and coordination mechanisms of current ERP/PPS systems for supporting production at distributed sites.
Terminology is a critical instrument for every researcher. Different terminologies for the same research object may arise in different research communities, and this inconsistency loses many synergistic effects. Theories and models are more understandable and reusable if a common terminology is applied. This paper examines the terminological (in)consistency of the research field of job-shop scheduling through a literature review. There is an enormous variety in the choice of terms and mathematical notation for the same concept. The comparability, reusability, and combinability of scheduling methods are unnecessarily hampered by the arbitrary use of homonyms and synonyms. The community's acceptance of the variables and notation forms in use is quantified by means of a compliance quotient, based on the evaluation of 240 scientific publications on planning methods.
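The abstract does not define the compliance quotient; a plausible minimal reading, shown here purely as an assumption, is the share of surveyed publications that use the most common notation for a given concept:

```python
from collections import Counter

def compliance_quotient(notations):
    """Share of publications using the most common notation for a
    concept (one hypothetical reading of the metric)."""
    counts = Counter(notations)
    majority = counts.most_common(1)[0][1]
    return majority / len(notations)

# Illustrative data: symbols used for "processing time" across
# 240 surveyed papers (the split is invented for the example).
symbols = ["p_ij"] * 150 + ["t_ij"] * 60 + ["d_ij"] * 30
print(compliance_quotient(symbols))  # 0.625
```

A quotient near 1.0 would indicate a community that has converged on one notation; low values signal the homonym/synonym problem the paper describes.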
Digital platforms (DPs) have established themselves in recent years as a central concept of information systems science. Due to the great diversity of digital platform concepts, clear definitions are still required. Furthermore, DPs are subject to dynamic changes from internal and external factors, which pose challenges for digital platform operators, developers, and customers. Which current digital platform research directions should be pursued to address these challenges remains an open question. The following paper aims to contribute to this by outlining a systematic literature review (SLR) of digital platform concepts in the context of the Industrial Internet of Things (IIoT) for manufacturing companies, and provides a basis for (1) a selection of definitions of current digital platform and ecosystem concepts and (2) a selection of current digital platform research directions. These directions are divided into (a) occurrence of digital platforms, (b) emergence of digital platforms, (c) evaluation of digital platforms, (d) development of digital platforms, and (e) selection of digital platforms.
Openness indicators for the evaluation of digital platforms between the launch and maturity phase
(2024)
In recent years, the evaluation of digital platforms has become an important focus in the field of information systems science. The identification of influential indicators that drive changes in digital platforms, specifically those related to openness, is still an unresolved issue. This paper addresses the challenge of identifying measurable indicators and characterizing the transition from launch to maturity in digital platforms. It proposes a systematic analytical approach to identify relevant openness indicators for evaluation purposes. The main contributions of this study are the following: (1) the development of a comprehensive procedure for analyzing indicators, (2) the categorization of indicators as evaluation metrics within a multidimensional grid-box model, (3) the selection and evaluation of relevant indicators, (4) the identification and assessment of digital platform architectures during the launch-to-maturity transition, and (5) the evaluation of the applicability of the conceptualization and design process for digital platform evaluation.
Process mining (PM) has established itself in recent years as a main method for visualizing and analyzing processes. However, the identification of knowledge has not been addressed adequately, because PM aims solely at the data-driven discovery, monitoring, and improvement of real-world processes from event logs available in various information systems. The following paper, therefore, outlines a novel systematic analysis view on tools for data-driven and machine learning (ML)-based identification of knowledge-intensive target processes. To support the effectiveness of the identification process, the main contributions of this study are (1) to design a procedure for a systematic review and analysis for the selection of relevant dimensions, (2) to identify different categories of dimensions as evaluation metrics to select source systems, algorithms, and tools for PM and ML as well as include them in a multi-dimensional grid-box model, (3) to select and assess the most relevant dimensions of the model, (4) to identify and assess source systems, algorithms, and tools in order to find evidence for the selected dimensions, and (5) to assess the relevance and applicability of the conceptualization and design procedure for tool selection in data-driven and ML-based process mining research.
Enhancing economic efficiency in modular production systems through deep reinforcement learning
(2024)
In times of increasingly complex production processes and volatile customer demands, production adaptability is crucial for a company's profitability and competitiveness. The ability to cope with rapidly changing customer requirements and unexpected internal and external events guarantees robust and efficient production processes, requiring a dedicated control concept at the shop floor level. Yet in today's practice, conventional control approaches remain in use, which may not keep up with the dynamic behaviour due to their scenario-specific and rigid properties. To address this challenge, deep learning methods have increasingly been deployed due to their optimization and scalability properties. However, these approaches have often been tested in specific operational applications and focused on technical performance indicators such as order tardiness or total throughput. In this paper, we propose a deep-reinforcement-learning-based production control to optimize combined techno-financial performance measures. Based on pre-defined manufacturing modules that are supplied and operated by multiple agents, positive effects were observed in terms of increased revenue and reduced penalties due to lower throughput times and fewer delayed products. The combined modular and multi-staged approach as well as the distributed decision-making further leverage scalability and transferability to other scenarios.
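A combined techno-financial reward of the kind described above could, purely as an illustrative sketch, fold revenue, throughput time, and delay penalties into one scalar signal; the weights `w_time` and `penalty_per_delay` are hypothetical parameters, not the paper's values:

```python
def reward(revenue, throughput_time, n_delayed,
           w_time=0.5, penalty_per_delay=10.0):
    """Illustrative combined techno-financial reward: revenue minus a
    throughput-time cost and a penalty per delayed product. All weights
    are assumptions for the sketch, not the authors' design."""
    return revenue - w_time * throughput_time - penalty_per_delay * n_delayed

# An episode with 100 revenue, 40 time units throughput, 2 late products:
print(reward(revenue=100.0, throughput_time=40.0, n_delayed=2))  # 60.0
```

Maximizing such a reward lets the agents trade technical indicators (throughput time, tardiness) directly against financial outcomes, rather than optimizing them in isolation.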
Increasingly fast development cycles and individualized products pose major challenges for today's smart production systems in times of Industry 4.0. The systems must be flexible and continuously adapt to changing conditions while still guaranteeing high throughputs and robustness against external disruptions. Deep reinforcement learning (RL) algorithms, which have already achieved impressive success with Google DeepMind's AlphaGo, are increasingly transferred to production systems to meet related requirements. Unlike supervised and unsupervised machine learning techniques, deep RL algorithms learn based on recently collected sensor- and process-data in direct interaction with the environment and are able to make decisions in real time. As such, deep RL algorithms seem promising given their potential to provide decision support in complex environments such as production systems, and simultaneously adapt to changing circumstances. While different use-cases for deep RL emerged, a structured overview and integration of findings on their application are missing. To address this gap, this contribution provides a systematic literature review of existing deep RL applications in the field of production planning and control as well as production logistics. From a performance perspective, it became evident that deep RL can beat heuristics significantly in overall performance and provides superior solutions to various industrial use-cases. Nevertheless, safety and reliability concerns must be overcome before the widespread use of deep RL is possible, which presumes more intensive testing of deep RL in real-world applications in addition to the already ongoing intensive simulations.
In today's production, fluctuations in demand, shortening product life-cycles, and highly configurable products require an adaptive and robust control approach to maintain competitiveness. This approach must not only optimise desired production objectives but also cope with unforeseen machine failures, rush orders, and changes in short-term demand. Previous control approaches were often implemented using a single operations layer and a standalone deep learning approach, which may not adequately address the complex organisational demands of modern manufacturing systems. To address this challenge, we propose a hyper-heuristics control model within a semi-heterarchical production system, in which multiple manufacturing and distribution agents are spread across pre-defined modules. The agents employ a deep reinforcement learning algorithm to learn a policy for selecting low-level heuristics in a situation-specific manner, thereby leveraging system performance and adaptability. We tested our approach in simulation and transferred it to a hybrid production environment. In doing so, we demonstrated its multi-objective optimisation capabilities compared to conventional approaches in terms of mean throughput time, tardiness, and processing of prioritised orders in a multi-layered production system. The modular design is promising in reducing the overall system complexity and facilitates a quick and seamless integration into other scenarios.
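The hyper-heuristic selection step described above can be sketched as follows; the dispatching rules (SPT, EDD, FIFO) and the stubbed Q-values are illustrative assumptions, since the paper's actual heuristic set and trained policy are not given here:

```python
# Hypothetical low-level dispatching heuristics the agent chooses among.
def spt(queue):   # shortest processing time first
    return min(queue, key=lambda job: job["proc_time"])

def edd(queue):   # earliest due date first
    return min(queue, key=lambda job: job["due"])

def fifo(queue):  # first in, first out
    return queue[0]

HEURISTICS = [spt, edd, fifo]

def select_heuristic(state, q_values):
    """Hyper-heuristic step: pick the low-level heuristic with the
    highest action value for the current state. In the paper's setting
    the values would come from a trained deep RL policy; here they are
    passed in as a stub."""
    best = max(range(len(HEURISTICS)), key=lambda a: q_values[a])
    return HEURISTICS[best]

queue = [{"id": 1, "proc_time": 4, "due": 10},
         {"id": 2, "proc_time": 2, "due": 12}]
rule = select_heuristic(state=None, q_values=[0.1, 0.7, 0.2])  # EDD wins
print(rule(queue)["id"])  # 1 (earliest due date)
```

The agent learns *which rule to apply when*, rather than a dispatching decision per job, which is what keeps the approach modular and transferable.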
Nowadays, production planning and control must cope with mass customization, increased fluctuations in demand, and high competition pressures. Despite prevailing market risks, planning accuracy and increased adaptability in the event of disruptions or failures must be ensured, while simultaneously optimizing key process indicators. To manage that complex task, neural networks that can process large quantities of high-dimensional data in real time have been widely adopted in recent years. Although these are already extensively deployed in production systems, a systematic review of applications and implemented agent embeddings and architectures has not yet been conducted. The main contribution of this paper is to provide researchers and practitioners with an overview of applications and applied embeddings and to motivate further research in neural agent-based production. Findings indicate that neural agents are not only deployed in diverse applications, but are also increasingly implemented in multi-agent environments or in combination with conventional methods — leveraging performances compared to benchmarks and reducing dependence on human experience. This not only implies a more sophisticated focus on distributed production resources, but also broadening the perspective from a local to a global scale. Nevertheless, future research must further increase scalability and reproducibility to guarantee a simplified transfer of results to reality.
Process-oriented knowledge management focuses on knowledge-intensive business processes. For modelling and analysis of these processes, the modelling technique KMDL (Knowledge Modeling and Description Language) has been developed. KMDL is a method to describe knowledge flows and conversions along and between business processes. Thereby, KMDL identifies existing and utilized information as well as the knowledge of individual participants and of the entire company. This research-in-progress contribution introduces a practical example in the field of software engineering, in which KMDL models are evaluated to identify process improvements, e.g. by adding knowledge management activities. To this end, three individual views focusing on selected aspects of interest are introduced.
Requirements for an integration of methods analyzing social issues in knowledge organizations
(2006)
Carbon footprints are a widely discussed topic with far-reaching implications for individuals as well as companies. Companies can make a proactive contribution to transparency by reporting their company- or product-related carbon footprint. Once the decision has been made to report a carbon footprint and to record the resulting greenhouse gases, a multitude of different standards and certificates exists, such as the Publicly Available Specification 2050, the Greenhouse Gas Protocol, or ISO 14067. The aim of this contribution is to compare these three standards for calculating the product-related carbon footprint, in order to highlight commonalities and differences as well as advantages and disadvantages in their application. The overview is intended to support companies in deciding whether a carbon footprint is suitable for their company.
Cyber-physical systems (CPS) have shaped the discussion about Industry 4.0 (I4.0) for some time. To ensure the competitiveness of manufacturing enterprises, the vision for the future identifies cyber-physical production systems (CPPS) as a core component of a modern factory. Adaptability and coping with complexity are (among others) potentials of this new generation of production management. The successful transformation of this theoretical construct into practical implementation can only take place with regard to the conditions characterizing the context of a factory. The subject of this contribution is a concept that takes up the brownfield character and describes a solution for extending existing (legacy) systems with CPS capabilities.
From employee to expert
(2021)
In the context of the collaborative project Ageing-appropriate, process-oriented and interactive further training in SMEs (API-KMU), innovative solutions to the challenges of demographic change and digitalisation are being developed for SMEs. To this end, an approach to age-appropriate training will be designed with the help of AR technology. In times of the coronavirus pandemic, a special research design is necessary for the initial survey of the current state in the companies, which is systematically elaborated in this paper. The results of the previous methodological considerations illustrate the necessity of a mix of methods to generate deeper insight into the work processes. Video-based retrospective interviews seem to be a suitable instrument to adequately capture employees' interpretative perspectives on their work activities. In conclusion, the paper identifies specific challenges, such as creating acceptance among employees; open questions, e.g. how a transfer or generalization of the results can succeed; and hypotheses that will have to be tested in the further course of the research process.
To cope with the already large, and ever increasing, amount of information stored in organizational memory, "forgetting," as an important human memory process, might be transferred to the organizational context. Especially in intentionally planned change processes (e.g., change management), forgetting is an important precondition to impede the recall of obsolete routines and adapt to new strategic objectives accompanied by new organizational routines. We first comprehensively review the literature on the need for organizational forgetting and particularly on accidental vs. intentional forgetting. We discuss the current state of the art of theory and empirical evidence on forgetting from cognitive psychology in order to infer mechanisms applicable to the organizational context. In this respect, we emphasize retrieval theories and the relevance of retrieval cues important for forgetting. Subsequently, we transfer the empirical evidence that the elimination of retrieval cues leads to faster forgetting to the forgetting of organizational routines, as routines are part of organizational memory. We then propose a classification of cues (context, sensory, business process-related cues) that are relevant in the forgetting of routines, and discuss a meta-cue called the "situational strength" cue, which is relevant if cues of an old and a new routine are present simultaneously. Based on the classification as business process-related cues (information, team, task, object cues), we propose mechanisms to accelerate forgetting by eliminating specific cues based on the empirical and theoretical state of the art. We conclude that in intentional organizational change processes, the elimination of cues to accelerate forgetting should be used in change management practices.
Developing a new product generation requires the transfer of knowledge among various knowledge carriers. Several factors influence knowledge transfer, e.g. the complexity of engineering tasks or the competence of employees, which can decrease the efficiency and effectiveness of knowledge transfers in product engineering. Hence, improving those knowledge transfers holds great potential, especially against the backdrop of experienced employees leaving the company due to retirement. So far, research results show that knowledge transfer velocity can be raised by following the Knowledge Transfer Velocity Model and implementing so-called interventions in a product engineering context. In most cases, the implemented interventions have a positive effect on knowledge transfer speed. In addition, initial theoretical findings describe factors influencing the quality of knowledge transfers and outline a setting to empirically investigate how quality can be improved, by introducing a general description of knowledge transfer reference situations and principles to measure the quality of knowledge artifacts. To assess the quality of knowledge transfers in a product engineering context, the Knowledge Transfer Quality Model (KTQM) is created, which serves as a basis to develop and implement quality-dependent interventions for different knowledge transfer situations. As a result, this paper introduces the specifications of eight situation-adequate interventions to improve the quality of knowledge transfers in product engineering, following an intervention template. Those interventions are intended to be implemented in an industrial setting to measure the quality of knowledge transfers and validate their effect.
This meta-analysis synthesizes 332 effect sizes of various methods to enhance creativity. We clustered all studies into 12 methods to identify the most effective creativity-enhancement methods. We found that, on average, creativity can be enhanced, Hedges' g = 0.53, 95% CI [0.44, 0.61], with 70.09% of the participants in the enhancement conditions being more creative than the average person in the control conditions. Complex training courses, meditation, and cultural exposure were the most effective (gs = 0.66), while the use of cognitive manipulation drugs was the least effective and, indeed, not effective at all, g = 0.10. The type of training material was also important. For instance, figural methods were more effective in enhancing creativity, and enhancing convergent thinking was more effective than enhancing divergent thinking. Study effect sizes varied considerably across all studies and for many subgroup analyses, suggesting that researchers can plausibly expect to find reversed effects occasionally. We found no evidence of publication bias. We discuss theoretical implications and suggest future directions for best practices in enhancing creativity.
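The reported 70.09% figure is consistent with interpreting the pooled effect size via Cohen's U3, the proportion of the enhancement-group distribution lying above the control mean, i.e. the standard normal CDF evaluated at g. A quick check under that assumption (the interpretation is inferred, not stated in the abstract):

```python
from statistics import NormalDist

def u3(g):
    """Cohen's U3: share of the treatment group scoring above the
    average control participant, assuming normal distributions with
    equal variance and standardized mean difference g."""
    return NormalDist().cdf(g)

print(round(u3(0.53) * 100, 1))  # 70.2
```

The rounded estimate g = 0.53 gives about 70.2%, matching the reported 70.09% for the unrounded pooled effect.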
Faced with the triad of time, cost, and quality, the realization of production tasks under economic conditions is not trivial. Since the number of artificial intelligence (AI)-based applications in business processes is steadily increasing, the efficient design of AI cases for production processes as well as their target-oriented improvement is essential, so that production outcomes satisfy high quality criteria and economic requirements. Both challenge production management and data scientists aiming to assign ideal manifestations of artificial neural networks (ANNs) to a certain task. In view of new attempts at ANN-based production process improvement [8], this paper continues research on the optimal creation, provision, and utilization of ANNs. Moreover, it presents a mechanism for AI case-based reasoning for ANNs. Experiments empirically demonstrate that this mechanism continuously improves ANN knowledge bases. Its proof of concept is demonstrated using four production simulation scenarios, which cover the most relevant use cases and will form the basis for examining AI cases on a quantitative level.
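The case-based reasoning step can be sketched as nearest-neighbour retrieval over production-situation features; the feature vectors, case names, and Euclidean similarity below are illustrative assumptions, not the paper's actual design:

```python
import math

# Hypothetical case base: past production situations, each stored with
# the pre-configured ANN that worked well for it.
CASE_BASE = [
    {"features": (0.8, 0.2, 0.5), "ann": "ann_high_load"},
    {"features": (0.3, 0.7, 0.1), "ann": "ann_rush_orders"},
    {"features": (0.5, 0.5, 0.9), "ann": "ann_mixed"},
]

def retrieve(query):
    """Return the ANN of the stored case closest to the queried
    production situation (Euclidean distance over feature vectors)."""
    return min(CASE_BASE, key=lambda case: math.dist(case["features"], query))["ann"]

# A new situation resembling the high-load case:
print(retrieve((0.75, 0.25, 0.4)))  # ann_high_load
```

In a continuously improving knowledge base, each solved situation would be appended to `CASE_BASE` together with the ANN configuration that performed best, so later retrievals draw on a growing set of precedents.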
Since more and more production tasks are enabled by Industry 4.0 techniques, the number of knowledge-intensive production tasks increases: trivial tasks can be automated, and only non-trivial tasks demand human-machine interaction. This raises challenges regarding the competence of production workers, the complexity of tasks, and the stickiness of the required knowledge [1]. Furthermore, workers experience time pressure, which can lead to a decrease in output quality. Cyber-Physical Systems (CPS) have the potential to assist workers in knowledge-intensive work based on quantitative insights about knowledge transfer activities [2]. By providing contextual and situational awareness as well as complex classification and selection algorithms, CPS can ease knowledge transfer in a way that significantly improves production time and quality. So far, however, CPS have only been used for direct production and process optimization, while knowledge transfers have only been considered in assistance systems with little contextual awareness. Combining production optimization with knowledge transfer optimization thus shows potential for further improvements. This contribution outlines the requirements and a framework for designing such systems, accounting for the relevant factors.
Artificial intelligence is on everyone's lips. More and more application areas are being opened up by analyzing available data with algorithms and frameworks, e.g., from machine learning. This book aims to give an overview of currently available solutions and, beyond that, to offer concrete assistance in selecting algorithms or tools for specific problems. To meet this goal, 90 solutions were identified through a systematic literature review and a survey of practice, and were subsequently classified. With the help of this book, readers can quickly grasp the necessary fundamentals, identify common application areas, and systematically master the process of selecting a suitable ML tool for their own project.
Accelerating knowledge
(2019)
As knowledge-intensive processes are often carried out in teams and demand knowledge transfers among various knowledge carriers, any optimization that accelerates knowledge transfers holds great economic potential. Exemplified by product development projects, knowledge transfers focus on knowledge acquired in former situations and product generations. An adjustment of how a knowledge transfer is carried out in its concrete situation, here called an intervention, can therefore be linked directly to the speed optimization of knowledge-intensive process steps. This contribution presents the specification of seven concrete interventions following an intervention template. Further, it describes the design and results of an expert workshop as a descriptive study. The workshop was used to assess the practical relevance of the designed interventions and to identify practical success factors and barriers to their implementation.
Artificial intelligence (AI) is rapidly gaining importance in numerous industries and is increasingly being adopted as an application area in Enterprise Resource Planning (ERP) systems. The idea that machines can imitate human cognitive abilities by generating knowledge through learning from examples in data, information, and experience is a key element of today's digital transformation. However, the use of AI in ERP systems is characterized by a high degree of complexity, since AI is a cross-sectional technology that can be applied in different areas of a company. The degrees of application can also differ considerably. To capture the use of AI in ERP systems despite this complexity and to compare it across systems, a maturity model was developed in this study. It forms the starting point for determining AI maturity in ERP systems and distinguishes the following four AI- and system-related levels: 1) technical capabilities, 2) data maturity, 3) functional maturity, and 4) the system's explainability.
Since more and more business tasks are enabled by Artificial Intelligence (AI)-based techniques, the number of knowledge-intensive tasks increases: trivial tasks can be automated, while non-trivial tasks demand human-machine interaction. This raises challenges regarding the management of knowledge workers and machines [9]. Furthermore, knowledge workers experience time pressure, which can lead to a decrease in output quality. Artificial Intelligence-based systems (AIS) have the potential to assist human workers in knowledge-intensive work. A domain-specific language allows their contextual and situational awareness as well as their process embedding to be specified, which enables the management of humans and AIS to ease knowledge transfer in a way that significantly improves process time, cost, and quality. This contribution outlines a framework for designing these systems and accounts for their implementation.
Products and designs that have already been used successfully, past projects, or our own experiences can be the basis for the development of new products. As reference products or existing knowledge, they are reused in the development process and across product generations. Since products are, moreover, developed cooperatively, the development of new product generations is characterized by knowledge-intensive processes in which information and knowledge are exchanged between different kinds of knowledge carriers. Knowledge transfer here describes the identification of knowledge, its transmission from the knowledge carrier to the knowledge receiver, and its application by the knowledge receiver, which includes the embodied knowledge of physical products. Initial empirical findings on the quantitative effects regarding the speed of knowledge transfers have already been examined. However, the factors influencing the quality of knowledge transfer, which would increase the efficiency and effectiveness of knowledge transfer in product development, have not yet been examined empirically. This paper therefore prepares an experimental setting for the empirical investigation of the quality of knowledge transfers.
As the complexity of learning task requirements, computer infrastructures, and knowledge acquisition for artificial neuronal networks (ANN) increases, it is challenging to talk about ANN without creating misunderstandings. An efficient, transparent, and failure-free design of learning tasks by means of models is currently not supported by any tool. For this purpose, the consideration of data, information, and knowledge, integrated with knowledge-intensive business process models and process-oriented knowledge management, is particularly attractive. With the aim of making the design of learning tasks expressible by models, this paper proposes a graphical modeling language called Neuronal Training Modeling Language (NTML), which allows learning designs to be reused. An example ANN project on AI-based dynamic GUI adaptation serves as a first demonstration of its use.
Faced with the triad of time, cost, and quality, realizing knowledge-intensive tasks under economic conditions is not trivial. Since the number of knowledge-intensive processes keeps growing, the efficient design of knowledge transfers in business processes, as well as their target-oriented improvement, is essential so that process outcomes satisfy high quality criteria and economic requirements. This particularly challenges knowledge management, which aims to assign ideal manifestations of the factors influencing knowledge transfers to a given task. Building on initial attempts at knowledge transfer-based process improvement [1], this paper continues research on the quantitative examination of knowledge transfers and presents a ready-to-go experiment design that can examine the quality of knowledge transfers empirically and on a quantitative level. Its use is proven with four influence factors, namely stickiness, complexity, competence, and time pressure.
A growing number of business processes can be characterized as knowledge-intensive. The ability to speed up the transfer of knowledge between any kind of knowledge carriers in business processes with AR techniques can yield a considerable competitive advantage, for instance in manufacturing. This includes the transfer of person-bound knowledge as well as the externalized knowledge of physical and virtual objects. The contribution builds on a time-dependent knowledge transfer model and conceptualizes an adaptable, AR-based application. With the intention of accelerating knowledge transfers between a manufacturer and an information system, the empirical results of an experiment show the validity of this approach. For the first time, it becomes possible to discover how to improve the transfer among knowledge carriers of an organization with knowledge-driven information systems (KDIS). Within an experimental setting, the paper shows how an adaptable KDIS improves the quantitative effects regarding the quality and the amount of time needed to realize an example manufacturing process.
Collaboration during the modeling process is cumbersome and subject to various limitations. With the successful transfer of the first process modeling languages to the augmented world, non-transparent processes can be visualized in a more comprehensive way. With the aim of increasing the comfort, speed, accuracy, and versatility of real-world process augmentations, a framework for the bidirectional interplay between the common process modeling world and the augmented world has been designed as a morphological box. Its demonstration proves that the drafted AR integrations work. The identified dimensions were derived from (1) a designed knowledge construction axiom, (2) a designed meta-model, (3) designed use cases, and (4) designed directional interplay modes. Through a workshop-based survey, the best AR modeling configuration so far is identified, which can serve as a basis for benchmarks and implementations.
As Industry 4.0 infrastructures are highly evolutionary environments with volatile, time-dependent workloads for analytical tasks, the optimal dimensioning of IT hardware is a particular challenge for decision makers, because the digital processing of these tasks can be decoupled from its physical place of origin. Flexible architecture models that allocate tasks efficiently with regard to multi-facet aspects across a predefined set of local systems and external cloud services have been proven in small example scenarios. This paper provides a benchmark of existing task realization strategies, composed of (1) task distribution and (2) task prioritization, in a real-world scenario simulation. It identifies heuristics as the superior strategies.
Faced with the increasing needs of companies, the optimal dimensioning of IT hardware is becoming challenging for decision makers. In analytical infrastructures, a highly evolutionary environment causes volatile, time-dependent workloads in its components, making intelligent, flexible task distribution between local systems and cloud services attractive. With the aim of developing a flexible and efficient design for analytical infrastructures, this paper proposes a flexible architecture model that allocates tasks following a machine-specific decision heuristic. A simulation benchmarks this system against existing strategies and identifies the new decision maxim as superior in a first scenario-based simulation.
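The abstract does not spell out the machine-specific decision heuristic. As a purely hypothetical sketch of the general idea, one such rule might estimate the completion time of a task on each local machine (current backlog plus machine-specific processing time) and on the cloud (transfer latency plus processing time), then allocate the task to the fastest option. All names and parameters below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    queue_seconds: float   # current backlog on this local machine
    speed_factor: float    # relative processing speed (1.0 = baseline)

def allocate(task_seconds: float, machines: list[Machine],
             cloud_latency: float, cloud_speed_factor: float) -> str:
    """Hypothetical machine-specific heuristic: estimate completion time
    on each local machine and on the cloud, choose the minimum."""
    best_name = "cloud"
    best_time = cloud_latency + task_seconds / cloud_speed_factor
    for m in machines:
        t = m.queue_seconds + task_seconds / m.speed_factor
        if t < best_time:
            best_name, best_time = m.name, t
    return best_name

machines = [Machine("edge-1", queue_seconds=30, speed_factor=0.5),
            Machine("edge-2", queue_seconds=5, speed_factor=1.0)]
# Cloud wins here: 8 s latency + 5 s processing beats edge-2's 5 + 10 s.
print(allocate(10, machines, cloud_latency=8, cloud_speed_factor=2.0))  # → cloud
```

Because the rule is evaluated per machine and per task, it adapts to the volatile, time-dependent workloads described above; the paper's actual heuristic may weigh additional aspects such as cost or data locality.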
Medium-sized industrial companies use planning and execution systems for their business processes. Because of the turbulence in sales and procurement markets, these companies can remain profitable and competitive only through permanent adaptation of their organizational structures and processes. In practice, the standard software systems used today show insufficient technological adaptability. Although they allow many configuration options during the implementation phase, changes during live operation are usually possible only with great effort. Software vendors will therefore increasingly be required to develop adaptable order processing systems. Beyond the development phase (build time), technical progress driven by changing requirements must also be synchronized with the operating phase (run time) through corresponding software releases.
Manufacturing companies still have relatively few points of contact with the circular economy. In particular, extending the lifetime of whole products or parts via remanufacturing is a promising approach to reducing waste. However, the necessary cost-efficient assessment of the condition of individual parts is challenging, and the assessment procedures are technically complex (e.g., scanning and testing procedures). Furthermore, these procedures are usually only available after the disassembly process has been completed. This is where the conceptualization, data acquisition, and simulation of remanufacturing processes can help. One major constraint on remanufacturing is reducing logistic efforts, since these also have negative external effects on the environment. Regionalization is thus an additional, but ultimately consequential, challenge for remanufacturing. This article aims to fill a gap by providing a regional remanufacturing approach, in particular the design of local remanufacturing chains. A further focus lies on modeling and simulating alternative courses of action, including a feasibility study and an economic assessment.
Public services are not the only actors able to ensure the effective and efficient use of e-democracy tools. This contribution points out how a party must be structured to function as a neutral service provider for citizens and to make the results of electronic decision-making processes generally binding. The party provides only the methodology and the technology of decision making; the contents are defined exclusively by the citizens. These contents and voting results are implemented bindingly in parliament by the party's delegates. Electronic democracy thus supplements representative democracy with scalable direct-democratic elements. In each national election, every four or five years, citizens can determine how strongly each political decision is to be influenced in a direct-democratic way through e-democracy tools. Such an approach is subject to different requirements than a service offered by a government.