Traditional production systems are enhanced by cyber-physical systems (CPS) and the Internet of Things (IoT). As a next generation of such systems, cyber-physical production systems (CPPS) can raise the level of autonomy of their production components. To find the optimal degree of autonomy in a given context, a research approach is formulated using a simulation concept. Based on requirements and assumptions, a cyber-physical market is modeled and qualitative hypotheses are formulated, which will be verified with the help of the CPPS in a hybrid simulation environment.
Since more and more business tasks are enabled by Artificial Intelligence (AI)-based techniques, the number of knowledge-intensive tasks increases, as trivial tasks can be automated while non-trivial tasks demand human-machine interaction. With this, challenges regarding the management of knowledge workers and machines arise [9]. Furthermore, knowledge workers experience time pressure, which can lead to a decrease in output quality. Artificial Intelligence-based systems (AIS) have the potential to assist human workers in knowledge-intensive work. By providing a domain-specific language, contextual and situational awareness as well as their process embedding can be specified, which enables the management of humans and AIS to ease knowledge transfer in a way that significantly improves process time, cost and quality. This contribution outlines a framework for designing these systems and accounts for their implementation.
Faced with the triad of time, cost and quality, the realization of knowledge-intensive tasks under economic conditions is not trivial. Since the number of knowledge-intensive processes is steadily increasing, the efficient design of knowledge transfers in business processes, as well as their target-oriented improvement, is essential so that process outcomes satisfy high quality criteria and economic requirements. This particularly challenges knowledge management, which aims to assign ideal manifestations of the factors influencing knowledge transfers to a certain task. Building on first attempts at knowledge transfer-based process improvements [1], this paper continues research on the quantitative examination of knowledge transfers and presents a ready-to-use experiment design for examining the quality of knowledge transfers empirically and on a quantitative level. Its use is demonstrated with the example of four influence factors: stickiness, complexity, competence and time pressure.
A growing number of business processes can be characterized as knowledge-intensive. The ability to speed up the transfer of knowledge between any kind of knowledge carriers in business processes with AR techniques can lead to a considerable competitive advantage, for instance in manufacturing. This includes the transfer of person-bound knowledge as well as externalized knowledge of physical and virtual objects. The contribution builds on a time-dependent knowledge transfer model and conceptualizes an adaptable, AR-based application. With the intention of accelerating knowledge transfers between a manufacturer and an information system, empirical results of an experiment show the validity of this approach. For the first time, it will be possible to discover how to improve the transfer among knowledge carriers of an organization with knowledge-driven information systems (KDIS). Within an experimental setting, the paper shows how an adaptable KDIS improves the quantitative effects regarding the quality and the amount of time needed for an example manufacturing process realization.
Process mining (PM) has established itself in recent years as a main method for visualizing and analyzing processes. However, the identification of knowledge has not been addressed adequately, because PM aims solely at the data-driven discovery, monitoring and improvement of real-world processes from event logs available in various information systems. The following paper therefore outlines a novel systematic analysis view on tools for the data-driven and machine learning (ML)-based identification of knowledge-intensive target processes. To support the effectiveness of the identification process, the main contributions of this study are (1) to design a procedure for a systematic review and analysis for the selection of relevant dimensions, (2) to identify different categories of dimensions as evaluation metrics to select source systems, algorithms and tools for PM and ML, and to include them in a multi-dimensional grid box model, (3) to select and assess the most relevant dimensions of the model, (4) to identify and assess source systems, algorithms and tools in order to find evidence for the selected dimensions, and (5) to assess the relevance and applicability of the conceptualization and design procedure for tool selection in data-driven and ML-based process mining research.
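To illustrate the event-log starting point that PM works from, the following sketch (a made-up toy log; the activity names and the `directly_follows` helper are illustrative, not from the paper) extracts a directly-follows graph, the basic structure behind many discovery algorithms:

```python
from collections import Counter, defaultdict

# Toy event log: (case_id, activity) pairs, ordered by timestamp within each case.
event_log = [
    ("c1", "receive order"), ("c1", "check stock"), ("c1", "ship"),
    ("c2", "receive order"), ("c2", "check stock"),
    ("c2", "back-order"), ("c2", "ship"),
]

def directly_follows(log):
    """Count how often activity b directly follows activity a within a case."""
    traces = defaultdict(list)
    for case_id, activity in log:
        traces[case_id].append(activity)
    dfg = Counter()
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

dfg = directly_follows(event_log)
# dfg[("receive order", "check stock")] == 2: both cases take this path
```

Discovery algorithms then turn such frequency-annotated graphs into process models; tool support for this step is one of the dimensions the study's grid box model evaluates.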
Collaboration during the modeling process is cumbersome and subject to various limitations. Following the successful transfer of the first process modeling languages to the augmented world, non-transparent processes can be visualized in a more comprehensible way. With the aim of raising the comfort, speed, accuracy and versatility of real-world process augmentations, a framework for the bidirectional interplay between the common process modeling world and the augmented world has been designed as a morphological box. Its demonstration shows that the derived AR integrations work. The identified dimensions were derived from (1) a designed knowledge construction axiom, (2) a designed meta-model, (3) designed use cases and (4) designed directional interplay modes. Through a workshop-based survey, the best AR modeling configuration so far is identified, which can serve as a basis for benchmarks and implementations.
As Industry 4.0 infrastructures are highly evolutionary environments with volatile, time-dependent workloads for analytical tasks, the optimal dimensioning of IT hardware in particular is a challenge for decision makers, because the digital processing of these tasks can be decoupled from its physical place of origin. Flexible architecture models that allocate tasks efficiently with regard to multi-faceted aspects and a predefined set of local systems and external cloud services have been proven in small example scenarios. This paper provides a benchmark of existing task realization strategies, composed of (1) task distribution and (2) task prioritization, in a real-world scenario simulation. It identifies heuristics as the superior strategies.
Artificial intelligence is on everyone's lips. More and more application areas are being opened up by evaluating available data with algorithms and frameworks, e.g. from machine learning. This book aims to provide an overview of currently available solutions and, beyond that, to offer concrete assistance in selecting algorithms or tools for specific problems. To meet this claim, 90 solutions were identified by means of a systematic literature review and a search of practice, and were subsequently classified. With the help of this book, readers can quickly understand the necessary fundamentals, identify common application areas and systematically master the process of selecting a suitable ML tool for their own project.
Artificial intelligence (AI) is rapidly gaining importance in numerous industries and is increasingly being adopted as an application area in enterprise resource planning (ERP) systems. The idea that machines can imitate the cognitive abilities of humans by generating knowledge through learning from examples in data, information and experience is a key element of digital transformation today. However, the use of AI in ERP systems is characterized by a high degree of complexity, since AI is a cross-sectional technology that can be applied in different areas of a company. The degrees of application can also differ considerably. In order to capture the use of AI in ERP systems despite this complexity and to compare it across systems, a maturity model was developed in this study. It forms the starting point for determining AI maturity in ERP systems and distinguishes the following four AI- and system-related levels: 1) technical capabilities, 2) data maturity, 3) functional maturity and 4) explainability of the system.
Modern production infrastructures of globally operating companies usually consist of multiple distributed production sites. While the organization of an individual site consisting of Industry 4.0 components is itself demanding, new questions regarding the organization and allocation of resources emerge when considering the total production network. In an attempt to face the challenge of efficient distribution and processing both within and across sites, we aim to provide a hybrid simulation approach as a first step towards optimization. Using hybrid simulation allows us to include real and simulated concepts and thereby benchmark different approaches with reasonable effort. A simulation concept is conceptualized and demonstrated qualitatively using a global multi-site example.
Faced with the increasing needs of companies, the optimal dimensioning of IT hardware is becoming challenging for decision makers. In terms of analytical infrastructures, a highly evolutionary environment causes volatile, time-dependent workloads in its components, so an intelligent, flexible task distribution between local systems and cloud services is attractive. With the aim of developing a flexible and efficient design for analytical infrastructures, this paper proposes a flexible architecture model that allocates tasks following a machine-specific decision heuristic. A simulation benchmarks this system against existing strategies and identifies the new decision maxim as superior in a first scenario-based simulation.
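A machine-specific decision heuristic of the kind described could, for instance, look like the following simplified sketch (the machine names, speed factors and tasks are invented for illustration, not taken from the paper): each task is assigned to whichever local or cloud resource is projected to finish it earliest.

```python
# Invented machine pool: (name, speed factor); the cloud node is faster but shared.
machines = [("local-1", 1.0), ("local-2", 1.0), ("cloud", 2.0)]

def allocate(tasks, machines):
    """Greedy earliest-finish heuristic: each task is assigned to whichever
    machine would complete it earliest, given that machine's current backlog."""
    free_at = {name: 0.0 for name, _speed in machines}
    schedule = {}
    for task, size in tasks:
        # Pick the machine with the smallest projected finish time.
        name, speed = min(machines, key=lambda m: free_at[m[0]] + size / m[1])
        finish = free_at[name] + size / speed
        free_at[name] = finish
        schedule[task] = (name, finish)
    return schedule

plan = allocate([("t1", 4.0), ("t2", 4.0), ("t3", 8.0)], machines)
# t1 -> cloud (done at 2.0), t2 -> local-1 (4.0), t3 -> cloud (6.0)
```

A simulation benchmark would compare such a heuristic against baselines like round-robin or purely local execution under varying workload profiles.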
As AI technology is increasingly used in production systems, different approaches have emerged, from highly decentralized small-scale AI at the edge level to centralized, cloud-based services used for higher-order optimizations. Each direction has disadvantages, ranging from the lack of computational power at the edge level to the reliance on stable network connections in the centralized approach. Thus, a hybrid approach with centralized and decentralized components that possess specific abilities and interact is preferred. However, the distribution of AI capabilities leads to problems in self-adapting learning systems, as knowledge bases can diverge when no central coordination is present. Edge components will specialize in distinctive patterns (overlearn), which hampers their adaptability to different cases. Therefore, this paper presents a concept for a distributed, interchangeable knowledge base in CPPS. The approach is based on various AI components and concepts for each participating node. A service-oriented infrastructure allows a decentralized, loosely coupled architecture of the CPPS. By exchanging knowledge bases between nodes, the overall system should become more adaptive, as each node can “forget” its present specialization.
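The exchange-and-forget idea can be sketched as follows (pattern names, weights and the blending scheme are invented for illustration): each node keeps a knowledge base of pattern weights, and blending it with a peer's base dilutes local overlearning while adding coverage the node lacked.

```python
# Sketch of knowledge-base exchange between CPPS nodes: each node maps
# observed patterns to confidence weights; blending with a peer lets a
# node partially "forget" its own specialization.

def blend(own, peer, alpha=0.5):
    """Return a knowledge base mixing `own` and `peer`; alpha is the share
    of the peer's knowledge adopted by the node."""
    keys = set(own) | set(peer)
    return {k: (1 - alpha) * own.get(k, 0.0) + alpha * peer.get(k, 0.0)
            for k in keys}

node_a = {"pattern-x": 0.9, "pattern-y": 0.1}   # overlearned on pattern-x
node_b = {"pattern-y": 0.8, "pattern-z": 0.4}

node_a = blend(node_a, node_b)
# node_a now also covers pattern-z, with less extreme weights overall
```

In the paper's service-oriented setting, such an exchange would run as a loosely coupled service between nodes rather than as a direct function call.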
Since more and more production tasks are enabled by Industry 4.0 techniques, the number of knowledge-intensive production tasks increases, as trivial tasks can be automated and only non-trivial tasks demand human-machine interaction. With this, challenges regarding the competence of production workers, the complexity of tasks and the stickiness of required knowledge occur [1]. Furthermore, workers experience time pressure, which can lead to a decrease in output quality. Cyber-physical systems (CPS) have the potential to assist workers in knowledge-intensive work, grounded on quantitative insights about knowledge transfer activities [2]. By providing contextual and situational awareness as well as complex classification and selection algorithms, CPS are able to ease knowledge transfer in a way that significantly improves production time and quality. So far, CPS have only been used for direct production and process optimization, while knowledge transfers have only been considered in assistance systems with little contextual awareness. Combining production and knowledge transfer optimization thus shows potential for further improvements. This contribution outlines the requirements and a framework for designing these systems and accounts for the relevant factors.
Business processes are regularly modified, either to capture requirements from the organization’s environment or due to internal optimization and restructuring. Implementing the changes in individual work routines is aided by change management tools. These tools aim at the process executor’s acceptance of the process and their empowerment. They cover a wide range of general factors and seldom accurately address the changes in task execution and sequence. Furthermore, change is only framed as a learning activity, while most obstacles to change arise from the inability to unlearn or forget behavioural patterns one is acquainted with. Therefore, this paper aims to develop and demonstrate a notation to capture changes in business processes and to identify elements that are likely to present obstacles during change. It connects existing research on changes in work routines and psychological insights on unlearning and intentional forgetting to the BPM domain. The results contribute to more transparency in business process models regarding knowledge changes. They provide better means to understand the dynamics and barriers of change processes.
Faced with the triad of time, cost and quality, the realization of production tasks under economic conditions is not trivial. Since the number of Artificial Intelligence (AI)-based applications in business processes is steadily increasing, the efficient design of AI cases for production processes, as well as their target-oriented improvement, is essential so that production outcomes satisfy high quality criteria and economic requirements. Both challenge production management and data scientists, who aim to assign ideal manifestations of artificial neural networks (ANNs) to a certain task. Building on new attempts at ANN-based production process improvements [8], this paper continues research on the optimal creation, provision and utilization of ANNs. Moreover, it presents a mechanism for AI case-based reasoning for ANNs. Experiments empirically demonstrate how ANN knowledge bases are continuously improved by this mechanism. Its proof of concept is demonstrated with the example of four production simulation scenarios, which cover the most relevant use cases and will be the basis for examining AI cases on a quantitative level.
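The core of case-based reasoning is the retrieve step: match a new task against stored cases and reuse the closest one's solution. A minimal sketch (the case base, feature names and ANN labels are invented, not the paper's mechanism) might look like this:

```python
import math

# Hypothetical AI-case base: each past case maps task features to the ANN
# configuration that worked for it.
case_base = [
    ({"inputs": 10.0, "noise": 0.1}, "ann-small"),
    ({"inputs": 100.0, "noise": 0.3}, "ann-wide"),
    ({"inputs": 10.0, "noise": 0.5}, "ann-robust"),
]

def retrieve(query, cases):
    """CBR 'retrieve' step: return the stored ANN whose task features lie
    closest (Euclidean distance) to the new task's features."""
    def dist(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    return min(cases, key=lambda case: dist(query, case[0]))[1]

best = retrieve({"inputs": 12.0, "noise": 0.2}, case_base)
# best == "ann-small": the closest previously solved case
```

Continuous improvement of the knowledge base then corresponds to the retain step: adding each newly solved task and its ANN back into the case base.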
Accelerating knowledge
(2019)
As knowledge-intensive processes are often carried out in teams and demand knowledge transfers among various knowledge carriers, any optimization that accelerates knowledge transfers holds great economic potential. Exemplified by product development projects, knowledge transfers focus on knowledge acquired in former situations and product generations. An adjustment of the manifestation of knowledge transfers in a concrete situation, here called an intervention, can therefore be directly connected to the adequate speed optimization of knowledge-intensive process steps. This contribution presents the specification of seven concrete interventions following an intervention template. Further, it describes the design and results of a workshop with experts as a descriptive study. The workshop was used to assess the practical relevance of the designed interventions as well as to identify practical success factors and barriers to their implementation.
Products or designs that have already been used successfully, past projects or our own experiences can be the basis for the development of new products. As reference products or existing knowledge, they are reused in the development process and across product generations. Since products are furthermore developed in cooperation, the development of new product generations is characterized by knowledge-intensive processes in which information and knowledge are exchanged between different kinds of knowledge carriers. Knowledge transfer here describes the identification of knowledge, its transmission from the knowledge carrier to the knowledge receiver, and its application by the knowledge receiver, which includes the embodied knowledge of physical products. Initial empirical findings on the quantitative effects regarding the speed of knowledge transfers have already been examined. However, the factors influencing the quality of knowledge transfer, which could increase its efficiency and effectiveness in product development, have not yet been examined empirically. Therefore, this paper prepares an experimental setting for the empirical investigation of the quality of knowledge transfers.
Developing a new product generation requires the transfer of knowledge among various knowledge carriers. Several factors influence knowledge transfer, e.g., the complexity of engineering tasks or the competence of employees, which can decrease the efficiency and effectiveness of knowledge transfers in product engineering. Hence, improving those knowledge transfers holds great potential, especially against the backdrop of experienced employees leaving the company due to retirement. So far, research results show that knowledge transfer velocity can be raised by following the Knowledge Transfer Velocity Model and implementing so-called interventions in a product engineering context. In most cases, the implemented interventions have a positive effect on knowledge transfer speed. In addition, initial theoretical findings describe factors influencing the quality of knowledge transfers and outline a setting for empirically investigating how the quality can be improved, by introducing a general description of knowledge transfer reference situations and principles to measure the quality of knowledge artifacts. To assess the quality of knowledge transfers in a product engineering context, the Knowledge Transfer Quality Model (KTQM) is created, which serves as a basis to develop and implement quality-dependent interventions for different knowledge transfer situations. As a result, this paper introduces the specifications of eight situation-adequate interventions to improve the quality of knowledge transfers in product engineering, following an intervention template. These interventions are intended to be implemented in an industrial setting to measure the quality of knowledge transfers and validate their effect.
Purpose
The purpose of this study was to investigate work-related adaptive performance from a longitudinal process perspective. This paper clustered specific behavioral patterns following the introduction of a change and related them to retentivity as an individual cognitive ability. In addition, this paper investigated whether the occurrence of adaptation errors varied depending on the type of change content.
Design/methodology/approach
Data from 35 participants collected in the simulated manufacturing environment of a Research and Application Center Industry 4.0 (RACI) were analyzed. The participants were required to learn and train a manufacturing process in the RACI and through an online training program. At a second measurement point in the RACI, specific manufacturing steps were subject to change and participants had to adapt their task execution. Adaptive performance was evaluated by counting the adaptation errors.
Findings
The participants showed one of the following behavioral patterns: (1) no adaptation errors, (2) few adaptation errors, (3) repeated adaptation errors regarding the same actions, or (4) many adaptation errors distributed over many different actions. The latter ones had a very low retentivity compared to the other groups. Most of the adaptation errors were made when new actions were added to the manufacturing process.
Originality/value
Our study adds empirical research on adaptive performance and its underlying processes. It contributes to a detailed understanding of different behaviors in change situations and derives implications for organizational change management.
Purpose
With shorter product cycles and a growing number of knowledge-intensive business processes, time consumption is a highly relevant target factor in measuring the performance of contemporary business processes. This research aims to extend prior research on the effects of knowledge transfer velocity at the individual level by considering the effects of complexity, stickiness, competencies and further demographic factors on knowledge-intensive business processes at the conversion-specific level.
Design/methodology/approach
We empirically assess the impact of situation-dependent knowledge transfer velocities on time consumption in teams and individuals. Further, we examine the demographic effect on this relationship. We study a sample of 178 experiments with project teams and individuals, applying ordinary least squares (OLS) regression for modeling.
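For readers unfamiliar with the method, a minimal OLS sketch with invented data (not the study's 178 experiments) shows the idea of regressing time consumption on an influence factor such as task complexity, using the textbook least-squares formulas:

```python
# Illustrative OLS sketch: regressing time consumption on task complexity.
# All values are made up for demonstration; the study uses its own data.
complexity = [1, 2, 3, 4, 5]
time_used = [10, 14, 15, 19, 22]  # hypothetical measurements

n = len(complexity)
mean_x = sum(complexity) / n
mean_y = sum(time_used) / n
# slope = covariance(x, y) / variance(x); intercept from the means
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(complexity, time_used))
         / sum((x - mean_x) ** 2 for x in complexity))
intercept = mean_y - slope * mean_x
# fitted model: time_used ≈ intercept + slope * complexity
```

The study's multi-factor models extend this to several regressors (complexity, stickiness, competence, demographics) estimated jointly.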
Findings
The authors find that the time consumed by knowledge transfers is negatively associated with the complexity of tasks. Moreover, competence among team members has a complementary effect on this relationship, and stickiness retards knowledge transfers. Thus, while demographic factors urgently need to be considered for effective and speedy knowledge transfers, these influencing factors should be addressed on a conversion-specific basis, as some tasks are best realized in teams while others are not. Guidelines and interventions are derived to identify the best task realization variants, so that process performance is improved by a new kind of process improvement method.
Research limitations/implications
This study empirically establishes the importance of conversion-specific influence factors and demographic factors as drivers of high knowledge transfer velocities in teams and among individuals. The contribution connects the field of knowledge management to important streams in the wider business literature: process improvement, management of knowledge resources, design of information systems, etc. Although the model is highly bound to the experimental tasks, it has high explanatory power and high generalizability to other contexts.
Practical implications
Team managers should take care to allow the optimal knowledge transfer situation within the team. This is particularly important when knowledge sharing is central, e.g. in product development and consulting processes. If this is not possible, interventions should be applied to the individual knowledge transfer situation to improve knowledge transfers among team members.
Social implications
Faster and more effective knowledge transfers improve the performance of both commercial and non-commercial organizations. As nowadays, the individual is faced with time pressure to finalize tasks, the deliberated increase of knowledge transfer velocity is a core capability to realize this goal. Quantitative knowledge transfer models result in more reliable predictions about the duration of knowledge transfers. These allow the target-oriented modification of knowledge transfer situations so that processes speed up, private firms are more competitive and public services are faster to citizens.
Originality/value
Time consumption is an increasingly relevant factor in contemporary business but has so far not been explored in experiments at all. This study extends current knowledge by considering quantitative effects on knowledge transfer velocity and improved knowledge transfers.
As the complexity of learning task requirements, computer infrastructures and knowledge acquisition for artificial neuronal networks (ANN) is increasing, it is challenging to talk about ANN without creating misunderstandings. An efficient, transparent and failure-free design of learning tasks by models is not supported by any tool so far. For this purpose, in particular the consideration of data, information and knowledge on the basis of an integration with knowledge-intensive business process models and a process-oriented knowledge management is attractive. With the aim of making the design of learning tasks expressible by models, this paper proposes a graphical modeling language called Neuronal Training Modeling Language (NTML), which allows the repetitive use of learning designs. An example ANN project on AI-based dynamic GUI adaptation exemplifies its use as a first demonstration.
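A textual counterpart to such a model-based learning-task design might look like the following sketch (the field names and values are invented for illustration and are not part of NTML):

```python
# Hypothetical textual form of a learning-task model: the design of an ANN
# training task captured as a reusable, checkable specification.
learning_task = {
    "name": "gui-adaptation",
    "knowledge_sources": ["interaction-log", "user-profile"],  # data inputs
    "network": {"layers": [64, 32, 8], "activation": "relu"},
    "training": {"epochs": 50, "batch_size": 32},
    "process_embedding": "customer-support-process",  # business process served
}

def validate(task):
    """Minimal completeness check before a learning design is reused."""
    required = {"name", "knowledge_sources", "network", "training"}
    return required <= task.keys()
```

Making such specifications explicit is what enables the repetitive reuse of learning designs that the paper attributes to NTML.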