Objective
We propose a data-driven method to detect temporal patterns of disease progression in high-dimensional claims data based on gradient boosting with stability selection.
Materials and methods
We identified patients with chronic obstructive pulmonary disease in a German health insurance claims database with 6.5 million individuals and divided them into a group of patients with the highest disease severity and a group of control patients with lower severity. We then used gradient boosting with stability selection to determine variables correlating with a chronic obstructive pulmonary disease diagnosis of highest severity and subsequently model the temporal progression of the disease using the selected variables.
Results
We identified a network of 20 diagnoses (e.g. respiratory failure), medications (e.g. anticholinergic drugs) and procedures associated with a subsequent chronic obstructive pulmonary disease diagnosis of highest severity. Furthermore, the network successfully captured temporal patterns, such as disease progressions from lower to higher severity grades.
Discussion
The temporal trajectories identified by our data-driven approach are compatible with existing knowledge about chronic obstructive pulmonary disease, showing that the method can reliably select relevant variables in a high-dimensional context.
Conclusion
We provide a generalizable approach for the automatic detection of disease trajectories in claims data. This could help to diagnose diseases early, identify unknown risk factors and optimize treatment plans.
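The variable-selection step can be sketched as stability selection wrapped around a boosting learner: repeatedly subsample the data, fit a boosted model, and record how often each variable is used. The following is a minimal numpy-only illustration with decision-stump boosting on synthetic data, not the study's actual pipeline; all parameters and the data-generating process are invented for the example.

```python
import numpy as np

def fit_stump(X, y):
    """Best single-feature, median-threshold stump for squared error."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        thr = np.median(X[:, j])
        mask = X[:, j] <= thr
        if mask.all() or not mask.any():
            continue
        left, right = y[mask].mean(), y[~mask].mean()
        err = ((y[mask] - left) ** 2).sum() + ((y[~mask] - right) ** 2).sum()
        if err < best_err:
            best_err, best = err, (j, thr, left, right)
    return best

def boosted_features(X, y, rounds=5, lr=0.5):
    """Gradient boosting on squared loss; return the set of features used."""
    pred = np.zeros(len(y))
    used = set()
    for _ in range(rounds):
        j, thr, left, right = fit_stump(X, y - pred)
        used.add(j)
        pred += lr * np.where(X[:, j] <= thr, left, right)
    return used

def stability_selection(X, y, n_subsamples=30, frac=0.5, seed=1):
    """Selection frequency of each feature over random half-subsamples."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_subsamples):
        idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
        for j in boosted_features(X[idx], y[idx]):
            counts[j] += 1
    return counts / n_subsamples

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 10))             # 10 candidate "variables"
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # only the first two matter
freq = stability_selection(X, y)
```

Variables that are selected in nearly every subsample (here the two informative ones) are kept; rarely selected variables are treated as noise.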
Artificial intelligence is on everyone's lips. More and more application areas are being opened up by analyzing available data with algorithms and frameworks, e.g. from machine learning. This book aims to provide an overview of currently available solutions and, beyond that, to offer concrete guidance in selecting algorithms or tools for specific problems. To meet this claim, 90 solutions were identified through a systematic literature review and a survey of practice, and subsequently classified. With the help of this book, it becomes possible to quickly grasp the necessary fundamentals, identify common application areas, and systematically master the process of selecting a suitable ML tool for one's own project.
Purpose
The purpose of this study was to investigate work-related adaptive performance from a longitudinal process perspective. This paper clustered specific behavioral patterns following the introduction of a change and related them to retentivity as an individual cognitive ability. In addition, this paper investigated whether the occurrence of adaptation errors varied depending on the type of change content.
Design/methodology/approach
Data from 35 participants collected in the simulated manufacturing environment of a Research and Application Center Industry 4.0 (RACI) were analyzed. The participants were required to learn and train a manufacturing process in the RACI and through an online training program. At a second measurement point in the RACI, specific manufacturing steps were subject to change and participants had to adapt their task execution. Adaptive performance was evaluated by counting the adaptation errors.
Findings
The participants showed one of the following behavioral patterns: (1) no adaptation errors, (2) few adaptation errors, (3) repeated adaptation errors regarding the same actions, or (4) many adaptation errors distributed over many different actions. Participants in the last group had markedly lower retentivity than those in the other groups. Most of the adaptation errors were made when new actions were added to the manufacturing process.
Originality/value
Our study adds empirical research on adaptive performance and its underlying processes. It contributes to a detailed understanding of different behaviors in change situations and derives implications for organizational change management.
Purpose
With shorter product cycles and a growing number of knowledge-intensive business processes, time consumption is a highly relevant target factor in measuring the performance of contemporary business processes. This research aims to extend prior research on the effects of knowledge transfer velocity at the individual level by considering the effect of complexity, stickiness, competencies and further demographic factors on knowledge-intensive business processes at the conversion-specific level.
Design/methodology/approach
We empirically assess the impact of situation-dependent knowledge transfer velocities on time consumption in teams and individuals. Further, we examine the demographic effect on this relationship. We study a sample of 178 experiments with project teams and individuals, applying ordinary least squares (OLS) regression for modeling.
Findings
The authors find that the time consumed by knowledge transfers is negatively associated with the complexity of tasks. Moreover, competence among team members has a complementary effect on this relationship, and stickiness retards knowledge transfers. Thus, while demographic factors urgently need to be considered for effective and speedy knowledge transfers, these influencing factors should be addressed on a conversion-specific basis, so that some tasks are best realized in teams while others are not. Guidelines and interventions are derived to identify the best task realization variants, so that process performance is improved by a new kind of process improvement method.
Research limitations/implications
This study establishes empirically the importance of conversion-specific influence factors and demographic factors as drivers of high knowledge transfer velocities in teams and among individuals. The contribution connects the field of knowledge management to important streams in the wider business literature: process improvement, management of knowledge resources, design of information systems, etc. Although the model is highly bound to the experiment tasks, it has high explanatory power and high generalizability to other contexts.
Practical implications
Team managers should take care to allow the optimal knowledge transfer situation within the team. This is particularly important when knowledge sharing is central, e.g. in product development and consulting processes. If this is not possible, interventions should be applied to the individual knowledge transfer situation to improve knowledge transfers among team members.
Social implications
Faster and more effective knowledge transfers improve the performance of both commercial and non-commercial organizations. As individuals nowadays face time pressure to finalize tasks, deliberately increasing knowledge transfer velocity is a core capability for realizing this goal. Quantitative knowledge transfer models result in more reliable predictions about the duration of knowledge transfers. These allow the target-oriented modification of knowledge transfer situations, so that processes speed up, private firms become more competitive and public services reach citizens faster.
Originality/value
Time consumption is an increasingly relevant factor in contemporary business but has so far hardly been explored in experiments. This study extends current knowledge by considering quantitative effects on knowledge transfer velocity and improved knowledge transfers.
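The OLS setup described in the methodology can be illustrated on simulated data. The coefficients, variable scales and the data-generating process below are invented for the sketch and do not reproduce the study's estimates; only the sample size (178) matches the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 178  # sample size as reported in the study

# Hypothetical predictors on illustrative scales.
complexity = rng.uniform(1, 5, n)
stickiness = rng.uniform(0, 1, n)
competence = rng.uniform(1, 5, n)

# Invented data-generating process: stickiness slows transfers,
# competence speeds them up (signs chosen for illustration only).
time_consumption = (10.0 + 3.0 * complexity + 5.0 * stickiness
                    - 2.0 * competence + rng.normal(0, 1, n))

# OLS: regress time consumption on an intercept and the three factors.
X = np.column_stack([np.ones(n), complexity, stickiness, competence])
beta, *_ = np.linalg.lstsq(X, time_consumption, rcond=None)
```

The estimated coefficients `beta` recover the assumed effects: a positive sign for stickiness (retarding transfers) and a negative sign for competence (speeding them up), mirroring the direction of the reported findings.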
The paper deals with the increasing growth of embedded systems and their role within Internet-like structures (the Internet of Things) as providers of computing power that are more or less suited to analytical tasks. Using the example of a cyber-physical manufacturing system, a common objective function is developed with the intention to measure efficient task processing within analytical infrastructures. A first validation is carried out on the basis of an expert panel.
With the further development of more and more production machines into cyber-physical systems, and their greater integration with artificial intelligence (AI) techniques, the coordination of intelligent systems is a highly relevant target factor for the operation and improvement of networked processes, such as those found in cross-organizational production contexts spanning multiple distributed locations. This work aims to extend prior research on managing their artificial knowledge transfers as a coordination instrument by examining the effects of different activation types (respective activation rates and cycles) on production machines instructed by Artificial Neural Networks (ANNs). For this, it provides a new integration type of ANN-based cyber-physical production system as a tool to research artificial knowledge transfers: in a design-science-oriented way, a prototype of a simulation system is constructed as an Open Source information system, which will be used in follow-up research to (I) enable research on ANN activation types in production networks, (II) illustrate ANN-based production networks disrupted by activation types and clarify the need for harmonizing them, and (III) demonstrate conceptual management interventions. This simulator shall establish the importance of site-specific coordination mechanisms and novel forms of management interventions as drivers of efficient artificial knowledge transfer.
Faced with the triad of time, cost and quality, realizing knowledge-intensive tasks under economic conditions is not trivial. Since the number of knowledge-intensive processes keeps increasing, the efficient design of knowledge transfers in business processes, as well as their target-oriented improvement, is essential so that process outcomes satisfy high quality criteria and economic requirements. This particularly challenges knowledge management, which aims to assign ideal manifestations of influence factors on knowledge transfers to a certain task. Building on first attempts at knowledge transfer-based process improvement [1], this paper continues research on the quantitative examination of knowledge transfers and presents a ready-to-go experiment design that can examine the quality of knowledge transfers empirically and on a quantitative level. Its use is proven by the example of four influence factors, namely stickiness, complexity, competence and time pressure.
Collaboration during the modeling process is uncomfortable and characterized by various limitations. With the successful transfer of first process modeling languages to the augmented world, non-transparent processes can be visualized in a more comprehensive way. With the aim of raising the comfort, speed, accuracy and manifoldness of real-world process augmentations, a framework for the bidirectional interplay of the common process modeling world and the augmented world has been designed as a morphologic box. Its demonstration proves that the drawn AR integrations work. The identified dimensions were derived from (1) a designed knowledge construction axiom, (2) a designed meta-model, (3) designed use cases and (4) designed directional interplay modes. Through a workshop-based survey, the so far best AR modeling configuration is identified, which can serve for benchmarks and implementations.
Business processes are regularly modified either to capture requirements from the organization’s environment or due to internal optimization and restructuring. Implementing the changes into the individual work routines is aided by change management tools. These tools aim at the acceptance of the process by and empowerment of the process executor. They cover a wide range of general factors and seldom accurately address the changes in task execution and sequence. Furthermore, change is only framed as a learning activity, while most obstacles to change arise from the inability to unlearn or forget behavioural patterns one is acquainted with. Therefore, this paper aims to develop and demonstrate a notation to capture changes in business processes and identify elements that are likely to present obstacles during change. It connects existing research from changes in work routines and psychological insights from unlearning and intentional forgetting to the BPM domain. The results contribute to more transparency in business process models regarding knowledge changes. They provide better means to understand the dynamics and barriers of change processes.
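As an illustration of the idea, changes between two process versions can be classified at the task level so that likely change obstacles (added tasks to learn, removed tasks to unlearn or forget) become explicit. This sketch is not the notation developed in the paper; the function and task names are invented.

```python
def classify_changes(old, new):
    """Classify task-level differences between two process versions.

    Returns a dict mapping each changed task to one of:
    'added'   - new task (a learning demand),
    'removed' - dropped task (an unlearning/forgetting demand),
    'moved'   - same task at a different position (resequencing).
    Unchanged tasks are omitted.
    """
    changes = {}
    for task in new:
        if task not in old:
            changes[task] = 'added'
        elif old.index(task) != new.index(task):
            changes[task] = 'moved'
    for task in old:
        if task not in new:
            changes[task] = 'removed'
    return changes

# Invented example: a revised manufacturing process.
old_process = ['pick part', 'drill hole', 'deburr', 'assemble', 'inspect']
new_process = ['pick part', 'drill hole', 'assemble', 'deburr', 'label', 'inspect']
changes = classify_changes(old_process, new_process)
```

Flagging `'added'` and `'removed'` tasks separately makes the learning and unlearning demands of a change visible in the model, which is the kind of transparency the notation aims at.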
Manufacturing Analytics (2018)
Traditionally, business models and software designs modeled the usage of artificial intelligence (AI) at a very specific point in the process, or rather as a fixed, implemented application. Since applications can be based on AI, such as networked artificial neural networks (ANNs) on top of which applications are installed, these on-top applications can be instructed directly from their underlying ANN compartments [1]. However, with the integration of several AI-based systems, their coordination is a highly relevant target factor for the operation and improvement of networked processes, such as those found in cross-organizational production contexts spanning multiple distributed locations. This work aims to extend prior research on managing artificial knowledge transfers among interlinked AIs as a coordination instrument by examining the effects of different activation types (respective activation rates and cycles) on production machines instructed by ANNs. In a design-science-oriented way, this paper conceptualizes rhythmic state descriptions for dynamic systems and 14 associated experiment designs. Two experiments have been realized, analyzed and evaluated with regard to the activities and processes they induced. Findings show that the simulator [2] used and the experiments designed and realized here (I) enable research on ANN activation types and (II) illustrate ANN-based production networks disrupted by activation types, clarifying the need for harmonizing them. Further, (III) management interventions are derived for harmonizing interlinked ANNs. This study establishes the importance of site-specific coordination mechanisms and novel forms of management interventions as drivers of efficient artificial knowledge transfer.
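The disruption caused by unharmonized activation cycles can be illustrated with a toy producer/consumer simulation: one ANN-instructed machine feeds a buffer that a second machine drains, each on its own activation cycle. The model below is an invented minimal sketch, not the simulator [2] used in the study.

```python
def simulate(cycle_a, cycle_b, steps=100):
    """Machine A feeds a buffer, machine B consumes from it.

    Each machine activates (processes one item) every `cycle_*` time
    steps. Mismatched activation cycles let the buffer drift, which is
    the kind of disruption that calls for harmonizing activation types
    across interlinked machines.
    """
    buffer = 0
    trace = []
    for t in range(1, steps + 1):
        if t % cycle_a == 0:
            buffer += 1                    # A produces one item
        if t % cycle_b == 0 and buffer > 0:
            buffer -= 1                    # B consumes one item
        trace.append(buffer)
    return trace

harmonized = simulate(cycle_a=2, cycle_b=2)  # matched cycles: stable buffer
disrupted = simulate(cycle_a=2, cycle_b=3)   # mismatched: buffer grows
```

With matched cycles the buffer stays empty; with a mismatch, work piles up linearly, so a coordination intervention (e.g. adjusting one machine's activation rate) becomes necessary.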
Since more and more business tasks are enabled by Artificial Intelligence (AI)-based techniques, the number of knowledge-intensive tasks increases, as trivial tasks can be automated while non-trivial tasks demand human-machine interaction. With this, challenges regarding the management of knowledge workers and machines arise [9]. Furthermore, knowledge workers experience time pressure, which can lead to a decrease in output quality. Artificial Intelligence-based systems (AIS) have the potential to assist human workers in knowledge-intensive work. By providing a domain-specific language, their contextual and situational awareness as well as their process embedding can be specified, which enables the management of humans and AIS to ease knowledge transfer in a way that improves process time, cost and quality significantly. This contribution outlines a framework for designing these systems and accounts for their implementation.
As part of digitization, the role of artificial systems as new actors in knowledge-intensive processes requires recognizing them as a new form of knowledge bearer alongside traditional knowledge bearers such as individuals, groups and organizations. So far, artificial intelligence (AI) methods have been used in knowledge management (KM) for knowledge discovery and for reinterpreting information, and recent works focus on studying the implementation of different AI technologies for knowledge management, like big data, ontology-based methods and intelligent agents [1]. However, a holistic management approach that considers artificial systems as knowledge bearers is lacking. The paper therefore designs a new kind of KM approach that integrates the technical level of knowledge and manifests as Neuronal KM (NKM). Superimposing traditional KM approaches with the NKM, Symbiotic Knowledge Management (SKM) is furthermore conceptualized, so that human as well as artificial knowledge bearers can be managed in symbiosis. First use cases demonstrate the new KM, NKM and SKM approaches in a proof of concept and exemplify their differences.
With larger artificial neural networks (ANN) and deeper neural architectures, common methods for training ANN, such as backpropagation, are key to learning success. Their role becomes particularly important when interpreting and controlling structures that evolve through machine learning. This work aims to extend previous research on backpropagation-based methods by presenting a modified, full-gradient version of the backpropagation learning algorithm that preserves (or rather crystallizes) selected neural weights while leaving other weights adaptable (or rather fluid). In a design-science-oriented manner, a prototype of a feedforward ANN is demonstrated and refined using the new learning method. The results show that the so-called crystallizing backpropagation increases the control possibilities of neural structures and interpretation chances, while learning can be carried out as usual. Since neural hierarchies are established because of the algorithm, ANN compartments start to function in terms of cognitive levels. This study shows the importance of dealing with ANN in hierarchies through backpropagation and brings in learning methods as novel ways of interacting with ANN. Practitioners will benefit from this interactive process because they can restrict neural learning to specific architectural components of ANN and can focus further development on specific areas of higher cognitive levels without the risk of destroying valuable ANN structures.
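A minimal sketch of the crystallization idea, assuming it can be reduced to gradient masking: selected weights are excluded from the backpropagation update and therefore stay fixed ("crystallized"), while all other weights remain fluid and learn as usual. This toy XOR example is illustrative and does not reproduce the paper's full-gradient algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-4-1 feedforward net trained on XOR with plain backprop.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

# Crystallize the first hidden unit's input weights: they must not change.
frozen_W1 = np.zeros_like(W1, dtype=bool)
frozen_W1[:, 0] = True
W1_before = W1.copy()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse():
    return float(((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())

mse_init, lr = mse(), 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # squared-loss output delta
    d_h = (d_out @ W2.T) * h * (1 - h)    # backpropagated hidden delta
    g_W1 = X.T @ d_h
    g_W1[frozen_W1] = 0.0                 # crystallized weights get no update
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * g_W1;          b1 -= lr * d_h.sum(axis=0)
mse_final = mse()
```

After training, the masked column of `W1` is bit-for-bit identical to its initial value while the fluid weights have moved and the loss has dropped, so learning proceeds as usual around the preserved structure.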
Developing a new product generation requires the transfer of knowledge among various knowledge carriers. Several factors influence knowledge transfer, e.g. the complexity of engineering tasks or the competence of employees, which can decrease the efficiency and effectiveness of knowledge transfers in product engineering. Hence, improving those knowledge transfers holds great potential, especially against the backdrop of experienced employees leaving the company due to retirement. So far, research results show that knowledge transfer velocity can be raised by following the Knowledge Transfer Velocity Model and implementing so-called interventions in a product engineering context. In most cases, the implemented interventions have a positive effect on knowledge transfer speed. In addition, initial theoretical findings describe factors influencing the quality of knowledge transfers and outline a setting to empirically investigate how quality can be improved, introducing a general description of knowledge transfer reference situations and principles to measure the quality of knowledge artifacts. To assess the quality of knowledge transfers in a product engineering context, the Knowledge Transfer Quality Model (KTQM) is created, which serves as a basis to develop and implement quality-dependent interventions for different knowledge transfer situations. As a result, this paper introduces the specifications of eight situation-adequate interventions to improve the quality of knowledge transfers in product engineering, following an intervention template. Those interventions are intended to be implemented in an industrial setting to measure the quality of knowledge transfers and validate their effect.
The idea of the continuous improvement process (CIP) helps companies to continuously improve their operation and thereby contributes to their competitiveness. Through digitization, new potentials emerge to solve known CIP issues. This contribution specifically addresses the individual motivation of employees to contribute to the CIP. Typically, related initiatives lack contributions over time. The use of gamification is a promising way to achieve continuous participation by addressing the individual needs of participants. While the use of extrinsic motivation elements is common in practice, the idea of this approach is to specifically address intrinsic motivations, which serve as a long-term motivator. This article contributes a gamification concept for the continuous improvement process. The main results include an adapted CIP, a gamification concept, and a market mechanism. Furthermore, the concept is implemented and demonstrated as a prototype in an online platform.
Products or designs already used successfully, past projects or one's own experiences can be the basis for the development of new products. As reference products or existing knowledge, they are reused in the development process and across product generations. Since products are furthermore developed in cooperation, the development of new product generations is characterized by knowledge-intensive processes in which information and knowledge are exchanged between different kinds of knowledge carriers. Knowledge transfer here describes the identification of knowledge, its transmission from the knowledge carrier to the knowledge receiver, and its application by the knowledge receiver, which includes the embodied knowledge of physical products. Initial empirical findings on the quantitative effects regarding the speed of knowledge transfers have already been obtained. However, the factors influencing the quality of knowledge transfer, which could increase the efficiency and effectiveness of knowledge transfer in product development, have not yet been examined empirically. Therefore, this paper prepares an experimental setting for the empirical investigation of the quality of knowledge transfers.
Artificial intelligence (AI) is rapidly gaining importance in numerous industries and is increasingly being adopted as an application area in Enterprise Resource Planning (ERP) systems. The idea that machines can imitate human cognitive abilities by generating knowledge through learning from examples in data, information and experience is a key element of the digital transformation today. However, the use of AI in ERP systems is characterized by a high degree of complexity, since AI is a cross-sectional technology that can be applied in different areas of a company. The degrees of application can also differ considerably. In order to capture the use of AI in ERP systems despite this complexity and to compare it across systems, a maturity model was developed in this study. It forms the starting point for determining the AI maturity of ERP systems and distinguishes the following four AI- and system-related levels: 1) technical capabilities, 2) data maturity, 3) functional maturity and 4) explainability of the system.
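To make the four levels concrete, a hypothetical scoring rule might rate each level on a 0-4 scale and cap the overall indicator by the weakest dimension. Both the scale and the aggregation rule below are assumptions for illustration, not the study's actual maturity metric.

```python
# The four dimensions named in the study; scale and aggregation invented.
DIMENSIONS = ("technical_capabilities", "data_maturity",
              "functional_maturity", "explainability")

def ai_erp_maturity(scores):
    """Aggregate per-dimension scores (0..4) into one overall indicator.

    The overall maturity is capped just above the weakest dimension,
    reflecting that e.g. strong functions cannot fully compensate for
    missing data maturity.
    """
    if set(scores) != set(DIMENSIONS):
        raise ValueError("score every dimension exactly once")
    values = [scores[d] for d in DIMENSIONS]
    if any(not 0 <= v <= 4 for v in values):
        raise ValueError("scores must lie in 0..4")
    return min(sum(values) / len(values), min(values) + 1)

level = ai_erp_maturity({"technical_capabilities": 3, "data_maturity": 1,
                         "functional_maturity": 3, "explainability": 2})
```

In the example, weak data maturity caps the overall indicator at 2 even though two dimensions score 3, making the bottleneck visible.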
The optimal dimensioning of IT hardware increasingly challenges decision-makers due to constant further development. This applies in particular to analytics infrastructures, which increasingly employ new data analysis software that varies greatly in its resource requirements. To achieve a flexible and at the same time efficient design of analytics infrastructures, a dynamically operating architecture concept is proposed that distributes tasks on the basis of a system-specific decision maxim with the help of an escalation matrix, taking into account task characteristics as well as the available hardware equipment according to its utilization.
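The escalation idea can be sketched as a simple tiered assignment: each task is placed on the cheapest infrastructure tier with enough free capacity and escalated otherwise. Tier names, capacities and loads below are invented for illustration; the paper's decision maxim is richer than this.

```python
# Invented infrastructure tiers, ordered from cheapest to most powerful.
TIERS = [
    {"name": "edge node", "capacity": 4.0},
    {"name": "on-premise server", "capacity": 16.0},
    {"name": "cloud cluster", "capacity": 64.0},
]

def assign(task_load, utilization):
    """Place a task on the first tier whose free capacity covers its load.

    `utilization` maps tier name -> currently used capacity and is
    updated in place. Escalates tier by tier; returns None if even the
    last tier is saturated.
    """
    for tier in TIERS:
        used = utilization.get(tier["name"], 0.0)
        if task_load <= tier["capacity"] - used:
            utilization[tier["name"]] = used + task_load
            return tier["name"]
    return None

util = {}
placements = [assign(load, util) for load in (3.0, 3.0, 10.0, 60.0, 10.0)]
```

Small tasks stay on the edge, larger ones escalate to the server and the cloud, and a task that fits nowhere is rejected, which is the escalation-matrix behavior in miniature.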