Objective: We propose a data-driven method to detect temporal patterns of disease progression in high-dimensional claims data based on gradient boosting with stability selection. Materials and methods: We identified patients with chronic obstructive pulmonary disease in a German health insurance claims database with 6.5 million individuals and divided them into a group of patients with the highest disease severity and a group of control patients with lower severity. We then used gradient boosting with stability selection to determine variables correlating with a chronic obstructive pulmonary disease diagnosis of highest severity and subsequently model the temporal progression of the disease using the selected variables. Results: We identified a network of 20 diagnoses (e.g. respiratory failure), medications (e.g. anticholinergic drugs) and procedures associated with a subsequent chronic obstructive pulmonary disease diagnosis of highest severity. Furthermore, the network successfully captured temporal patterns, such as disease progressions from lower to higher severity grades. Discussion: The temporal trajectories identified by our data-driven approach are compatible with existing knowledge about chronic obstructive pulmonary disease, showing that the method can reliably select relevant variables in a high-dimensional context. Conclusion: We provide a generalizable approach for the automatic detection of disease trajectories in claims data. This could help to diagnose diseases early, identify unknown risk factors and optimize treatment plans.
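The selection step described above can be sketched as repeated subsampling with a gradient boosting learner, keeping only variables chosen in a large fraction of runs. This is a minimal illustration, assuming scikit-learn and synthetic data; the dataset, thresholds and model settings are invented and not taken from the paper.

```python
# Sketch of stability selection with gradient boosting (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=50,
                           n_informative=5, random_state=0)

n_runs, frac = 20, 0.5
counts = np.zeros(X.shape[1])
for _ in range(n_runs):
    # fit the learner on a random half of the data
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    model = GradientBoostingClassifier(n_estimators=50, random_state=0)
    model.fit(X[idx], y[idx])
    counts += model.feature_importances_ > 0  # variables this run actually used

# keep variables selected in at least 80% of subsampling runs
stable = np.where(counts / n_runs >= 0.8)[0]
print(stable)
```

The 80% selection threshold and the subsampling fraction are tuning choices; the original method additionally models the temporal ordering of the selected variables.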
The paper deals with the increasing growth of embedded systems and their role within Internet-like structures (Internet of Things) as components that provide computing power and are more or less suitable for analytical tasks. Using the example of a cyber-physical manufacturing system, a common objective function is developed with the intention of measuring efficient task processing within analytical infrastructures. A first validation is realized on the basis of an expert panel.
Faced with the triad of time, cost and quality, the realization of knowledge-intensive tasks under economic conditions is not trivial. Since the number of knowledge-intensive processes keeps increasing, the efficient design of knowledge transfers in business processes, as well as their target-oriented improvement, is essential so that process outcomes satisfy high quality criteria and economic requirements. This particularly challenges knowledge management, which aims to assign ideal manifestations of the factors influencing knowledge transfers to a given task. Building on first attempts at knowledge transfer-based process improvements [1], this paper continues research on the quantitative examination of knowledge transfers and presents a ready-to-go experiment design that can examine the quality of knowledge transfers empirically and is suitable for examining knowledge transfers on a quantitative level. Its use is proven by the example of four influence factors, namely stickiness, complexity, competence and time pressure.
Business processes are regularly modified, either to capture requirements from the organization's environment or due to internal optimization and restructuring. Implementing the changes in individual work routines is aided by change management tools. These tools aim at acceptance of the process by, and empowerment of, the process executor. They cover a wide range of general factors but seldom address the changes in task execution and sequence accurately. Furthermore, change is framed only as a learning activity, while most obstacles to change arise from the inability to unlearn or forget behavioural patterns one is acquainted with. Therefore, this paper aims to develop and demonstrate a notation to capture changes in business processes and to identify elements that are likely to present obstacles during change. It connects existing research on changes in work routines and psychological insights from unlearning and intentional forgetting to the BPM domain. The results contribute to more transparency in business process models regarding knowledge changes. They provide better means to understand the dynamics and barriers of change processes.
Since more and more business tasks are enabled by Artificial Intelligence (AI)-based techniques, the number of knowledge-intensive tasks increases, as trivial tasks can be automated and non-trivial tasks demand human-machine interaction. With this, challenges regarding the management of knowledge workers and machines arise [9]. Furthermore, knowledge workers experience time pressure, which can lead to a decrease in output quality. Artificial Intelligence-based systems (AIS) have the potential to assist human workers in knowledge-intensive work. By providing a domain-specific language, their contextual and situational awareness as well as their process embedding can be specified, which enables the management of humans and AIS to ease knowledge transfer in a way that significantly improves process time, cost and quality. This contribution outlines a framework for designing these systems and accounts for their implementation.
As part of digitization, the role of artificial systems as new actors in knowledge-intensive processes requires recognizing them as a new form of knowledge bearer alongside traditional knowledge bearers such as individuals, groups and organizations. Until now, artificial intelligence (AI) methods have been used in knowledge management (KM) for knowledge discovery and for reinterpreting information, and recent works focus on studying the implementation of different AI technologies for knowledge management, such as big data, ontology-based methods and intelligent agents [1]. However, a holistic management approach that considers artificial systems as knowledge bearers is still lacking. This paper therefore designs a new kind of KM approach that integrates the technical level of knowledge and manifests as Neuronal KM (NKM). By superimposing traditional KM approaches with the NKM, Symbiotic Knowledge Management (SKM) is furthermore conceptualized, so that human as well as artificial kinds of knowledge bearers can be managed in symbiosis. First use cases demonstrate the KM, NKM and SKM approaches in a proof of concept and exemplify their differences.
With larger artificial neural networks (ANN) and deeper neural architectures, common methods for training ANN, such as backpropagation, are key to learning success. Their role becomes particularly important when interpreting and controlling structures that evolve through machine learning. This work aims to extend previous research on backpropagation-based methods by presenting a modified, full-gradient version of the backpropagation learning algorithm that preserves (or rather crystallizes) selected neural weights while leaving other weights adaptable (or rather fluid). In a design-science-oriented manner, a prototype of a feedforward ANN is demonstrated and refined using the new learning method. The results show that the so-called crystallizing backpropagation increases the control possibilities of neural structures and interpretation chances, while learning can be carried out as usual. Since neural hierarchies are established because of the algorithm, ANN compartments start to function in terms of cognitive levels. This study shows the importance of dealing with ANN in hierarchies through backpropagation and brings in learning methods as novel ways of interacting with ANN. Practitioners will benefit from this interactive process because they can restrict neural learning to specific architectural components of ANN and can focus further development on specific areas of higher cognitive levels without the risk of destroying valuable ANN structures.
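The crystallizing idea can be illustrated with a plain full-gradient update in which a binary mask keeps selected ("crystallized") weights fixed while the remaining ("fluid") weights keep learning. This is a minimal NumPy sketch under invented assumptions (toy data, a single weight layer, logistic loss), not the authors' implementation.

```python
# Sketch: masked full-gradient update that freezes selected weights.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 4))                   # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy binary targets

W = rng.normal(scale=0.1, size=(4, 1))         # one weight layer for brevity
mask = np.array([[1.0], [1.0], [0.0], [0.0]])  # last two weights crystallized
frozen = W[2:].copy()
fluid0 = W[:2].copy()

for _ in range(200):
    p = 1 / (1 + np.exp(-X @ W))      # sigmoid forward pass
    grad = X.T @ (p - y) / len(X)     # full logistic-loss gradient
    W -= 0.5 * grad * mask            # masked step: crystallized weights skip it
```

After training, the crystallized entries of `W` are bit-identical to their initial values while the fluid entries have adapted, which is the control property the abstract describes.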
Developing a new product generation requires the transfer of knowledge among various knowledge carriers. Several factors influence knowledge transfer, e.g., the complexity of engineering tasks or the competence of employees, which can decrease the efficiency and effectiveness of knowledge transfers in product engineering. Hence, improving those knowledge transfers holds great potential, especially against the backdrop of experienced employees leaving the company due to retirement. So far, research results show that knowledge transfer velocity can be raised by following the Knowledge Transfer Velocity Model and implementing so-called interventions in a product engineering context. In most cases, the implemented interventions have a positive effect on knowledge transfer speed. In addition, initial theoretical findings describe factors influencing the quality of knowledge transfers and outline a setting for empirically investigating how quality can be improved, by introducing a general description of knowledge transfer reference situations and principles to measure the quality of knowledge artifacts. To assess the quality of knowledge transfers in a product engineering context, the Knowledge Transfer Quality Model (KTQM) is created, which serves as a basis for developing and implementing quality-dependent interventions for different knowledge transfer situations. As a result, this paper introduces the specifications of eight situation-adequate interventions to improve the quality of knowledge transfers in product engineering, following an intervention template. These interventions are intended to be implemented in an industrial setting to measure the quality of knowledge transfers and validate their effect.
The idea of the continuous improvement process (CIP) helps companies to continuously improve their operation and thereby contributes to their competitiveness. Through digitization, new potentials emerge to solve known CIP issues. This contribution specifically addresses the individual motivation of employees to contribute to the CIP. Typically, related initiatives lack contributions over time. The use of gamification is a promising way to achieve continuous participation by addressing the individual needs of participants. While the use of extrinsic motivation elements is common in practice, the idea of this approach is to specifically address intrinsic motivations, which serve as a long-term motivator. This article contributes a gamification concept for the continuous improvement process. The main results include an adapted CIP, a gamification concept, and a market mechanism. Furthermore, the concept is implemented and demonstrated as a prototype in an online platform.
Products or designs that have already been used successfully, past projects, or one's own experiences can form the basis for the development of new products. This existing knowledge is reused as reference products or prior knowledge in the development process and across generations of products. Since, furthermore, products are developed in cooperation, the development of new product generations is characterized by knowledge-intensive processes in which information and knowledge are exchanged between different kinds of knowledge carriers. Knowledge transfer here describes the identification of knowledge, its transmission from the knowledge carrier to the knowledge receiver, and its application by the knowledge receiver, which includes the embodied knowledge of physical products. Initial empirical findings on the quantitative effects regarding the speed of knowledge transfers have already been examined. However, the factors influencing the quality of knowledge transfer, which could increase the efficiency and effectiveness of knowledge transfer in product development, have not yet been examined empirically. Therefore, this paper prepares an experimental setting for the empirical investigation of the quality of knowledge transfers.
Artificial intelligence (AI) is rapidly gaining importance in numerous industries and is increasingly being adopted as an application area in Enterprise Resource Planning (ERP) systems. The idea that machines can imitate human cognitive abilities by generating knowledge through learning from examples in data, information and experience is a key element of digital transformation today. However, the use of AI in ERP systems is characterized by a high degree of complexity, since AI must be understood as a cross-sectional technology that can be applied in different areas of a company. The degrees of application can also differ considerably. In order to capture the use of AI in ERP systems despite this complexity and to compare it across systems, a maturity model was developed in this study. It forms the starting point for determining AI maturity in ERP systems and distinguishes the following four AI- and system-related levels: 1) technical capabilities, 2) data maturity, 3) functional maturity and 4) explainability of the system.
Owing to constant technological progress, the optimal dimensioning of IT hardware increasingly poses challenges for decision makers. This applies in particular to analytics infrastructures, which increasingly employ new data analysis software whose resource requirements vary widely. To achieve a flexible yet efficient design of analytics infrastructures, a dynamically operating architecture concept is proposed that distributes tasks on the basis of a system-specific decision maxim using an escalation matrix, taking into account task characteristics as well as the available hardware and its utilization.
As Industry 4.0 infrastructures are highly evolutionary environments with volatile, time-dependent workloads for analytical tasks, the optimal dimensioning of IT hardware in particular is a challenge for decision makers, because the digital processing of these tasks can be decoupled from its physical place of origin. Flexible architecture models that allocate tasks efficiently with regard to multi-faceted aspects and a predefined set of local systems and external cloud services have been proven in small example scenarios. This paper provides a benchmark of existing task realization strategies, composed of (1) task distribution and (2) task prioritization, in a real-world scenario simulation. It identifies heuristics as superior strategies.
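One heuristic of the kind benchmarked here can be sketched as (1) prioritizing tasks by processing demand and (2) distributing each task to the currently least-loaded node, i.e. a longest-processing-time rule. The task sizes and node set below are invented for illustration; the paper's actual strategies are not reproduced.

```python
# Sketch of a greedy task-realization heuristic (illustrative values).
tasks = [7, 3, 9, 2, 5, 8]                     # invented processing demands
nodes = {"local_a": 0.0, "local_b": 0.0, "cloud": 0.0}

for t in sorted(tasks, reverse=True):          # prioritization: longest first
    target = min(nodes, key=nodes.get)         # distribution: least-loaded node
    nodes[target] += t

makespan = max(nodes.values())                 # completion time of the busiest node
print(nodes, makespan)
```

For these values the rule yields a makespan of 12 against a lower bound of 34/3 ≈ 11.3, illustrating why such cheap heuristics can outperform more elaborate allocation strategies in practice.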
Traditional production systems are enhanced by cyber-physical systems (CPS) and the Internet of Things. As a kind of next-generation system, cyber-physical production systems (CPPS) are able to raise the level of autonomy of their production components. To find the optimal degree of autonomy in a given context, a research approach is formulated using a simulation concept. Based on requirements and assumptions, a cyber-physical market is modeled and qualitative hypotheses are formulated, which will be verified with the help of the CPPS of a hybrid simulation environment.
As the complexity of learning task requirements, computer infrastructures and knowledge acquisition for artificial neuronal networks (ANN) is increasing, it is challenging to talk about ANN without creating misunderstandings. An efficient, transparent and failure-free design of learning tasks by models is not supported by any tool at all. For this purpose, in particular the consideration of data, information and knowledge on the basis of an integration with knowledge-intensive business process models and process-oriented knowledge management is attractive. With the aim of making the design of learning tasks expressible by models, this paper proposes a graphical modeling language called Neuronal Training Modeling Language (NTML), which allows the repetitive use of learning designs. An example ANN project of AI-based dynamic GUI adaptation exemplifies its use as a first demonstration.
Context-aware, intelligent musical instruments for improving knowledge-intensive business processes
(2022)
With shorter song publication cycles in the music industry and a reduced number of physical contact opportunities because of disruptions that may hinder musicians from cooperating, collaborative time consumption is a highly relevant target factor and a chance for feedback in contemporary music production processes. This work aims to extend prior research on knowledge transfer velocity by augmenting traditional designs of musical instruments with (I) Digital Twin, (II) Internet of Things and (III) Cyber-Physical System capabilities, and considers this new type of musical instrument as a tool to improve knowledge transfers in knowledge-intensive forms of business processes. In a design-science-oriented way, a prototype of a sensitive guitar is constructed as an information and cyber-physical system. Findings show that this intelligent SensGuitar increases feedback opportunities. This study establishes the importance of conversion-specific music production processes and novel forms of interaction in guitar playing as drivers of high knowledge transfer velocities in teams and among individuals.
The business problem of inefficient processes, imprecise process analyses and simulations, as well as non-transparent artificial neuronal network models, can be overcome by an easy-to-use modeling concept. With the aim of developing a flexible and efficient approach to modeling, simulating and optimizing processes, this paper proposes the flexible Concept of Neuronal Modeling (CoNM). The modeling concept, which is described by the designed modeling language and its mathematical formulation and is connected to a technical substantiation, is based on a collection of novel sub-artifacts. As these have been implemented as a computational model, the set of CoNM tools carries out novel kinds of Neuronal Process Modeling (NPM), Neuronal Process Simulation (NPS) and Neuronal Process Optimization (NPO). The efficacy of the designed artifacts was demonstrated rigorously by means of six experiments and a simulator of real industrial production processes.
Since more and more production tasks are enabled by Industry 4.0 techniques, the number of knowledge-intensive production tasks increases, as trivial tasks can be automated and only non-trivial tasks demand human-machine interaction. With this, challenges regarding the competence of production workers, the complexity of tasks and the stickiness of required knowledge occur [1]. Furthermore, workers experience time pressure, which can lead to a decrease in output quality. Cyber-Physical Systems (CPS) have the potential to assist workers in knowledge-intensive work, grounded on quantitative insights about knowledge transfer activities [2]. By providing contextual and situational awareness as well as complex classification and selection algorithms, CPS are able to ease knowledge transfer in a way that significantly improves production time and quality. So far, CPS have been used only for direct production and process optimization, while knowledge transfers have been addressed only in assistance systems with little contextual awareness. Combining production and knowledge transfer optimization thus shows potential for further improvements. This contribution outlines the requirements and a framework for designing these systems, accounting for the relevant factors.
Faced with the triad of time, cost and quality, the realization of production tasks under economic conditions is not trivial. Since the number of Artificial Intelligence (AI)-based applications in business processes keeps increasing, the efficient design of AI cases for production processes, as well as their target-oriented improvement, is essential so that production outcomes satisfy high quality criteria and economic requirements. Both challenge production management and data scientists, who aim to assign ideal manifestations of artificial neural networks (ANNs) to a given task. Building on new attempts at ANN-based production process improvements [8], this paper continues research on the optimal creation, provision and utilization of ANNs. Moreover, it presents a mechanism for AI case-based reasoning for ANNs. Experiments empirically demonstrate how this mechanism continuously improves ANN knowledge bases. Its proof of concept is demonstrated by the example of four production simulation scenarios, which cover the most relevant use cases and will be the basis for examining AI cases on a quantitative level.