Process models are the basic ingredient of many attempts to improve business processes. The graphical depiction of otherwise unobservable behavior in an enterprise is one of the most important techniques in the digital society, as it supports decision making in the design of processes and workflows. Nevertheless, it is not easy to model business processes correctly. Some approaches try to detect errors through automated analysis of the process model. This contribution focuses on the creation of the first model from scratch: which errors occur most frequently, and how can they be avoided?
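As a minimal illustration of what such an automated analysis can look for, the following sketch represents a process model as a directed graph and flags two frequent structural modeling errors: activities that are unreachable and activities that lead nowhere. The graph representation and all node names are invented for this example; they are not taken from the contribution itself.

```python
def find_structural_errors(edges, start, end):
    """Return activities that are unreachable or lead nowhere.

    edges: iterable of (source, target) pairs of a process graph
    start/end: the designated start and end events of the model
    """
    sources = {s for s, _ in edges}
    targets = {t for _, t in edges}
    nodes = sources | targets
    # An activity with no incoming edge (other than the start) is unreachable.
    unreachable = sorted(n for n in nodes - targets if n != start)
    # An activity with no outgoing edge (other than the end) is a dead end.
    dead_ends = sorted(n for n in nodes - sources if n != end)
    return {"unreachable": unreachable, "dead_ends": dead_ends}

model = [
    ("start", "check order"),
    ("check order", "ship goods"),
    ("ship goods", "end"),
    ("send invoice", "end"),     # modeling error: no incoming edge
    ("check order", "archive"),  # modeling error: no outgoing edge
]
print(find_structural_errors(model, "start", "end"))
```

Real analyses go well beyond such connectivity checks (e.g., soundness of control flow), but the graph view above is the common starting point.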
Faced with the triad of time, cost and quality, realizing knowledge-intensive tasks under economic conditions is not trivial. Since the number of knowledge-intensive processes keeps increasing, the efficient design of knowledge transfers in business processes, as well as their target-oriented improvement, is essential so that process outcomes satisfy high quality criteria and economic requirements. This particularly challenges knowledge management, which aims to assign ideal manifestations of the factors influencing knowledge transfers to a given task. Building on first attempts at knowledge transfer-based process improvement [1], this paper continues research on the quantitative examination of knowledge transfers and presents a ready-to-go experiment design that can examine the quality of knowledge transfers empirically and is suitable for examining knowledge transfers on a quantitative level. Its use is demonstrated with the example of four influence factors: stickiness, complexity, competence and time pressure.
Industry 4.0, i.e. the connection of cyber-physical systems via the Internet in production and logistics, leads to considerable changes in the socio-technical system of the factory. The effects range from a considerable need for further training, exacerbated by the current shortage of skilled workers, to an opening of the previously inaccessible boundaries of the factory to third-party access, an increasing merging of office IT and manufacturing IT, and a new understanding of what machines can do with their data. This results in new requirements for the modeling, analysis and design of information-processing and performance-mapping business processes.
In the past, procedures were developed under the name of "process-oriented knowledge management" with which the exchange and use of knowledge in business processes could be represented, analyzed and improved. However, these approaches were limited to the office environment. A method that makes it possible to document, analyze and jointly optimize the new possibilities of knowledge processing through artificial intelligence and machine learning in production and logistics, in the same way and in a manner compatible with the office-environment approach, does not yet exist. The extension of the modeling language KMDL described in this paper will contribute to closing this research gap.
This paper describes first approaches toward an analysis and design method for knowledge management that integrates man and machine in the age of Industry 4.0.
Manufacturing companies still have relatively few points of contact with the circular economy. In particular, extending the lifetime of whole products or parts via remanufacturing is a promising approach to reducing waste. However, the necessary cost-efficient assessment of the condition of the individual parts is challenging, and assessment procedures are technically complex (e.g., scanning and testing procedures). Furthermore, these assessment procedures are usually only available after the disassembly process has been completed. This is where conceptualization, data acquisition and simulation of remanufacturing processes can help. One major constraint on remanufacturing is reducing logistic effort, since logistics also has negative external effects on the environment. Regionalization is thus an additional but ultimately consequential challenge for remanufacturing. This article aims to fill a gap by providing a regional remanufacturing approach, in particular the design of local remanufacturing chains. A further focus lies on modeling and simulating alternative courses of action, including a feasibility study and an economic assessment.
Since more and more production tasks are enabled by Industry 4.0 techniques, the number of knowledge-intensive production tasks increases: trivial tasks can be automated, and only non-trivial tasks demand human-machine interaction. With this, challenges arise regarding the competence of production workers, the complexity of tasks and the stickiness of the required knowledge [1]. Furthermore, workers experience time pressure, which can lead to a decrease in output quality. Cyber-physical systems (CPS) have the potential to assist workers in knowledge-intensive work, grounded in quantitative insights about knowledge transfer activities [2]. By providing contextual and situational awareness as well as complex classification and selection algorithms, CPS can ease knowledge transfer in a way that significantly improves production time and quality. So far, CPS have only been used for direct production and process optimization, while knowledge transfers have only been considered in assistance systems with little contextual awareness. Combining production optimization with knowledge transfer optimization thus shows potential for further improvement. This contribution outlines the requirements and a framework for designing such systems, accounting for the relevant influence factors.
From employee to expert
(2021)
In the context of the collaborative project Ageing-appropriate, process-oriented and interactive further training in SMEs (API-KMU), innovative solutions to the challenges of demographic change and digitalisation are being developed for SMEs. To this end, an approach to age-appropriate training will be designed with the help of AR technology. During the COVID-19 pandemic, a special research design is necessary for the initial survey of the current state in the companies, which is systematically elaborated in this paper. The results of the preceding methodological considerations illustrate the necessity of a mix of methods to gain deeper insight into the work processes. Video-based retrospective interviews appear to be a suitable instrument for adequately capturing the employees' interpretative perspectives on their work activities. In conclusion, the paper identifies specific challenges, such as creating acceptance among employees; open questions, e.g., how a transfer or generalization of the results can succeed; and hypotheses that will have to be tested in the further course of the research process.
The authors propose that while tacit knowledge is a valuable resource for developing new business models, its externalization presents several challenges. One major challenge is that individuals often do not recognize their tacit knowledge resources; another is the reluctance to share one's knowledge with others. Addressing these challenges, the authors present an application-oriented, serious game-based haptic modeling approach for externalizing tacit knowledge, which can be used to develop first versions of business models based on tacit knowledge. Both conceptual and practical design fundamentals are presented, based on elaborated theoretical approaches developed with the help of a design science approach. The research process is presented step by step, with a focus on high accessibility of the presented research. Practitioners are given guidelines for implementing their own serious game projects. Scientists benefit from starting points for research on the externalization, internalization and socialization of tacit knowledge, the development of business models, and serious games or gamification. The paper concludes with open research desiderata and questions arising from the presented research process.
Business processes are regularly modified either to capture requirements from the organization’s environment or due to internal optimization and restructuring. Implementing the changes into the individual work routines is aided by change management tools. These tools aim at the acceptance of the process by and empowerment of the process executor. They cover a wide range of general factors and seldom accurately address the changes in task execution and sequence. Furthermore, change is only framed as a learning activity, while most obstacles to change arise from the inability to unlearn or forget behavioural patterns one is acquainted with. Therefore, this paper aims to develop and demonstrate a notation to capture changes in business processes and identify elements that are likely to present obstacles during change. It connects existing research from changes in work routines and psychological insights from unlearning and intentional forgetting to the BPM domain. The results contribute to more transparency in business process models regarding knowledge changes. They provide better means to understand the dynamics and barriers of change processes.
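The distinction the paper draws between learning new task elements and unlearning obsolete ones can be pictured with a minimal set-based sketch. The task names and the representation below are assumptions for this illustration only, not the paper's actual notation:

```python
def classify_change(old_tasks, new_tasks):
    """Split a process change into elements to learn, unlearn, and retain.

    Removed tasks are candidates for unlearning / intentional forgetting,
    which is where most obstacles to change are expected to arise.
    """
    old, new = set(old_tasks), set(new_tasks)
    return {
        "learn": sorted(new - old),    # newly introduced tasks
        "unlearn": sorted(old - new),  # acquainted tasks to forget
        "retain": sorted(old & new),   # unchanged routine
    }

before = ["receive order", "fax confirmation", "ship goods"]
after = ["receive order", "email confirmation", "ship goods"]
print(classify_change(before, after))
```

A notation as proposed in the paper would additionally capture sequence changes and mark the "unlearn" elements as likely barriers, rather than treating the change as pure set difference.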
With the latest technological developments and the associated new possibilities in teaching, the personalisation of learning is gaining more and more importance. It assumes that individual learning experiences and results can generally be improved when personal learning preferences are considered. To do justice to the complexity of personalising teaching and learning processes, we illustrate the components of learning and teaching in the digital environment and their interdependencies in an initial model. Furthermore, in a pre-study, we investigate the relationships between the learner's ability to (digitally) self-organise, the learner's prior knowledge, learning in different modes, and learning outcomes as one part of this model. With this pre-study, we take the first step towards a holistic model of teaching and learning in digital environments.
As AI technology is increasingly used in production systems, different approaches have emerged, from highly decentralized small-scale AI at the edge level to centralized, cloud-based services used for higher-order optimizations. Each direction has disadvantages, ranging from the lack of computational power at the edge level to the reliance on stable network connections in the centralized approach. Thus, a hybrid approach with centralized and decentralized components that possess specific abilities and interact is preferred. However, the distribution of AI capabilities leads to problems in self-adapting learning systems, as knowledge bases can diverge when no central coordination is present. Edge components will specialize in distinctive patterns (overlearn), which hampers their adaptability to different cases. Therefore, this paper presents a concept for a distributed, interchangeable knowledge base in cyber-physical production systems (CPPS). The approach is based on various AI components and concepts for each participating node. A service-oriented infrastructure allows a decentralized, loosely coupled architecture of the CPPS. By exchanging knowledge bases between nodes, the overall system should become more adaptive, as each node can "forget" its present specialization.
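One way to picture the knowledge-base exchange between nodes is a weighted blend of two nodes' local knowledge, so that an overlearned edge node partially "forgets" its own patterns in favor of a peer's. This is a toy sketch only; the dictionary representation, the pattern names, and the `retain` parameter are assumptions for illustration, not the paper's architecture:

```python
def exchange_knowledge(local, peer, retain=0.5):
    """Blend a peer node's knowledge base into the local one.

    Keys are pattern identifiers, values are confidence weights.
    Lowering `retain` makes the node forget more of its own
    specialization, counteracting overlearning at the edge.
    """
    keys = set(local) | set(peer)
    return {
        k: retain * local.get(k, 0.0) + (1 - retain) * peer.get(k, 0.0)
        for k in keys
    }

edge_node = {"scratch_pattern": 0.9}  # overlearned specialization
peer_node = {"dent_pattern": 0.8, "scratch_pattern": 0.1}
print(exchange_knowledge(edge_node, peer_node))
```

In a service-oriented CPPS as described above, such an exchange would run over loosely coupled services between nodes rather than as a local function call.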