Document Type
- Article (47)
- Conference Proceeding (20)
- Part of a Book (14)
- Postprint (7)
- Other (6)
- Monograph/Edited Volume (3)
- Review (1)
Language
- English (98)
Is part of the Bibliography
- yes (98)
Keywords
- knowledge management (6)
- Industry 4.0 (5)
- deep reinforcement learning (5)
- production control (5)
- digital learning (4)
- machine learning (4)
- systematic literature review (4)
- COVID-19 (3)
- CPPS (3)
- CPS (3)
- JSP (3)
- business processes (3)
- evaluation (3)
- knowledge transfer (3)
- learning factories (3)
- modular production (3)
- multi-agent system (3)
- neural networks (3)
- Internet of things (2)
- Simulation (2)
- TAM (2)
- action problems (2)
- augmented reality (2)
- change management (2)
- deep learning (2)
- discipline differences (2)
- e-learning (2)
- industry 4.0 (2)
- intentional forgetting (2)
- job shop scheduling (2)
- knowledge transfers (2)
- learning factory (2)
- method comparison (2)
- multi-actor routines (2)
- multi-objective optimisation (2)
- organizational memory (2)
- process modelling (2)
- product generation engineering (2)
- production planning (2)
- production planning and control (2)
- serious game (2)
- situational strength (2)
- social network analysis (2)
- tacit knowledge (2)
- taxonomy (2)
- technology acceptance (2)
- technology-mediated teaching (2)
- university teaching (2)
- vocational training (2)
- 4th industrial revolution (1)
- AI and business informatics (1)
- AI-based decision support system (1)
- Analytics (1)
- Architecture concepts (1)
- Audit (1)
- Augmented reality (1)
- Case Study (1)
- Coring (1)
- Cross-System (1)
- Customization (1)
- Cyber-physical system (1)
- Cyber-physical systems (1)
- Decentral Decision Making (1)
- Decentralized production control (1)
- Degree of autonomy (1)
- Digital Learning Factory (1)
- Digital Marketplaces (1)
- Digital Platforms (1)
- Digitization (1)
- ERP (1)
- ERP system (1)
- Enterprise Resource Planning (1)
- Enterprise Resource Planning (ERP) System (1)
- Enterprise System (1)
- Factory operating system (1)
- Generalized knowledge construction axiom (1)
- Retrieval cues (1)
- Industrial Analytics (1)
- Industrial IoT Competences (1)
- Internet of Things (1)
- Learning Factory (1)
- Literature Review (1)
- Meta-model (1)
- Mobile Software Ecosystems (1)
- Modification (1)
- Open innovation (1)
- Problems (1)
- Process Mining (1)
- Process modeling (1)
- Production (1)
- Production system (1)
- Production routine (1)
- RFID (1)
- Research Agenda (1)
- Roadmap (1)
- SECI-model (1)
- Simulation process building (1)
- Student Training (1)
- Subject-oriented learning (1)
- Tailoring (1)
- Task realization strategies (1)
- Three-tier Architecture (1)
- Use cases Morphologic box (1)
- Vocational Training (1)
- adaptable software systems (1)
- adaptive performance (1)
- advances in teaching and learning technologies (1)
- age-appropriate competence development (1)
- age-appropriate vocational training (1)
- artificial intelligence (1)
- assessment (1)
- behavioral patterns (1)
- benefits (1)
- big data analytics (1)
- business model (1)
- business models (1)
- business process improvement (1)
- business process management (1)
- business process modeling (1)
- business process optimization (1)
- case-based reasoning (1)
- change (1)
- context-aware computing (1)
- conversion (1)
- conversion sequences (1)
- cooperative AI (human-in-the-loop) (1)
- copyright (1)
- coring (1)
- corona-sensitive data collection (1)
- creativity training (1)
- cross self-confrontation (1)
- cyber-physical production systems (1)
- cyber-physical systems (1)
- data mining (1)
- data-driven artifacts (1)
- database (1)
- databases (1)
- decision-making (1)
- demographic change (1)
- design science (1)
- design-science research (1)
- development of AI-based systems (1)
- didactic concept (1)
- didactic framework (1)
- digital marketplaces (1)
- digital platform openness (1)
- digital platforms (1)
- digital teaching (1)
- discrete event simulation (1)
- distributed knowledge base (1)
- domain-specific language (1)
- Reference Architecture Model (1)
- effectiveness (1)
- empirical evaluation (1)
- empirical examination (1)
- empirical studies (1)
- enhancement (1)
- enterprise-level (1)
- enterprise system (1)
- enterprise systems (1)
- errors in modeling (1)
- experience (1)
- experiment (1)
- experimental design (1)
- experimentation (1)
- explainability (1)
- federated industrial platform ecosystems (1)
- future (1)
- game-based learning (1)
- gamification (1)
- geographical distribution (1)
- higher education (1)
- human-machine-interaction (1)
- humans-in-the-loop (1)
- hybrid simulation (1)
- improvement (1)
- information systems research (1)
- intentional forgetting (1)
- intermediaries (1)
- internet of things and services (1)
- intervention (1)
- interventions (1)
- job-shop scheduling (1)
- knowledge (1)
- knowledge engineering (1)
- knowledge transfer velocity (1)
- learning (1)
- learning environment (1)
- learning scenario for manufacturing (1)
- learning scenario implementation (1)
- manipulation (1)
- manufacturing systems (1)
- metadata (1)
- mobile software ecosystems (1)
- modeling language (1)
- morphologic box (1)
- morphological analysis (1)
- music industry (1)
- new product development (1)
- notation (1)
- personalised learning (1)
- problems (1)
- process design (1)
- process improvement (1)
- process of modeling (1)
- process perspective (1)
- process simulation (1)
- process-oriented knowledge acquisition (1)
- product development (1)
- production engineering computing (1)
- production networks (1)
- production routine (1)
- programming skills (1)
- quality (1)
- quantitative (1)
- recording of workplaces (1)
- regional network (1)
- remanufacturing (1)
- requirements (1)
- research challenges (1)
- retentivity (1)
- retrieval cues (1)
- retrofit (1)
- risks (1)
- routines (1)
- type of change content (1)
- scenario modeling (1)
- simulation (1)
- smart automation (1)
- smart grid (1)
- smart production (1)
- software engineering (1)
- standardization (1)
- subject differences (1)
- subject-oriented learning (1)
- sustainability (1)
- task realization strategies (1)
- teaching and learning model (1)
- technologies (1)
- terminology (1)
- time consumption (1)
- training (1)
- triple bottom line (1)
- unlearning (1)
- various applications (1)
- virtual learning (1)
Institute
- Fachgruppe Betriebswirtschaftslehre (54)
- Wirtschaftswissenschaften (36)
- Hasso-Plattner-Institut für Digital Engineering GmbH (3)
- Wirtschafts- und Sozialwissenschaftliche Fakultät (2)
- Department Psychologie (1)
- Fachgruppe Politik- & Verwaltungswissenschaft (1)
- Forschungsbereich „Politik, Verwaltung und Management“ (1)
- Sozialwissenschaften (1)
The Knowledge Modeling and Description Language (KMDL) is able to represent the creation, use, and necessity of knowledge along common business processes. KMDL can thus be used to formalize knowledge-intensive processes with a focus on certain knowledge-specific characteristics and to identify weak points in these processes. For computer-aided modeling and analysis, the tool K-Modeler is introduced.
This contribution presents an approach to requirement-oriented team building in industrial processes such as product development. It is based on the Knowledge Modelling and Description Language (KMDL(R)), which enables the modelling and analysis of knowledge-intensive business processes. First, the basic elements of the modelling technique are described, presenting the concept and the description language. Furthermore, it is shown how KMDL(R) process models can be used as a basis for the team-building component. To this end, an algorithm was developed that proposes a team composition for a specific task by analyzing the knowledge and skills of the employees and contrasting them with the process requirements. This can serve as guidance for team-building decisions.
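The team-building step described in this abstract could, in spirit, be sketched as a skill-coverage problem: given each employee's skills and the skills a process step requires, propose a team that covers the requirements. The greedy strategy, function names, and sample data below are illustrative assumptions, not the published KMDL algorithm.

```python
def propose_team(employees, required_skills):
    """Greedy set cover: repeatedly pick the employee who covers the most
    still-uncovered required skills (hypothetical sketch)."""
    uncovered = set(required_skills)
    candidates = dict(employees)  # name -> set of skills
    team = []
    while uncovered and candidates:
        # employee with the largest overlap with the uncovered skills
        best = max(candidates, key=lambda n: len(candidates[n] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            break  # nobody can cover the remaining requirements
        team.append(best)
        uncovered -= gain
        del candidates[best]
    return team, uncovered  # non-empty `uncovered` signals a skill gap

# illustrative data
employees = {
    "Ada":  {"CAD", "simulation"},
    "Ben":  {"CAD"},
    "Cleo": {"requirements engineering", "moderation"},
}
team, gap = propose_team(employees, {"CAD", "simulation", "moderation"})
```

A real implementation would derive `required_skills` from the KMDL process model rather than hard-coding them.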
Modern production infrastructures of globally operating companies usually consist of multiple distributed production sites. While the organization of individual sites consisting of Industry 4.0 components itself is demanding, new questions regarding the organization and allocation of resources emerge considering the total production network. In an attempt to face the challenge of efficient distribution and processing both within and across sites, we aim to provide a hybrid simulation approach as a first step towards optimization. Using hybrid simulation allows us to include real and simulated concepts and thereby benchmark different approaches with reasonable effort. A simulation concept is conceptualized and demonstrated qualitatively using a global multi-site example.
Existing factories face multiple problems due to their hierarchical structure of decision making and control. Cyber-physical systems in principle allow the degree of autonomy to be raised to new heights. But which degree of autonomy is really useful and beneficial? This paper differentiates diverse definitions of autonomy and approaches to determine them. Experimental findings in a lab environment help to answer the question raised in this paper.
In response to the impending spread of COVID-19, universities worldwide abruptly stopped face-to-face teaching and switched to technology-mediated teaching. As a result, the use of technology in the learning processes of students of different disciplines became essential and the only way to teach, communicate and collaborate for months. In this crisis context, we conducted a longitudinal study in four German universities, in which we collected a total of 875 responses from students of information systems and music and arts at four points in time during the spring–summer 2020 semester. Our study focused on (1) the students’ acceptance of technology-mediated learning, (2) any change in this acceptance during the semester and (3) the differences in acceptance between the two disciplines. We applied the Technology Acceptance Model and were able to validate it for the extreme situation of the COVID-19 pandemic. We extended the model with three new variables (time flexibility, learning flexibility and social isolation) that influenced the construct of perceived usefulness. Furthermore, we detected differences between the disciplines and over time. In this paper, we present and discuss our study’s results and derive short- and long-term implications for science and practice.
Audit - and then what?
(2019)
Current trends such as digital transformation, the Internet of Things, and Industry 4.0 are challenging the majority of learning factories. Regardless of whether it is a conventional learning factory, a model factory, or a digital learning factory, traditional approaches such as the monotonous execution of specific instructions no longer satisfy learners' needs, market requirements, or current technological developments. Contemporary teaching environments need a clear strategy, a road to follow, in order to cope successfully with these changes and develop towards digitized learning factories. This demand-driven necessity of transformation leads to another obstacle: assessing the status quo and developing and implementing adequate action plans. Within this paper, details of a maturity-based audit of the hybrid learning factory in the Research and Application Centre Industry 4.0 and a roadmap derived from it for the digitization of a learning factory are presented.
Faced with the increasing needs of companies, optimal dimensioning of IT hardware is becoming challenging for decision makers. In terms of analytical infrastructures, a highly evolutionary environment causes volatile, time-dependent workloads in its components, making intelligent, flexible task distribution between local systems and cloud services attractive. With the aim of developing a flexible and efficient design for analytical infrastructures, this paper proposes a flexible architecture model that allocates tasks following a machine-specific decision heuristic. A simulation benchmarks this system against existing strategies and identifies the new decision maxim as superior in a first scenario-based simulation.
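A machine-specific allocation heuristic of the kind this abstract describes might, in its simplest form, route a task locally while local utilisation stays below a threshold and offload to the cloud otherwise. The threshold, capacity model, and function name below are assumed values for illustration, not the paper's actual decision maxim.

```python
def allocate(task_load, local_utilisation, local_capacity=100,
             cloud_threshold=0.8):
    """Route a task 'local' if it fits without pushing projected
    utilisation past the threshold, otherwise offload to 'cloud'.
    All parameters are illustrative assumptions."""
    projected = (local_utilisation + task_load) / local_capacity
    return "local" if projected <= cloud_threshold else "cloud"

# projected utilisation 0.70 -> stays local
small_task = allocate(task_load=20, local_utilisation=50)
# projected utilisation 0.90 -> offloaded
large_task = allocate(task_load=40, local_utilisation=50)
```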
Subject-oriented learning
(2019)
The transformation to a digitized company changes not only the work but also the social context for employees and requires, inter alia, new knowledge and skills from them. Additionally, individual action problems arise. This contribution proposes the subject-oriented learning theory, in which employees' action problems are the starting point of training activities in learning factories. In this contribution, the subject-oriented learning theory is exemplified and the respective advantages for vocational training in learning factories are pointed out both theoretically and practically. In particular, the individual action problems of learners and the infrastructure are emphasized as starting points for learning processes and competence development.
In the copyright industries of the 21st century, metadata is the grease required to make the engine of copyright run smoothly and powerfully for the benefit of creators, copyright industries and users alike. However, metadata is difficult to acquire and even more difficult to keep up to date as the rights in content are mostly multi-layered, fragmented, international and volatile. This article explores the idea of a neutral metadata search and enhancement tool that could constitute a buffer to safeguard the interests of the various proprietary database owners and avoid the shortcomings of centralised databases.
Digital Platforms (DPs) have established themselves in recent years as a central concept of information technology research. Due to the great diversity of digital platform concepts, clear definitions are still required. Furthermore, DPs are subject to dynamic changes from internal and external factors, which pose challenges for digital platform operators, developers, and customers. Which current digital platform research directions should be taken to address these challenges has so far remained open. The following paper aims to contribute to this by outlining a systematic literature review (SLR) on digital platform concepts in the context of the Industrial Internet of Things (IIoT) for manufacturing companies and provides a basis for (1) a selection of definitions of current digital platform and ecosystem concepts and (2) a selection of current digital platform research directions. These directions are divided into (a) occurrence of digital platforms, (b) emergence of digital platforms, (c) evaluation of digital platforms, (d) development of digital platforms, and (e) selection of digital platforms.
Enterprise systems have long played an important role in businesses of various sizes. With the increasing complexity of today’s business relationships, specialized application systems are being used more and more. Moreover, emerging technologies such as artificial intelligence are becoming accessible for enterprise systems. This raises the question of the future role of enterprise systems. This minitrack covers novel ideas that contribute to and shape the future role of enterprise systems with five contributions.
Industry 4.0, based on increasingly progressive digitalization, is a global phenomenon that affects every part of our work. The Internet of Things (IoT) is pushing the process of automation, culminating in the total autonomy of cyber-physical systems. This process is accompanied by a massive amount of data, information, and new dimensions of flexibility. As the amount of available data increases, their specific timeliness decreases. Mastering Industry 4.0 requires humans to master the new dimensions of information and to adapt to relevant ongoing changes. Intentional forgetting can make a difference in this context, as it discards nonprevailing information and actions in favor of prevailing ones. Intentional forgetting is the basis of any adaptation to change, as it ensures that nonprevailing memory items are not retrieved while prevailing ones are retained. This study presents a novel experimental approach that was introduced in a learning factory (the Research and Application Center Industry 4.0) to investigate intentional forgetting as it applies to production routines. In the first experiment (N = 18), in which the participants collectively performed 3046 routine-related actions (t1 = 1402, t2 = 1644), the results showed that highly proceduralized actions were more difficult to forget than actions that were less well-learned. Additionally, we found that the quality of cues that trigger the execution of routine actions had no effect on the extent of intentional forgetting.
While Information Systems (IS) Research on the individual and workgroup level of analysis is omnipresent, research on the enterprise-level IS is less frequent. Even though research on Enterprise Systems and their management is established in academic associations and conference programs, enterprise-level phenomena are underrepresented. This minitrack provides a forum to integrate existing research streams that traditionally needed to be attached to other topics (such as IS management or IS governance). The minitrack received broad attention. The three selected papers address different facets of the future role of enterprise-wide IS including aspects such as carbonization, ecosystem integration, and technology-organization fit.
The implementation of learning scenarios is a diversely challenging, frequently purely manual and effortful undertaking. In this contribution, a process-based view is used in scenario generation to overcome communication, coordination, and technical gaps. A framework is provided to identify, define, and integrate technological artefacts and learning content as modular, reusable building blocks along a modeled production process. The specific contribution is twofold: 1) the theoretical framework represents a unique basis for the modularization of content and technology in order to enhance reusability, and 2) the model-based scenario definition is a starting point for the automated implementation of learning scenarios in industrial learning environments, which has not been created before.
The increasing demand for software engineers cannot be completely fulfilled by university education and conventional training approaches due to limited capacities. Accordingly, an alternative approach is necessary in which potential software engineers are educated in software engineering skills using new methods. We suggest micro tasks combined with theoretical lessons to overcome existing skill deficits and acquire quickly trainable capabilities. This paper addresses the gap between demand and supply of software engineers by introducing an action-oriented and scenario-based didactical approach that enables non-computer scientists to code. Therein, the learning content is provided in small tasks and embedded in learning factory scenarios. Different requirements for software engineers from the market side and from an academic viewpoint are analyzed and synthesized into an integrated, yet condensed skills catalogue. This enables the development of training and education units that focus on the most important skills demanded by the market. To achieve this objective, individual learning scenarios are developed. Of course, proper basic skills in coding cannot be learned overnight, but software programming is also no sorcery.
Process analysis usually focuses only on single and selected processes. It is either existent processes that are recorded and analysed or reference processes that are implemented. So far no evident effort has been put into generalising specific process aspects into patterns and comparing those patterns with regard to their efficiency and effectiveness. This article focuses on the combination of dynamic and holistic analytical elements in enterprise architectures. Our goal is to outline an approach to analyse the development of business processes in a cyclical matter and demonstrate this approach based on an existent modelling language. We want to show that organisational learning can derive from the systematic analysis of past and existent processes from which patterns of successful problem solving can be deducted.
As AI technology is increasingly used in production systems, different approaches have emerged, from highly decentralized small-scale AI at the edge level to centralized, cloud-based services used for higher-order optimizations. Each direction has disadvantages, ranging from the lack of computational power at the edge level to the reliance on stable network connections in the centralized approach. Thus, a hybrid approach with centralized and decentralized components that possess specific abilities and interact is preferred. However, the distribution of AI capabilities leads to problems in self-adapting learning systems, as knowledge bases can diverge when no central coordination is present. Edge components will specialize in distinctive patterns (overlearn), which hampers their adaptability to different cases. Therefore, this paper presents a concept for a distributed, interchangeable knowledge base in CPPS. The approach is based on various AI components and concepts for each participating node. A service-oriented infrastructure allows a decentralized, loosely coupled architecture of the CPPS. By exchanging knowledge bases between nodes, the overall system should become more adaptive, as each node can “forget” its present specialization.
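The knowledge-exchange idea in this abstract could be pictured as periodically blending a node's local knowledge with a peer's so that no node overlearns its local patterns. The data model (pattern-to-weight mappings), the blending rule, and all names below are assumptions for illustration, not the paper's concept.

```python
def exchange(local_kb, peer_kb, blend=0.5):
    """Merge two knowledge bases (pattern -> weight). Weights for shared
    patterns are blended; patterns known to only one node are adopted
    as-is. Purely illustrative of the exchange idea."""
    merged = dict(peer_kb)  # start by adopting everything the peer knows
    for pattern, weight in local_kb.items():
        if pattern in peer_kb:
            # blend local and peer weights -> softens local specialization
            merged[pattern] = (1 - blend) * weight + blend * peer_kb[pattern]
        else:
            merged[pattern] = weight
    return merged

# two nodes that have specialized on different defect patterns
node_a = {"scratch": 0.9, "dent": 0.1}
node_b = {"dent": 0.7, "discolouration": 0.4}
merged = exchange(node_a, node_b)
```

After the exchange, node A partially "forgets" its weak specialization on `dent` in favour of the peer's stronger estimate, while new patterns such as `discolouration` become available to it.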
Since more and more production tasks are enabled by Industry 4.0 techniques, the number of knowledge-intensive production tasks increases, as trivial tasks can be automated and only non-trivial tasks demand human-machine interaction. With this, challenges regarding the competence of production workers, the complexity of tasks, and the stickiness of required knowledge occur [1]. Furthermore, workers experience time pressure, which can lead to a decrease in output quality. Cyber-Physical Systems (CPS) have the potential to assist workers in knowledge-intensive work grounded on quantitative insights about knowledge transfer activities [2]. By providing contextual and situational awareness as well as complex classification and selection algorithms, CPS are able to ease knowledge transfer in a way that significantly improves production time and quality. So far, CPS have only been used for direct production and process optimization, while knowledge transfers have only been considered in assistance systems with little contextual awareness. Embedding production and knowledge transfer optimization thus shows potential for further improvements. This contribution outlines the requirements and a framework to design such systems, accounting for the relevant factors.
Business processes are regularly modified either to capture requirements from the organization’s environment or due to internal optimization and restructuring. Implementing the changes into the individual work routines is aided by change management tools. These tools aim at the acceptance of the process by and empowerment of the process executor. They cover a wide range of general factors and seldom accurately address the changes in task execution and sequence. Furthermore, change is only framed as a learning activity, while most obstacles to change arise from the inability to unlearn or forget behavioural patterns one is acquainted with. Therefore, this paper aims to develop and demonstrate a notation to capture changes in business processes and identify elements that are likely to present obstacles during change. It connects existing research from changes in work routines and psychological insights from unlearning and intentional forgetting to the BPM domain. The results contribute to more transparency in business process models regarding knowledge changes. They provide better means to understand the dynamics and barriers of change processes.
Faced with the triad of time, cost, and quality, the realization of production tasks under economic conditions is not trivial. Since the number of Artificial-Intelligence-(AI)-based applications in business processes is steadily increasing, the efficient design of AI cases for production processes as well as their target-oriented improvement is essential, so that production outcomes satisfy high quality criteria and economic requirements. Both challenge production management and data scientists, who aim to assign ideal manifestations of artificial neural networks (ANNs) to a certain task. Building on recent attempts at ANN-based production process improvement [8], this paper continues research on the optimal creation, provision, and utilization of ANNs. Moreover, it presents a mechanism for AI case-based reasoning for ANNs. Experiments empirically demonstrate how this mechanism continuously improves ANN knowledge bases. Its proof of concept is demonstrated using four production simulation scenarios, which cover the most relevant use cases and will be the basis for examining AI cases on a quantitative level.
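Case-based reasoning for ANNs, as outlined in this abstract, can be pictured as retrieval: past "AI cases" pair a production-task description with the ANN configuration that worked for it, and a new task reuses the configuration of the most similar stored case. The attribute encoding, similarity measure, and sample cases below are illustrative assumptions, not the mechanism from the paper.

```python
def similarity(a, b):
    """Toy similarity: fraction of matching task attributes (assumed)."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(case_base, new_task):
    """Return the ANN configuration of the most similar past case."""
    best = max(case_base, key=lambda case: similarity(case["task"], new_task))
    return best["ann_config"]

# hypothetical case base: task description -> ANN configuration that worked
case_base = [
    {"task": {"type": "scheduling", "size": "small"},
     "ann_config": {"layers": 2, "units": 32}},
    {"task": {"type": "quality", "size": "large"},
     "ann_config": {"layers": 4, "units": 128}},
]
config = retrieve(case_base, {"type": "scheduling", "size": "small"})
```

A full CBR cycle would also revise the retrieved configuration after use and retain the outcome as a new case, which is how the knowledge base improves continuously.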
Accelerating knowledge
(2019)
As knowledge-intensive processes are often carried out in teams and demand knowledge transfers among various knowledge carriers, any optimization that accelerates knowledge transfers holds great economic potential. Exemplified with product development projects, knowledge transfers focus on knowledge acquired in former situations and product generations. An adjustment in the manifestation of knowledge transfers in a concrete situation, here called an intervention, can therefore be directly connected to the adequate speed optimization of knowledge-intensive process steps. This contribution presents the specification of seven concrete interventions following an intervention template. Further, it describes the design and results of an expert workshop as a descriptive study. The workshop was used to assess the practical relevance of the interventions designed as well as to identify practical success factors and barriers to their implementation.
Products or designs that have already been used successfully, past projects, or our own experiences can be the basis for the development of new products. As reference products or existing knowledge, they are reused in the development process and across generations of products. Furthermore, since products are developed in cooperation, the development of new product generations is characterized by knowledge-intensive processes in which information and knowledge are exchanged between different kinds of knowledge carriers. Knowledge transfer here describes the identification of knowledge, its transmission from the knowledge carrier to the knowledge receiver, and its application by the knowledge receiver, which includes embodied knowledge of physical products. Initial empirical findings on the quantitative effects regarding the speed of knowledge transfers have already been examined. However, the factors influencing the quality of knowledge transfer, which would increase the efficiency and effectiveness of knowledge transfer in product development, have not yet been examined empirically. Therefore, this paper prepares an experimental setting for the empirical investigation of the quality of knowledge transfers.
In today's production, fluctuations in demand, shortening product life-cycles, and highly configurable products require an adaptive and robust control approach to maintain competitiveness. This approach must not only optimise desired production objectives but also cope with unforeseen machine failures, rush orders, and changes in short-term demand. Previous control approaches were often implemented using a single operations layer and a standalone deep learning approach, which may not adequately address the complex organisational demands of modern manufacturing systems. To address this challenge, we propose a hyper-heuristics control model within a semi-heterarchical production system, in which multiple manufacturing and distribution agents are spread across pre-defined modules. The agents employ a deep reinforcement learning algorithm to learn a policy for selecting low-level heuristics in a situation-specific manner, thereby leveraging system performance and adaptability. We tested our approach in simulation and transferred it to a hybrid production environment. By that, we were able to demonstrate its multi-objective optimisation capabilities compared to conventional approaches in terms of mean throughput time, tardiness, and processing of prioritised orders in a multi-layered production system. The modular design is promising in reducing the overall system complexity and facilitates a quick and seamless integration into other scenarios.
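The hyper-heuristic idea described above can be sketched as a policy that, instead of choosing a job directly, chooses which low-level dispatching heuristic to apply in the current situation. In the paper the policy is a trained deep-RL agent; here it is stubbed by a hand-written rule, and the job fields, state fields, and rules are illustrative assumptions.

```python
HEURISTICS = {
    # each low-level heuristic ranks the waiting jobs differently
    "SPT":  lambda jobs: min(jobs, key=lambda j: j["processing_time"]),
    "EDD":  lambda jobs: min(jobs, key=lambda j: j["due_date"]),
    "FIFO": lambda jobs: min(jobs, key=lambda j: j["arrival"]),
}

def select_heuristic(state):
    """Stand-in for the learned DRL policy: map an observed production
    state to one of the low-level heuristics (assumed rules)."""
    if state["rush_order_waiting"]:
        return "EDD"   # prioritise due dates when a rush order is present
    if state["queue_length"] > 5:
        return "SPT"   # drain long queues quickly
    return "FIFO"

def dispatch(state, jobs):
    """Apply the selected heuristic to pick the next job."""
    name = select_heuristic(state)
    return name, HEURISTICS[name](jobs)

jobs = [
    {"id": 1, "processing_time": 4, "due_date": 10, "arrival": 0},
    {"id": 2, "processing_time": 1, "due_date": 3,  "arrival": 1},
]
name, job = dispatch({"rush_order_waiting": True, "queue_length": 2}, jobs)
```

In the actual approach the mapping from state to heuristic is learned, so the choice adapts to failures, rush orders, and demand shifts rather than following fixed thresholds.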
This paper presents an exploratory study investigating the influence of the factors (1) intermediary participation, (2) decision-making authority, (3) position in the enterprise, and (4) experience in open innovation on the perception and assessment of the benefits and risks expected from participating in open innovation projects. For this purpose, an online survey was conducted in Germany, Austria, and Switzerland. The result of this paper is empirical evidence showing whether and how these factors affect the perception of potential benefits and risks expected within the context of open innovation project participation. Furthermore, the identified effects are discussed against the theory. Existing theory regarding the benefits and risks of open innovation is expanded by (1) finding that they are perceived mostly independently of the factors, (2) confirming the practical relevance of benefits and risks, and (3) enabling a finer distinction between their degrees of relevance according to the respective contextual specifics.
With the latest technological developments and associated new possibilities in teaching, the personalisation of learning is gaining more and more importance. It assumes that individual learning experiences and results can generally be improved when personal learning preferences are considered. To do justice to the complexity of the personalisation possibilities of teaching and learning processes, we illustrate the components of learning and teaching in the digital environment and their interdependencies in an initial model. Furthermore, in a pre-study, we investigate the relationships between the learner's ability for (digital) self-organisation, the learner's prior knowledge, learning in different modes, and learning outcomes as one part of this model. With this pre-study, we are taking the first step towards a holistic model of teaching and learning in digital environments.
Nowadays, production planning and control must cope with mass customization, increased fluctuations in demand, and high competitive pressure. Despite prevailing market risks, planning accuracy and increased adaptability in the event of disruptions or failures must be ensured, while simultaneously optimizing key process indicators. To manage that complex task, neural networks that can process large quantities of high-dimensional data in real time have been widely adopted in recent years. Although these are already extensively deployed in production systems, a systematic review of applications and implemented agent embeddings and architectures has not yet been conducted. The main contribution of this paper is to provide researchers and practitioners with an overview of applications and applied embeddings and to motivate further research in neural agent-based production. Findings indicate that neural agents are not only deployed in diverse applications, but are also increasingly implemented in multi-agent environments or in combination with conventional methods, improving performance compared to benchmarks and reducing dependence on human experience. This implies not only a more sophisticated focus on distributed production resources, but also a broadening of the perspective from a local to a global scale. Nevertheless, future research must further increase scalability and reproducibility to guarantee a simplified transfer of results to reality.
Enterprise systems have long played an important role in businesses of various sizes. With the increasing complexity of today's business relationships, specialized application systems are being used more and more. Moreover, emerging technologies such as artificial intelligence are becoming accessible to enterprise systems. This raises the question of the future role of enterprise systems. This minitrack covers, with five contributions, novel ideas that contribute to and shape that future role.
As the complexity of learning-task requirements, computer infrastructures, and knowledge acquisition for artificial neural networks (ANN) increases, it is challenging to talk about ANN without creating misunderstandings. No tool currently supports an efficient, transparent, and failure-free design of learning tasks through models. For this purpose, in particular the consideration of data, information, and knowledge on the basis of an integration with knowledge-intensive business process models and process-oriented knowledge management is attractive. With the aim of making the design of learning tasks expressible by models, this paper proposes a graphical modeling language called Neuronal Training Modeling Language (NTML), which allows the repetitive use of learning designs. An example ANN project of AI-based dynamic GUI adaptation exemplifies its use as a first demonstration.
Openness indicators for the evaluation of digital platforms between the launch and maturity phase
(2024)
In recent years, the evaluation of digital platforms has become an important focus in the field of information systems science. The identification of influential indicators that drive changes in digital platforms, specifically those related to openness, is still an unresolved issue. This paper addresses the challenge of identifying measurable indicators and characterizing the transition from launch to maturity in digital platforms. It proposes a systematic analytical approach to identify relevant openness indicators for evaluation purposes. The main contributions of this study are the following: (1) the development of a comprehensive procedure for analyzing indicators, (2) the categorization of indicators as evaluation metrics within a multidimensional grid-box model, (3) the selection and evaluation of relevant indicators, (4) the identification and assessment of digital platform architectures during the launch-to-maturity transition, and (5) the evaluation of the applicability of the conceptualization and design process for digital platform evaluation.
Enterprise Resource Planning (ERP) system customization is often necessary because companies have unique processes that provide their competitive advantage. Despite new technological advances such as cloud computing or model-driven development, technical ERP customization options are either outdated or ambiguously formulated in the scientific literature. Based on a systematic literature review (SLR) analyzing 137 definitions from 26 papers, the result is an analysis and aggregation of technical customization types, providing clarity and alignment with future organizational needs. The results show a shift from ERP code modification in on-premises systems to interface and integration customization in cloud ERP systems, as well as emerging technological opportunities as a way for customers and key users to perform system customization. The study contributes by providing a clear understanding of the given customization types and assisting ERP users and vendors in making customization decisions.
Developing a new product generation requires the transfer of knowledge among various knowledge carriers. Several factors influence knowledge transfer, e.g., the complexity of engineering tasks or the competence of employees, which can decrease the efficiency and effectiveness of knowledge transfers in product engineering. Hence, improving those knowledge transfers offers great potential, especially against the backdrop of experienced employees leaving the company due to retirement. So far, research results show that knowledge transfer velocity can be raised by following the Knowledge Transfer Velocity Model and implementing so-called interventions in a product engineering context. In most cases, the implemented interventions have a positive effect on knowledge transfer speed. In addition, initial theoretical findings describe factors influencing the quality of knowledge transfers and outline a setting to empirically investigate how the quality can be improved, introducing a general description of knowledge transfer reference situations and principles to measure the quality of knowledge artifacts. To assess the quality of knowledge transfers in a product engineering context, the Knowledge Transfer Quality Model (KTQM) is created, which serves as a basis to develop and implement quality-dependent interventions for different knowledge transfer situations. As a result, this paper introduces the specifications of eight situation-adequate interventions to improve the quality of knowledge transfers in product engineering, following an intervention template. Those interventions are intended to be implemented in an industrial setting to measure the quality of knowledge transfers and validate their effect.
Enhancing economic efficiency in modular production systems through deep reinforcement learning
(2024)
In times of increasingly complex production processes and volatile customer demands, production adaptability is crucial for a company's profitability and competitiveness. The ability to cope with rapidly changing customer requirements and unexpected internal and external events guarantees robust and efficient production processes, requiring a dedicated control concept at the shop floor level. Yet in today's practice, conventional control approaches remain in use, which may not keep up with this dynamic behaviour due to their scenario-specific and rigid properties. To address this challenge, deep learning methods have increasingly been deployed due to their optimization and scalability properties. However, these approaches have often been tested in specific operational applications and focused on technical performance indicators such as order tardiness or total throughput. In this paper, we propose a deep reinforcement learning-based production control to optimize combined techno-financial performance measures. Based on pre-defined manufacturing modules that are supplied and operated by multiple agents, positive effects were observed in terms of increased revenue and reduced penalties due to lower throughput times and fewer delayed products. The combined modular and multi-staged approach as well as the distributed decision-making further enhance scalability and transferability to other scenarios.
Yes, we can (?)
(2021)
The COVID-19 crisis has caused an extreme situation for higher education institutions around the world, where exclusively virtual teaching and learning has become obligatory rather than an additional supporting feature. This has created opportunities to explore the potential and limitations of virtual learning formats. This paper presents four theses on virtual classroom teaching and learning that are discussed critically. We use existing theoretical insights extended by empirical evidence from a survey of more than 850 students on acceptance, expectations, and attitudes regarding the positive and negative aspects of virtual teaching. The survey responses were gathered from students at different universities during the first completely digital semester (Spring-Summer 2020) in Germany. We discuss similarities and differences between the subjects being studied and highlight the advantages and disadvantages of virtual teaching and learning. Against the background of existing theory and the gathered data, we emphasize the importance of social interaction, the combination of different learning formats, and thus context-sensitive hybrid learning as the learning form of the future.
Digitization and demographic change are enormous challenges for companies. Learning factories as innovative learning places can help prepare older employees for the digital change but must be designed and configured based on their specific learning requirements. To date, however, there are no particular recommendations to ensure effective age-appropriate training of blue-collar workers in learning factories. Therefore, based on a literature review, design characteristics and attributes of learning factories and the learning requirements of older employees are presented. Furthermore, didactical recommendations for realizing age-appropriate learning designs in learning factories and a conceptualized scenario are outlined by synthesizing the findings.
Increasingly fast development cycles and individualized products pose major challenges for today's smart production systems in times of Industry 4.0. The systems must be flexible and continuously adapt to changing conditions while still guaranteeing high throughput and robustness against external disruptions. Deep reinforcement learning (RL) algorithms, which have already achieved impressive success with Google DeepMind's AlphaGo, are increasingly transferred to production systems to meet related requirements. Unlike supervised and unsupervised machine learning techniques, deep RL algorithms learn from recently collected sensor and process data in direct interaction with the environment and are able to make decisions in real time. As such, deep RL algorithms seem promising given their potential to provide decision support in complex environments such as production systems and simultaneously adapt to changing circumstances. While different use cases for deep RL have emerged, a structured overview and integration of findings on their application is missing. To address this gap, this contribution provides a systematic literature review of existing deep RL applications in the field of production planning and control as well as production logistics. From a performance perspective, it became evident that deep RL can significantly outperform heuristics and provides superior solutions to various industrial use cases. Nevertheless, safety and reliability concerns must be overcome before the widespread use of deep RL is possible, which presumes more intensive testing of deep RL in real-world applications beyond the already ongoing intensive simulations.
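The interaction loop described above (observe state, act, receive reward, update value estimates) can be illustrated with a minimal tabular Q-learning sketch. The toy environment (a machine queue holding 0 to 5 jobs, with "process" and "idle" actions) is an illustrative assumption; the surveyed works replace the table with deep neural networks and far richer state spaces.

```python
import random
from collections import defaultdict

def step(state, action):
    """Toy environment: action 0 processes a job, action 1 idles while a
    new job arrives. The reward penalizes jobs left waiting in the queue."""
    next_state = max(0, state - 1) if action == 0 else min(5, state + 1)
    return next_state, -next_state

random.seed(0)
Q = defaultdict(float)                    # Q[(state, action)] value table
alpha, gamma, epsilon = 0.1, 0.95, 0.3    # learning rate, discount, exploration

for _ in range(300):                      # episodes with random resets
    state = random.randrange(6)
    for _ in range(20):
        if random.random() < epsilon:     # explore a random action
            action = random.randrange(2)
        else:                             # exploit current estimates
            action = max((0, 1), key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        target = reward + gamma * max(Q[(next_state, a)] for a in (0, 1))
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = next_state

# After training, the greedy choice with jobs waiting is to process.
greedy_action = max((0, 1), key=lambda a: Q[(1, a)])
```

The same update rule underlies deep RL variants such as DQN, where the table lookup is replaced by a network forward pass and the update by gradient descent on the temporal-difference error.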
Purpose
The purpose of this study was to investigate work-related adaptive performance from a longitudinal process perspective. This paper clustered specific behavioral patterns following the introduction of a change and related them to retentivity as an individual cognitive ability. In addition, this paper investigated whether the occurrence of adaptation errors varied depending on the type of change content.
Design/methodology/approach
Data from 35 participants collected in the simulated manufacturing environment of a Research and Application Center Industry 4.0 (RACI) were analyzed. The participants were required to learn and train a manufacturing process in the RACI and through an online training program. At a second measurement point in the RACI, specific manufacturing steps were subject to change and participants had to adapt their task execution. Adaptive performance was evaluated by counting the adaptation errors.
Findings
The participants showed one of the following behavioral patterns: (1) no adaptation errors, (2) few adaptation errors, (3) repeated adaptation errors regarding the same actions, or (4) many adaptation errors distributed over many different actions. The latter group showed very low retentivity compared to the other groups. Most of the adaptation errors were made when new actions were added to the manufacturing process.
Originality/value
Our study adds empirical research on adaptive performance and its underlying processes. It contributes to a detailed understanding of different behaviors in change situations and derives implications for organizational change management.
Purpose
With shorter product cycles and a growing number of knowledge-intensive business processes, time consumption is a highly relevant target factor in measuring the performance of contemporary business processes. This research aims to extend prior research on the effects of knowledge transfer velocity at the individual level by considering the effect of complexity, stickiness, competencies, and further demographic factors on knowledge-intensive business processes at the conversion-specific levels.
Design/methodology/approach
We empirically assess the impact of situation-dependent knowledge transfer velocities on time consumption in teams and individuals. Further, we examine the demographic effect on this relationship. We study a sample of 178 experiments with project teams and individuals, applying ordinary least squares (OLS) regression modeling.
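The OLS modeling described above can be sketched on synthetic data: time consumption regressed on task complexity and team competence. The sample size matches the 178 experiments mentioned, but the variables, coefficients, and data-generating process below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 178  # matches the reported sample of experiments
complexity = rng.uniform(1, 5, n)   # task complexity rating (assumed scale)
competence = rng.uniform(1, 5, n)   # team competence rating (assumed scale)

# Assumed relationship for illustration: complexity slows transfers,
# competence speeds them up.
time_consumed = 10 + 2.0 * complexity - 1.5 * competence + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), complexity, competence])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, time_consumed, rcond=None)   # OLS fit
intercept, b_complexity, b_competence = beta
```

The signs of the fitted coefficients then correspond to the kind of effects reported in the findings: a positive slope for complexity and a negative slope for competence with respect to time consumed.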
Findings
The authors find that time consumed in knowledge transfers is negatively associated with the complexity of tasks. Moreover, competence among team members has a complementary effect on this relationship, and stickiness retards knowledge transfers. Thus, while demographic factors urgently need to be considered for effective and speedy knowledge transfers, these influencing factors should be addressed on a conversion-specific basis, as some tasks are best realized in teams while others are not. Guidelines and interventions are derived to identify the best task realization variants, so that process performance is improved by a new kind of process improvement method.
Research limitations/implications
This study empirically establishes the importance of conversion-specific influence factors and demographic factors as drivers of high knowledge transfer velocities in teams and among individuals. The contribution connects the field of knowledge management to important streams in the wider business literature: process improvement, management of knowledge resources, design of information systems, etc. Although the model is closely bound to the experimental tasks, it has high explanatory power and high generalizability to other contexts.
Practical implications
Team managers should take care to allow the optimal knowledge transfer situation within the team. This is particularly important when knowledge sharing is central, e.g. in product development and consulting processes. If this is not possible, interventions should be applied to the individual knowledge transfer situation to improve knowledge transfers among team members.
Social implications
Faster and more effective knowledge transfers improve the performance of both commercial and non-commercial organizations. As individuals nowadays face time pressure to finalize tasks, deliberately increasing knowledge transfer velocity is a core capability for realizing this goal. Quantitative knowledge transfer models result in more reliable predictions about the duration of knowledge transfers. These allow the target-oriented modification of knowledge transfer situations so that processes speed up, private firms become more competitive, and public services reach citizens faster.
Originality/value
Time consumption is an increasingly relevant factor in contemporary business but has so far not been explored in experiments. This study extends current knowledge by considering quantitative effects on knowledge transfer velocity and improved knowledge transfers.