Collaborative Engineering is a promising concept for increasing the competitiveness of companies. The aim of this paper is to describe the industrial application of this approach, using shipbuilding as an example. Besides the engineering partners, who need to collaborate during the product development phase, many other stakeholders are interested in the product ship along its whole life cycle. The concept of Collaborative Engineering is therefore extended by introducing the idea of Communities. Requirements on Communities in Engineering are deduced, and based on these an architectural framework for Collaborative Engineering Communities is described. Finally, research topics that have to be addressed for a practical realization are outlined.
Skill management catalogues built via KMDL : integrating knowledge and business process modelling
(2004)
The efficient use of human capital is one of the most important factors in today's business competition, which is strongly influenced by qualified staff. To help the human resources department keep up with strategic decisions, various skill management systems have been created that make the development of human resources easier and more precise. Skill management systems are only as good as the information they are based on. The most commonly used basic information is the skill catalogue, which shows the gaps of each employee or division within the company. However, there are hardly any applicable methods yet for creating such a catalogue thoroughly. This paper introduces a practical approach to creating such a catalogue with KMDL, a description language for knowledge-intensive processes. The skill catalogue built for skill management systems is one of the most important but still most neglected factors when introducing skill management.
Knowledge processes and business processes are linked and should therefore be regarded together. Business processes can be modeled and analyzed extensively with well-known and established methods, but simple representations of static knowledge do not fulfill the requirements of a comprehensive, integrated approach to process-oriented knowledge management. The Knowledge Modeler Description Language KMDL is able to represent the creation, use and necessity of knowledge along common business processes. KMDL can thus be used to formalize knowledge-intensive processes with a focus on certain knowledge-specific characteristics and to identify weak points in these processes. The tool K-Modeler is introduced for computer-aided modeling and analysis.
Knowledge is increasingly a key factor within companies: nearly 40 percent of all employees are so-called "knowledge workers". The distribution and retrieval of knowledge within companies are supported by skill management systems. Although not all aspects and potentials of this instrument are yet utilized, skill management systems have spread widely within business organizations. This paper summarizes the requirements, scopes and problems of skill management systems within the company.
Business processes can be modelled and analysed extensively with well-known and established methods, but simple representations of static knowledge do not fulfil the requirements of a comprehensive, integrated approach to process-oriented knowledge management. The Knowledge Modelling Description Language KMDL is able to represent the creation, use and necessity of knowledge along common business processes. KMDL can therefore be used to formalise knowledge-intensive processes with a focus on certain knowledge-specific characteristics and to identify weak points in these processes. The tool K-Modeller is introduced for computer-aided modelling and analysis.
This paper presents the KMDL knowledge management approach, which is based on the SECI and ba models of Nonaka and Takeuchi and on the KMDL knowledge modeling language. The approach illustrates the creation of knowledge with a focus on the knowledge conversions described by Nonaka and Takeuchi. Furthermore, it emphasizes that knowledge is embodied in persons and derives a personalization and socialization strategy that integrates business process modeling, skill management and the selection of knowledge management systems. The paper describes the theoretical foundations of the approach and the practical effects observed in its use.
The efficient use of human capital is one of the most important factors in today's business competition, which is strongly influenced by qualified staff. To help the human resources department keep up with strategic decisions, various competency management systems have been created that make the development of human resources easier and more precise. Competency management systems are only as good as the information they are based on. The most commonly used basic information is the skill catalogue, yet there are hardly any applicable methods for creating such a catalogue thoroughly. This paper introduces a practical approach to creating such a catalogue with KMDL, a description language for knowledge-intensive processes.
The Knowledge Modeler Description Language KMDL is able to represent the creation, use and necessity of knowledge along common business processes. KMDL can thus be used to formalize knowledge-intensive processes with a focus on certain knowledge-specific characteristics and to identify weak points in these processes. The tool K-Modeler is introduced for computer-aided modeling and analysis.
Public services are not the only actors able to ensure the effective and efficient use of e-democracy tools. This contribution points out how a party must be structured to function as a neutral service provider for citizens, so that the results of electronic decision-making processes can be made generally binding. The party provides only the methodology and the technology of decision making; the contents are defined exclusively by the citizens. These contents and voting results are implemented bindingly in parliament by the delegates of the party. Electronic democracy thus supplements representative democracy with scalable direct-democratic elements: with each national election, every four or five years, the citizens can determine how strongly each political decision is to be influenced directly via e-democracy tools. Such an approach is subject to different requirements than a governmentally offered service.
Adaptability of information systems has become a substantial competitive factor. Today's insufficient methodical support for realizing adaptability frequently leaves the potential of deployed information technology in enterprises unused. This contribution presents a procedure that addresses the need to determine the adaptability an enterprise requires in relation to its surrounding environment.
The concept of adaptability has been widely recognised as a research field in recent years. Business information systems play a key role in business performance, and their adaptability is therefore a primary goal of vendors and end users. However, concepts that help to determine the adaptability of information systems have so far been missing. Based on research results of the project CHANGE, this contribution presents an integrated process model addressing the problem, along with a possible solution.
Existing approaches in the area of knowledge-intensive processes focus on integrated knowledge and process management systems, the support of processes with KM systems, or the analysis of knowledge-intensive activities. For capturing knowledge-intensive business processes, well-known and established methods do not meet the requirements of a comprehensive, integrated approach to process-oriented knowledge management: they cannot adequately visualise the decisions, actions and measures that determine the sequence of these processes. Knowledge-intensive processes exist in parallel to conventional processes and are based on conversions of knowledge within them. To fill these gaps in modelling knowledge-intensive business processes, the Knowledge Modelling and Description Language (KMDL) was developed. KMDL is able to represent the development, use, offer and demand of knowledge along business processes, and it can show the knowledge conversions that take place in addition to the normal business processes. KMDL can be used to formalise knowledge-intensive processes with a focus on certain knowledge-specific characteristics and to identify process improvements. The KMDL modelling tool K-Modeler is introduced for computer-aided modelling and analysis; the technical framework and the most important functionalities supporting the analysis of the captured processes are presented in the following contribution.
Process-oriented knowledge management focuses on knowledge-intensive business processes. For modelling and analysing these processes, the modelling technique KMDL (Knowledge Modeling and Description Language) has been developed. KMDL describes knowledge flows and conversions along and between business processes, thereby identifying existing and utilized information as well as the knowledge of individual participants and of the entire company. This research-in-progress contribution introduces a practical example from software engineering in which KMDL models are evaluated to identify process improvements, e.g. by adding knowledge management activities. For this purpose, three individual views focusing on selected aspects of interest are introduced.
Skill Management
(2006)
Requirements for an integration of methods analyzing social issues in knowledge organizations
(2006)
This contribution presents an approach for requirement-oriented team building in industrial processes such as product development. It is based on the Knowledge Modelling and Description Language (KMDL(R)), which enables the modelling and analysis of knowledge-intensive business processes. First, the basic elements of the modelling technique are described, presenting the concept and the description language. It is then shown how the KMDL(R) process models can be used as the basis for the team-building component: an algorithm was developed that proposes a team composition for a specific task by analysing the knowledge and skills of the employees and contrasting them with the process requirements. This can serve as guidance for team-building decisions.
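The core matching step of such a team-building algorithm can be sketched as follows. This is a minimal illustration, not the algorithm from the paper: the greedy selection rule, the skill-level scale and all names are illustrative assumptions.

```python
# Hedged sketch: greedy team building from KMDL-style skill profiles.
# The scoring rule and data shapes are illustrative assumptions.

def build_team(required_skills, employees, team_size):
    """Greedily pick employees whose skills best cover the requirements.

    required_skills: dict skill -> required level (e.g. 1..5)
    employees: dict name -> dict skill -> level
    Returns the chosen team and the skill levels it covers (capped at need).
    """
    team = []
    covered = {s: 0 for s in required_skills}
    candidates = dict(employees)
    for _ in range(team_size):
        if not candidates:
            break

        def gain(profile):
            # How much closer this person brings the team to the requirements.
            return sum(
                min(profile.get(s, 0), need) - covered[s]
                for s, need in required_skills.items()
                if min(profile.get(s, 0), need) > covered[s]
            )

        best = max(candidates, key=lambda n: gain(candidates[n]))
        team.append(best)
        for s, need in required_skills.items():
            covered[s] = max(covered[s], min(candidates[best].get(s, 0), need))
        del candidates[best]
    return team, covered
```

A proposed team can then be contrasted with the remaining uncovered requirements to guide a human team-building decision.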
Faced with the increasing needs of companies, the optimal dimensioning of IT hardware is becoming challenging for decision makers. In analytical infrastructures, a highly evolutionary environment causes volatile, time-dependent workloads in its components, making intelligent, flexible task distribution between local systems and cloud services attractive. With the aim of developing a flexible and efficient design for analytical infrastructures, this paper proposes a flexible architecture model that allocates tasks following a machine-specific decision heuristic. A simulation benchmarks this system against existing strategies and identifies the new decision maxim as superior in a first scenario-based simulation.
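The local-versus-cloud allocation idea can be illustrated with a toy heuristic. This is only a sketch under simple assumptions (fixed local capacity, linear cloud cost); the paper's machine-specific decision heuristic may work quite differently.

```python
# Hedged sketch of a task-allocation heuristic for analytical workloads:
# fill local capacity with the largest tasks first, offload the rest to
# the cloud. Capacity and cost figures are illustrative assumptions.

def allocate(tasks, local_capacity, cloud_cost_per_unit):
    """Assign each (name, load) task to 'local' or 'cloud'.

    Returns the plan, the used local capacity and the total cloud cost.
    """
    plan, used, cloud_cost = {}, 0.0, 0.0
    for name, load in sorted(tasks, key=lambda t: t[1], reverse=True):
        if used + load <= local_capacity:
            plan[name] = "local"     # room left on the local system
            used += load
        else:
            plan[name] = "cloud"     # offload and pay per load unit
            cloud_cost += load * cloud_cost_per_unit
    return plan, used, cloud_cost
```

A benchmark as in the paper would compare such a rule against alternatives (e.g. all-local or all-cloud) under simulated, time-dependent workloads.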
The development of new and better optimization and approximation methods for Job Shop Scheduling Problems (JSP) relies on simulations to compare their performance. The test data required for this has an uncertain influence on the simulation results, because small variations of the initial problem model can change the feasible search space drastically, and methods may benefit from this to varying degrees. This speaks in favor of defining standardized and reusable test data for JSP problem classes, which in turn requires that the test data be systematically describable so that problem-adequate data sets can be compiled. This article examines the test data used for comparing methods by means of a literature review. It also shows how and why differences in test data have to be taken into account. From this, corresponding challenges are derived which the management of test data must face in the context of JSP research.
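To make the describability point concrete, a JSP instance is commonly given as an ordered list of (machine, processing time) operations per job, and even simple derived properties depend on that data. The representation and the tiny instance below are our own illustration, not from any benchmark set discussed in the article.

```python
# Hedged sketch: a job-shop instance in the common
# "list of (machine, processing_time) operations per job" form, plus the
# classic trivial lower bound on the makespan.

def jsp_lower_bound(jobs, n_machines):
    """Max of the longest job duration and the heaviest machine load."""
    job_bound = max(sum(p for _, p in ops) for ops in jobs)
    machine_load = [0] * n_machines
    for ops in jobs:
        for m, p in ops:
            machine_load[m] += p
    return max(job_bound, max(machine_load))

# A tiny 2-job x 2-machine instance (illustrative only):
jobs = [
    [(0, 3), (1, 2)],  # job 0: machine 0 for 3 units, then machine 1 for 2
    [(1, 4), (0, 1)],  # job 1: machine 1 for 4 units, then machine 0 for 1
]
```

Changing a single processing time in such an instance can alter the feasible search space, which is exactly why the article argues for systematically described, reusable test data.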
Existing factories face multiple problems due to their hierarchical structure of decision making and control. Cyber-physical systems in principle allow the degree of autonomy to be raised to new heights. But which degree of autonomy is really useful and beneficial? This paper differentiates diverse definitions of autonomy and approaches to determine them. Experimental findings in a lab environment help to answer the question raised in this paper.
Industry 4.0, based on increasingly progressive digitalization, is a global phenomenon that affects every part of our work. The Internet of Things (IoT) is pushing the process of automation, culminating in the total autonomy of cyber-physical systems. This process is accompanied by a massive amount of data, information, and new dimensions of flexibility. As the amount of available data increases, their specific timeliness decreases. Mastering Industry 4.0 requires humans to master the new dimensions of information and to adapt to relevant ongoing changes. Intentional forgetting can make a difference in this context, as it discards nonprevailing information and actions in favor of prevailing ones. Intentional forgetting is the basis of any adaptation to change, as it ensures that nonprevailing memory items are not retrieved while prevailing ones are retained. This study presents a novel experimental approach that was introduced in a learning factory (the Research and Application Center Industry 4.0) to investigate intentional forgetting as it applies to production routines. In the first experiment (N = 18), in which the participants collectively performed 3046 routine-related actions (t1 = 1402, t2 = 1644), the results showed that highly proceduralized actions were more difficult to forget than actions that were less well learned. Additionally, we found that the quality of cues that trigger the execution of routine actions had no effect on the extent of intentional forgetting.
The use of gamification in the contexts of commerce, consumption, innovation and e-learning in schools and universities has been extensively researched. However, the potential of serious games to transfer and perpetuate knowledge and action patterns in learning factories has not been levered so far. The goal of this paper is to introduce a serious game as an instrument for knowledge transfer and perpetuation. To this end, requirements for serious games in the context of learning factories are pointed out. Building on these requirements, a serious learning game for the topic of Industry 4.0 is designed and evaluated in practice.
Technological advancements are giving rise to the fourth industrial revolution - Industry 4.0 - characterized by the mass employment of smart objects in highly reconfigurable and thoroughly connected industrial product-service systems. The purpose of this paper is to propose a theory-based knowledge dynamics model in the smart grid scenario that would provide a holistic view on the knowledge-based interactions among smart objects, humans, and other actors as an underlying mechanism of value co-creation in Industry 4.0. A multi-loop and three-layer - physical, virtual, and interface - model of knowledge dynamics is developed by building on the concept of ba - an enabling space for interactions and the emergence of knowledge. The model depicts how big data analytics are just one component in unlocking the value of big data, whereas the tacit engagement of humans-in-the-loop - their sense-making and decision-making - is needed for insights to be evoked from analytics reports and customer needs to be met.
Cyber-physical systems (CPS) have shaped the discussion about Industry 4.0 (I4.0) for some time. To ensure the competitiveness of manufacturing enterprises, visions of the future factory place cyber-physical production systems (CPPS) at its core. Adaptability and coping with complexity are, among others, potentials of this new generation of production management. The successful transformation of this theoretical construct into practical implementation can only take place with regard to the conditions characterizing the context of a factory. The subject of this contribution is a concept that takes up the brownfield character of existing factories and describes a solution for extending existing (legacy) systems with CPS capabilities.
The implementation of learning scenarios is a diversely challenging, frequently purely manual and effortful undertaking. In this contribution, a process-based view is used in scenario generation to overcome communication, coordination and technical gaps. A framework is provided to identify, define and integrate technological artefacts and learning content as modular, reusable building blocks along a modeled production process. The contribution is twofold: 1) the theoretical framework represents a unique basis for modularizing content and technology in order to enhance reusability; 2) the model-based scenario definition is a starting point for the automated implementation of learning scenarios in industrial learning environments, which has not existed before.
Digitization and demographic change are enormous challenges for companies. Learning factories, as innovative learning places, can help prepare older employees for the digital change but must be designed and configured based on their specific learning requirements. To date, however, there are no particular recommendations to ensure effective age-appropriate training of blue-collar workers in learning factories. Therefore, based on a literature review, design characteristics and attributes of learning factories and the learning requirements of older employees are presented. Furthermore, didactical recommendations for realizing age-appropriate learning designs in learning factories and a conceptualized scenario are outlined by synthesizing the findings.
Today’s mobile devices are part of powerful business ecosystems, which usually involve digital platforms. To better understand the complex phenomenon of coring and related dynamics, this paper presents a case study comparing iMessage as part of Apple’s iOS and WhatsApp. Specifically, it investigates activities regarding platform coring, as the integration of several functionalities provided by third-party applications in the platform core. The paper makes three contributions. First, a systematization of coring activities is developed. Coring modes are differentiated by the amount of coring and application maintenance. Second, the case study revealed that the phenomenon of platform coring is present on digital platforms for mobile devices. Third, the fundamentals of coring are discussed as a first step towards theoretical development. Even though coring constitutes a potential threat for third-party developers regarding their functional differentiation, an idea of what a beneficial partnership incorporating coring activities could look like is developed here.
In response to the impending spread of COVID-19, universities worldwide abruptly stopped face-to-face teaching and switched to technology-mediated teaching. As a result, the use of technology in the learning processes of students of different disciplines became essential and the only way to teach, communicate and collaborate for months. In this crisis context, we conducted a longitudinal study in four German universities, in which we collected a total of 875 responses from students of information systems and music and arts at four points in time during the spring–summer 2020 semester. Our study focused on (1) the students’ acceptance of technology-mediated learning, (2) any change in this acceptance during the semester and (3) the differences in acceptance between the two disciplines. We applied the Technology Acceptance Model and were able to validate it for the extreme situation of the COVID-19 pandemic. We extended the model with three new variables (time flexibility, learning flexibility and social isolation) that influenced the construct of perceived usefulness. Furthermore, we detected differences between the disciplines and over time. In this paper, we present and discuss our study’s results and derive short- and long-term implications for science and practice.
In the copyright industries of the 21st century, metadata is the grease required to make the engine of copyright run smoothly and powerfully for the benefit of creators, copyright industries and users alike. However, metadata is difficult to acquire and even more difficult to keep up to date as the rights in content are mostly multi-layered, fragmented, international and volatile. This article explores the idea of a neutral metadata search and enhancement tool that could constitute a buffer to safeguard the interests of the various proprietary database owners and avoid the shortcomings of centralised databases.
The increasing demand for software engineers cannot be completely fulfilled by university education and conventional training approaches due to limited capacities. An alternative approach is therefore necessary, in which potential software engineers are educated in software engineering skills using new methods. We suggest micro tasks combined with theoretical lessons to overcome existing skill deficits and build quickly trainable capabilities. This paper addresses the gap between demand and supply of software engineers by introducing an action-oriented, scenario-based didactical approach that enables non-computer scientists to code. The learning content is provided in small tasks embedded in learning factory scenarios. To this end, different requirements for software engineers from the market side and from an academic viewpoint are analyzed and synthesized into an integrated, yet condensed skills catalogue. This enables the development of training and education units that focus on the most important skills demanded on the market, for which individual learning scenarios are developed. Of course, proper basic skills in coding cannot be learned overnight, but software programming is also no sorcery.
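A micro task of the kind described might look like the following. This is our own illustrative example, not one of the paper's training units: the function, the scenario data and the tolerance are all assumptions, chosen to show how a small coding exercise can be embedded in a production scenario with immediate feedback.

```python
# Hedged illustration of a learning-factory micro task (invented example):
# the trainee completes a small function against concrete production data
# and gets immediate feedback from assertions.

def count_defective(parts, tolerance_mm):
    """Micro task: return how many measured deviations exceed the tolerance."""
    return sum(1 for measured in parts if abs(measured) > tolerance_mm)

# Scenario data: deviation of each drilled hole from its target, in mm.
measurements = [0.02, -0.11, 0.05, 0.30, -0.01]
assert count_defective(measurements, 0.10) == 2  # immediate feedback for the trainee
```

The point of such tasks is that each one exercises exactly one catalogued skill (here: filtering a list) inside a scenario the trainee already knows from the learning factory.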
As the complexity of learning task requirements, computer infrastructures and knowledge acquisition for artificial neuronal networks (ANN) is increasing, it is challenging to talk about ANN without creating misunderstandings. An efficient, transparent and failure-free design of learning tasks by models is not supported by any tool at all. For this purpose, in particular the consideration of data, information and knowledge on the basis of an integration with knowledge-intensive business process models and a process-oriented knowledge management is attractive. With the aim of making the design of learning tasks expressible by models, this paper proposes a graphical modeling language called Neuronal Training Modeling Language (NTML), which allows the repetitive use of learning designs. An example ANN project of AI-based dynamic GUI adaptation exemplifies its use as a first demonstration.
The digital transformation sets new requirements for all classes of enterprise systems in companies. ERP systems in particular, which represent the dominant class of enterprise systems, are struggling to meet the new requirements at all levels of the architecture. There is therefore an urgent need to reconsider the overall architecture of these systems and address the root of the related issues. Given that many of the restrictions ERP systems impose on their adaptability are related to the standardization of data, the database layer of ERP systems is addressed. Since databases serve as the foundation for data storage and retrieval, they limit the flexibility of enterprise systems and their ability to adapt to new requirements; so far, relational databases are widely used. Using a systematic literature approach, recent requirements for ERP systems were identified, and prominent database approaches were assessed against the 23 requirements found. The results reveal the strengths and weaknesses of recent database approaches and highlight the need to combine multiple database approaches to fulfill recent business requirements. From a conceptual point of view, this paper supports the idea of federated, interoperable databases that can fulfill future requirements and support business operation. This research forms the basis for a renewal of the current generation of ERP systems and proposes that ERP vendors use different database concepts in the future.
This paper presents an exploratory study investigating the influence of the factors (1) intermediary participation, (2) decision-making authority, (3) position in the enterprise, and (4) experience in open innovation on the perception and assessment of the benefits and risks expected from participating in open innovation projects. For this purpose, an online survey was conducted in Germany, Austria and Switzerland. The result is empirical evidence showing whether and how these factors affect the perception of potential benefits and risks expected in the context of open innovation project participation. Furthermore, the identified effects are discussed against the theory. Existing theory regarding the benefits and risks of open innovation is expanded by (1) finding that they are perceived mostly independently of the factors, (2) confirming the practical relevance of benefits and risks, and (3) enabling a finer distinction between their degrees of relevance according to respective contextual specifics.
Terminology is a critical instrument for every researcher, and different terminologies for the same research object may arise in different research communities. Through this inconsistency, many synergistic effects are lost; theories and models become more understandable and reusable if a common terminology is applied. This paper examines the terminological (in)consistency of the research field of job-shop scheduling by means of a literature review. There is an enormous variety in the choice of terms and mathematical notation for the same concept, and the comparability, reusability and combinability of scheduling methods are unnecessarily hampered by the arbitrary use of homonyms and synonyms. The acceptance of the variables and notation forms used in the community is quantified by means of a compliance quotient, based on an evaluation of 240 scientific publications on planning methods.
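One plausible reading of such a compliance quotient is the share of surveyed publications that use a given symbol for a concept. The sketch below formalizes that reading; the paper's exact definition may differ, and the symbol counts are invented for illustration.

```python
# Hedged sketch: a "compliance quotient" read as the usage share of each
# observed notation variant. Definition and counts are assumptions.

from collections import Counter

def compliance(notations):
    """Map each observed symbol for a concept to its share of usage."""
    counts = Counter(notations)
    total = sum(counts.values())
    return {sym: n / total for sym, n in counts.items()}

# Invented example: symbols observed across 240 publications for
# "processing time of job j on machine i".
observed = ["p_ij"] * 150 + ["t_ij"] * 60 + ["d_ij"] * 30
```

Under this reading, a dominant symbol with a share close to 1 would indicate a de facto standard, while a flat distribution signals the homonym/synonym problem the paper describes.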
Competence development must change at all didactic levels to meet the new requirements triggered by digitization. Unlike classic learning theories and the resulting popular approaches (e.g., the sender-receiver model), future-oriented vocational training must include new learning theory impulses in the discussion about competence acquisition. On the one hand, these impulses are often very well elaborated theoretically, but the transfer into innovative learning environments - such as learning factories - is often still missing. On the other hand, actual learning factory (design) approaches often concentrate primarily on the technical side. Subject-oriented learning theory enables the design of competence-development-oriented vocational training projects in learning factories in which persons can obtain competencies relevant for digitization. At the same time, such learning theory approaches assume a potentially infinite number of learning interests and reasons, while competence development is always located in an institutional or organizational context. The paper conceptually answers how this theory-immanent challenge can be synthesized with the reality of organizational competence development requirements.
This meta-analysis synthesizes 332 effect sizes of various methods to enhance creativity. We clustered all studies into 12 methods to identify the most effective creativity enhancement methods. We found that, on average, creativity can be enhanced, Hedges' g = 0.53, 95% CI [0.44, 0.61], with 70.09% of the participants in the enhancement conditions being more creative than the average person in the control conditions. Complex training courses, meditation, and cultural exposure were the most effective (gs = 0.66), while the use of cognitive manipulation drugs was the least effective and indeed noneffective, g = 0.10. The type of training material was also important. For instance, figural methods were more effective in enhancing creativity, and enhancing convergent thinking was more effective than enhancing divergent thinking. Study effect sizes varied considerably across all studies and for many subgroup analyses, suggesting that researchers can plausibly expect to find reversed effects occasionally. We found no evidence of publication bias. We discuss theoretical implications and suggest future directions for best practices in enhancing creativity.
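The "70.09% of participants more creative than the average control person" figure is the standard probabilistic reading of a standardized effect size (Cohen's U3): assuming normally distributed scores, it is the normal CDF evaluated at g. A quick check with the rounded g = 0.53 reproduces the figure approximately (the authors presumably used the unrounded estimate).

```python
# Cohen's U3: share of the treatment group scoring above the control mean,
# assuming normal score distributions in both groups.

from statistics import NormalDist

def u3(effect_size):
    """Probability that a treated participant exceeds the control mean."""
    return NormalDist().cdf(effect_size)
```

For example, u3(0.53) is about 0.702, i.e. roughly 70% of enhancement-condition participants above the control average, matching the reported 70.09% up to rounding of g.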
Developing a new product generation requires the transfer of knowledge among various knowledge carriers. Several factors influence knowledge transfer, e.g., the complexity of engineering tasks or the competence of employees, which can decrease the efficiency and effectiveness of knowledge transfers in product engineering. Hence, improving those knowledge transfers holds great potential, especially against the backdrop of experienced employees leaving the company due to retirement. So far, research results show that the knowledge transfer velocity can be raised by following the Knowledge Transfer Velocity Model and implementing so-called interventions in a product engineering context; in most cases, the implemented interventions have a positive effect on knowledge transfer speed. In addition, initial theoretical findings describe factors influencing the quality of knowledge transfers and outline a setting for empirically investigating how the quality can be improved, introducing a general description of knowledge transfer reference situations and principles to measure the quality of knowledge artifacts. To assess the quality of knowledge transfers in a product engineering context, the Knowledge Transfer Quality Model (KTQM) is created, which serves as a basis for developing and implementing quality-dependent interventions for different knowledge transfer situations. As a result, this paper introduces the specifications of eight situation-adequate interventions to improve the quality of knowledge transfers in product engineering, following an intervention template. Those interventions are intended to be implemented in an industrial setting to measure the quality of knowledge transfers and validate their effect.
Enhancing economic efficiency in modular production systems through deep reinforcement learning
(2024)
In times of increasingly complex production processes and volatile customer demands, production adaptability is crucial for a company's profitability and competitiveness. The ability to cope with rapidly changing customer requirements and unexpected internal and external events guarantees robust and efficient production processes, requiring a dedicated control concept at the shop floor level. Yet in today's practice, conventional control approaches remain in use, which may not keep up with the dynamic behaviour due to their scenario-specific and rigid properties. To address this challenge, deep learning methods have increasingly been deployed due to their optimization and scalability properties; however, these approaches have often been tested in specific operational applications and focused on technical performance indicators such as order tardiness or total throughput. In this paper, we propose a deep reinforcement learning based production control that optimizes combined techno-financial performance measures. Based on pre-defined manufacturing modules that are supplied and operated by multiple agents, positive effects were observed in terms of increased revenue and reduced penalties due to lower throughput times and fewer delayed products. The combined modular and multi-staged approach as well as the distributed decision-making further leverage scalability and transferability to other scenarios.
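The control idea of learning a dispatching policy with a techno-financial reward can be sketched in its simplest tabular form. The paper uses deep reinforcement learning over multiple agents; the skeleton below replaces the neural network with a Q-table and uses an invented toy environment (queue states, a revenue-minus-penalty reward), so it shows only the shape of the learning loop, not the paper's method.

```python
# Hedged skeleton: tabular Q-learning for a dispatching decision
# ("which manufacturing module receives the next order"). States, rewards
# and the environment dynamics are simplified, illustrative assumptions.

import random

def train_dispatcher(n_modules, episodes=500, alpha=0.1, gamma=0.9,
                     eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value; state = queue lengths

    def step(queues, action):
        queues = list(queues)
        queues[action] += 1                  # assign the order to a module
        # Techno-financial style reward: fixed revenue per order minus a
        # tardiness-like penalty when the chosen module's queue grows long.
        reward = 1.0 - 0.5 * max(0, queues[action] - 2)
        # Each module works off 0 or 1 queued orders this period.
        queues = [max(0, x - rng.randint(0, 1)) for x in queues]
        return tuple(queues), reward

    for _ in range(episodes):
        state = tuple([0] * n_modules)
        for _ in range(20):                  # orders arriving per episode
            if rng.random() < eps:           # epsilon-greedy exploration
                action = rng.randrange(n_modules)
            else:
                action = max(range(n_modules),
                             key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in range(n_modules))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q
```

In the paper's setting, the Q-table would be replaced by a deep network per agent and the reward by the combined revenue/penalty measures; the loop structure (observe state, dispatch, collect techno-financial reward, update) stays the same.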