The development of new and better optimization and approximation methods for the Job Shop Scheduling Problem (JSP) relies on simulations to compare their performance. The test data required for this has an uncertain influence on the simulation results, because small variations of the initial problem model can change the feasible search space drastically, and methods may benefit from this to varying degrees. This speaks in favor of defining standardized and reusable test data for JSP problem classes, which in turn requires that the test data be systematically describable so that problem-adequate data sets can be compiled. This article examines, by means of a literature review, the test data used for comparing methods. It also shows how and why differences in test data have to be taken into account. From this, corresponding challenges are derived that test data management must face in the context of JSP research.
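The shape of such test data can be made concrete with a toy instance. The following sketch is illustrative only (the instance values and the helper function are not taken from the article): it encodes a 3×3 job-shop instance in the common operation-list form and computes a simple makespan lower bound, showing why even small changes to processing times or machine assignments reshape the problem.

```python
# A minimal job-shop instance in the common "operation list" form:
# each job is an ordered list of (machine, processing_time) operations.
jobs = [
    [(0, 3), (1, 2), (2, 2)],  # job 0
    [(0, 2), (2, 1), (1, 4)],  # job 1
    [(1, 4), (2, 3), (0, 1)],  # job 2
]

def makespan_lower_bound(jobs):
    """Simple lower bound on the makespan: the busiest machine's
    total load or the longest single job, whichever is larger."""
    n_machines = 1 + max(m for job in jobs for (m, _) in job)
    machine_load = [0] * n_machines
    for job in jobs:
        for m, p in job:
            machine_load[m] += p
    longest_job = max(sum(p for _, p in job) for job in jobs)
    return max(max(machine_load), longest_job)

print(makespan_lower_bound(jobs))  # -> 10 (machine 1 carries 2+4+4 time units)
```

Changing a single processing time in `jobs` can shift this bound and, with it, which schedules are competitive, which is exactly why comparable, well-described test data matters.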
Lernen mit Assistenzsystemen
(2020)
This contribution describes the design and execution, and offers insight into the first results, of a study with an experimental design in a simulated process environment at the Research and Application Center Industry 4.0 in Potsdam. The focus is on training processes in low-skilled work (helper tasks) and their design through the use of digital assistance systems. Work research provides indications that the use of these systems leads to a loss of process knowledge, in the sense of a good understanding of the overall work process in which the individual tasks are embedded. This can prove problematic, especially when unforeseeable situations or errors occur. To investigate the role of process knowledge in the use of digital assistance systems, a real factory situation is simulated in the experiment. The participants are trained step by step in their task via an assistance system; some of the participants, however, additionally receive process knowledge at the beginning in a short training session.
In response to the impending spread of COVID-19, universities worldwide abruptly stopped face-to-face teaching and switched to technology-mediated teaching. As a result, the use of technology in the learning processes of students of different disciplines became essential and the only way to teach, communicate and collaborate for months. In this crisis context, we conducted a longitudinal study in four German universities, in which we collected a total of 875 responses from students of information systems and music and arts at four points in time during the spring–summer 2020 semester. Our study focused on (1) the students’ acceptance of technology-mediated learning, (2) any change in this acceptance during the semester and (3) the differences in acceptance between the two disciplines. We applied the Technology Acceptance Model and were able to validate it for the extreme situation of the COVID-19 pandemic. We extended the model with three new variables (time flexibility, learning flexibility and social isolation) that influenced the construct of perceived usefulness. Furthermore, we detected differences between the disciplines and over time. In this paper, we present and discuss our study’s results and derive short- and long-term implications for science and practice.
Assistance systems are finding ever more use in the context of digital transformation. They can support employees in industrial production processes both during the training phase and in the active working phase. Competencies can thus be built close to the workplace and the process, and in line with demand. This contribution discusses the current state of research on the application possibilities of these assistance systems and illustrates it with examples. Challenges for their use are also pointed out. Finally, potentials for the future use of assistance systems in industrial learning processes and for research are identified.
The manufacture of products consumes energy as well as material resources. Consumer and producer awareness, as well as legislative activity, are developing far too slowly to achieve sustainable use of the available resources. This contribution presents a local remanufacturing approach that makes it possible to reduce resource consumption, support local businesses, and offer efficient solutions for the regional reuse and repurposing of goods.
This paper presents an exploratory study investigating the influence of the factors (1) intermediary participation, (2) decision-making authority, (3) position in the enterprise, and (4) experience in open innovation on the perception and assessment of the benefits and risks expected from participating in open innovation projects. For this purpose, an online survey was conducted in Germany, Austria and Switzerland. The result of this paper is empirical evidence showing whether and how these factors affect the perception of potential benefits and risks expected within the context of open innovation project participation. Furthermore, the identified effects are discussed against existing theory. Existing theory regarding the benefits and risks of open innovation is expanded by (1) finding that they are perceived mostly independently of the factors, (2) confirming the practical relevance of benefits and risks, and (3) enabling a finer distinction between their degrees of relevance according to respective contextual specifics.
The increasing demand for software engineers cannot be completely fulfilled by university education and conventional training approaches due to limited capacities. Accordingly, an alternative approach is necessary in which potential software engineers are educated in software engineering skills using new methods. We suggest micro tasks combined with theoretical lessons to overcome existing skill deficits and to build quickly trainable capabilities. This paper addresses the gap between demand and supply of software engineers by introducing an action-oriented and scenario-based didactical approach that enables non-computer scientists to code. The learning content is provided in small tasks and embedded in learning factory scenarios. To this end, different requirements for software engineers from the market side and from an academic viewpoint are analyzed and synthesized into an integrated, yet condensed skills catalogue. This enables the development of training and education units that focus on the most important skills demanded on the market. To achieve this objective, individual learning scenarios are developed. Of course, proper basic skills in coding cannot be learned overnight, but neither is software programming sorcery.
Responding to changes, sometimes at very short notice, requires a high degree of flexibility and speed from companies. Application system architectures that consist largely of legacy and in-house developed systems often do not meet these requirements. However, investment funds for new software are limited, so priorities must be set when replacing legacy systems. The adaptability analysis (Wandlungsfähigkeitsanalyse) is an efficient analysis method for planning the renewal of the application system landscape. This contribution describes the procedure and results using the example of an internationally active automotive supplier.
The implementation of learning scenarios is a diversely challenging, frequently purely manual and effortful undertaking. In this contribution, a process-based view is used in scenario generation to overcome communication, coordination and technical gaps. A framework is provided to identify, define and integrate technological artefacts and learning content as modular, reusable building blocks along a modeled production process. The specific contribution is twofold: 1) the theoretical framework represents a unique basis for the modularization of content and technology in order to enhance reusability; 2) the model-based scenario definition is a starting point for the automated implementation of learning scenarios in industrial learning environments, which has not been created before.
Driven by digitalization, innovation activity in the industrial environment is shifting toward product-service systems. Small and medium-sized enterprises (SMEs) have so far focused their development activities strongly on product development. The switch to "smart" products and their coupling with services often requires personnel and financial resources that SMEs cannot muster. Crowdsourcing is one way to open the innovation process to external actors and realize cost and speed advantages. However, integrating crowdsourcing elements involves several challenges. This contribution highlights both the potentials and the barriers of using crowdsourcing in an industrial environment.
Competence development must change at all didactic levels to meet the new requirements triggered by digitization. Unlike classic learning theories and the popular approaches resulting from them (e.g., the sender-receiver model), future-oriented vocational training must include new learning-theory impulses in the discussion about competence acquisition. On the one hand, these impulses are often very well elaborated theoretically, but the transfer into innovative learning environments, such as learning factories, is often still missing. On the other hand, current learning factory (design) approaches often concentrate primarily on the technical side. Subject-oriented learning theory enables the design of competence-development-oriented vocational training projects in learning factories in which persons can obtain competencies relevant for digitization. At the same time, such learning-theory approaches assume a potentially infinite number of learning interests and reasons. Following this, competence development is always located in an institutional or organizational context. The paper conceptually answers how this theory-immanent challenge can be synthesized with the reality of organizational competence development requirements.
The digitalization of production processes is advancing with high intensity. Continuing education is highly relevant for operational transformation processes. However, current corporate training practice is not up to the challenges of digitalization. Challenges include employees' competence gaps, uncertain requirement profiles and activity types, demographic change, and outdated didactic approaches. In addition, the existing content-related and pedagogical freedom in the design of training is often insufficiently exploited. As a result, the added value of current qualification offerings is not fully realized, either for companies or for employees. Starting from the changes brought about by digitalization in production and their effects on competence development, this contribution discusses the challenges of current corporate training. It derives recommendations for action, which are illustrated by examples of union-supported training practice. As a result, interested readers receive an overview of current challenges and recommendations for designing and delivering continuing education in times of digitalization.
Digitization and demographic change are enormous challenges for companies. Learning factories as innovative learning places can help prepare older employees for digital change but must be designed and configured based on their specific learning requirements. To date, however, there are no specific recommendations to ensure effective age-appropriate training of blue-collar workers in learning factories. Therefore, based on a literature review, design characteristics and attributes of learning factories and the learning requirements of older employees are presented. Furthermore, didactical recommendations for realizing age-appropriate learning designs in learning factories and a conceptualized scenario are outlined by synthesizing the findings.
The usage of gamification in the contexts of commerce, consumption, innovation or eLearning in schools and universities has been extensively researched. However, the potential of serious games to transfer and perpetuate knowledge and action patterns in learning factories has not been leveraged so far. The goal of this paper is to introduce a serious game as an instrument for knowledge transfer and perpetuation. To this end, requirements for serious games in the context of learning factories are pointed out. Building on these requirements, a serious learning game on the topic of Industry 4.0 is then designed and evaluated in practice.
Industry 4.0, based on increasingly progressive digitalization, is a global phenomenon that affects every part of our work. The Internet of Things (IoT) is pushing the process of automation, culminating in the total autonomy of cyber-physical systems. This process is accompanied by a massive amount of data, information, and new dimensions of flexibility. As the amount of available data increases, their specific timeliness decreases. Mastering Industry 4.0 requires humans to master the new dimensions of information and to adapt to relevant ongoing changes. Intentional forgetting can make a difference in this context, as it discards nonprevailing information and actions in favor of prevailing ones. Intentional forgetting is the basis of any adaptation to change, as it ensures that nonprevailing memory items are not retrieved while prevailing ones are retained. This study presents a novel experimental approach that was introduced in a learning factory (the Research and Application Center Industry 4.0) to investigate intentional forgetting as it applies to production routines. In the first experiment (N = 18), in which the participants collectively performed 3046 routine-related actions (t1 = 1402, t2 = 1644), the results showed that highly proceduralized actions were more difficult to forget than actions that were less well-learned. Additionally, we found that the quality of cues that trigger the execution of routine actions had no effect on the extent of intentional forgetting.
Willentliches Vergessen
(2019)
This contribution in the journal Gruppe. Interaktion. Organisation. describes how intentional forgetting improves adaptation to necessary changes for individuals, groups, and organizations, and how intentional forgetting can be designed consciously and purposefully.
For behavior to be adapted as a result of a necessary change, it is not enough for people to know what to do and to be willing and able to change their behavior. A change succeeds only when only the new behavior is applied and no longer the old one, that is, when the old behavior is forgotten. The necessary process of intentional forgetting can be shaped by removing cues that trigger the recall of what is to be forgotten and by placing cues that trigger the activation of the new.
This contribution presents the beneficial effect of cues on intentional forgetting, demonstrates it in the report of an experimental study, and provides practical implications for how intentional forgetting can be designed for individuals, groups, and organizations.
The authors present an approach with which experiential knowledge about production control with artificial neural networks can be processed and structured in such a way that case-based reasoning can be used to select suitable, appropriately pre-configured neural networks for new production situations. Building on the results obtained in this way, a case-based system is developed.
This contribution shows how the use of knowledge management tools can make an important contribution to successful ERP operation. Since this task involves a high proportion of knowledge-intensive processes, a method is needed that captures the processes in a structured manner, analyzes them through modeling, and proposes suitable knowledge management measures. By introducing a knowledge layer, the KMDL® method makes it possible to specify when knowledge of which content and in which form is required or created in the process. It is also discussed how the optimal training needs for ERP users can be determined by applying KMDL®.
The importance of suppliers to automotive manufacturers is growing steadily, because suppliers are being integrated ever more closely into the manufacturer's entire value creation process, so planning, quality, and logistics processes must be continuously optimized. Business applications such as Enterprise Resource Planning (ERP) or production planning and control (PPS) systems are needed to guarantee the trouble-free and smooth execution of business processes and thus constant delivery capability toward the automotive manufacturer [1]. Based on a market overview, this contribution presents innovative approaches, options, and coordination mechanisms of current ERP/PPS systems for supporting production at distributed sites.
Terminology is a critical instrument for every researcher. Different terminologies for the same research object may arise in different research communities. Through this inconsistency, many synergistic effects are lost. Theories and models become more understandable and reusable if a common terminology is applied. This paper examines the terminological (in)consistency of the research field of job-shop scheduling through a literature review. There is an enormous variety in the choice of terms and mathematical notation for the same concept. The comparability, reusability and combinability of scheduling methods are unnecessarily hampered by the arbitrary use of homonyms and synonyms. The community's acceptance of the variables and notation forms in use is quantified by means of a compliance quotient, based on an evaluation of 240 scientific publications on planning methods.
Enhancing economic efficiency in modular production systems through deep reinforcement learning
(2024)
In times of increasingly complex production processes and volatile customer demands, production adaptability is crucial for a company's profitability and competitiveness. The ability to cope with rapidly changing customer requirements and unexpected internal and external events guarantees robust and efficient production processes, requiring a dedicated control concept at the shop floor level. Yet in today's practice, conventional control approaches remain in use, which may not keep up with the dynamic behaviour due to their scenario-specific and rigid properties. To address this challenge, deep learning methods have increasingly been deployed due to their optimization and scalability properties. However, these approaches were often tested in specific operational applications and focused on technical performance indicators such as order tardiness or total throughput. In this paper, we propose a deep reinforcement learning-based production control to optimize combined techno-financial performance measures. Based on pre-defined manufacturing modules that are supplied and operated by multiple agents, positive effects were observed in terms of increased revenue and reduced penalties due to lower throughput times and fewer delayed products. The combined modular and multi-staged approach as well as the distributed decision-making further leverage scalability and transferability to other scenarios.
In nowadays production, fluctuations in demand, shortening product life-cycles, and highly configurable products require an adaptive and robust control approach to maintain competitiveness. This approach must not only optimise desired production objectives but also cope with unforeseen machine failures, rush orders, and changes in short-term demand. Previous control approaches were often implemented using a single operations layer and a standalone deep learning approach, which may not adequately address the complex organisational demands of modern manufacturing systems. To address this challenge, we propose a hyper-heuristics control model within a semi-heterarchical production system, in which multiple manufacturing and distribution agents are spread across pre-defined modules. The agents employ a deep reinforcement learning algorithm to learn a policy for selecting low-level heuristics in a situation-specific manner, thereby leveraging system performance and adaptability. We tested our approach in simulation and transferred it to a hybrid production environment. By that, we were able to demonstrate its multi-objective optimisation capabilities compared to conventional approaches in terms of mean throughput time, tardiness, and processing of prioritised orders in a multi-layered production system. The modular design is promising in reducing the overall system complexity and facilitates a quick and seamless integration into other scenarios.
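The core idea of situation-specific selection of low-level heuristics can be sketched in a few lines. Everything below is a hypothetical toy: the order tuples, the heuristic set, and the epsilon-greedy Q-table merely stand in for the paper's learned deep RL policy.

```python
import random

# Toy order queue: (order_id, processing_time, due_date, arrival_time)
queue = [(1, 5, 20, 0), (2, 2, 8, 1), (3, 4, 9, 2)]

# Low-level dispatching heuristics the agent can choose among.
HEURISTICS = {
    "SPT":  lambda q: min(q, key=lambda o: o[1]),  # shortest processing time
    "EDD":  lambda q: min(q, key=lambda o: o[2]),  # earliest due date
    "FIFO": lambda q: min(q, key=lambda o: o[3]),  # first in, first out
}

def select_heuristic(q_values, state, epsilon=0.1):
    """Epsilon-greedy choice over heuristics; a tabular stand-in for
    the learned policy described in the abstract."""
    if random.random() < epsilon:
        return random.choice(list(HEURISTICS))
    return max(HEURISTICS, key=lambda h: q_values.get((state, h), 0.0))

# Illustrative learned values: in a "busy" state, EDD looks best.
q_values = {("busy", "EDD"): 1.2, ("busy", "SPT"): 0.7}
h = select_heuristic(q_values, "busy", epsilon=0.0)
print(h, HEURISTICS[h](queue))  # -> EDD (2, 2, 8, 1)
```

In the paper's setting, the tabular lookup is replaced by a deep network, and each manufacturing or distribution agent runs such a selector on its own module.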
Nowadays, production planning and control must cope with mass customization, increased fluctuations in demand, and high competition pressures. Despite prevailing market risks, planning accuracy and increased adaptability in the event of disruptions or failures must be ensured, while simultaneously optimizing key process indicators. To manage that complex task, neural networks that can process large quantities of high-dimensional data in real time have been widely adopted in recent years. Although these are already extensively deployed in production systems, a systematic review of applications and implemented agent embeddings and architectures has not yet been conducted. The main contribution of this paper is to provide researchers and practitioners with an overview of applications and applied embeddings and to motivate further research in neural agent-based production. Findings indicate that neural agents are not only deployed in diverse applications, but are also increasingly implemented in multi-agent environments or in combination with conventional methods, improving performance compared to benchmarks and reducing dependence on human experience. This not only implies a more sophisticated focus on distributed production resources, but also broadens the perspective from a local to a global scale. Nevertheless, future research must further increase scalability and reproducibility to guarantee a simplified transfer of results to reality.
Process-oriented knowledge management focuses on knowledge-intensive business processes. For modelling and analysis of these processes, the modelling technique KMDL (Knowledge Modeling and Description Language) has been developed. KMDL is a method to describe knowledge flows and conversions along and between business processes. Thereby KMDL identifies existing and utilized information as well as knowledge of individual participants and of the entire company. This research-in-progress contribution introduces a practical example from the field of software engineering, in which KMDL models are evaluated to identify process improvements, e.g. by adding knowledge management activities. To this end, three individual views focusing on selected aspects of interest are introduced.
Requirements for an integration of methods analyzing social issues in knowledge organizations
(2006)
CO₂ footprints are a currently much-discussed topic with far-reaching implications for individuals as well as companies. Companies can make a proactive contribution to transparency by reporting their company- or product-related CO₂ footprint. Once the decision has been made to report a CO₂ footprint and to record the resulting greenhouse gases, a multitude of different standards and certificates exists, such as the Publicly Available Specification 2050, the Greenhouse Gas Protocol, or ISO 14067. The aim of this contribution is to compare these three standards for calculating the product-related CO₂ footprint in order to show commonalities and differences as well as advantages and disadvantages in their application. The overview is intended to support companies in deciding whether a CO₂ footprint is suitable for their company.
Cyber-physical systems (CPS) have shaped the discussion about Industry 4.0 (I4.0) for some time. To ensure the competitiveness of manufacturing enterprises, the vision for the future positions cyber-physical production systems (CPPS) as a core component of the modern factory. Adaptability and coping with complexity are (among others) potentials of this new generation of production management. The successful transformation of this theoretical construct into practical implementation can only take place with regard to the conditions characterizing the context of a factory. The subject of this contribution is a concept that takes up the brownfield character and describes a solution for extending existing (legacy) systems with CPS capabilities.
Developing a new product generation requires the transfer of knowledge among various knowledge carriers. Several factors influence knowledge transfer, e.g., the complexity of engineering tasks or the competence of employees, which can decrease the efficiency and effectiveness of knowledge transfers in product engineering. Hence, improving those knowledge transfers holds great potential, especially against the backdrop of experienced employees leaving the company due to retirement. So far, research results show that knowledge transfer velocity can be raised by following the Knowledge Transfer Velocity Model and implementing so-called interventions in a product engineering context. In most cases, the implemented interventions have a positive effect on knowledge transfer speed. In addition, initial theoretical findings describe factors influencing the quality of knowledge transfers and outline a setting for empirically investigating how quality can be improved, by introducing a general description of knowledge transfer reference situations and principles to measure the quality of knowledge artifacts. To assess the quality of knowledge transfers in a product engineering context, the Knowledge Transfer Quality Model (KTQM) is created, which serves as a basis to develop and implement quality-dependent interventions for different knowledge transfer situations. As a result, this paper introduces the specifications of eight situation-adequate interventions to improve the quality of knowledge transfers in product engineering, following an intervention template. These interventions are intended to be implemented in an industrial setting to measure the quality of knowledge transfers and validate their effect.
This meta-analysis synthesizes 332 effect sizes of various methods to enhance creativity. We clustered all studies into 12 methods to identify the most effective creativity enhancement methods. We found that, on average, creativity can be enhanced, Hedges’ g = 0.53, 95% CI [0.44, 0.61], with 70.09% of the participants in the enhancement conditions being more creative than the average person in the control conditions. Complex training courses, meditation, and cultural exposure were the most effective (gs = 0.66) while the use of cognitive manipulation drugs was the least and also noneffective, g = 0.10. The type of training material was also important. For instance, figural methods were more effective in enhancing creativity, and enhancing converging thinking was more effective than enhancing divergent thinking. Study effect sizes varied considerably across all studies and for many subgroup analyses, suggesting that researchers can plausibly expect to find reversed effects occasionally. We found no evidence of publication bias. We discuss theoretical implications and suggest future directions for best practices in enhancing creativity. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
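The reported "70.09% more creative than the average control participant" figure follows from translating the mean effect size into Cohen's U3 under a normality and equal-variance assumption. A quick sketch (the function is ours, not the paper's; using the rounded g = 0.53 lands close to the reported value, which presumably uses the unrounded estimate):

```python
from statistics import NormalDist

def u3(g):
    """Cohen's U3: share of the treatment group scoring above the
    control-group mean, assuming normal distributions with equal variance."""
    return NormalDist().cdf(g)

print(round(100 * u3(0.53), 1))  # -> 70.2
```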
Artificial intelligence is on everyone's lips. More and more application areas are being opened up by analyzing available data with algorithms and frameworks, e.g., from machine learning. This book aims to provide an overview of currently available solutions and, beyond that, concrete assistance in selecting algorithms or tools for specific problems. To meet this aim, 90 solutions were identified through a systematic literature review and practice search and subsequently classified. With the help of this book, readers can quickly understand the necessary fundamentals, identify common application areas, and systematically master the process of selecting a suitable ML tool for their own project.
As the complexity of learning task requirements, computer infrastructures and knowledge acquisition for artificial neural networks (ANN) increases, it is challenging to talk about ANN without creating misunderstandings. An efficient, transparent and failure-free design of learning tasks by means of models is not supported by any tool at all. For this purpose, in particular the consideration of data, information and knowledge on the basis of an integration with knowledge-intensive business process models and process-oriented knowledge management is attractive. With the aim of making the design of learning tasks expressible through models, this paper proposes a graphical modeling language called Neuronal Training Modeling Language (NTML), which allows the repeated use of learning designs. An example ANN project on AI-based dynamic GUI adaptation exemplifies its use as a first demonstration.
Faced with the increasing needs of companies, optimal dimensioning of IT hardware is becoming challenging for decision makers. In terms of analytical infrastructures, a highly evolutionary environment causes volatile, time-dependent workloads in its components, making intelligent, flexible task distribution between local systems and cloud services attractive. With the aim of developing a flexible and efficient design for analytical infrastructures, this paper proposes a flexible architecture model that allocates tasks following a machine-specific decision heuristic. A simulation benchmarks this system against existing strategies and identifies the new decision maxim as superior in a first scenario-based simulation.
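A machine-specific allocation decision of this kind can be illustrated with a deliberately simple rule. Note that the parameters and the finish-time formula below are illustrative assumptions, not the paper's actual heuristic:

```python
def allocate(task_cost, local_queue_cost, local_rate, cloud_rate, cloud_latency):
    """Toy allocation rule: run the task wherever its estimated finish
    time is lower. Costs are abstract work units, rates are units/second,
    latency is the fixed overhead of shipping the task to the cloud."""
    local_finish = (local_queue_cost + task_cost) / local_rate
    cloud_finish = cloud_latency + task_cost / cloud_rate
    return "local" if local_finish <= cloud_finish else "cloud"

# A long local queue pushes work to the cloud despite its latency overhead.
print(allocate(task_cost=10, local_queue_cost=50, local_rate=1.0,
               cloud_rate=4.0, cloud_latency=5.0))  # -> cloud
```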
Public services are not the only actors able to ensure the effective and efficient use of e-democracy tools. This contribution points out how a party must be structured to function as a neutral service provider for citizens, so that the results of electronic decision-making processes can be made generally binding. The party provides only the methodology and technology of decision making; the content is defined exclusively by the citizens. These contents and voting results are implemented bindingly in parliament by the party's delegates. Electronic democracy thus supplements representative democracy with scalable direct-democratic elements. With each national election, every four or five years, citizens can determine how strongly each political decision is to be influenced direct-democratically via e-democracy tools. Such an approach is subject to different requirements than a service offered by government.
Knowledge processes and business processes are linked and should therefore be regarded together. Business processes can be modeled and analyzed extensively with well-known and established methods, but simple representations of static knowledge do not fulfill the requirements of a comprehensive and integrated approach to process-oriented knowledge management. The Knowledge Modeling Description Language KMDL is able to represent the creation, use and necessity of knowledge along common business processes. KMDL can thus be used to formalize knowledge-intensive processes with a focus on certain knowledge-specific characteristics and to identify weak points in these processes. The tool K-Modeler is introduced for computer-aided modeling and analysis.
Business processes can be modelled and analysed extensively with well-known and established methods, but simple representations of static knowledge do not fulfil the requirements of a comprehensive and integrated approach to process-oriented knowledge management. The Knowledge Modelling Description Language KMDL is able to represent the creation, use and necessity of knowledge along common business processes. Therefore KMDL can be used to formalise knowledge-intensive processes with a focus on certain knowledge-specific characteristics and to identify weak points in these processes. The tool K-Modeller is introduced for computer-aided modelling and analysis.
The Knowledge Modeling Description Language KMDL is able to represent the creation, use and necessity of knowledge along common business processes. KMDL can thus be used to formalize knowledge-intensive processes with a focus on certain knowledge-specific characteristics and to identify weak points in these processes. The tool K-Modeler is introduced for computer-aided modeling and analysis.
KMDL® v2.2
(2014)
Skill management catalogues built via KMDL : integrating knowledge and business process modelling
(2004)
The efficient use of human capital is one of the most important factors in today's business competition, which is strongly influenced by qualified staff. To aid the human resources department in keeping up with strategic decisions, various skill management systems have been created that make the development of human resources easier and more precise. Skill management systems are only as good as the information they are based on. The most commonly used basis is the skill catalogue, which shows the skill gaps of each employee or division within the company. However, there are hardly any applicable methods yet for creating such a catalogue thoroughly. This paper introduces a reasonable approach to creating such a catalogue with KMDL, the description language for knowledge-intensive processes. The skill catalogue built for skill management systems is one of the most important but still most neglected factors when introducing skill management.
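Once a skill catalogue exists, the per-employee gaps it describes can be computed mechanically. The following sketch shows one plausible way a skill management system might derive gaps from catalogue data; the skill names, the integer proficiency levels, and the function are invented for illustration and do not come from the paper or from KMDL itself.

```python
# Required proficiency levels per skill, as a skill catalogue might record
# them (skill names and levels are purely illustrative).
required = {"SQL": 3, "Process modelling": 2, "German": 2}

def skill_gaps(employee_skills: dict[str, int],
               required: dict[str, int]) -> dict[str, int]:
    """Return the missing proficiency levels per required skill
    (0 means the requirement is fully covered)."""
    return {skill: max(level - employee_skills.get(skill, 0), 0)
            for skill, level in required.items()}

print(skill_gaps({"SQL": 1, "German": 2}, required))
# {'SQL': 2, 'Process modelling': 2, 'German': 0}
```

Aggregating such per-employee gap maps over a division would yield the division-level gaps the abstract mentions.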
Knowledge is more and more a key factor within companies: nearly 40 percent of all employees are so-called "knowledge workers". The distribution and retrieval of knowledge within companies are supported by skill management systems. Although not all aspects and potentials of this instrument are utilized yet, skill management systems have spread widely within business organizations. This paper summarizes the requirements, scopes and problems of skill management systems within the company.
Knowledge is more and more a key factor within companies [10]: nearly 40 percent of all employees are so-called "knowledge workers". The distribution and retrieval of knowledge within companies are supported by skill management systems. Although not all aspects and potentials of this instrument are utilized yet, skill management systems have spread widely within business organizations. This paper summarizes the requirements, scopes and problems of skill management systems within the company.