Many prediction tasks can be performed on the basis of users’ trace data. This paper explores divergent and convergent thinking as person-related attributes and predicts them from features gathered in an online course. We use the logfile data of a short Moodle course, combined with an image test (IMT), the Alternate Uses Task (AUT), the Remote Associates Test (RAT), and creative self-efficacy (CSE). Our results show that originality and elaboration metrics can be predicted with an accuracy of ~.7 in cross-validation, while predictions of fluency and RAT scores perform worst. CSE items can be predicted with an accuracy of ~.45. The best-performing model is a Random Forest whose features were reduced beforehand using Linear Discriminant Analysis. These promising results can help adjust online courses to learners’ needs based on their creative performance.
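The reported pipeline (dimensionality reduction via Linear Discriminant Analysis, then a Random Forest, scored with cross-validation) can be sketched as follows. The feature matrix and labels here are random stand-ins for the Moodle logfile features and binarized creativity scores, not the study's data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))    # stand-in for Moodle logfile features
y = rng.integers(0, 2, size=120)  # stand-in for binarized originality scores

# LDA first reduces the feature space; the Random Forest is then fit on
# the reduced features, mirroring an LDA -> Random Forest pipeline.
model = make_pipeline(LinearDiscriminantAnalysis(),
                      RandomForestClassifier(random_state=0))
scores = cross_val_score(model, X, y, cv=5)
print(round(scores.mean(), 2))
```

With real features and labels, the mean cross-validation score is the accuracy figure the abstract reports.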
Looking for participation
(2022)
A stronger learner orientation through participatory learning increases learning motivation and learning outcomes. But what does participatory learning mean? Where do learning factories and fabrication laboratories (FabLabs) stand in this context, and how can their didactic implementation be improved? Using a newly developed analytical framework, which combines elements of the stage model of participation with general media didactics, we compare a FabLab and a learning-factory example with respect to their degree of participation. From this, we derive guidelines for designing participative teaching and learning processes in learning factories and explain how FabLabs can inspire the didactic design of learning factories.
During a crisis event, social media enables two-way communication and many-to-many information broadcasting: browsing others’ posts, publishing one’s own content, and commenting publicly. These records can deliver valuable insights for approaching problematic situations effectively. Our study explores how social media communication can be analyzed to better understand responses to health crises. Results based on nearly 800,000 tweets indicate that the coping and regulation foci framework holds good explanatory power, with four clusters salient in public reactions: 1) “Understanding” (problem-promotion); 2) “Action planning” (problem-prevention); 3) “Hope” (emotion-promotion); and 4) “Reassurance” (emotion-prevention). Second, the inter-temporal analysis shows high volatility of topic proportions and a shift from self-centered to community-centered topics over the course of the event. The insights are beneficial for research on crisis management and for practitioners interested in large-scale monitoring of their audience for well-informed decision-making.
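A minimal sketch of grouping tweet texts into four clusters, in the spirit of the four reaction types. The tweets below are invented examples, and TF-IDF plus k-means is a deliberately simple stand-in for the paper's far richer analysis of ~800,000 tweets:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented tweets, two per hypothesized reaction type
# (understanding, action planning, hope, reassurance).
tweets = [
    "what is this virus and how does it spread",
    "trying to understand the new case numbers",
    "wash your hands and avoid crowds",
    "stock up on masks and plan ahead",
    "we will get through this together, stay hopeful",
    "better days are coming, keep hoping",
    "do not panic, the situation is under control",
    "stay calm, most cases are mild",
]

# Vectorize the texts and partition them into four clusters.
X = TfidfVectorizer().fit_transform(tweets)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

On real data, each cluster would then be interpreted against the coping and regulation foci framework.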
Observing inconsistent results in prior studies, this paper applies the elaboration likelihood model to investigate the impact of affective and cognitive cues embedded in social media messages on audience engagement during a political event. Leveraging a rich dataset of more than 3 million tweets from the 2020 U.S. presidential elections, we find that both cue types are prominent. For the overall sample, positivity and sentiment are negatively related to engagement. In contrast, a post-hoc sub-sample analysis of tweets from famous users shows that emotionally charged content is more engaging. The role of sentiment decreases as the number of followers grows and ultimately becomes insignificant for Twitter participants with a vast number of followers. Prosocial orientation (“we-talk”) is consistently associated with more likes, comments, and retweets in both the overall sample and the sub-samples.
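The moderation result (sentiment's effect on engagement weakening as follower counts grow) corresponds to an interaction term in a regression. A toy illustration with simulated data, fit by ordinary least squares via NumPy; the coefficients and model specification are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
sentiment = rng.normal(size=n)
log_followers = rng.uniform(0, 10, size=n)

# Simulated engagement: sentiment hurts engagement overall, but the
# effect attenuates as the follower count grows (a positive interaction).
engagement = (-0.8 * sentiment
              + 0.08 * sentiment * log_followers
              + 0.3 * log_followers
              + rng.normal(scale=0.5, size=n))

# OLS with intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), sentiment, log_followers,
                     sentiment * log_followers])
beta, *_ = np.linalg.lstsq(X, engagement, rcond=None)
print(beta[1], beta[3])  # negative main effect, positive interaction
```

A negative main effect combined with a positive interaction is exactly the pattern in which sentiment's role shrinks, and eventually vanishes, for highly followed accounts.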
During the outbreak of the COVID-19 pandemic, many people shared their symptoms on Online Social Networks (OSNs) like Twitter, hoping for others’ advice or moral support. Prior studies have shown that those who disclose health-related information on OSNs often come to regret it and delete their posts afterwards. Hence, deleted posts containing sensitive data can be seen as manifestations of online regret. In this work, we present an analysis of deleted content on Twitter during the outbreak of the COVID-19 pandemic. For this, we collected more than 3.67 million tweets describing COVID-19 symptoms (e.g., fever, cough, and fatigue) posted between January and April 2020. We observed that around 24% of the tweets containing personal pronouns were deleted either by their authors or by the platform after one year.
As a practical application of the resulting dataset, we explored its suitability for the automatic classification of regrettable content on Twitter.
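The core measurement, the share of pronoun-bearing symptom tweets later deleted, can be illustrated on toy records. The tweets, the availability flags, and the pronoun list below are invented for illustration:

```python
# Hypothetical records: (tweet_text, still_available_after_one_year).
tweets = [
    ("I have a fever and a dry cough", False),
    ("Fatigue and headache since Monday", True),
    ("My test came back positive", False),
    ("Symptoms of COVID-19 include fever", True),
    ("My symptoms are mild", True),
]

PRONOUNS = {"i", "my", "me", "we", "our"}

def has_personal_pronoun(text):
    """Crude token match; real pipelines would use proper NLP tooling."""
    return any(tok in PRONOUNS for tok in text.lower().split())

# Keep only tweets with personal pronouns, then compute the deletion rate.
personal = [avail for text, avail in tweets if has_personal_pronoun(text)]
deletion_rate = personal.count(False) / len(personal)
print(round(deletion_rate, 2))  # prints 0.67 for these toy records
```

Applied to the full corpus, the same ratio yields the ~24% deletion figure reported above.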
Fighting false information
(2023)
The digital transformation poses challenges for public sector organizations (PSOs), such as the dissemination of false information on social media, which can cause uncertainty among citizens and decrease trust in the public sector. Some PSOs already successfully deploy conversational agents (CAs) to communicate with citizens and support digital service delivery. In this paper, we used design science research (DSR) to examine how CAs could be designed to assist PSOs in fighting false information online. We conducted a workshop with the municipality of Kristiansand, Norway, to define the objectives a CA would have to meet to address the identified false-information challenges. A prototypical CA was developed and evaluated in two iterations with the municipality and with students from Norway. This research-in-progress paper presents the findings and next steps of the DSR process. The research contributes to advancing the digital transformation of the public sector in combating false information.
Digital platforms (DPs) have established themselves in recent years as a central concept of information technology research. Due to the great diversity of digital platform concepts, clear definitions are still required. Furthermore, DPs are subject to dynamic changes from internal and external factors, which pose challenges for digital platform operators, developers, and customers. Which research directions should be pursued to address these challenges has so far remained open. This paper aims to contribute by presenting a systematic literature review (SLR) of digital platform concepts in the context of the Industrial Internet of Things (IIoT) for manufacturing companies, providing a basis for (1) a selection of definitions of current digital platform and ecosystem concepts and (2) a selection of current digital platform research directions. These directions are divided into (a) the occurrence of digital platforms, (b) the emergence of digital platforms, (c) the evaluation of digital platforms, (d) the development of digital platforms, and (e) the selection of digital platforms.
Enterprise Resource Planning (ERP) systems are critical to the success of enterprises, facilitating business operations through standardized digital processes. However, existing ERP systems are unsuitable for startups and small and medium-sized enterprises that grow quickly and require adaptable solutions with low barriers to entry. Drawing on 15 exploratory interviews with industry experts, we examine the challenges of current ERP systems across companies of varying sizes, using task-technology fit theory. We identify high entry barriers, high costs of implementing implicit processes, and insufficient interoperability with already-employed tools. We then present a vision of a future business process platform based on three enablers: business processes as first-class entities, semantic data and processes, and cloud-native elasticity and high availability. We discuss how these enablers address the challenges of current ERP systems and how they may inform research on the next generation of business software for tomorrow's enterprises.
While Information Systems (IS) research at the individual and workgroup levels of analysis is omnipresent, research on enterprise-level IS is less frequent. Even though research on enterprise systems and their management is established in academic associations and conference programs, enterprise-level phenomena remain underrepresented. This minitrack provides a forum to integrate existing research streams that traditionally had to be attached to other topics (such as IS management or IS governance). The minitrack received broad attention. The three selected papers address different facets of the future role of enterprise-wide IS, including aspects such as carbonization, ecosystem integration, and technology-organization fit.
Virtual reality can offer advantages for education and learning. However, it must be adequately designed so that learners benefit from the technological possibilities. Understanding the underlying effects of the virtual learning environment, as well as the learner’s prior experience with virtual reality and prior knowledge of the content, is necessary to design a proper virtual learning environment. This article presents a pre-study testing the design of a virtual learning environment for engineering vocational training courses. In the pre-study, 12 employees of two companies joined the training course at one of two degrees of immersion (desktop VR or VR HMD). Quantitative results on learning success, cognitive load, usability, and motivation are presented alongside qualitative learning-process data. The qualitative assessment shows that the employees were overall satisfied with the learning environment regardless of the level of immersion, and that participants asked for more guidance and structure accompanying the learning process. Further research is needed to test for robust group differences.
The rise of open source models for software and hardware development has catalyzed the debate regarding sustainable business models. Open Source Software has already become a dominant part of the software industry, whereas Open Source Hardware is still a little-researched phenomenon but has the potential to do the same for manufacturing across a wide range of products. This article addresses this potential by introducing a research design to analyze the prototyping phase of six different Open Source Hardware projects tackling ecological, social, and economic challenges. Using a design science research methodology, a process model is developed to concretise the prototype development steps. The prototyping phase is important because it is where fundamental decisions are made that affect the openness of the final product. This paper aims to advance the discourse on open production as a concept that enables companies to apply the aspect of openness towards collaboration-oriented and sustainable business models.
Faced with the triad of time, cost, and quality, realizing production tasks under economic conditions is not trivial. Since the number of artificial intelligence (AI)-based applications in business processes keeps increasing, the efficient design of AI cases for production processes, as well as their target-oriented improvement, is essential so that production outcomes satisfy high quality criteria and economic requirements. Both challenge production management and data scientists, who aim to assign ideal manifestations of artificial neural networks (ANNs) to a given task. Building on new attempts at ANN-based production process improvement [8], this paper continues research on the optimal creation, provision, and utilization of ANNs. Moreover, it presents a mechanism for AI case-based reasoning for ANNs. Experiments demonstrate empirically that this mechanism continuously improves ANN knowledge bases. Its proof of concept is demonstrated using four production simulation scenarios, which cover the most relevant use cases and will form the basis for examining AI cases on a quantitative level.
With larger artificial neural networks (ANNs) and deeper neural architectures, common training methods such as backpropagation are key to learning success. Their role becomes particularly important when interpreting and controlling structures that evolve through machine learning. This work extends previous research on backpropagation-based methods by presenting a modified, full-gradient version of the backpropagation learning algorithm that preserves (or rather crystallizes) selected neural weights while leaving other weights adaptable (or rather fluid). In a design-science-oriented manner, a prototype of a feedforward ANN is demonstrated and refined using the new learning method. The results show that the so-called crystallizing backpropagation increases control over neural structures and improves their interpretability, while learning can be carried out as usual. Since the algorithm establishes neural hierarchies, ANN compartments start to function in terms of cognitive levels. This study shows the importance of dealing with ANNs in hierarchies through backpropagation and introduces learning methods as novel ways of interacting with ANNs. Practitioners will benefit from this interactive process because they can restrict neural learning to specific architectural components of an ANN and focus further development on specific areas of higher cognitive levels without the risk of destroying valuable ANN structures.
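The core idea, freezing ("crystallizing") selected weights while the rest stay fluid, can be sketched as a masked gradient update. This is an illustrative reduction of the paper's full-gradient algorithm, and the weight, gradient, and mask values are arbitrary:

```python
import numpy as np

def crystallizing_update(weights, grads, fluid_mask, lr=0.1):
    """Apply a gradient step only to 'fluid' weights; 'crystallized'
    weights (mask == 0) keep their current values."""
    return weights - lr * grads * fluid_mask

w = np.array([0.5, -0.2, 0.8])
g = np.array([1.0, 1.0, 1.0])
mask = np.array([1.0, 0.0, 1.0])  # middle weight is crystallized

w_new = crystallizing_update(w, g, mask)
print(w_new)  # the crystallized weight stays at -0.2
```

By choosing which entries of the mask are zero, learning can be restricted to specific architectural components while the protected structure remains intact.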
The management of knowledge in organizations considers both established long-term processes and cooperation in agile project teams. Since knowledge can be both tacit and explicit, its transfer from the individual to the organizational knowledge base poses a challenge in organizations. This challenge increases when the fluctuation of knowledge carriers is exceptionally high. Especially in large projects in which external consultants are involved, there is a risk that critical, company-relevant knowledge generated in the project will leave the company with the external knowledge carrier and thus be lost. In this paper, we show the advantages of an early warning system for knowledge management to avoid this loss. In particular, the potential of visual analytics in the context of knowledge management systems is presented and discussed. We present a project for the development of a business-critical software system and discuss the first implementations and results.
Developing a new product generation requires the transfer of knowledge among various knowledge carriers. Several factors influence knowledge transfer, e.g., the complexity of engineering tasks or the competence of employees, and these can decrease the efficiency and effectiveness of knowledge transfers in product engineering. Hence, improving those knowledge transfers holds great potential, especially against the backdrop of experienced employees leaving the company due to retirement. So far, research results show that knowledge transfer velocity can be raised by following the Knowledge Transfer Velocity Model and implementing so-called interventions in a product engineering context. In most cases, the implemented interventions have a positive effect on knowledge transfer speed. In addition, initial theoretical findings describe factors influencing the quality of knowledge transfers and outline a setting for empirically investigating how quality can be improved, introducing a general description of knowledge transfer reference situations and principles for measuring the quality of knowledge artifacts. To assess the quality of knowledge transfers in a product engineering context, the Knowledge Transfer Quality Model (KTQM) is created, which serves as a basis for developing and implementing quality-dependent interventions for different knowledge transfer situations. As a result, this paper introduces specifications of eight situation-adequate interventions to improve the quality of knowledge transfers in product engineering, following an intervention template. These interventions are intended to be implemented in an industrial setting to measure the quality of knowledge transfers and validate their effect.
Artificial intelligence (AI)-based technologies can increasingly perform knowledge work tasks, such as medical diagnosis. It is expected that humans will not be replaced by AI but will work closely with AI-based technology (“augmentation”). Augmentation has ethical implications for humans (e.g., impact on autonomy and on opportunities to flourish through work); thus, developers and managers of AI-based technology have a responsibility to anticipate and mitigate risks to human workers. However, doing so can be difficult, as AI encompasses a wide range of technologies, some of which enable fundamentally new forms of interaction. In this research-in-progress paper, we propose the development of a taxonomy that categorizes the unique characteristics of AI-based technology that influence this interaction and have ethical implications for human workers. The completed taxonomy will support researchers in forming cumulative knowledge on the ethical implications of augmentation and assist practitioners in the ethical design and management of AI-based technology in knowledge work.
An increasing number of clinicians (i.e., nurses and physicians) suffer from mental-health-related issues like depression and burnout. These, in turn, strain communication, collaboration, and decision-making, areas in which Conversational Agents (CAs) have been shown to be useful. Thus, in this work, we followed a mixed-method approach and systematically analysed the literature on factors affecting the well-being of clinicians and on CAs’ potential to improve said well-being by providing support in communication, collaboration, and decision-making in hospitals. In this respect, we are guided by the model of factors influencing well-being by Brigham et al. (2018). Based on an initial number of 840 articles, we analysed 52 papers in more detail and identified the influences of CAs’ fields of application on external and individual factors affecting clinicians’ well-being. As our second method, we will conduct interviews with clinicians and experts on CAs to verify and extend these influencing factors.
Enterprise solutions, specifically enterprise systems, have allowed companies to integrate their operations end to end. The integration scope of enterprise solutions has widened steadily and now often covers customer activities, activities along supply chains, and platform ecosystems. IS research has contributed a wide range of explanatory and design knowledge on this class of IS. During the last two decades, many technological and managerial/organizational innovations have extended the affordances of enterprise solutions, but this broader scope also challenges traditional approaches to their analysis and design. This position paper presents an enterprise-level (i.e., cross-solution) perspective on IS, discusses the challenges of complexity and coordination for IS design and management, presents selected enterprise-level insights into IS coordination and governance, and explores avenues towards a more comprehensive body of knowledge on this important level of analysis.
While Information Systems research at the individual and workgroup levels is well established, research on IS at the enterprise level is less common. The potential synergies between the study of enterprise systems (ES) and related fields remain underexplored, as the two have often been treated as separate entities. The ongoing challenge is to seamlessly integrate technological advances and align business processes across organizations. While systems integration within an organization is common, the picture changes when industry and ecosystem perspectives come into play. The four selected papers address different facets of the future role of enterprise ecosystems, including implementation challenges, ecosystem boundaries, and B2B platform specifics.
Enhancing economic efficiency in modular production systems through deep reinforcement learning
(2024)
In times of increasingly complex production processes and volatile customer demands, production adaptability is crucial for a company's profitability and competitiveness. The ability to cope with rapidly changing customer requirements and unexpected internal and external events guarantees robust and efficient production processes, and it requires a dedicated control concept at the shop-floor level. Yet in today's practice, conventional control approaches remain in use; owing to their scenario-specific and rigid properties, they may not keep up with this dynamic behaviour. To address this challenge, deep learning methods have increasingly been deployed thanks to their optimization and scalability properties. However, these approaches have often been tested in specific operational applications and have focused on technical performance indicators such as order tardiness or total throughput. In this paper, we propose a deep-reinforcement-learning-based production control that optimizes combined techno-financial performance measures. Based on pre-defined manufacturing modules that are supplied and operated by multiple agents, positive effects were observed in terms of increased revenue and reduced penalties, owing to lower throughput times and fewer delayed products. The combined modular and multi-stage approach, together with the distributed decision-making, further leverages scalability and transferability to other scenarios.
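As a stand-in for the deep RL agent, a tabular Q-learning toy shows the flavor of a techno-financial reward that combines revenue with a tardiness penalty. The product data, deadline, and penalty values are invented, and the paper's approach uses deep networks and multiple agents rather than this single-agent table:

```python
import random

random.seed(0)
REVENUE = {"A": 5.0, "B": 3.0}  # revenue per finished product
PROC_TIME = {"A": 3, "B": 1}    # processing time per product
HORIZON, DEADLINE = 10, 8
ACTIONS = ["A", "B"]

Q = {}  # (time, action) -> value estimate

def step(t, a):
    """Advance time and compute revenue minus a late-delivery penalty."""
    t2 = t + PROC_TIME[a]
    reward = REVENUE[a] - (4.0 if t2 > DEADLINE else 0.0)
    return t2, reward

for _ in range(2000):  # epsilon-greedy tabular Q-learning
    t = 0
    while t < HORIZON:
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q.get((t, x), 0.0))
        t2, r = step(t, a)
        future = max(Q.get((t2, x), 0.0) for x in ACTIONS) if t2 < HORIZON else 0.0
        Q[(t, a)] = Q.get((t, a), 0.0) + 0.1 * (r + future - Q.get((t, a), 0.0))
        t = t2

# Close to the deadline, the learned policy prefers the fast product "B",
# trading revenue per unit against the tardiness penalty.
policy_at_7 = max(ACTIONS, key=lambda a: Q.get((7, a), 0.0))
print(policy_at_7)
```

The single scalar reward already couples a technical indicator (tardiness) with a financial one (revenue), which is the combined objective the paper scales up with deep networks and distributed agents.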