Digital transformation fundamentally changes the way individuals conduct work in organisations. Accordingly, the prevalent literature understands digital workplace transformation as a second-order effect of implementing new information technology to increase organisational effectiveness or reach other strategic goals. This paper, in contrast, provides empirical evidence from two remote-first organisations that underwent a proactive rather than reactive digital workplace transformation. The analysis of these cases suggests that new ways of working can be the consequence of an identity change that is a precondition for introducing new information technology rather than its outcome. The resulting process model contributes a competing argument to the existing debate in the digital transformation literature. Instead of framing digital workplace transformation as a deliverable of technological progress and strategic goals, this paper supports a notion of digital workplace transformation that serves a desired identity based on work preferences.
Artificial intelligence (AI)-based technologies can increasingly perform knowledge work tasks, such as medical diagnosis. It is expected that humans will not be replaced by AI but will instead work closely with AI-based technology (“augmentation”). Augmentation has ethical implications for humans (e.g., impact on autonomy, opportunities to flourish through work); developers and managers of AI-based technology therefore have a responsibility to anticipate and mitigate risks to human workers. However, doing so can be difficult, as AI encompasses a wide range of technologies, some of which enable fundamentally new forms of interaction. In this research-in-progress paper, we propose the development of a taxonomy to categorize the unique characteristics of AI-based technology that influence this interaction and have ethical implications for human workers. The completed taxonomy will support researchers in forming cumulative knowledge on the ethical implications of augmentation and assist practitioners in the ethical design and management of AI-based technology in knowledge work.
During the outbreak of the COVID-19 pandemic, many people shared their symptoms across Online Social Networks (OSNs) like Twitter, hoping for others’ advice or moral support. Prior studies have shown that those who disclose health-related information across OSNs often tend to regret it and delete their publications afterwards. Hence, deleted posts containing sensitive data can be seen as manifestations of online regrets. In this work, we present an analysis of deleted content on Twitter during the outbreak of the COVID-19 pandemic. For this, we collected more than 3.67 million tweets describing COVID-19 symptoms (e.g., fever, cough, and fatigue) posted between January and April 2020. We observed that around 24% of the tweets containing personal pronouns were deleted either by their authors or by the platform after one year. As a practical application of the resulting dataset, we explored its suitability for the automatic classification of regrettable content on Twitter.
The study explores differences between three user types in the top tweets about the 2015 “refugee crisis” in Germany and presents the results of a quantitative content analysis. All tweets with the keyword “Flüchtlinge” posted during the month following September 13, 2015, the day Germany decided to implement border controls, were collected (N = 763,752). The top 2,495 tweets by number of retweets were selected for analysis. Differences between news media, public actor, and private actor tweets were analyzed in terms of topics, tweet characteristics such as tone and opinion expression, links, and specific sentiments toward refugees. We found strong differences between the tweets. Public actor tweets were the main source of positive sentiment toward refugees and the main information source on refugee support. News media tweets mostly reflected traditional journalistic norms of impartiality and objectivity, whereas private actor tweets were more diverse in their sentiments toward refugees.
Despite the merits of public and social media in private and professional spaces, citizens and professionals are increasingly exposed to cyberabuse, such as cyberbullying and hate speech. Thus, Law Enforcement Agencies (LEAs) are deployed in many countries and organisations to enhance the preventive and reactive capabilities against cyberabuse. However, their tasks are becoming more complex due to the increasing amount and varying quality of information disseminated into public channels. Adopting the perspectives of Crisis Informatics and safety-critical Human-Computer Interaction (HCI), and based on both a narrative literature review and group discussions, this paper first outlines the research agenda of the CYLENCE project, which seeks to design strategies and tools for cross-media reporting, detection, and treatment of cyberbullying and hate speech in investigative and law enforcement agencies. Second, it identifies and elaborates seven research challenges with regard to the monitoring, analysis, and communication of cyberabuse in LEAs, which serve as a starting point for in-depth research within the project.
Fighting false information
(2023)
The digital transformation poses challenges for public sector organizations (PSOs), such as the dissemination of false information in social media, which can cause uncertainty among citizens and decrease trust in the public sector. Some PSOs already successfully deploy conversational agents (CAs) to communicate with citizens and support digital service delivery. In this paper, we used design science research (DSR) to examine how CAs could be designed to assist PSOs in fighting false information online. We conducted a workshop with the municipality of Kristiansand, Norway to define the objectives a CA would have to meet to address the identified false information challenges. A prototypical CA was developed and evaluated in two iterations with the municipality and students from Norway. This research-in-progress paper presents the findings and next steps of the DSR process. This research contributes to advancing the digital transformation of the public sector by supporting it in combating false information.