Fighting false information (2023)
The digital transformation poses challenges for public sector organizations (PSOs), such as the dissemination of false information on social media, which can cause uncertainty among citizens and decrease trust in the public sector. Some PSOs already successfully deploy conversational agents (CAs) to communicate with citizens and support digital service delivery. In this paper, we used design science research (DSR) to examine how CAs could be designed to assist PSOs in fighting false information online. We conducted a workshop with the municipality of Kristiansand, Norway, to define the objectives a CA would have to meet to address the identified false-information challenges. A prototypical CA was developed and evaluated in two iterations with the municipality and students from Norway. This research-in-progress paper presents findings and next steps of the DSR process. This research contributes to advancing the digital transformation of the public sector in combating false information.
During the outbreak of the COVID-19 pandemic, many people shared their symptoms on Online Social Networks (OSNs) like Twitter, hoping for advice or moral support from others. Prior studies have shown that those who disclose health-related information on OSNs often regret it and delete their posts afterwards. Hence, deleted posts containing sensitive data can be seen as manifestations of online regret. In this work, we present an analysis of content deleted from Twitter during the outbreak of the COVID-19 pandemic. For this, we collected more than 3.67 million tweets describing COVID-19 symptoms (e.g., fever, cough, and fatigue) posted between January and April 2020. We observed that around 24% of the tweets containing personal pronouns were deleted either by their authors or by the platform within one year.
As a practical application of the resulting dataset, we explored its suitability for the automatic classification of regrettable content on Twitter.
Artificial intelligence (AI)-based technologies can increasingly perform knowledge work tasks, such as medical diagnosis. It is expected that humans will not be replaced by AI but will instead work closely with AI-based technology ("augmentation"). Augmentation has ethical implications for humans (e.g., impact on autonomy and opportunities to flourish through work); thus, developers and managers of AI-based technology have a responsibility to anticipate and mitigate risks to human workers. However, doing so can be difficult, as AI encompasses a wide range of technologies, some of which enable fundamentally new forms of interaction. In this research-in-progress paper, we propose the development of a taxonomy to categorize unique characteristics of AI-based technology that influence this interaction and have ethical implications for human workers. The completed taxonomy will support researchers in building cumulative knowledge on the ethical implications of augmentation and assist practitioners in the ethical design and management of AI-based technology in knowledge work.