TY - CHAP
A1 - Jacqmin, Julien
A1 - Özdemir, Paker Doğu
A1 - Fell Kurban, Caroline
A1 - Tunç Pekkan, Zelha
A1 - Koskinen, Johanna
A1 - Suonpää, Maija
A1 - Seng, Cheyvuth
A1 - Carlon, May Kristine Jonson
A1 - Gayed, John Maurice
A1 - Cross, Jeffrey S.
A1 - Langseth, Inger
A1 - Jacobsen, Dan Yngve
A1 - Haugsbakken, Halvdan
A1 - Bethge, Joseph
A1 - Serth, Sebastian
A1 - Staubitz, Thomas
A1 - Wuttke, Tobias
A1 - Nordemann, Oliver
A1 - Das, Partha-Pratim
A1 - Meinel, Christoph
A1 - Ponce, Eva
A1 - Srinath, Sindhu
A1 - Allegue, Laura
A1 - Perach, Shai
A1 - Alexandron, Giora
A1 - Corti, Paola
A1 - Baudo, Valeria
A1 - Turró, Carlos
A1 - Moura Santos, Ana
A1 - Nilsson, Charlotta
A1 - Maldonado-Mahauad, Jorge
A1 - Valdiviezo, Javier
A1 - Carvallo, Juan Pablo
A1 - Samaniego-Erazo, Nicolay
A1 - Poce, Antonella
A1 - Re, Maria Rosaria
A1 - Valente, Mara
A1 - Karp Gershon, Sa’ar
A1 - Ruipérez-Valiente, José A.
A1 - Despujol, Ignacio
A1 - Busquets, Jaime
A1 - Kerr, John
A1 - Lorenz, Anja
A1 - Schön, Sandra
A1 - Ebner, Martin
A1 - Wittke, Andreas
A1 - Beirne, Elaine
A1 - Nic Giolla Mhichíl, Mairéad
A1 - Brown, Mark
A1 - Mac Lochlainn, Conchúr
A1 - Topali, Paraskevi
A1 - Chounta, Irene-Angelica
A1 - Ortega-Arranz, Alejandro
A1 - Villagrá-Sobrino, Sara L.
A1 - Martínez-Monés, Alejandra
A1 - Blackwell, Virginia Katherine
A1 - Wiltrout, Mary Ellen
A1 - Rami Gaddem, Mohamed
A1 - Hernández Reyes, César Augusto
A1 - Nagahama, Toru
A1 - Buchem, Ilona
A1 - Okatan, Ebru
A1 - Khalil, Mohammad
A1 - Casiraghi, Daniela
A1 - Sancassani, Susanna
A1 - Brambilla, Federica
A1 - Mihaescu, Vlad
A1 - Andone, Diana
A1 - Vasiu, Radu
A1 - Şahin, Muhittin
A1 - Egloffstein, Marc
A1 - Bothe, Max
A1 - Rohloff, Tobias
A1 - Schenk, Nathanael
A1 - Schwerer, Florian
A1 - Ifenthaler, Dirk
A1 - Hense, Julia
A1 - Bernd, Mike
ED - Meinel, Christoph
ED - Staubitz, Thomas
ED - Schweiger, Stefanie
ED - Friedl, Christian
ED - Kiers, Janine
ED - Ebner, Martin
ED - Lorenz, Anja
ED - Ubachs, George
ED - Mongenet, Catherine
ED - Ruipérez-Valiente, José A.
ED - Cortes Mendez, Manoel
T1 - EMOOCs 2021
N2 - From June 22 to June 24, 2021, Hasso Plattner Institute, Potsdam, hosted the seventh European MOOC Stakeholder Summit (EMOOCs 2021) together with the eighth ACM Learning@Scale Conference. Due to the COVID-19 situation, the conference was held fully online. The worldwide boost in digital education resulting from the pandemic was also one of the main topics of this year’s EMOOCs. All institutions of learning have been forced to transform and redesign their educational methods, moving from traditional models to hybrid or completely online models at scale. These lessons, derived from practical experience and research, were explored at EMOOCs 2021 in six tracks and additional workshops, covering various aspects of this field. In this publication, we present papers from the conference’s Experience Track, the Policy Track, the Business Track, the International Track, and the Workshops.
KW - e-learning
KW - microcredential
KW - MOOC
KW - digital education
KW - experience
KW - online course design
KW - online course creation
KW - higher education
Y1 - 2021
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-510300
SN - 978-3-86956-512-5
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - JOUR
A1 - Bethge, Joseph
A1 - Serth, Sebastian
A1 - Staubitz, Thomas
A1 - Wuttke, Tobias
A1 - Nordemann, Oliver
A1 - Das, Partha-Pratim
A1 - Meinel, Christoph
T1 - TransPipe
BT - A Pipeline for Automated Transcription and Translation of Videos
JF - EMOOCs 2021
N2 - Online learning environments, such as Massive Open Online Courses (MOOCs), often rely on videos as a major component to convey knowledge. However, these videos exclude potential participants who do not understand the lecturer’s language, whether due to language unfamiliarity or aural handicaps. Subtitles and/or interactive transcripts solve this issue, ease navigation based on the content, and enable indexing and retrieval by search engines. Although there are several automated speech-to-text converters and translation tools, their quality varies and the process of integrating them can be quite tedious. Thus, in practice, many videos on MOOC platforms only receive subtitles after the course is already finished (if at all) due to a lack of resources. This work describes an approach to tackle this issue by providing a dedicated tool that closes this gap between MOOC platforms and transcription and translation tools and offers a simple workflow that can easily be handled by users with a less technical background. The proposed method is designed and evaluated through qualitative interviews with three major MOOC providers.
Y1 - 2021
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-516943
VL - 2021
SP - 79
EP - 94
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - GEN
A1 - Bartz, Christian
A1 - Yang, Haojin
A1 - Bethge, Joseph
A1 - Meinel, Christoph
T1 - LoANs
BT - Weakly Supervised Object Detection with Localizer Assessor Networks
T2 - Computer Vision – ACCV 2018 Workshops
N2 - Recently, deep neural networks have achieved remarkable performance on the task of object detection and recognition. The reason for this success is mainly grounded in the availability of large-scale, fully annotated datasets, but the creation of such a dataset is a complicated and costly task. In this paper, we propose a novel method for weakly supervised object detection that simplifies the process of gathering data for training an object detector. We train an ensemble of two models that work together in a student-teacher fashion. Our student (localizer) is a model that learns to localize an object; the teacher (assessor) assesses the quality of the localization and provides feedback to the student. The student uses this feedback to learn how to localize objects and is thus entirely supervised by the teacher, as we use no labels for training the localizer. In our experiments, we show that our model is very robust to noise and reaches competitive performance compared to a state-of-the-art fully supervised approach. We also show the simplicity of creating a new dataset, based on a few videos (e.g., downloaded from YouTube) and artificially generated data.
Y1 - 2019
SN - 978-3-030-21074-8
SN - 978-3-030-21073-1
U6 - https://doi.org/10.1007/978-3-030-21074-8_29
SN - 0302-9743
SN - 1611-3349
VL - 11367
SP - 341
EP - 356
PB - Springer
CY - Cham
ER -