TY - JOUR
A1 - Ziegler, Joceline
A1 - Pfitzner, Bjarne
A1 - Schulz, Heinrich
A1 - Saalbach, Axel
A1 - Arnrich, Bert
T1 - Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-ray Data
JF - Sensors
N2 - Privacy regulations and the physical distribution of heterogeneous data are often primary concerns for the development of deep learning models in a medical context. This paper evaluates the feasibility of differentially private federated learning for chest X-ray classification as a defense against data privacy attacks. To the best of our knowledge, we are the first to directly compare the impact of differentially private training on two different neural network architectures, DenseNet121 and ResNet50. Extending the federated learning environments previously analyzed in terms of privacy, we simulated a heterogeneous and imbalanced federated setting by distributing images from the public CheXpert and Mendeley chest X-ray datasets unevenly among 36 clients. Both non-private baseline models achieved an area under the receiver operating characteristic curve (AUC) of 0.94 on the binary classification task of detecting the presence of a medical finding. We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients. The attack was particularly successful during later training stages. To mitigate the risk of a privacy breach, we integrated Rényi differential privacy with a Gaussian noise mechanism into local model training. We evaluate model performance and attack vulnerability for privacy budgets ε ∈ {1, 3, 6, 10}. The DenseNet121 achieved the best utility-privacy trade-off with an AUC of 0.94 for ε = 6. Model performance deteriorated slightly for individual clients compared to the non-private baseline. The ResNet50 only reached an AUC of 0.76 in the same privacy setting. Its performance was inferior to that of the DenseNet121 for all considered privacy constraints, suggesting that the DenseNet121 architecture is more robust to differentially private training.
KW - federated learning
KW - privacy and security
KW - privacy attack
KW - X-ray
Y1 - 2022
U6 - https://doi.org/10.3390/s22145195
SN - 1424-8220
VL - 22
PB - MDPI
CY - Basel, Schweiz
ET - 14
ER -
TY - GEN
A1 - Ziegler, Joceline
A1 - Pfitzner, Bjarne
A1 - Schulz, Heinrich
A1 - Saalbach, Axel
A1 - Arnrich, Bert
T1 - Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-ray Data
T2 - Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät
N2 - Privacy regulations and the physical distribution of heterogeneous data are often primary concerns for the development of deep learning models in a medical context. This paper evaluates the feasibility of differentially private federated learning for chest X-ray classification as a defense against data privacy attacks. To the best of our knowledge, we are the first to directly compare the impact of differentially private training on two different neural network architectures, DenseNet121 and ResNet50. Extending the federated learning environments previously analyzed in terms of privacy, we simulated a heterogeneous and imbalanced federated setting by distributing images from the public CheXpert and Mendeley chest X-ray datasets unevenly among 36 clients.
Both non-private baseline models achieved an area under the receiver operating characteristic curve (AUC) of 0.94 on the binary classification task of detecting the presence of a medical finding. We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients. The attack was particularly successful during later training stages. To mitigate the risk of a privacy breach, we integrated Rényi differential privacy with a Gaussian noise mechanism into local model training. We evaluate model performance and attack vulnerability for privacy budgets ε ∈ {1, 3, 6, 10}. The DenseNet121 achieved the best utility-privacy trade-off with an AUC of 0.94 for ε = 6. Model performance deteriorated slightly for individual clients compared to the non-private baseline. The ResNet50 only reached an AUC of 0.76 in the same privacy setting. Its performance was inferior to that of the DenseNet121 for all considered privacy constraints, suggesting that the DenseNet121 architecture is more robust to differentially private training.
T3 - Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät - 14
KW - federated learning
KW - privacy and security
KW - privacy attack
KW - X-ray
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-581322
IS - 14
ER -
TY - GEN
A1 - Zenner, Alexander M.
A1 - Böttinger, Erwin
A1 - Konigorski, Stefan
T1 - StudyMe
BT - a new mobile app for user-centric N-of-1 trials
T2 - Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät
N2 - N-of-1 trials are multi-crossover self-experiments that allow individuals to systematically evaluate the effect of interventions on their personal health goals. Although several tools for N-of-1 trials exist, there is a gap in supporting non-experts in conducting their own user-centric trials. In this study, we present StudyMe, an open-source mobile application that is freely available from https://play.google.com/store/apps/details?id=health.studyu.me and offers users flexibility and guidance in configuring every component of their trials. We also present research that informed the development of StudyMe, focusing on trial creation. Through an initial survey with 272 participants, we learned that individuals are interested in a variety of personal health aspects and have unique ideas on how to improve them. In an iterative, user-centered development process with intermediate user tests, we developed StudyMe, which features an educational part to communicate N-of-1 trial concepts. A final empirical evaluation of StudyMe showed that all participants were able to create their own trials successfully using StudyMe and that the app achieved a very good usability rating. Our findings suggest that StudyMe provides a significant step towards enabling individuals to apply a systematic, science-oriented approach to personalize health-related interventions and behavior modifications in their everyday lives.
T3 - Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät - 18
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-589763
IS - 18
ER -
TY - JOUR
A1 - Zenner, Alexander M.
A1 - Böttinger, Erwin
A1 - Konigorski, Stefan
T1 - StudyMe
BT - a new mobile app for user-centric N-of-1 trials
JF - Trials
N2 - N-of-1 trials are multi-crossover self-experiments that allow individuals to systematically evaluate the effect of interventions on their personal health goals. Although several tools for N-of-1 trials exist, there is a gap in supporting non-experts in conducting their own user-centric trials. In this study, we present StudyMe, an open-source mobile application that is freely available from https://play.google.com/store/apps/details?id=health.studyu.me and offers users flexibility and guidance in configuring every component of their trials. We also present research that informed the development of StudyMe, focusing on trial creation. Through an initial survey with 272 participants, we learned that individuals are interested in a variety of personal health aspects and have unique ideas on how to improve them. In an iterative, user-centered development process with intermediate user tests, we developed StudyMe, which features an educational part to communicate N-of-1 trial concepts. A final empirical evaluation of StudyMe showed that all participants were able to create their own trials successfully using StudyMe and that the app achieved a very good usability rating. Our findings suggest that StudyMe provides a significant step towards enabling individuals to apply a systematic, science-oriented approach to personalize health-related interventions and behavior modifications in their everyday lives.
Y1 - 2022
U6 - https://doi.org/10.1186/s13063-022-06893-7
SN - 1745-6215
VL - 23
PB - BioMed Central
CY - London
ER -
TY - JOUR
A1 - Wittig, Alice
A1 - Miranda, Fabio Malcher
A1 - Hölzer, Martin
A1 - Altenburg, Tom
A1 - Bartoszewicz, Jakub Maciej
A1 - Beyvers, Sebastian
A1 - Dieckmann, Marius Alfred
A1 - Genske, Ulrich
A1 - Giese, Sven Hans-Joachim
A1 - Nowicka, Melania
A1 - Richard, Hugues
A1 - Schiebenhoefer, Henning
A1 - Schmachtenberg, Anna-Juliane
A1 - Sieben, Paul
A1 - Tang, Ming
A1 - Tembrockhaus, Julius
A1 - Renard, Bernhard Y.
A1 - Fuchs, Stephan
T1 - CovRadar
BT - continuously tracking and filtering SARS-CoV-2 mutations for genomic surveillance
JF - Bioinformatics
N2 - The ongoing pandemic caused by SARS-CoV-2 emphasizes the importance of genomic surveillance to understand the evolution of the virus, to monitor the viral population, and to plan epidemiological responses. Detailed analysis, easy visualization, and intuitive filtering of the latest viral sequences are powerful tools for this purpose. We present CovRadar, a tool for genomic surveillance of the SARS-CoV-2 Spike protein. CovRadar consists of an analytical pipeline and a web application that enable the analysis and visualization of hundreds of thousands of sequences. First, CovRadar extracts the regions of interest using local alignment, then builds a multiple sequence alignment, infers variants and consensus, and finally presents the results in an interactive app, making accessing and reporting simple, flexible and fast.
Y1 - 2022
U6 - https://doi.org/10.1093/bioinformatics/btac411
SN - 1367-4803
SN - 1367-4811
VL - 38
IS - 17
SP - 4223
EP - 4225
PB - Oxford Univ. Press
CY - Oxford
ER -
TY - JOUR
A1 - Wiemker, Veronika
A1 - Bunova, Anna
A1 - Neufeld, Maria
A1 - Gornyi, Boris
A1 - Yurasova, Elena
A1 - Konigorski, Stefan
A1 - Kalinina, Anna
A1 - Kontsevaya, Anna
A1 - Ferreira-Borges, Carina
A1 - Probst, Charlotte
T1 - Pilot study to evaluate usability and acceptability of the 'Animated Alcohol Assessment Tool' in Russian primary healthcare
JF - Digital Health
N2 - Background and aims: Accurate and user-friendly assessment tools quantifying alcohol consumption are a prerequisite to effective prevention and treatment programmes, including Screening and Brief Intervention. Digital tools offer new potential in this field. We developed the 'Animated Alcohol Assessment Tool' (AAA-Tool), a mobile app providing an interactive version of the World Health Organization's Alcohol Use Disorders Identification Test (AUDIT) that facilitates the description of individual alcohol consumption via culturally informed animation features. This pilot study evaluated the Russia-specific version of the Animated Alcohol Assessment Tool with regard to (1) its usability and acceptability in a primary healthcare setting, (2) the plausibility of its alcohol consumption assessment results and (3) the adequacy of its Russia-specific vessel and beverage selection. Methods: Convenience samples of 55 patients (47% female) and 15 healthcare practitioners (80% female) in two Russian primary healthcare facilities self-administered the Animated Alcohol Assessment Tool and rated their experience on the Mobile Application Rating Scale – User Version. Usage data were collected automatically during app use, and additional feedback on regional content was elicited in semi-structured interviews. Results: On average, patients completed the Animated Alcohol Assessment Tool in 6:38 min (SD = 2.49, range = 3.00–17.16). User satisfaction was good, with all Mobile Application Rating Scale – User Version subscale scores averaging >3 out of 5 points. A majority of patients (53%) and practitioners (93%) would recommend the tool to 'many people' or 'everyone'. Assessed alcohol consumption was plausible, with a low number (14%) of logically impossible entries. Most patients reported the Animated Alcohol Assessment Tool to reflect all vessels (78%) and all beverages (71%) they typically used. Conclusion: High acceptability ratings by patients and healthcare practitioners, acceptable completion time, plausible alcohol usage assessment results and perceived adequacy of region-specific content underline the Animated Alcohol Assessment Tool's potential to provide a novel approach to alcohol assessment in primary healthcare. After its validation, the Animated Alcohol Assessment Tool might contribute to reducing alcohol-related harm by facilitating Screening and Brief Intervention implementation in Russia and beyond.
KW - Alcohol use assessment
KW - Alcohol Use Disorders Identification Test
KW - screening tools
KW - digital health
KW - mobile applications
KW - Russia
KW - primary healthcare
KW - usability
KW - acceptability
Y1 - 2022
U6 - https://doi.org/10.1177/20552076211074491
SN - 2055-2076
VL - 8
PB - Sage Publications
CY - London
ER -
TY - JOUR
A1 - Ulrich, Jens-Uwe
A1 - Lutfi, Ahmad
A1 - Rutzen, Kilian
A1 - Renard, Bernhard Y.
T1 - ReadBouncer
BT - precise and scalable adaptive sampling for nanopore sequencing
JF - Bioinformatics
N2 - Motivation: Nanopore sequencers allow targeted sequencing of interesting nucleotide sequences by rejecting other sequences from individual pores.
This feature facilitates the enrichment of low-abundance sequences by depleting overrepresented ones in silico. Existing tools for adaptive sampling either apply signal alignment, which cannot handle human-sized reference sequences, or apply read mapping in sequence space, relying on fast graphics processing unit (GPU) base callers for real-time read rejection. Using nanopore long-read mapping tools is also not optimal when mapping shorter reads as usually analyzed in adaptive sampling applications. Results: Here, we present a new approach for nanopore adaptive sampling that combines fast CPU and GPU base calling with read classification based on Interleaved Bloom Filters. ReadBouncer improves the potential enrichment of low-abundance sequences by its high read classification sensitivity and specificity, outperforming existing tools in the field. It robustly removes even reads belonging to large reference sequences while running on commodity hardware without GPUs, making adaptive sampling accessible for in-field researchers. ReadBouncer also provides a user-friendly interface and installer files for end-users without a bioinformatics background.
Y1 - 2022
U6 - https://doi.org/10.1093/bioinformatics/btac223
SN - 1367-4803
SN - 1367-4811
VL - 38
IS - SUPPL 1
SP - 153
EP - 160
PB - Oxford Univ. Press
CY - Oxford
ER -
TY - JOUR
A1 - Tang, Mitchell
A1 - Nakamoto, Carter H.
A1 - Stern, Ariel Dora
A1 - Mehrotra, Ateev
T1 - Trends in remote patient monitoring use in traditional Medicare
JF - JAMA Internal Medicine
N2 - This cross-sectional study uses traditional Medicare claims data to assess trends in general remote patient monitoring from January 2018 through September 2021.
Y1 - 2022
U6 - https://doi.org/10.1001/jamainternmed.2022.3043
SN - 2168-6106
SN - 2168-6114
VL - 182
IS - 9
SP - 1005
EP - 1006
PB - American Medical Association
CY - Chicago
ER -
TY - THES
A1 - Sukmana, Muhammad Ihsan Haikal
T1 - Security improvements for enterprise file synchronization and sharing system
T1 - Sicherheitsverbesserungen für Enterprise File Synchronization und Sharing System
N2 - With the fast rise of cloud computing adoption in the past few years, more companies are migrating their confidential files from their private data centers to the cloud to support the enterprise's digital transformation process. Enterprise file synchronization and sharing (EFSS) is one of the solutions offered to enterprises to store their files in the cloud with secure and easy file sharing and collaboration between their employees. However, the rapidly increasing number of cyberattacks on the cloud means that a company's files in the cloud might be stolen or leaked to the public. It is then the responsibility of the EFSS system to ensure that the company's confidential files are only accessible to authorized employees. CloudRAID is a secure personal cloud storage research collaboration project that provides data availability and confidentiality in the cloud. It combines erasure coding and cryptographic techniques to securely store files as multiple encrypted file chunks in various cloud service providers (CSPs). However, several aspects of CloudRAID's concept are unsuitable for secure and scalable enterprise cloud storage solutions, particularly the key management system, location-based access control, multi-cloud storage management, and cloud file access monitoring. This Ph.D. thesis focuses on CloudRAID for Business (CfB) as it resolves four main challenges of CloudRAID's concept for a secure and scalable EFSS system.
First, the key management system is implemented using an attribute-based encryption scheme to provide secure and scalable intra-company and inter-company file-sharing functionalities. Second, an Internet-based, location-based file access control functionality is introduced to ensure that files can only be accessed at pre-determined trusted locations. Third, a unified multi-cloud storage resource management framework is utilized to securely manage the cloud storage resources available in various CSPs for authorized CfB stakeholders. Lastly, a multi-cloud storage monitoring system is introduced to monitor the activities of files in the cloud using the cloud storage log files generated by multiple CSPs. In summary, this thesis helps the CfB system to provide holistic security for a company's confidential files at the cloud level, system level, and file level, ensuring that only the authorized company and its employees can access the files.
N2 - Mit der raschen Verbreitung von Cloud Computing in den letzten Jahren verlagern immer mehr Unternehmen ihre vertraulichen Dateien von ihren privaten Rechenzentren in die Cloud, um den digitalen Transformationsprozess des Unternehmens zu unterstützen. Enterprise File Synchronization and Share (EFSS) ist eine der Lösungen, die Unternehmen angeboten werden, um ihre Dateien in der Cloud zu speichern und so eine sichere und einfache gemeinsame Nutzung von Dateien und die Zusammenarbeit zwischen den Mitarbeitern zu ermöglichen. Die schnell wachsende Zahl von Cyberangriffen auf die Cloud kann jedoch dazu führen, dass die in der Cloud gespeicherten Unternehmensdateien gestohlen werden oder an die Öffentlichkeit gelangen. Es liegt dann in der Verantwortung des EFSS-Systems, sicherzustellen, dass die vertraulichen Dateien des Unternehmens nur für autorisierte Mitarbeiter zugänglich sind. CloudRAID ist ein Forschungsprojekt für sichere persönliche Cloud-Speicher, das die Verfügbarkeit und Vertraulichkeit von Daten in der Cloud gewährleistet. Es kombiniert Lösch- und Verschlüsselungstechniken, um Dateien in Form von mehreren verschlüsselten Datei-Blöcken bei verschiedenen Cloud-Service-Providern (CSPs) sicher zu speichern. Mehrere Aspekte des CloudRAID-Konzepts sind jedoch für sichere und skalierbare Cloud-Speicherlösungen für Unternehmen ungeeignet, insbesondere das Schlüsselverwaltungssystem, die standortbasierte Zugriffskontrolle, die Verwaltung mehrerer Cloud-Speicher und die Überwachung des Zugriffs auf Cloud-Dateien. Diese Doktorarbeit konzentriert sich auf CloudRAID for Business (CfB), da es die vier wichtigsten Herausforderungen des CloudRAID-Konzepts für ein sicheres und skalierbares EFSS-System löst. Erstens wird das Verwaltungssystem der kryptografischen Schlüssel unter Verwendung des attributbasierten Verschlüsselungsschemas implementiert, um sichere und skalierbare unternehmensinterne und -übergreifende Dateifreigabefunktionen bereitzustellen. Zweitens wird eine internetbasierte Dateizugriffskontrolle eingeführt, um sicherzustellen, dass der Zugriff auf Dateien nur an vorher festgelegten vertrauenswürdigen Standorten möglich ist. Drittens wird ein einheitlicher Rahmen für die Verwaltung von Multi-Cloud-Speicherressourcen verwendet, um die in verschiedenen CSPs verfügbaren Cloud-Speicherressourcen für autorisierte CfB-Akteure sicher zu verwalten. Schließlich wird ein Multi-Cloud-Storage-Monitoring-System eingeführt, um die Aktivitäten von Dateien in der Cloud anhand der von mehreren CSPs generierten Cloud-Storage-Protokolldateien zu überwachen.
Zusammenfassend lässt sich sagen, dass diese Arbeit dem CfB-System hilft, ganzheitliche Sicherheit für vertrauliche Unternehmensdateien auf Cloud-, System- und Dateiebene zu bieten, um sicherzustellen, dass nur autorisierte Unternehmen und ihre Mitarbeiter auf die Dateien zugreifen können.
KW - CloudRAID
KW - CloudRAID for Business
KW - Cloud Computing
KW - Cybersecurity
KW - Cryptography
KW - Access Control
KW - Enterprise File Synchronization and Share
KW - Zugriffskontrolle
KW - Cloud Computing
KW - CloudRAID
KW - CloudRAID for Business
KW - Kryptografie
KW - Cybersicherheit
KW - Unternehmensdateien synchronisieren und teilen
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-549996
ER -
TY - JOUR
A1 - Shams, Boshra
A1 - Wang, Ziqian
A1 - Roine, Timo
A1 - Aydogan, Dogu Baran
A1 - Vajkoczy, Peter
A1 - Lippert, Christoph
A1 - Picht, Thomas
A1 - Fekonja, Lucius Samo
T1 - Machine learning-based prediction of motor status in glioma patients using diffusion MRI metrics along the corticospinal tract
JF - Brain Communications
N2 - Shams et al. report that glioma patients' motor status is predicted accurately by diffusion MRI metrics along the corticospinal tract based on a support vector machine method, reaching an overall accuracy of 77%. They show that these metrics are more effective than demographic and clinical variables. Along-tract statistics enable white matter characterization using various diffusion MRI metrics. These diffusion models reveal detailed insights into white matter microstructural changes with development, pathology and function. Here, we aim at assessing the clinical utility of diffusion MRI metrics along the corticospinal tract, investigating whether motor glioma patients can be classified with respect to their motor status. We retrospectively included 116 brain tumour patients suffering from either left or right supratentorial, unilateral World Health Organization Grades II, III and IV gliomas with a mean age of 53.51 ± 16.32 years. Around 37% of patients presented with preoperative motor function deficits according to the Medical Research Council scale. At group-level comparison, the highest non-overlapping diffusion MRI differences were detected in the superior portion of the tracts' profiles: fractional anisotropy and fibre density decrease; apparent diffusion coefficient, axial diffusivity, and radial diffusivity increase. To predict motor deficits, we developed a method based on a support vector machine using histogram-based features of diffusion MRI tract profiles (e.g. mean, standard deviation, kurtosis and skewness), following a recursive feature elimination method. Our model achieved high performance (74% sensitivity, 75% specificity, 74% overall accuracy and 77% area under the curve). We found that apparent diffusion coefficient, fractional anisotropy and radial diffusivity contributed more than other features to the model. Incorporating the patient demographics and clinical features such as age, tumour World Health Organization grade, tumour location, gender and resting motor threshold did not affect the model's performance, revealing that these features were not as effective as microstructural measures. These results shed light on the potential patterns of tumour-related microstructural white matter changes in the prediction of functional deficits.
KW - machine learning
KW - support vector machine
KW - tractography
KW - diffusion MRI
KW - corticospinal tract
Y1 - 2022
U6 - https://doi.org/10.1093/braincomms/fcac141
SN - 2632-1297
VL - 4
IS - 3
PB - Oxford University Press
CY - Oxford
ER -
TY - GEN
A1 - Serth, Sebastian
A1 - Staubitz, Thomas
A1 - van Elten, Martin
A1 - Meinel, Christoph
ED - Gamage, Dilrukshi
T1 - Measuring the effects of course modularizations in online courses for life-long learners
T2 - Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät
N2 - Many participants in Massive Open Online Courses are full-time employees seeking greater flexibility in their time commitment and the available learning paths. We recently addressed these requirements by splitting up our 6-week courses into three 2-week modules followed by a separate exam. Modularizing courses offers many advantages: Shorter modules are more sustainable and can be combined, reused, and incorporated into learning paths more easily. Time flexibility for learners is also improved as exams can now be offered multiple times per year, while the learning content is available independently. In this article, we answer the question of what impact this modularization has on key learning metrics, such as course completion rates, learning success, and no-show rates. Furthermore, we investigate the influence of longer breaks between modules on these metrics. According to our analysis, course modules facilitate more selective learning behaviors that encourage learners to focus on the topics they are most interested in. At the same time, participation in overarching exams across all modules seems to be less appealing compared to an integrated exam of a 6-week course. While breaks between the modules increase the distinctive appearance of individual modules, a break before the final exam further reduces initial interest in the exams. We further reveal that participation in self-paced courses as a preparation for the final exam is unlikely to attract new learners to the course offerings, even though learners' performance is comparable to instructor-paced courses. The results of our long-term study on course modularization provide a solid foundation for future research and enable educators to make informed decisions about the design of their courses.
T3 - Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät - 17
KW - Massive Open Online Course (MOOC)
KW - course design
KW - modularization
KW - learning path
KW - flexibility
KW - e-learning
KW - assignments
KW - self-paced learning
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-589182
IS - 17
ER -
TY - JOUR
A1 - Serth, Sebastian
A1 - Staubitz, Thomas
A1 - van Elten, Martin
A1 - Meinel, Christoph
ED - Gamage, Dilrukshi
T1 - Measuring the effects of course modularizations in online courses for life-long learners
JF - Frontiers in Education
N2 - Many participants in Massive Open Online Courses are full-time employees seeking greater flexibility in their time commitment and the available learning paths. We recently addressed these requirements by splitting up our 6-week courses into three 2-week modules followed by a separate exam. Modularizing courses offers many advantages: Shorter modules are more sustainable and can be combined, reused, and incorporated into learning paths more easily.
Time flexibility for learners is also improved as exams can now be offered multiple times per year, while the learning content is available independently. In this article, we answer the question of what impact this modularization has on key learning metrics, such as course completion rates, learning success, and no-show rates. Furthermore, we investigate the influence of longer breaks between modules on these metrics. According to our analysis, course modules facilitate more selective learning behaviors that encourage learners to focus on the topics they are most interested in. At the same time, participation in overarching exams across all modules seems to be less appealing compared to an integrated exam of a 6-week course. While breaks between the modules increase the distinctive appearance of individual modules, a break before the final exam further reduces initial interest in the exams. We further reveal that participation in self-paced courses as a preparation for the final exam is unlikely to attract new learners to the course offerings, even though learners' performance is comparable to instructor-paced courses. The results of our long-term study on course modularization provide a solid foundation for future research and enable educators to make informed decisions about the design of their courses.
KW - Massive Open Online Course (MOOC)
KW - course design
KW - modularization
KW - learning path
KW - flexibility
KW - e-learning
KW - assignments
KW - self-paced learning
Y1 - 2022
U6 - https://doi.org/10.3389/feduc.2022.1008545
SN - 2504-284X
VL - 7
PB - Frontiers
CY - Lausanne, Schweiz
ER -
TY - BOOK
A1 - Schneider, Sven
A1 - Maximova, Maria
A1 - Giese, Holger
T1 - Invariant Analysis for Multi-Agent Graph Transformation Systems using k-Induction
N2 - The analysis of behavioral models such as Graph Transformation Systems (GTSs) is of central importance in model-driven engineering. However, GTSs often result in intractably large or even infinite state spaces and may be equipped with multiple or even infinitely many start graphs. To mitigate these problems, static analysis techniques based on finite symbolic representations of sets of states or paths thereof have been devised. We focus on the technique of k-induction for establishing invariants specified using graph conditions. To this end, k-induction generates symbolic paths backwards from a symbolic state representing a violation of a candidate invariant to gather information on how that violation could have been reached, possibly obtaining contradictions to assumed invariants. However, GTSs where multiple agents regularly perform actions independently from each other cannot currently be analyzed using this technique, as the independence among backward steps may prevent the gathering of relevant knowledge altogether. In this paper, we extend k-induction to GTSs with multiple agents, thereby supporting a wide range of additional GTSs. As a running example, we consider an unbounded number of shuttles driving on a large-scale track topology, which adjust their velocity to speed limits to avoid derailing. As our central contribution, we develop pruning techniques based on causality and independence among backward steps and verify that k-induction remains sound under this adaptation as well as terminates in cases where it did not terminate before.
N2 - Die Analyse von Verhaltensmodellen wie Graphtransformationssystemen (GTSs) ist von zentraler Bedeutung im Model Driven Engineering.
GTSs führen jedoch häufig zu unhandhabbar großen oder sogar unendlichen Zustandsräumen und können mit mehreren oder sogar unendlich vielen Startgraphen ausgestattet sein. Um diese Probleme abzumildern, wurden statische Analysetechniken entwickelt, die auf endlichen symbolischen Darstellungen von Mengen von Zuständen oder Pfaden basieren. Wir konzentrieren uns auf die Technik der k-Induktion zur Ermittlung von Invarianten, die unter Verwendung von Graphbedingungen spezifiziert sind. Zum Zweck der Analyse erzeugt die k-Induktion symbolische Rückwärtspfade von einem symbolischen Zustand, der eine Verletzung einer Kandidateninvariante darstellt, um Informationen darüber zu sammeln, wie diese Verletzung erreicht werden konnte, wodurch möglicherweise Widersprüche zu angenommenen Invarianten gefunden werden. GTSs, bei denen mehrere Agenten regelmäßig unabhängig voneinander Aktionen ausführen, können derzeit jedoch nicht mit dieser Technik analysiert werden, da die Unabhängigkeit zwischen Rückwärtsschritten das Sammeln von relevantem Wissen möglicherweise verhindert. In diesem Artikel erweitern wir die k-Induktion auf GTSs mit mehreren Agenten und unterstützen dadurch eine breite Palette zusätzlicher GTSs. Als laufendes Beispiel betrachten wir eine unbegrenzte Anzahl von Shuttles, die auf einer großen Tracktopologie fahren und die ihre Geschwindigkeit an Geschwindigkeitsbegrenzungen anpassen, um ein Entgleisen zu vermeiden. Als zentralen Beitrag entwickeln wir Beschneidungstechniken basierend auf Kausalität und Unabhängigkeit zwischen Rückwärtsschritten und verifizieren, dass die k-Induktion unter dieser Anpassung korrekt bleibt und in Fällen terminiert, in denen sie zuvor nicht terminierte.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 143
KW - k-inductive invariant checking
KW - causality
KW - parallel and sequential independence
KW - symbolic analysis
KW - bounded backward model checking
KW - k-induktive Invariantenprüfung
KW - Kausalität
KW - parallele und sequentielle Unabhängigkeit
KW - symbolische Analyse
KW - Bounded Backward Model Checking
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-545851
SN - 978-3-86956-531-6
SN - 1613-5652
SN - 2191-1665
IS - 143
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - BOOK
A1 - Schneider, Sven
A1 - Maximova, Maria
A1 - Giese, Holger
T1 - Probabilistic metric temporal graph logic
N2 - Cyber-physical systems often encompass complex concurrent behavior with timing constraints and probabilistic failures on demand. The analysis of whether such systems with probabilistic timed behavior adhere to a given specification is essential. When the states of the system can be represented by graphs, the rule-based formalism of Probabilistic Timed Graph Transformation Systems (PTGTSs) can be used to suitably capture structure dynamics as well as probabilistic and timed behavior of the system. Model checking support for PTGTSs w.r.t. properties specified using Probabilistic Timed Computation Tree Logic (PTCTL) has already been presented. Moreover, for timed graph-based runtime monitoring, Metric Temporal Graph Logic (MTGL) has been developed for stating metric temporal properties on identified subgraphs and their structural changes over time.
In this paper, we (a) extend MTGL to the Probabilistic Metric Temporal Graph Logic (PMTGL) by allowing for the specification of probabilistic properties, (b) adapt our MTGL satisfaction checking approach to PTGTSs, and (c) combine the approaches for PTCTL model checking and MTGL satisfaction checking to obtain a Bounded Model Checking (BMC) approach for PMTGL. In our evaluation, we apply an implementation of our BMC approach in AutoGraph to a running example.
N2 - Cyber-physische Systeme umfassen häufig ein komplexes nebenläufiges Verhalten mit Zeitbeschränkungen und probabilistischen Fehlern auf Anforderung. Die Analyse, ob solche Systeme mit probabilistischem gezeitetem Verhalten einer vorgegebenen Spezifikation entsprechen, ist essentiell. Wenn die Zustände des Systems durch Graphen dargestellt werden können, kann der regelbasierte Formalismus von probabilistischen gezeiteten Graphtransformationssystemen (PTGTSs) verwendet werden, um die Strukturdynamik sowie das probabilistische und gezeitete Verhalten des Systems geeignet zu erfassen. Die Modellprüfungsunterstützung für PTGTSs bzgl. Eigenschaften, die unter Verwendung von Probabilistic Timed Computation Tree Logic (PTCTL) spezifiziert wurden, wurde bereits entwickelt. Darüber hinaus wurde das gezeitete graphenbasierte Laufzeitmonitoring mittels metrischer temporaler Graphlogik (MTGL) entwickelt, um metrische temporale Eigenschaften auf identifizierten Untergraphen und ihre strukturellen Änderungen über die Zeit zu erfassen. In diesem Artikel (a) erweitern wir MTGL auf die probabilistische metrische temporale Graphlogik (PMTGL), indem wir die Spezifikation probabilistischer Eigenschaften zulassen, (b) passen unseren MTGL-Prüfungsansatz auf PTGTSs an und (c) kombinieren die Ansätze für PTCTL-Modellprüfung und MTGL-Prüfung, um einen beschränkten Modellprüfungsansatz (BMC-Ansatz) für PMTGL zu erhalten. In unserer Auswertung wenden wir eine Implementierung unseres BMC-Ansatzes in AutoGraph auf ein Beispiel an.
T3 - Technische Berichte des Hasso-Plattner-Instituts für Digital Engineering an der Universität Potsdam - 146
KW - cyber-physical systems
KW - probabilistic timed systems
KW - qualitative analysis
KW - quantitative analysis
KW - bounded model checking
KW - cyber-physische Systeme
KW - probabilistische gezeitete Systeme
KW - qualitative Analyse
KW - quantitative Analyse
KW - Bounded Model Checking
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-545867
SN - 978-3-86956-532-3
SN - 1613-5652
SN - 2191-1665
IS - 146
PB - Universitätsverlag Potsdam
CY - Potsdam
ER -
TY - THES
A1 - Schirneck, Friedrich Martin
T1 - Enumeration algorithms in data profiling
N2 - Data profiling is the extraction of metadata from relational databases. An important class of metadata is multi-column dependencies. They are associated with two computational tasks. The detection problem is to decide whether a dependency of a given type and size holds in a database. The discovery problem instead asks to enumerate all valid dependencies of that type. We investigate the two problems for three types of dependencies: unique column combinations (UCCs), functional dependencies (FDs), and inclusion dependencies (INDs). We first treat the parameterized complexity of the detection variants. We prove that the detection of UCCs and FDs, respectively, is W[2]-complete when parameterized by the size of the dependency. The detection of INDs is shown to be one of the first natural W[3]-complete problems.
We further settle the enumeration complexity of the three discovery problems by presenting parsimonious equivalences with well-known enumeration problems. Namely, the discovery of UCCs is equivalent to the famous transversal hypergraph problem of enumerating the hitting sets of a hypergraph. The discovery of FDs is equivalent to the simultaneous enumeration of the hitting sets of multiple input hypergraphs. Finally, the discovery of INDs is shown to be equivalent to enumerating the satisfying assignments of antimonotone, 3-normalized Boolean formulas. In the remainder of the thesis, we design and analyze discovery algorithms for unique column combinations. Since this is as hard as the general transversal hypergraph problem, it is an open question whether the UCCs of a database can be computed in output-polynomial time in the worst case. For the analysis, we therefore focus on instances that are structurally close to databases in practice, most notably, inputs that have small solutions. The equivalence between UCCs and hitting sets transfers the computational hardness, but also allows us to apply ideas from hypergraph theory to data profiling. We devise a discovery algorithm that runs in polynomial space on arbitrary inputs and achieves polynomial delay whenever the maximum size of any minimal UCC is bounded. Central to our approach is the extension problem for minimal hitting sets, that is, to decide for a set of vertices whether it is contained in any minimal solution. We prove that this is yet another problem that is complete for the complexity class W[3] when parameterized by the size of the set that is to be extended. We also give several conditional lower bounds under popular hardness conjectures such as the Strong Exponential Time Hypothesis (SETH). The lower bounds suggest that the running time of our algorithm for the extension problem is close to optimal. We further conduct an empirical analysis of our discovery algorithm on real-world databases to confirm that the hitting set perspective on data profiling also has merits in practice. We show that the resulting enumeration times undercut their theoretical worst-case bounds on practical data, and that the memory consumption of our method is much smaller than that of previous solutions. During the analysis, we make two observations about the connection between databases and their corresponding hypergraphs. On the one hand, the hypergraph representations containing all relevant information are usually significantly smaller than the original inputs. On the other hand, obtaining those hypergraphs is the actual bottleneck of any practical application. The latter often takes much longer than enumerating the solutions, which is in stark contrast to the fact that the preprocessing is guaranteed to be polynomial while the enumeration may take exponential time. To make the first observation rigorous, we introduce a maximum-entropy model for non-uniform random hypergraphs and prove that their expected number of minimal hyperedges undergoes a phase transition with respect to the total number of edges. The result also explains why larger databases may have smaller hypergraphs. Motivated by the second observation, we present a new kind of UCC discovery algorithm called Hitting Set Enumeration with Partial Information and Validation (HPIValid). It utilizes the fast enumeration times observed in practice in order to speed up the computation of the corresponding hypergraph.
This way, we sidestep the bottleneck while maintaining the advantages of the hitting set perspective. An exhaustive empirical evaluation shows that HPIValid outperforms the current state of the art in UCC discovery. It is capable of processing databases that were previously out of reach for data profiling. N2 - Data Profiling ist die Erhebung von Metadaten über relationale Datenbanken. Eine wichtige Klasse von Metadaten sind Abhängigkeiten zwischen verschiedenen Spalten. Für diese gibt es zwei wesentliche algorithmische Probleme. Beim Detektionsproblem soll entschieden werden, ob eine Datenbank eine Abhängigkeit eines bestimmt Typs und Größe aufweist; beim Entdeckungsproblem müssen dagegen alle gültigen Abhängigkeiten aufgezählt werden. Wir behandeln beide Probleme für drei Typen von Abhängigkeiten: eindeutige Spaltenkombinationen (UCCs), funktionale Abhängigkeiten (FDs) und Inklusionsabhängigkeiten (INDs). Wir untersuchen zunächst deren parametrisierte Komplexität und beweisen, dass die Detektion von UCCs und FDs W[2]-vollständig ist, wobei die Größe der Abhängigkeit als Parameter dient. Ferner identifizieren wir die Detektion von INDs als eines der ersten natürlichen W[3]-vollständigen Probleme. Danach klären wir die Aufzählungskomplexität der drei Entdeckungsprobleme, indem wir lösungserhaltende Äquivalenzen zu bekannten Aufzählungsproblemen konstruieren. Die Entdeckung von UCCs zeigt sich dabei als äquivalent zum berühmten Transversal-Hypergraph-Problem, bei dem die Hitting Sets eines Hypergraphens aufzuzählen sind. Die Entdeckung von FDs ist äquivalent zum simultanen Aufzählen der Hitting Sets mehrerer Hypergraphen und INDs sind äquivalent zu den erfüllenden Belegungen antimonotoner, 3-normalisierter boolescher Formeln. Anschließend beschäftigen wir uns mit dem Entwurf und der Analyse von Entdeckungsalgorithmen für eindeutige Spaltenkombinationen. Es ist unbekannt, ob alle UCCs einer Datenbank in worst-case ausgabepolynomieller Zeit berechnet werden können, da dies genauso schwer ist wie das allgemeine Transversal-Hypergraph-Problem. Wir konzentrieren uns daher bei der Analyse auf Instanzen, die strukturelle Ähnlichkeiten mit Datenbanken aus der Praxis aufweisen; insbesondere solche, deren Lösungen sehr klein sind. Die Äquivalenz zwischen UCCs und Hitting Sets überträgt zwar die algorithmische Schwere, erlaubt es uns aber auch Konzepte aus der Theorie von Hypergraphen auf das Data Profiling anzuwenden. Wir entwickeln daraus einen Entdeckungsalgorithmus, dessen Berechnungen auf beliebigen Eingaben nur polynomiellen Platz benötigen. Ist zusätzlich die Maximalgröße der minimalen UCCs durch eine Konstante beschränkt, so hat der Algorithmus außerdem polynomiell beschränkten Delay. Der zentrale Baustein unseres Ansatzes ist das Erweiterbarkeitsproblem für minimale Hitting Sets, das heißt, die Entscheidung, ob eine gegebene Knotenmenge in einer minimalen Lösung vorkommt. Wir zeigen, dass dies, mit der Größe der Knotenmenge als Parameter, ein weiteres natürliches Problem ist, welches vollständig für die Komplexitätsklasse W[3] ist. Außerdem beweisen wir bedingte untere Laufzeitschranken unter der Annahme gängiger Schwere-Vermutungen wie der Starken Exponentialzeithypothese (SETH). Dies belegt, dass die Laufzeit unseres Algorithmus für das Erweiterbarkeitsproblem beinahe optimal ist. Eine empirische Untersuchung unseres Entdeckungsalgorithmus auf realen Daten bestätigt, dass die Hitting-Set-Perspektive auch praktische Vorteile für das Data Profiling hat. 
So sind die Berechnungszeiten für das Finden der UCCs bereits sehr schnell und der Speicherverbrauch unseres Ansatzes ist deutlich geringer als der existierender Methoden. Die Untersuchung zeigt auch zwei interessante Verbindungen zwischen Datenbanken und ihren zugehörigen Hypergraphen: Einerseits sind die Hypergraphen, die alle relevanten Informationen enthalten, meist viel kleiner als die Eingabe-Datenbanken, andererseits ist die Berechnung dieser Hypergraphen die eigentliche Engstelle in der Praxis. Sie nimmt in der Regel viel mehr Zeit in Anspruch als das Aufzählen aller Lösungen. Dies steht im deutlichen Gegensatz zu den bekannten theoretischen Resultaten, die besagen, dass die Hypergraph-Vorberechnung polynomiell ist, während der Aufzählungsschritt exponentielle Zeit benötigen kann. Um die erste Beobachtung zu formalisieren, führen wir ein Maximum-Entropie-Modell für nicht-uniforme Hypergraphen ein und zeigen, dass die erwartete Anzahl ihrer minimalen Hyperkanten einen Phasenübergang durchläuft. Unsere Ergebnisse erklären auch, warum größere Datenbanken mitunter kleinere Hypergraphen haben. Die zweite Beobachtung inspiriert uns zu einem Entdeckungsalgorithmus neuer Art, „Hitting Set Enumeration with Partial Information and Validation“ (HPIValid). Dieser nutzt die schnellen Aufzählungszeiten auf praktischen Daten aus, um die langwierige Berechnung des zu Grunde liegenden Hypergraphens zu beschleunigen. Dadurch umgehen wir die Engstelle und können gleichzeitig die Vorteile der Hitting-Set-Perspektive beibehalten. Eine ausgiebige empirische Analyse zeigt, dass HPIValid den aktuellen Stand der Technik im Bereich der UCC-Entdeckung deutlich übertrifft. HPIValid kann Datenbanken verarbeiten, für die Data Profiling zuvor unmöglich war.
T2 - Aufzählungsalgorithmen für das Data Profiling
KW - Chernoff-Hoeffding theorem
KW - data profiling
KW - enumeration algorithms
KW - hitting sets
KW - PhD thesis
KW - transversal hypergraph
KW - unique column combinations
KW - Satz von Chernoff-Hoeffding
KW - Dissertation
KW - Data Profiling
KW - Aufzählungsalgorithmen
KW - Hitting Sets
KW - Transversal-Hypergraph
KW - eindeutige Spaltenkombination
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-556726
ER -
TY - THES
A1 - Rothenberger, Ralf
T1 - Satisfiability thresholds for non-uniform random k-SAT
T1 - Erfüllbarkeitsschwellwerte für nicht-uniformes zufälliges k-SAT
N2 - Boolean Satisfiability (SAT) is one of the problems at the core of theoretical computer science. It was the first problem proven to be NP-complete by Cook and, independently, by Levin. Nowadays it is conjectured that SAT cannot be solved in sub-exponential time. Thus, it is generally assumed that SAT and its restricted version k-SAT are hard to solve. However, state-of-the-art SAT solvers can solve even huge practical instances of these problems in a reasonable amount of time. Why is SAT hard in theory, but easy in practice? One approach to answering this question is investigating the average runtime of SAT. In order to analyze this average runtime, the random k-SAT model was introduced. The model generates all k-SAT instances with n variables and m clauses with uniform probability. Researching random k-SAT led to a multitude of insights and tools for analyzing random structures in general.
One major observation was the emergence of the so-called satisfiability threshold: a phase transition point in the number of clauses at which the generated formulas go from asymptotically almost surely satisfiable to asymptotically almost surely unsatisfiable. Additionally, instances around the threshold seem to be particularly hard to solve. In this thesis we analyze a more general model of random k-SAT that we call non-uniform random k-SAT. In contrast to the classical model, each of the n Boolean variables now has a distinct probability of being drawn. For each of the m clauses we draw k variables according to the variable distribution and choose their signs uniformly at random. Non-uniform random k-SAT gives us more control over the distribution of Boolean variables in the resulting formulas. This allows us to tailor distributions to the ones observed in practice. Notably, non-uniform random k-SAT contains the previously proposed models random k-SAT, power-law random k-SAT and geometric random k-SAT as special cases. We analyze the satisfiability threshold in non-uniform random k-SAT depending on the variable probability distribution. Our goal is to derive conditions on this distribution under which an equivalent of the satisfiability threshold conjecture holds. We start with the arguably simpler case of non-uniform random 2-SAT. For this model we show under which conditions a threshold exists, whether it is sharp or coarse, and what the leading constant of the threshold function is. These are exactly the three ingredients one needs in order to prove or disprove the satisfiability threshold conjecture. For non-uniform random k-SAT with k ≥ 3 we only prove sufficient conditions under which a threshold exists. We also show some properties of the variable probabilities under which the threshold is sharp in this case. These are the first results on the threshold behavior of non-uniform random k-SAT.
N2 - Das Boolesche Erfüllbarkeitsproblem (SAT) ist eines der zentralsten Probleme der theoretischen Informatik. Es war das erste Problem, dessen NP-Vollständigkeit nachgewiesen wurde, von Cook und Levin unabhängig voneinander. Heutzutage wird vermutet, dass SAT nicht in subexponentieller Zeit gelöst werden kann. Darum wird allgemein angenommen, dass SAT und seine eingeschränkte Version k-SAT nicht effizient zu lösen sind. Trotzdem können moderne SAT-Solver sogar riesige Echtweltinstanzen dieser Probleme in angemessener Zeit lösen. Warum ist SAT theoretisch schwer, aber einfach in der Praxis? Ein Ansatz, um diese Frage zu beantworten, ist die Untersuchung der durchschnittlichen Laufzeit von SAT. Um diese durchschnittliche oder typische Laufzeit analysieren zu können, wurde zufälliges k-SAT eingeführt. Dieses Modell erzeugt alle k-SAT-Instanzen mit n Variablen und m Klauseln mit gleicher Wahrscheinlichkeit. Die Untersuchung des Zufallsmodells für k-SAT führte zu einer Vielzahl von Erkenntnissen und Techniken zur Untersuchung zufälliger Strukturen im Allgemeinen. Eine der größten Entdeckungen in diesem Zusammenhang war das Auftreten des sogenannten Erfüllbarkeitsschwellwerts: ein Phasenübergang in der Anzahl der Klauseln, an dem die generierten Formeln von asymptotisch sicher erfüllbar zu asymptotisch sicher unerfüllbar wechseln. Zusätzlich scheinen Instanzen, die um diesen Übergang herum erzeugt werden, besonders schwer zu lösen zu sein. In dieser Arbeit analysieren wir ein allgemeineres Zufallsmodell für k-SAT, das wir nichtuniformes zufälliges k-SAT nennen.
Im Gegensatz zum klassischen Modell hat jede Boolesche Variable jetzt eine bestimmte Wahrscheinlichkeit, gezogen zu werden. Für jede der m Klauseln ziehen wir k Variablen entsprechend ihrer Wahrscheinlichkeitsverteilung und wählen ihre Vorzeichen uniform zufällig. Nichtuniformes zufälliges k-SAT gibt uns mehr Kontrolle über die Verteilung Boolescher Variablen in den resultierenden Formeln. Das erlaubt uns, diese Verteilungen auf die in der Praxis beobachteten zuzuschneiden. Insbesondere enthält nichtuniformes zufälliges k-SAT die zuvor vorgestellten Modelle zufälliges k-SAT, skalenfreies zufälliges k-SAT und geometrisches zufälliges k-SAT als Spezialfälle. Wir analysieren den Erfüllbarkeitsschwellwert in nichtuniformem zufälligen k-SAT abhängig von den Wahrscheinlichkeitsverteilungen für Variablen. Unser Ziel ist es, Bedingungen an diese Verteilungen abzuleiten, unter denen ein Äquivalent der Erfüllbarkeitsschwellwertsvermutung für zufälliges k-SAT gilt. Wir fangen mit dem wahrscheinlich einfacheren Modell nichtuniformem zufälligen 2-SAT an. Für dieses Modell zeigen wir, unter welchen Bedingungen ein Schwellwert existiert, ob er steil oder flach ansteigt und was die führende Konstante der Schwellwertfunktion ist. Das sind genau die Zutaten, die man benötigt, um die Erfüllbarkeitsschwellwertsvermutung zu bestätigen oder zu widerlegen. Für nichtuniformes zufälliges k-SAT mit k ≥ 3 zeigen wir nur hinreichende Bedingungen, unter denen ein Schwellwert existiert. Wir zeigen außerdem einige Eigenschaften der Variablenwahrscheinlichkeiten, die dazu führen, dass der Schwellwert steil ansteigt. Dies sind unseres Wissens nach die ersten allgemeinen Resultate zum Schwellwertverhalten von nichtuniformem zufälligen k-SAT.
KW - Boolean satisfiability
KW - random k-SAT
KW - satisfiability threshold
KW - non-uniform distribution
KW - Boolesche Erfüllbarkeit
KW - nicht-uniforme Verteilung
KW - zufälliges k-SAT
KW - Erfüllbarkeitsschwellwert
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-549702
ER -
TY - JOUR
A1 - Rosin, Paul L.
A1 - Lai, Yu-Kun
A1 - Mould, David
A1 - Yi, Ran
A1 - Berger, Itamar
A1 - Doyle, Lars
A1 - Lee, Seungyong
A1 - Li, Chuan
A1 - Liu, Yong-Jin
A1 - Semmo, Amir
A1 - Shamir, Ariel
A1 - Son, Minjung
A1 - Winnemöller, Holger
T1 - NPRportrait 1.0: A three-level benchmark for non-photorealistic rendering of portraits
JF - Computational Visual Media
N2 - Recently, there has been an upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer (NST). However, the state of performance evaluation in this field is poor, especially compared to the norms in the computer vision and machine learning communities. Unfortunately, the task of evaluating image stylisation is thus far not well defined, since it involves subjective, perceptual, and aesthetic aspects. To make progress towards a solution, this paper proposes a new structured, three-level benchmark dataset for the evaluation of stylised portrait images. Rigorous criteria were used for its construction, and its consistency was validated by user studies. Moreover, a new methodology has been developed for evaluating portrait stylisation algorithms, which makes use of the different benchmark levels as well as annotations provided by user studies regarding the characteristics of the faces.
We perform evaluation for a wide variety of image stylisation methods (both portrait-specific and general-purpose, and both traditional NPR approaches and NST) using the new benchmark dataset.
KW - non-photorealistic rendering (NPR)
KW - image stylization
KW - style transfer
KW - portrait
KW - evaluation
KW - benchmark
Y1 - 2022
U6 - https://doi.org/10.1007/s41095-021-0255-3
SN - 2096-0433
SN - 2096-0662
VL - 8
IS - 3
SP - 445
EP - 465
PB - Springer Nature
CY - London
ER -
TY - JOUR
A1 - Ring, Raphaela M.
A1 - Eisenmann, Clemens
A1 - Kandil, Farid
A1 - Steckhan, Nico
A1 - Demmrich, Sarah
A1 - Klatte, Caroline
A1 - Kessler, Christian S.
A1 - Jeitler, Michael
A1 - Boschmann, Michael
A1 - Michalsen, Andreas
A1 - Blakeslee, Sarah B.
A1 - Stöckigt, Barbara
A1 - Stritter, Wiebke
A1 - Koppold-Liebscher, Daniela A.
T1 - Mental and behavioural responses to Bahá’í fasting: Looking behind the scenes of a religiously motivated intermittent fast using a mixed methods approach
JF - Nutrients
N2 - Background/Objective: Historically, fasting has been practiced not only for medical but also for religious reasons. Bahá’ís follow an annual religious intermittent dry fast of 19 days. We inquired into the motivation behind and the subjective health impacts of Bahá’í fasting. Methods: A convergent parallel mixed methods design was embedded in a clinical single-arm observational study. Semi-structured individual interviews were conducted before (n = 7), during (n = 8), and after fasting (n = 8). Three months after the fasting period, two focus group interviews were conducted (n = 5/n = 3). A total of 146 Bahá’í volunteers answered an online survey at five time points before, during, and after fasting. Results: Fasting was found to play a central role for the religiosity of interviewees, implying changes in daily structures, spending time alone, engaging in religious practices, and experiencing social belonging. Results show an increase in mindfulness and well-being, which was accompanied by behavioural changes and experiences of self-efficacy and inner freedom. Survey scores point to an increase in mindfulness and well-being during fasting, while stress, anxiety, and fatigue decreased. Mindfulness remained elevated even three months after the fast. Conclusion: Bahá’í fasting seems to enhance participants' mindfulness and well-being, lowering stress levels and reducing fatigue. Some of these effects lasted more than three months after fasting.
KW - intermittent food restriction
KW - mindfulness
KW - self-efficacy
KW - well-being
KW - mixed methods
KW - health behaviour
KW - coping ability
KW - religiously motivated
KW - dry fasting
Y1 - 2022
U6 - https://doi.org/10.3390/nu14051038
SN - 2072-6643
VL - 14
IS - 5
PB - MDPI
CY - Basel
ER -
TY - JOUR
A1 - Richly, Keven
A1 - Schlosser, Rainer
A1 - Boissier, Martin
T1 - Budget-conscious fine-grained configuration optimization for spatio-temporal applications
JF - Proceedings of the VLDB Endowment
N2 - Based on the performance requirements of modern spatio-temporal data mining applications, in-memory database systems are often used to store and process the data. To efficiently utilize the scarce DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional indexes). However, the selection of cost- and performance-balancing configurations is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions.
In this paper, we introduce a novel approach to jointly optimize the compression, sorting, indexing, and tiering configuration for spatio-temporal workloads. Further, we consider horizontal data partitioning, which enables the independent application of different tuning options at a fine-grained level. We propose different linear programming (LP) models that address cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload and memory budget. To yield maintainable and robust configurations, we extend our LP-based approach to incorporate reconfiguration costs as well as a worst-case optimization over potential workload scenarios. Further, we demonstrate on a real-world dataset that our models make it possible to significantly reduce the memory footprint at equal performance, or to increase performance at an equal memory size, compared to existing tuning heuristics. KW - General Earth and Planetary Sciences KW - Water Science and Technology KW - Geography, Planning and Development Y1 - 2022 U6 - https://doi.org/10.14778/3565838.3565858 SN - 2150-8097 VL - 15 IS - 13 SP - 4079 EP - 4092 PB - Association for Computing Machinery (ACM) CY - [New York] ER - TY - THES A1 - Repke, Tim T1 - Machine-learning-assisted corpus exploration and visualisation N2 - Text collections, such as corpora of books, research articles, news, or business documents, are an important resource for knowledge discovery. Exploring large document collections by hand is a cumbersome but necessary task to gain new insights and find relevant information. Our digitised society allows us to utilise algorithms to support the information-seeking process, for example with the help of retrieval or recommender systems. However, these systems only provide selective views of the data and require some prior knowledge to issue meaningful queries and assess a system's response. Advances in machine learning allow us to reduce this gap and better assist the information-seeking process. For example, instead of sifting through countless business documents by hand, journalists and investigators can employ natural language processing techniques, such as named entity recognition. Although this greatly improves the capabilities of a data exploration platform, the wealth of information is still overwhelming. An overview of the entirety of a dataset in the form of a two-dimensional map-like visualisation may help to circumvent this issue. Such overviews enable novel interaction paradigms for users, which are similar to the exploration of digital geographical maps. In particular, they can provide valuable context by indicating how a piece of information fits into the bigger picture. This thesis proposes algorithms that appropriately pre-process heterogeneous documents and compute the layout for datasets of all kinds. Traditionally, given high-dimensional semantic representations of the data, so-called dimensionality reduction algorithms are used to compute a layout of the data on a two-dimensional canvas. In this thesis, we focus on text corpora and go beyond only projecting the inherent semantic structure itself.
Therefore, we propose three dimensionality reduction approaches that incorporate additional information into the layout process: (1) a multi-objective dimensionality reduction algorithm to jointly visualise semantic information with inherent network information derived from the underlying data; (2) a comparison of initialisation strategies for different dimensionality reduction algorithms to generate a series of layouts for corpora that grow and evolve over time; and (3) an algorithm that updates existing layouts by incorporating user feedback provided by pointwise drag-and-drop edits. This thesis also contains system prototypes to demonstrate the proposed technologies, including pre-processing and layout of the data and presentation in interactive user interfaces. N2 - Most of our knowledge is contained in text collections, such as corpora of books, research articles, news, and business documents. They therefore offer a valuable basis for gaining new insights or finding relevant information, but manual research is all but impossible given ever-growing volumes of data. Thanks to digitisation, search engines can support such research considerably. However, they only provide a selective view of the underlying data and require some prior knowledge to pose meaningful queries and to interpret the results correctly. Advances in machine learning open up entirely new ways of interacting with data. Instead of sifting through countless business documents by hand, journalists and investigators can, for example, employ natural language processing techniques to automatically recognise persons or places in the text. A so-called knowledge graph built from this information can improve search engines considerably, but the wealth of information remains overwhelming. An overview of an entire dataset, similar to a geographical map, enables innovative interaction paradigms and allows users to see how a given piece of information fits into the context of the bigger picture. This thesis develops algorithms for pre-processing heterogeneous data and placing it on two-dimensional, map-like views. Traditionally, high-dimensional semantic vector representations of the data are used for this placement and are then projected onto a two-dimensional plane with dimensionality reduction algorithms. We focus on the visualisation of text corpora and thereby go beyond projecting only the inherent semantic structure. To this end, three dimensionality reduction approaches were developed that incorporate additional information when computing positions: (1) multi-criteria dimensionality reduction, in which both semantic information and inherent network information derived from the underlying data are used to compute positions; (2) an analysis of the influence of initialisation strategies for different dimensionality reduction algorithms in order to produce a temporally coherent series of projections for corpora that grow over time; (3) the adaptation of existing projections based on individual, manually repositioned data points. In addition, this thesis describes user interface prototypes that were developed to demonstrate the described technologies.
KW - dimensionality reduction KW - corpus exploration KW - data visualisation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-562636 ER -
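For readers who want to make the sampling model from the non-uniform random k-SAT record above concrete, the following is a minimal Python sketch, not code from the thesis: the function and parameter names are our own, the power-law weights are just one example distribution, and drawing distinct variables within a clause is an assumption of the sketch rather than a detail taken from the abstract.

import random

def sample_nonuniform_ksat(m, k, probs, seed=0):
    """Sample m clauses of k literals each; probs[i] is the probability
    that variable i+1 is drawn. Clauses use DIMACS-style signed integers."""
    rng = random.Random(seed)
    n = len(probs)
    clauses = []
    for _ in range(m):
        # Draw k distinct variables according to the given distribution
        # (distinctness within a clause is an assumption of this sketch).
        variables = set()
        while len(variables) < k:
            variables.add(rng.choices(range(1, n + 1), weights=probs)[0])
        # Choose each literal's sign uniformly at random.
        clauses.append([v if rng.random() < 0.5 else -v for v in sorted(variables)])
    return clauses

# Example: a power-law variable distribution, loosely mimicking the
# scale-free special case mentioned in the abstract.
n, m, k = 100, 420, 3
weights = [i ** -0.8 for i in range(1, n + 1)]
probs = [w / sum(weights) for w in weights]
print(sample_nonuniform_ksat(m, k, probs)[0])

As a sanity check, setting all entries of probs to 1/n recovers the classical uniform random k-SAT model that the abstract names as a special case.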