Examining the dissemination of evidence on social media, we analyzed the discourse around eight visible scientists in the context of COVID-19. Using manual (N = 1,406) and automated (N = 42,640) coding of an account-based Twitter/X dataset that tracked the scientists’ activities and the reactions they elicited over six two-week periods, we found that the visible scientists’ tweets included more scientific evidence, whereas public reactions contained more anecdotal evidence. The findings indicate that evidence can be a message characteristic that leads to greater tweet dissemination. Implications for scientists, including explicitly incorporating scientific evidence in their communication, and for examining evidence in science communication research, are discussed.
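In the simplest case, the automated evidence coding described in this abstract could be sketched as a cue-word classifier. The actual coding scheme is not specified in the abstract; all cue lexicons below are hypothetical and purely illustrative.

```python
import re

# Hypothetical cue lexicons for coarse evidence labels; a real coding
# scheme would be validated against the manual coding.
SCIENTIFIC_CUES = {"study", "data", "evidence", "trial", "preprint", "peer-reviewed"}
ANECDOTAL_CUES = {"my", "friend", "happened", "personally", "anecdote"}

def code_tweet(text):
    """Assign coarse evidence labels to a tweet based on cue words."""
    tokens = set(re.findall(r"[a-z'-]+", text.lower()))
    labels = set()
    if tokens & SCIENTIFIC_CUES:
        labels.add("scientific")
    if tokens & ANECDOTAL_CUES:
        labels.add("anecdotal")
    return labels or {"none"}
```

A dictionary-based coder like this is transparent and fast but brittle; the trade-off against supervised classifiers trained on the manually coded subset is a typical design decision in such studies.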
In this essay I argue that while research in Jewish studies over the last several decades has done much to erode the historical narrative of Jewish/non-Jewish separation and detachment, it has also raised various questions pertaining to the outcome of Jewish/non-Jewish interactions and coexistence as well as the contours of Jewish difference. I contend that employing the concepts of conviviality, ethnic/religious/national indifference, and similarity will greatly facilitate answering these questions.
Habsburg Central Europe
(2024)
Central Europe is characterized by linguistic and cultural density as well as by endogenous and exogenous cultural influences. These constellations were especially visible in the former Habsburg Empire, where they influenced the formation of individual and collective identities. This led not only to continual crises and conflicts, but also to an equally enormous creative potential as became apparent in the culture of the fin-de-siècle.
Deep learning has seen widespread application in many domains, mainly for its ability to learn data representations from raw input data. Nevertheless, its success has so far been coupled with the availability of large annotated (labelled) datasets. This requirement is difficult to fulfil in several domains, such as medical imaging, where annotation costs form a barrier to extending deep learning to clinically relevant use cases. Labels for medical images are scarce because generating expert annotations of multimodal patient data at scale is non-trivial, expensive, and time-consuming. This substantiates the need for algorithms that learn from the growing amounts of unlabelled data. Self-supervised representation learning algorithms offer a pertinent solution: they leverage unlabelled samples to acquire generic features about different concepts, which subsequently enables real-world (downstream) deep learning tasks to be solved with fewer annotations.
Nevertheless, medical images present multiple unique and inherent challenges for existing self-supervised learning approaches, which we seek to address in this thesis: (i) medical images are multimodal, and their multiple modalities are heterogeneous in nature and imbalanced in quantities, e.g. MRI and CT; (ii) medical scans are multi-dimensional, often in 3D instead of 2D; (iii) disease patterns in medical scans are numerous and their incidence exhibits a long-tail distribution, so it is oftentimes essential to fuse knowledge from different data modalities, e.g. genomics or clinical data, to capture disease traits more comprehensively; (iv) medical scans usually exhibit more uniform color density distributions, e.g. in dental X-rays, than natural images. Our proposed self-supervised methods meet these challenges while significantly reducing the amount of required annotations.
We evaluate our self-supervised methods on a wide array of medical imaging applications and tasks. Our experimental results demonstrate gains in both annotation efficiency and performance; our proposed methods outperform many approaches from the related literature. Additionally, in the case of fusion with genetic modalities, our methods also allow for cross-modal interpretability. In this thesis, we not only show that self-supervised learning is capable of mitigating manual annotation costs, but also demonstrate how to better utilize it in the medical imaging domain. Progress in self-supervised learning has the potential to extend the application of deep learning algorithms to clinical scenarios.
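The core idea of self-supervised representation learning described above can be illustrated with a minimal contrastive objective: embeddings of two augmented views of the same sample are pulled together while all other embeddings in the batch act as negatives. This is a generic NT-Xent-style sketch, not the thesis's actual method, and the toy embeddings are purely illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent_loss(view_a, view_b, temperature=0.5):
    """Contrastive loss over a batch: each embedding's positive is the
    other augmented view of the same sample; the rest are negatives."""
    z = view_a + view_b          # 2N embeddings
    n = len(view_a)
    total = 0.0
    for i, zi in enumerate(z):
        pos = (i + n) % (2 * n)  # index of the paired view
        # Denominator sums over all pairs except (i, i).
        denom = sum(math.exp(cosine(zi, zj) / temperature)
                    for j, zj in enumerate(z) if j != i)
        num = math.exp(cosine(zi, z[pos]) / temperature)
        total += -math.log(num / denom)
    return total / (2 * n)
```

A network pretrained to minimise such a loss on unlabelled scans yields features that can then be fine-tuned on a small labelled set, which is what makes the downstream task annotation-efficient.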
§ 70 Hohe See
(2024)
Given the growing relevance of engaging with digitality in school teaching and the resulting popularity of gaming and gamification as teaching methods, this thesis examines game design as a constructivist approach to computer games. More precisely, it analyzes the method's suitability for art education. To this end, it discusses to what extent game design as an instructional method fosters learning in general and is suited to developing digital literacy. The focus lies on examining game design with respect to the central competence and learning dimensions of art education: artistic production and aesthetic reception as the two decisive artistic-aesthetic competences, and aesthetic experience as a distinctive learning experience which, alongside these competences, is regarded in the art-pedagogical discourse as the highest goal of teaching. These three dimensions serve as the levels of analysis for the method under examination. Game design proves largely conducive to all three areas, although it plays only a complementary role for sensory perception in the process of aesthetic reception. Not all areas of the design fields of artistic production are addressed, nor is experimental, open-ended artistic work necessarily enabled. However, all other components of these competence dimensions are addressed, and aesthetic experience in particular is fully fostered. From an art-pedagogical perspective, digital game development can thus be legitimized for use in art classes; with a view to STEAM education and project-oriented teaching, it is even to be recommended.
The widespread adoption of location-acquisition technologies means that large volumes of spatio-temporal data are continuously being accumulated. Positioning systems such as GPS enable the tracking of various moving objects' trajectories, which are usually represented by a chronologically ordered sequence of observed locations. The analysis of movement patterns based on detailed positional information creates opportunities for applications that can improve business decisions and processes in a broad spectrum of industries (e.g., transportation, traffic control, or medicine). Due to the large data volumes generated in these applications, the cost-efficient storage of spatio-temporal data is desirable, especially when in-memory database systems are used to achieve interactive performance requirements.
To efficiently utilize the available DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional index structures). By considering horizontal data partitioning, we can independently apply different tuning options on a fine-grained level. However, selecting cost- and performance-balancing configurations is challenging due to the vast number of possible setups, which consist of mutually dependent individual decisions.
In this thesis, we introduce multiple approaches to improve spatio-temporal data management by automatically optimizing diverse tuning options for the application-specific access patterns and data characteristics. Our contributions are as follows:
(1) We introduce a novel approach to determine fine-grained table configurations for spatio-temporal workloads. Our linear programming (LP) approach jointly optimizes the (i) data compression, (ii) ordering, (iii) indexing, and (iv) tiering. We propose different models which address cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload, memory budgets, and data characteristics. To yield maintainable and robust configurations, we further extend our LP-based approach to incorporate reconfiguration costs as well as optimizations for multiple potential workload scenarios.
(2) To optimize the storage layout of timestamps in columnar databases, we present a heuristic approach for the workload-driven combined selection of a data layout and compression scheme. By considering attribute decomposition strategies, we are able to apply application-specific optimizations that reduce the memory footprint and improve performance.
(3) We introduce an approach that leverages past trajectory data to improve the dispatch processes of transportation network companies. Based on location probabilities, we develop risk-averse dispatch strategies that reduce critical delays.
(4) Finally, we use the case of a transportation network company to evaluate our database optimizations on a real-world dataset. We demonstrate that workload-driven fine-grained optimizations allow us to reduce the memory footprint (by up to 71% at equal performance) or increase performance (by up to 90% at equal memory size) compared to established rule-based heuristics.
Individually, our contributions provide novel approaches to the current challenges in spatio-temporal data mining and database research. Combining them allows in-memory databases to store and process spatio-temporal data more cost-efficiently.
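The joint configuration problem sketched above, choosing one tuning option per partition so that workload cost is minimised within a memory budget, can be illustrated with a tiny brute-force selector. This stands in for the thesis's LP formulation only conceptually; the option labels, footprints, and scan costs below are hypothetical.

```python
from itertools import product

# Hypothetical per-partition options: (label, memory footprint, scan cost).
# Compression shrinks the footprint but slows scans; an index adds memory
# but speeds up selective access. All numbers are illustrative.
OPTIONS = [
    ("uncompressed",       10.0, 1.0),
    ("dictionary-encoded",  4.0, 1.3),
    ("dict + index",        5.5, 0.4),
]

def best_configuration(partition_weights, memory_budget):
    """Exhaustively pick one option per partition, minimising workload
    cost (access frequency x scan cost) subject to the memory budget."""
    best, best_cost = None, float("inf")
    for combo in product(range(len(OPTIONS)), repeat=len(partition_weights)):
        mem = sum(OPTIONS[c][1] for c in combo)
        if mem > memory_budget:
            continue
        cost = sum(w * OPTIONS[c][2]
                   for w, c in zip(partition_weights, combo))
        if cost < best_cost:
            best, best_cost = combo, cost
    if best is None:
        return None, float("inf")
    return [OPTIONS[c][0] for c in best], best_cost
```

Even this toy shows why the decisions are mutually dependent: spending memory on an index for a frequently accessed partition forces heavier compression elsewhere, which is exactly the coupling the LP models capture at scale, where enumeration is infeasible.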
This thesis presents an attempt to use source code synthesised from Coq formalisations of device drivers for existing (micro)kernel operating systems, with a particular focus on the Linux Kernel.
In the first part, the technical background and related work are described. The focus here is on possible approaches to synthesising certified software with Coq, namely extraction to functional languages using the Coq extraction plugin and extraction to Clight code using the CertiCoq plugin. Because the implementation of CertiCoq is verified, whereas that of the Coq extraction plugin is not, there is a correctness guarantee for the generated Clight code that does not hold for code generated by the Coq extraction plugin. Furthermore, the differences between user-space and kernel-space software are discussed in relation to Linux device drivers. It is shown that working Linux kernel module components cannot be generated with the Coq extraction plugin without significant modifications, whereas working user-space drivers can be produced with both the Coq extraction plugin and CertiCoq. The subsequent parts describe the main contributions of the thesis.
In the second part, it is demonstrated how to extend the Coq extraction plugin to synthesise foreign function calls between the functional language OCaml and the imperative language C, an approach with the potential to improve the type safety of user-space drivers. It is also shown that the code synthesised by CertiCoq cannot be used in kernel space without modifications to the necessary runtime. Consequently, the necessary modifications to the runtimes of CertiCoq and VeriFFI are introduced, making the runtimes compatible components of a Linux kernel module. Justifications for the transformations are provided, and possible further extensions to both plugins as well as solutions to failing garbage-collection calls in kernel space are discussed.
The third part presents a proof-of-concept device driver for the Linux kernel. To achieve this, the event handler of the original PC Speaker driver is partially formalised in Coq, and relevant formal properties of the formalised functionality are discussed. Subsequently, a kernel module is defined that utilises the modified variants of CertiCoq and VeriFFI to compile a working device driver. It is furthermore shown that the synthesised code can be compiled with CompCert, thereby extending the correctness guarantee to the assembly layer. This is followed by a performance evaluation comparing a naive formalisation of the PC Speaker functionality with the original PC Speaker driver, pointing out weaknesses in the formalisation and possible improvements. The part closes with a summary of the results, their implications, and the open questions they raise.
The last part lists all sources used, separated into scientific literature; documentation and reference manuals; and artifacts, i.e. source code.