Publications of the Hasso-Plattner-Institut für Digital Engineering GmbH, 2022 (51 items: 18 articles, 14 doctoral theses, 12 monographs/edited volumes, 6 postprints, 1 report).
The transversal hypergraph problem asks to enumerate the minimal hitting sets of a hypergraph. If the solutions have bounded size, Eiter and Gottlob [SICOMP'95] gave an algorithm running in output-polynomial time, but whose space requirement also scales with the output. We improve this to polynomial delay and space. Central to our approach is the extension problem: deciding, for a set X of vertices, whether X is contained in any minimal hitting set. We show that this is one of the first natural problems to be W[3]-complete. We give an algorithm for the extension problem running in time O(m^(|X|+1) n) and prove a SETH lower bound showing that this is close to optimal. We apply our enumeration method to the problem of discovering minimal unique column combinations from data profiling. Our empirical evaluation suggests that the algorithm outperforms its worst-case guarantees on hypergraphs stemming from real-world databases.
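The extension problem above can be illustrated with a small brute-force sketch. This is illustrative only — the paper's point is that deciding extension is W[3]-complete, so no efficient algorithm is expected; the function names and the hypergraph encoding (a list of vertex sets) are our own, not from the paper:

```python
from itertools import combinations

def is_hitting_set(edges, s):
    """True if s intersects every hyperedge."""
    return all(e & s for e in edges)

def is_minimal_hitting_set(edges, s):
    """Minimal: s hits every edge, and no proper subset of s does."""
    if not is_hitting_set(edges, s):
        return False
    return all(not is_hitting_set(edges, s - {v}) for v in s)

def extends(edges, x, vertices):
    """Naive extension check: is x contained in some minimal hitting set?
    Exponential brute force over all supersets, for illustration only."""
    rest = list(vertices - x)
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            if is_minimal_hitting_set(edges, x | set(extra)):
                return True
    return False
```

On the triangle hypergraph {1,2}, {2,3}, {1,3}, every single vertex extends to a minimal hitting set (e.g. {1} extends to {1,2}), whereas the full vertex set {1,2,3} does not, since it is a hitting set but not a minimal one.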
In liquid-chromatography-tandem-mass-spectrometry-based proteomics, information about the presence and stoichiometry of protein modifications is not readily available. To overcome this problem, we developed multiFLEX-LF, a computational tool that builds upon FLEXIQuant, which detects modified peptide precursors and quantifies their modification extent by monitoring the differences between observed and expected intensities of the unmodified precursors. multiFLEX-LF relies on robust linear regression to calculate the modification extent of a given precursor relative to a within-study reference. multiFLEX-LF can analyze entire label-free discovery proteomics data sets in a precursor-centric manner without preselecting a protein of interest. To analyze modification dynamics and coregulated modifications, we hierarchically clustered the precursors of all proteins based on their computed relative modification scores. We applied multiFLEX-LF to a data-independent-acquisition-based data set acquired using the anaphase-promoting complex/cyclosome (APC/C) isolated at various time points during mitosis. The clustering of the precursors allows for identifying varying modification dynamics and ordering the modification events. Overall, multiFLEX-LF enables the fast identification of potentially differentially modified peptide precursors and the quantification of their differential modification extent in large data sets using a personal computer. Additionally, multiFLEX-LF can drive the large-scale investigation of the modification dynamics of peptide precursors in time-series and case-control studies. multiFLEX-LF is available at https://gitlab.com/SteenOmicsLab/multiflex-lf.
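The core idea — a robust fit of observed versus reference intensities, with precursors falling well below the fitted line flagged as modified — can be sketched as follows. This is a simplified stand-in, not the multiFLEX-LF implementation: we substitute a Theil–Sen estimator for its robust linear regression, and the zero-intercept model is our assumption:

```python
from statistics import median

def theil_sen_slope(xs, ys):
    """Robust slope estimate: median of all pairwise slopes (Theil-Sen)."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    return median(slopes)

def relative_modification_scores(ref, obs):
    """Score per precursor: observed / expected intensity under a robust
    zero-intercept fit. Scores well below 1 flag potentially modified precursors."""
    slope = theil_sen_slope(ref, obs)
    return [o / (slope * r) for r, o in zip(ref, obs)]
```

A precursor whose observed intensity is half of what the robust fit predicts receives a score of 0.5, i.e. an estimated modification extent of 50%.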
CovRadar
(2022)
The ongoing pandemic caused by SARS-CoV-2 emphasizes the importance of genomic surveillance to understand the evolution of the virus, monitor the viral population, and plan epidemiological responses. Detailed analysis, easy visualization, and intuitive filtering of the latest viral sequences are essential for this purpose. We present CovRadar, a tool for genomic surveillance of the SARS-CoV-2 Spike protein. CovRadar consists of an analytical pipeline and a web application that enable the analysis and visualization of hundreds of thousands of sequences. First, CovRadar extracts the regions of interest using local alignment, then builds a multiple sequence alignment, infers variants and a consensus, and finally presents the results in an interactive app, making access and reporting simple, flexible and fast.
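The last two pipeline steps — inferring a consensus from a multiple sequence alignment and calling variants against it — can be sketched in a few lines. This is a toy majority-vote version under our own simplifying assumptions (equal-length aligned sequences, gaps ignored as variants), not CovRadar's actual pipeline:

```python
from collections import Counter

def consensus(aligned):
    """Column-wise majority consensus of equal-length aligned sequences."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*aligned))

def variants(aligned, cons):
    """Per sequence: (1-based position, base) where it differs from the
    consensus; alignment gaps ('-') are not reported as variants."""
    return [[(i + 1, b) for i, (b, c) in enumerate(zip(seq, cons))
             if b != c and b != "-"]
            for seq in aligned]
```

For three aligned sequences ACGT, ACGA, ACGT, the consensus is ACGT and the middle sequence carries the single variant (4, 'A').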
Omics and male infertility
(2022)
Male infertility is a multifaceted disorder affecting approximately 50% of male partners in infertile couples.
Over the years, male infertility has been diagnosed mainly through semen analysis, hormone evaluations, medical records and physical examinations, which are fundamental but insufficient, because 30% of male infertility cases remain idiopathic. This large share of unexplained cases needs to be addressed with more sophisticated, result-driven technologies and techniques.
Genetic alterations have been linked with male infertility, thereby unveiling the practicality of investigating this disorder from the "omics" perspective.
Omics aims to analyze the structure and function of the entire set of constituents of a given biological system at different levels, including the gene level (genomics), transcript level (transcriptomics), protein level (proteomics) and metabolite level (metabolomics). In the current study, an overview of the four branches of omics and their roles in male infertility is briefly given; the potential usefulness of assessing transcriptomic data to understand this pathology is also elucidated.
After assessing the publicly obtainable transcriptomic data for datasets on male infertility, a total of 1385 datasets were retrieved, of which 10 datasets met the inclusion criteria and were used for further analysis.
These datasets were classified into groups according to the disease or cause of male infertility.
The groups include non-obstructive azoospermia (NOA), obstructive azoospermia (OA), non-obstructive and obstructive azoospermia (NOA and OA), spermatogenic dysfunction, sperm dysfunction, and Y chromosome microdeletion.
Findings revealed that 8 genes (LDHC, PDHA2, TNP1, TNP2, ODF1, ODF2, SPINK2, PCDHB3) were commonly differentially expressed between all disease groups.
Likewise, 56 genes were common between NOA versus NOA and OA (ADAD1, BANF2, BCL2L14, C12orf50, C20orf173, C22orf23, C6orf99, C9orf131, C9orf24, CABS1, CAPZA3, CCDC187, CCDC54, CDKN3, CEP170, CFAP206, CRISP2, CT83, CXorf65, FAM209A, FAM71F1, FAM81B, GALNTL5, GTSF1, H1FNT, HEMGN, HMGB4, KIF2B, LDHC, LOC441601, LYZL2, ODF1, ODF2, PCDHB3, PDHA2, PGK2, PIH1D2, PLCZ1, PROCA1, RIMBP3, ROPN1L, SHCBP1L, SMCP, SPATA16, SPATA19, SPINK2, TEX33, TKTL2, TMCO2, TMCO5A, TNP1, TNP2, TSPAN16, TSSK1B, TTLL2, UBQLN3).
These genes, particularly the above-mentioned 8 genes, are involved in diverse biological processes such as germ cell development, spermatid development, spermatid differentiation, regulation of proteolysis, spermatogenesis and metabolic processes.
Owing to the stage-specific expression of these genes, any mis-expression can ultimately lead to male infertility.
Therefore, currently available data on all branches of omics relating to male fertility can be used to identify biomarkers for diagnosing male infertility, which can potentially help in unravelling some idiopathic cases.
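The overlap analysis described above — finding genes differentially expressed in every disease group — amounts to a set intersection. A minimal sketch, where the per-group gene sets are illustrative subsets we picked from the genes named in the text, not the study's actual per-group lists:

```python
from functools import reduce

# Hypothetical differentially-expressed gene sets per disease group
# (illustrative subsets of genes mentioned in the study).
de_sets = {
    "NOA": {"LDHC", "PDHA2", "TNP1", "ODF1", "SPINK2", "CRISP2"},
    "OA": {"LDHC", "PDHA2", "TNP1", "ODF1", "SPINK2", "PGK2"},
    "sperm_dysfunction": {"LDHC", "PDHA2", "TNP1", "ODF1", "SPINK2", "SMCP"},
}

# Genes common to all groups: intersect the per-group sets.
common = reduce(set.intersection, de_sets.values())
```

With real per-group lists in place of these toy sets, the same intersection yields the shared candidate biomarkers.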
A path in an edge-colored graph is rainbow if no two edges of it are colored the same, and the graph is rainbow-connected if there is a rainbow path between each pair of its vertices. The minimum number of colors needed to rainbow-connect a graph G is the rainbow connection number of G, denoted by rc(G). A simple way to rainbow-connect a graph G is to color the edges of a spanning tree with distinct colors and then re-use any of these colors to color the remaining edges of G. This proves that rc(G) <= |V(G)| - 1. We ask whether there is a stronger connection between tree-like structures and rainbow coloring than is implied by the above trivial argument. For instance, is it possible to find an upper bound of t(G) - 1 for rc(G), where t(G) is the number of vertices in the largest induced tree of G? The answer turns out to be negative, as there are counter-examples showing that even c·t(G) is not an upper bound for rc(G) for any given constant c. In this work we show that if we consider the forest number f(G), the number of vertices in a maximum induced forest of G, instead of t(G), then surprisingly we do get an upper bound. More specifically, we prove that rc(G) <= f(G) + 2. Our result indicates a stronger connection between rainbow connection and tree-like structures than was suggested by the simple spanning tree based upper bound.
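The definition of rainbow connectivity can be checked directly on small graphs by searching over (vertex, set-of-used-colors) states. The sketch below is our own brute-force verifier — exponential in the number of colors and meant only to make the definition concrete, not an algorithm from the paper:

```python
from collections import deque

def rainbow_connected(n, colored_edges):
    """True if every pair of vertices in a graph on vertices 0..n-1 is joined
    by a path whose edges have pairwise-distinct colors. Edges are given as
    (u, v, color) triples. BFS over (vertex, frozenset of used colors)."""
    adj = {v: [] for v in range(n)}
    for u, v, c in colored_edges:
        adj[u].append((v, c))
        adj[v].append((u, c))

    def reachable(start):
        seen = {(start, frozenset())}
        queue = deque(seen)
        verts = {start}
        while queue:
            u, used = queue.popleft()
            for v, c in adj[u]:
                if c not in used:            # rainbow: never repeat a color
                    state = (v, used | {c})
                    if state not in seen:
                        seen.add(state)
                        verts.add(v)
                        queue.append(state)
        return verts

    return all(len(reachable(s)) == n for s in range(n))
```

A path on three vertices needs two distinct edge colors: with colors (0, 1) it is rainbow-connected, with (0, 0) it is not, since the end vertices can only be joined by a path repeating color 0.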
Industry 4.0 is transforming how businesses innovate and, as a result, companies are spearheading the movement towards 'Digital Transformation'. While some scholars advocate the use of design thinking to identify new innovative behaviours, cognition experts emphasise the importance of top managers in supporting employees to develop these behaviours. However, there is a dearth of research in this domain and companies are struggling to implement the required behaviours. To address this gap, this study aims to identify and prioritise behavioural strategies conducive to design thinking to inform the creation of a managerial mental model. We identify 20 behavioural strategies from interviews with 45 practitioners and educators and combine them with the concepts of 'paradigm-mindset-mental model' from cognition theory. The paper contributes to the body of knowledge by identifying and prioritising specific behavioural strategies to form a novel set of survival conditions aligned to the new industrial paradigm of Industry 4.0.
As resources are valuable assets, organizations have to decide which resources to allocate to business process tasks in a way that the process is executed not only effectively but also efficiently. Traditional role-based resource allocation leads to effective process executions, since each task is performed by a resource that has the required skills and competencies to do so. However, the resulting allocations are typically not as efficient as they could be, since optimization techniques have yet to find their way into traditional business process management scenarios. On the other hand, operations research provides a rich set of analytical methods for supporting problem-specific decisions on resource allocation. This paper provides a novel framework for creating transparency on existing tasks and resources, supporting individualized allocations for each activity in a process, and the possibility to integrate problem-specific analytical methods of the operations research domain. To validate the framework, the paper reports on the design and prototypical implementation of a software architecture, which extends a traditional process engine with a dedicated resource management component. This component allows us to define specific resource allocation problems at design time, and it also facilitates optimized resource allocation at run time. The framework is evaluated using a real-world parcel delivery process. The evaluation shows that the quality of the allocation results increases significantly with a technique from operations research in contrast to the traditionally applied rule-based approach.
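The contrast between rule-based and optimization-based allocation can be made concrete with a tiny example. The sketch below is ours, not the paper's framework: a brute-force optimal assignment stands in for an operations-research technique, and the greedy "first qualified resource" baseline stands in for role-based allocation:

```python
from itertools import permutations

def optimal_allocation(cost):
    """Brute-force optimal one-to-one assignment of resources to tasks,
    minimizing total cost; cost[r][t] = cost of resource r doing task t.
    Returns (total_cost, task_index_per_resource)."""
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        total = sum(cost[r][perm[r]] for r in range(n))
        if best is None or total < best[0]:
            best = (total, perm)
    return best

def role_based_allocation(cost, qualified):
    """Rule-based baseline: each task gets the first still-free qualified
    resource, ignoring cost. qualified[t] lists resources able to do task t."""
    assigned, used = [], set()
    for t in range(len(cost)):
        r = next(r for r in qualified[t] if r not in used)
        used.add(r)
        assigned.append(r)
    total = sum(cost[r][t] for t, r in enumerate(assigned))
    return total, tuple(assigned)
```

With costs [[1, 1], [5, 10]] and both resources qualified for both tasks, the rule-based baseline pays 11 while the optimal assignment pays 6 — the kind of gap the paper's evaluation quantifies on a real parcel delivery process.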
Process mining techniques are valuable to gain insights into and help improve (work) processes. Many of these techniques focus on the sequential order in which activities are performed. Few of these techniques consider the statistical relations within processes. In particular, existing techniques do not allow insights into how responses to an event (action) result in desired or undesired outcomes (effects). We propose and formalize the ARE miner, a novel technique that allows us to analyze and understand these action-response-effect patterns. We take a statistical approach to uncover potential dependency relations in these patterns. The goal of this research is to generate processes that are: (1) appropriately represented, and (2) effectively filtered to show meaningful relations. We evaluate the ARE miner in two ways. First, we use an artificial data set to demonstrate the effectiveness of the ARE miner compared to two traditional process-oriented approaches. Second, we apply the ARE miner to a real-world data set from a Dutch healthcare institution. We show that the ARE miner generates comprehensible representations that lead to informative insights into statistical relations between actions, responses, and effects.
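The core statistic behind such action-response-effect analysis — how often each effect follows a given (action, response) pair — can be sketched as a conditional frequency table. This is a minimal illustration of the pattern the abstract describes, not the ARE miner itself; the event encoding as (action, response, effect) triples is our assumption:

```python
from collections import Counter, defaultdict

def are_patterns(events):
    """events: iterable of (action, response, effect) triples.
    Returns the conditional relative frequency of each effect
    given its (action, response) pair."""
    counts = Counter(events)
    per_pair = defaultdict(int)
    for (a, r, _), n in counts.items():
        per_pair[(a, r)] += n
    return {(a, r, e): n / per_pair[(a, r)]
            for (a, r, e), n in counts.items()}
```

In a toy log where "medicate" after an alarm leads to recovery in 3 of 4 cases while "ignore" always leads to worsening, the table directly exposes which response is associated with the desired effect.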
ReadBouncer
(2022)
Motivation:
Nanopore sequencers allow targeted sequencing of interesting nucleotide sequences by rejecting other sequences from individual pores. This feature facilitates the enrichment of low-abundance sequences by depleting overrepresented ones in silico. Existing tools for adaptive sampling either apply signal alignment, which cannot handle human-sized reference sequences, or apply read mapping in sequence space, relying on fast graphics processing unit (GPU) base callers for real-time read rejection. Using nanopore long-read mapping tools is also not optimal when mapping shorter reads, as are usually analyzed in adaptive sampling applications.
Results:
Here, we present a new approach for nanopore adaptive sampling that combines fast CPU and GPU base calling with read classification based on Interleaved Bloom Filters. ReadBouncer improves the potential enrichment of low-abundance sequences by its high read classification sensitivity and specificity, outperforming existing tools in the field. It robustly removes even reads belonging to large reference sequences while running on commodity hardware without GPUs, making adaptive sampling accessible for in-field researchers. ReadBouncer also provides a user-friendly interface and installer files for end-users without a bioinformatics background.
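The classification idea — index the reference's k-mers in a Bloom filter, then accept or reject a read by the fraction of its k-mers found in the filter — can be sketched as follows. This uses a plain Bloom filter as a simplified stand-in for ReadBouncer's Interleaved Bloom Filters, and the filter size, k, and threshold are illustrative choices of ours:

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter over strings; a simplified stand-in for the
    Interleaved Bloom Filters used in nanopore read classification."""
    def __init__(self, size=1 << 16, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, bytearray(size)

    def _indices(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for j in self._indices(item):
            self.bits[j] = 1

    def __contains__(self, item):  # may yield rare false positives, never false negatives
        return all(self.bits[j] for j in self._indices(item))

def classify(read, bf, k=5, threshold=0.5):
    """Accept a read if at least `threshold` of its k-mers hit the filter."""
    kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
    return sum(km in bf for km in kmers) / len(kmers) >= threshold

# Index the k-mers of a toy reference sequence.
bf = BloomFilter()
reference = "ACGTACGTGGCCAATT"
for i in range(len(reference) - 4):
    bf.add(reference[i:i + 5])
```

A read drawn from the reference is accepted, while an unrelated read is rejected — the decision the sequencer then uses to eject the molecule from its pore.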
Based on the performance requirements of modern spatio-temporal data mining applications, in-memory database systems are often used to store and process the data. To efficiently utilize the scarce DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional indexes). However, the selection of cost- and performance-balancing configurations is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions. In this paper, we introduce a novel approach to jointly optimize the compression, sorting, indexing, and tiering configuration for spatio-temporal workloads. Further, we consider horizontal data partitioning, which enables the independent application of different tuning options on a fine-grained level. We propose different linear programming (LP) models addressing cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload and memory budget. To yield maintainable and robust configurations, we extend our LP-based approach to incorporate reconfiguration costs as well as a worst-case optimization for potential workload scenarios. Further, we demonstrate on a real-world dataset that our models allow us to significantly reduce the memory footprint at equal performance, or to increase performance at equal memory size, compared to existing tuning heuristics.
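The underlying selection problem — pick one tuning option per partition so that total memory stays within budget and workload runtime is minimal — can be made concrete with a brute-force sketch. This enumerates all combinations instead of solving an LP, and the (memory, runtime) option encoding is our own simplification of the paper's models:

```python
from itertools import product

def best_configuration(options, budget):
    """options[p] lists (memory, runtime) tuples for partition p's candidate
    tuning configurations. Returns (runtime, memory, chosen option index per
    partition) minimizing total runtime subject to total memory <= budget.
    Brute force; an LP/ILP solver replaces this at realistic scale."""
    best = None
    for choice in product(*[range(len(opts)) for opts in options]):
        mem = sum(options[p][c][0] for p, c in enumerate(choice))
        run = sum(options[p][c][1] for p, c in enumerate(choice))
        if mem <= budget and (best is None or run < best[0]):
            best = (run, mem, choice)
    return best
```

With two partitions offering an uncompressed-but-fast and a compressed-but-slower option each, tightening the memory budget forces the optimizer to trade runtime for footprint on exactly one partition — the kind of joint, dependency-aware decision the LP models automate.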