004 Datenverarbeitung; Informatik
Geocoder accuracy ranking
(2014)
Finding an address on a map is sometimes tricky: the chosen map application may be unfamiliar with the region in question. Several geocoders are on the market; they use different databases and algorithms to resolve a query, so their geocoding results differ in quality. Fortunately, the geocoders provide a rich set of metadata. The workflow described in this paper compares this metadata in order to find out which geocoder offers the best-fitting coordinates for a given address.
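A minimal sketch of such a metadata comparison might look as follows. The field names (`match_type`, `confidence`) and the scoring weights are illustrative assumptions, not the schema of any actual geocoder, and the real workflow runs as a jABC model rather than a script:

```python
# Quality ranking of geocoder results by their metadata.
# Field names and weights are hypothetical placeholders.
MATCH_QUALITY = {"exact": 3, "interpolated": 2, "centroid": 1}

def score(result):
    """Score one geocoder result by its match type and confidence."""
    return MATCH_QUALITY.get(result.get("match_type"), 0) + result.get("confidence", 0.0)

def rank_geocoders(results_by_geocoder):
    """Return geocoder names ordered best-fitting first."""
    return sorted(results_by_geocoder,
                  key=lambda name: score(results_by_geocoder[name]),
                  reverse=True)

results = {
    "geocoder_a": {"match_type": "centroid", "confidence": 0.9},
    "geocoder_b": {"match_type": "exact", "confidence": 0.8},
}
```

An exact match with decent confidence outranks a mere centroid hit here; the actual workflow would weigh whichever metadata fields the geocoders really expose.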
Geometric generalization is a fundamental concept in the digital mapping process. An increasing amount of spatial data is provided on the web, as well as a range of tools to process it. This jABC workflow is used for the automatic testing of web-based generalization services such as mapshaper.org by executing their functionality, overlaying the datasets from before and after the transformation, and displaying them visually in a .tif file. Mostly web services and command line tools are used to build an environment in which ESRI shapefiles can be uploaded, processed through a chosen generalization service, and finally visualized in IrfanView.
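Generalization services such as mapshaper.org are at their core line-simplification tools, so the behaviour under test can be illustrated with the classic Douglas–Peucker algorithm. This is a sketch of one common simplification method, not mapshaper's actual implementation:

```python
import math

def _perp_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping points farther than `tolerance`
    from the chord between the current endpoints."""
    if len(points) < 3:
        return list(points)
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tolerance:
        return [points[0], points[-1]]
    # Recurse on both halves; drop the duplicated split point.
    left = douglas_peucker(points[:idx + 1], tolerance)
    right = douglas_peucker(points[idx:], tolerance)
    return left[:-1] + right
```

Overlaying the input polyline with the simplified output is exactly the kind of before/after comparison the workflow automates for a remote service.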
In the geoinformatics field, remote sensing data is often used for analyzing the characteristics of the current investigation area. This includes DEMs: simple raster grids whose grey values represent the respective elevation values. The project CREADED presented in this paper aims at making these monochrome raster images more expressive and more intuitively interpretable. For this purpose, an executable interactive model for creating a colored and relief-shaded Digital Elevation Model (DEM) has been designed using the jABC framework. The process is based on standard jABC SIBs and on SIBs that provide specific GIS functions, which are available as web services, command line tools, and scripts.
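The relief-shading step can be illustrated with a standard Horn-style hillshade computation over an elevation grid. This is a generic sketch, not the code behind the GIS services the jABC SIBs call:

```python
import math

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Horn hillshade for the interior cells of a 2-D elevation grid
    (list of rows). Returns brightness values in [0, 255]; border
    cells are left at 0."""
    az = math.radians(360.0 - azimuth_deg + 90.0)
    alt = math.radians(altitude_deg)
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Horn's finite differences over the 3x3 neighbourhood.
            dzdx = ((dem[r-1][c+1] + 2*dem[r][c+1] + dem[r+1][c+1])
                    - (dem[r-1][c-1] + 2*dem[r][c-1] + dem[r+1][c-1])) / (8*cellsize)
            dzdy = ((dem[r+1][c-1] + 2*dem[r+1][c] + dem[r+1][c+1])
                    - (dem[r-1][c-1] + 2*dem[r-1][c] + dem[r-1][c+1])) / (8*cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            shade = (math.sin(alt) * math.cos(slope)
                     + math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            out[r][c] = max(0.0, 255.0 * shade)
    return out
```

A flat grid shades uniformly at 255·sin(altitude); slopes facing the light source come out brighter, which is what turns a monochrome elevation raster into an intuitively readable relief image.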
This paper describes the implementation of a workflow model for service-oriented computing of potential areas for wind turbines in jABC. By implementing a re-executable model, the manual effort of a multi-criteria site analysis can be reduced. The aim is to demonstrate the shift of typical geoprocessing tools of geographic information systems (GIS) from the desktop to the web. The analysis is based on a vector data set and mainly uses web services of the “Center for Spatial Information Science and Systems” (CSISS). This paper discusses the effort, benefits, and problems associated with the use of these web services.
Location analyses are among the most common tasks when working with spatial data and geographic information systems. Automating the most frequently used procedures is therefore an important aspect of improving their usability. In this context, this project aims to design and implement a workflow providing some basic tools for a location analysis. For the implementation with jABC, the workflow was applied to the problem of finding a suitable location for placing an artificial reef. For this analysis three parameters (bathymetry, slope, and grain size of the ground material) were taken into account, processed, and visualized with the Generic Mapping Tools (GMT), which were integrated into the workflow as jETI-SIBs. The implemented workflow thereby showed that combining jABC with GMT results in a user-centric yet user-friendly tool with high-quality cartographic outputs.
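The three-parameter site analysis can be sketched as a simple threshold combination per grid cell. All threshold values below are made-up placeholders, not the criteria used in the actual reef analysis:

```python
def suitable(bathymetry_m, slope_deg, grain_mm,
             depth_range=(-30.0, -10.0), max_slope=5.0,
             grain_range=(0.2, 2.0)):
    """Boolean suitability of one cell. Thresholds are illustrative
    assumptions: depth window, maximum slope, sediment grain size."""
    return (depth_range[0] <= bathymetry_m <= depth_range[1]
            and slope_deg <= max_slope
            and grain_range[0] <= grain_mm <= grain_range[1])

def suitability_mask(cells):
    """Apply all three criteria to (bathymetry, slope, grain) tuples."""
    return [suitable(b, s, g) for b, s, g in cells]
```

In the workflow, the analogous masking and the final map rendering are delegated to GMT; the point of the sketch is only the multi-criteria intersection.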
Creation of topographic maps
(2014)
GraffDok is an application that helps to maintain an overview of sprayed images in a city. At the time of writing it aims at vandalism rather than at elaborate photographic graffiti in an underpass. Looking at hundreds of tags and scribbles on monuments, house walls, etc., it would be interesting not only to record them in writing but also to make them accessible electronically, including images.
GraffDok’s workflow is simple and only requires an EXIF-GPS-tagged photograph of a graffito. It automatically determines the graffito’s location by reverse geocoding the given GPS coordinates with the Gisgraphy web service. While asking the user for some additional metadata, GraffDok analyses the image in parallel and tries to separate foreground and background before extracting the drawing lines and making them stand alone. The command-line tool ImageMagick is used for this as well as for accessing the EXIF data.
All metadata is written to CSV files, which remain easily accessible and can also be integrated into TeX files. The latter are converted to PDF at the end of the workflow and contain a table of all graffiti and a summary for each, including the generated characteristic graffiti pattern image.
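The EXIF-GPS step can be illustrated with the standard conversion from EXIF-style degrees/minutes/seconds to the signed decimal degrees a reverse geocoder such as Gisgraphy expects. This is a generic sketch; GraffDok itself delegates EXIF access to ImageMagick:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF degrees/minutes/seconds plus a hemisphere
    reference ('N', 'S', 'E', 'W') to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative by convention.
    return -value if ref in ("S", "W") else value
```

The resulting latitude/longitude pair is what gets sent to the reverse-geocoding service to obtain a human-readable address for the graffito.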
The protein classification workflow described in this report enables users to obtain information about a novel protein sequence automatically. The information is derived by different bioinformatics analysis tools that calculate or predict features of a protein sequence. In addition, databases are used to compare the novel sequence with known proteins.
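As an illustration of the kind of features such tools calculate, the following sketch computes a few simple sequence properties. The feature set is an arbitrary example, not the output of the analysis tools used in the workflow:

```python
from collections import Counter

# One-letter codes of commonly cited hydrophobic residues.
HYDROPHOBIC = set("AVLIMFWC")

def sequence_features(seq):
    """Simple features of a protein sequence: length, amino-acid
    composition (fractions), and hydrophobic fraction."""
    seq = seq.upper()
    counts = Counter(seq)
    n = len(seq)
    return {
        "length": n,
        "composition": {aa: counts[aa] / n for aa in sorted(counts)},
        "hydrophobic_fraction": sum(counts[aa] for aa in HYDROPHOBIC) / n,
    }
```

Real tools in such a pipeline predict far richer features (secondary structure, domains, localization); this only shows the per-sequence feature-vector idea.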
Analyses of metagenomes in the life sciences present new opportunities as well as challenges to the scientific community and call for advanced computational methods and workflows. The large amount of data collected from samples via next-generation sequencing (NGS) technologies renders manual approaches to sequence comparison and annotation unsuitable. Rather, fast and efficient computational pipelines are needed to provide comprehensive statistics and summaries and to enable the researcher to choose appropriate tools for more specific analyses. The workflow presented here builds upon previous pipelines designed for automated clustering and annotation of raw sequence reads obtained from next-generation sequencing technologies such as 454 and Illumina. Specialized algorithms process the sequence reads at three different levels. First, raw reads are clustered at a high similarity cutoff to yield clusters that can be exported as multi-FASTA files for further analyses. Independently, open reading frames (ORFs) are predicted from the raw reads and clustered at two strictness levels to yield sets of non-redundant sequences and ORF families. Furthermore, single ORFs are annotated by performing searches against the Pfam database.
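The ORF-prediction step can be illustrated with a naive forward-strand ORF finder. This is a didactic sketch, not the specialized algorithm the pipeline employs (which also handles the reverse strand, fragmentary reads, and alternative start codons):

```python
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=2):
    """Find open reading frames (ATG ... stop codon) on the forward
    strand in all three frames. Returns (start, end) index pairs,
    end exclusive, for ORFs of at least `min_codons` codons."""
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i          # open a candidate ORF
            elif start is not None and codon in STOPS:
                if (i + 3 - start) // 3 >= min_codons:
                    orfs.append((start, i + 3))
                start = None       # close it and keep scanning
    return orfs
```

Each predicted ORF would then be translated and, as described above, clustered and searched against Pfam for annotation.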
Exploratory Data Analysis
(2014)
In bioinformatics, the term exploratory data analysis refers to various methods for getting an overview of large biological data sets. It thus helps to create a framework for further analysis and hypothesis testing. The workflow facilitates this first important step of the analysis of data created by high-throughput technologies. The results are different plots showing the structure of the measurements. The goal of the workflow is to automate the exploratory data analysis while still guaranteeing flexibility. The basic tool is the free statistical software R.
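Although the workflow itself is built on R, the summary step of such an exploratory analysis can be sketched in Python (a minimal analogue of R's `summary()`; the plotting is omitted):

```python
import statistics

def summarize(samples):
    """Descriptive statistics for a dict mapping variable names to
    lists of measurements: the kind of first overview an
    exploratory analysis starts from."""
    return {
        name: {
            "min": min(values),
            "median": statistics.median(values),
            "mean": statistics.fmean(values),
            "max": max(values),
            "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
        }
        for name, values in samples.items()
    }
```

In the workflow these numbers would feed the structure-revealing plots (boxplots, histograms, etc.) that R produces automatically.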