004 Data processing; computer science
Year of publication
Document Type
- Article (163)
Is part of the Bibliography
- yes (163)
Keywords
- Data profiling (3)
- machine learning (3)
- Blockchains (2)
- Deep learning (2)
- General Earth and Planetary Sciences (2)
- Geography, Planning and Development (2)
- JSP (2)
- Machine learning (2)
- Runtime analysis (2)
- Twitter (2)
- Water Science and Technology (2)
- answer set programming (2)
- bibliometric analysis (2)
- citation analysis (2)
- data integration (2)
- duplicate detection (2)
- identity theory (2)
- perception of robots (2)
- social media (2)
- software engineering (2)
- (FPGA) (1)
- 3D point clouds (1)
- APX-hardness (1)
- Activity-oriented Optimization (1)
- Actor (1)
- Actor model (1)
- Advanced Video Codec (AVC) (1)
- Algebraic methods (1)
- Animal building (1)
- Application (1)
- Argument Mining (1)
- Artificial neural networks (1)
- Assessment (1)
- Attribute aggregation (1)
- Augmented and virtual reality (1)
- Augmented reality (1)
- Authentication (1)
- Autismus (1)
- Automatically controlled windows (1)
- BPMN (1)
- Bean (1)
- Bibliometrics (1)
- Bidirectional order dependencies (1)
- Big Data (1)
- Big Five model (1)
- Bitcoin (1)
- Brownian motion with discontinuous drift (1)
- Business Process Management (1)
- Business process modeling (1)
- Bystander (1)
- CCS Concepts (1)
- Calibration (1)
- Canvas (1)
- Case management (1)
- Clinical predictive modeling (1)
- Cographs (1)
- Coherent partition (1)
- Commonsense reasoning (1)
- Complexity (1)
- Compliance checking (1)
- Computational photography (1)
- Computer crime (1)
- Computergestützes Training (1)
- Conceptual modeling (1)
- Condition number (1)
- Consistency (1)
- Convolution (1)
- Covid (1)
- Critical pairs (1)
- Crowd-sourcing (1)
- Cryptography (1)
- Currencies (1)
- Customer ownership (1)
- Data dependencies (1)
- Data mining (1)
- Data modeling (1)
- Data warehouse (1)
- Data-centric (1)
- Decision support (1)
- Defining characteristics of physical computing (1)
- Delphi study (1)
- Delta preservation (1)
- Dependency discovery (1)
- Digital Game Based Learning (1)
- Digital image analysis (1)
- Digitalisierung von Produktionsprozessen (1)
- Digitalization (1)
- Distributed computing (1)
- Distributed programming (1)
- Economics (1)
- Ecosystems (1)
- Electronic and spintronic devices (1)
- Entity resolution (1)
- Enumeration algorithm (1)
- Estimation-of-distribution algorithm (1)
- Evolutionary algorithms (1)
- FPGA (1)
- Feature extraction (1)
- Feature selection (1)
- Federated learning (1)
- Field programmable gate arrays (1)
- Finite automata (1)
- Fitness-distance correlation (1)
- Formal modelling (1)
- Functional dependencies (1)
- Gene expression (1)
- Geschäftsmodell (1)
- Graph databases (1)
- Graph homomorphisms (1)
- Graph logic (1)
- Graph partitions (1)
- Graph repair (1)
- Graph transformation (1)
- H.264 (1)
- Hardware accelerator (1)
- Helmholtz problem (1)
- HiGHmed (1)
- Histograms (1)
- Human (1)
- Human-robot interaction (1)
- IHL (1)
- IHRL (1)
- IOPS (1)
- Identity management systems (1)
- Ill-conditioning (1)
- Image (1)
- Image resolution (1)
- Image-based rendering (1)
- Imperative calculi (1)
- Improving classroom (1)
- Inclusion dependencies (1)
- Indefinite (1)
- Industries (1)
- Inference (1)
- Informatikstudium (1)
- Initial conflicts (1)
- Instagram (1)
- Insurance industry (1)
- Interface design (1)
- Interpretability (1)
- Kernel (1)
- Kompetenzentwicklung (1)
- LIDAR (1)
- LSTM (1)
- Licenses (1)
- Lindenmayer systems (1)
- Loss (1)
- Low Latency (1)
- Machine Learning (1)
- Marketing (1)
- Matroids (1)
- Media in education (1)
- Metaverse (1)
- Minimal hitting set (1)
- Model repair (1)
- Model verification (1)
- Model-driven (1)
- Multi-objective optimization (1)
- Multi-sided platforms (1)
- Multimodal behavior (1)
- Mutation operators (1)
- N-of-1 trial (1)
- NUI (1)
- Natural ventilation (1)
- Nephrology (1)
- Nested graph conditions (1)
- Network clustering (1)
- O (1)
- Onlinekurse (1)
- Opinion mining (1)
- Optimization (1)
- OptoGait (1)
- Order dependencies (1)
- Ordinances (1)
- Parallelization (1)
- Pedagogical issues (1)
- Plant identification (1)
- Point-based rendering (1)
- Popular matching (1)
- Prime graphs (1)
- Prior knowledge (1)
- Privacy (1)
- Process Execution (1)
- Protocols (1)
- Query execution (1)
- Query optimization (1)
- Rainfall-runoff (1)
- Random access memory (1)
- Region of Interest (1)
- Relational data (1)
- Reproducible benchmarking (1)
- Resource Allocation (1)
- Resource Management (1)
- Reversibility (1)
- Robot personality (1)
- Run time analysis (1)
- SCED (1)
- SQL (1)
- Satisfiability (1)
- Scale-invariant feature transform (SIFT) (1)
- Second Life (1)
- Security (1)
- Semantic Web (1)
- Semiconductors (1)
- Sequential anomaly (1)
- Sharing (1)
- Signal processing (1)
- Simulations (1)
- Single event upsets (1)
- Smart cities (1)
- Social (1)
- Specification (1)
- Stable marriage (1)
- Stable matching (1)
- Stance Detection (1)
- Studentenjobs (1)
- Studienabbrecher (1)
- Studiendauer (1)
- Submodular function (1)
- Submodular functions (1)
- Subset selection (1)
- Systematics (1)
- Systems of parallel communicating (1)
- Taxonomy (1)
- Text mining (1)
- Theory (1)
- Time series (1)
- Transversal hypergraph (1)
- Type and effect systems (1)
- UX (1)
- Uncanny valley (1)
- Unique column combination (1)
- Unique column combinations (1)
- User Experience (1)
- VGG16 (1)
- VR (1)
- Validation (1)
- Value network (1)
- Visualization (1)
- Vocabulary (1)
- W[3]-Completeness (1)
- Werbung (1)
- WhatsApp (1)
- X-ray imaging (1)
- YouTube (1)
- Zebris (1)
- action and change (1)
- acyclic preferences (1)
- adaptive (1)
- algorithms (1)
- annotation (1)
- anxiety (1)
- app (1)
- approximation (1)
- architecture (1)
- architecture recovery (1)
- argumentation research (1)
- attacks (1)
- attribute assurance (1)
- authorship attribution (1)
- automata (1)
- automated planning (1)
- betriebliche Weiterbildungspraxis (1)
- big data (1)
- biomarker detection (1)
- blockchain (1)
- brand personality (1)
- business process management (1)
- business processes (1)
- cancer therapy (1)
- center dot Computing (1)
- cloud security (1)
- co-citation analysis (1)
- co-occurrence analysis (1)
- coding and information theory (1)
- cognition (1)
- cognitive load (1)
- collaboration (1)
- combined task and motion planning (1)
- competence development (1)
- complexity dichotomy (1)
- comprehension (1)
- computational thinking (1)
- computed tomography (1)
- computer science (1)
- computer vision (1)
- concurrent graph rewriting (1)
- conditions (1)
- conflicts and dependencies in (1)
- contract (1)
- conversational agents (1)
- corporate nomadism (1)
- corporate takeovers (1)
- cryptocurrency exchanges (1)
- cryptology (1)
- cyber (1)
- cyber humanistic (1)
- cyber threat intelligence (1)
- cyber-attack (1)
- cyberbullying (1)
- cyberwar (1)
- data analytics (1)
- data assimilation (1)
- data driven approaches (1)
- data migration (1)
- data pipeline (1)
- data preparation (1)
- data quality (1)
- data requirements (1)
- data structures and information theory (1)
- data transformation (1)
- data wrangling (1)
- database systems (1)
- deferred choice (1)
- dental caries classification (1)
- depression (1)
- determinism (1)
- deterministic properties (1)
- developmental systems (1)
- digital health (1)
- digital identity (1)
- digital interventions (1)
- digital nomadism (1)
- digital transformation (1)
- digital workplace transformation (1)
- digitally-enabled pedagogies (1)
- digitization of production processes (1)
- distributed systems (1)
- drift theory (1)
- educational systems (1)
- efficient deep learning (1)
- elections (1)
- emotional design (1)
- empathy (1)
- engagement (1)
- engine (1)
- engineering (1)
- ethics (1)
- evaluation (1)
- evolutionary computation (1)
- exact simulation methods (1)
- experiment (1)
- explainability (1)
- explainability-accuracy trade-off (1)
- explainable AI (1)
- exploratory programming (1)
- expression (1)
- external knowledge bases (1)
- failure model (1)
- field-programmable gate array (1)
- forensics (1)
- formal languages (1)
- formal semantics (1)
- functions (1)
- gait analysis algorithm (1)
- gender (1)
- gene (1)
- gene selection (1)
- general (1)
- gewerkschaftlich unterstützte Weiterbildungspraxis (1)
- graph languages (1)
- graph pattern matching (1)
- graph transformation (1)
- hardware accelerator (1)
- hardware architecture (1)
- health care (1)
- healthcare (1)
- home office (1)
- human–computer interaction (1)
- identity broker (1)
- image captioning (1)
- image processing (1)
- individual effects (1)
- inertial measurement unit (1)
- informal and formal learning (1)
- information diffusion (1)
- interactive technologies (1)
- international human rights (1)
- international humanitarian law (1)
- interpretable machine learning (1)
- intransitivity (1)
- iteration method (1)
- job shop scheduling (1)
- job-shop scheduling (1)
- key competences in physical computing (1)
- knowledge building (1)
- knowledge management (1)
- knowledge representation and nonmonotonic reasoning (1)
- knowledge work (1)
- labour union education (1)
- law (1)
- law and technology (1)
- learning (1)
- learning factory (1)
- literature review (1)
- localization (1)
- long-term interaction (1)
- longitudinal (1)
- machine (1)
- machine learning algorithms (1)
- manipulation planning (1)
- matrices (1)
- media (1)
- medical malpractice (1)
- memory (1)
- method comparision (1)
- methodologie (1)
- methods (1)
- metric learning (1)
- migration (1)
- mobile application (1)
- mobile learning (1)
- mobile technologies and apps (1)
- modelling (1)
- modular counting (1)
- modularity (1)
- molecular tumor board (1)
- monitoring (1)
- mood (1)
- multimedia learning (1)
- multimodal representations (1)
- mutli-task learning (1)
- mutual gaze (1)
- networks (1)
- neural (1)
- neural networks (1)
- new technologies (1)
- news media (1)
- notation (1)
- online learning (1)
- optimal transport (1)
- oracles (1)
- organisational evolution (1)
- paper prototyping (1)
- parallel processing (1)
- parallel rewriting (1)
- performance (1)
- personality prediction (1)
- personalization principle (1)
- personalized medicine (1)
- phone (1)
- physical computing tools (1)
- planning (1)
- poset (1)
- power-law (1)
- predictive models (1)
- prior knowledge (1)
- process scheduling (1)
- processes (1)
- processing (1)
- production planning and control (1)
- program (1)
- programming (1)
- programming skills (1)
- public dataset (1)
- quantified logics (1)
- random I (1)
- random graphs (1)
- randomized control trial (1)
- real-time (1)
- record linkage (1)
- recursive tuning (1)
- reliability (1)
- remodularization (1)
- remote-first (1)
- representation learning (1)
- resilient architectures (1)
- restoration (1)
- restricted parallelism (1)
- review (1)
- robustness (1)
- search plan generation (1)
- security (1)
- security chaos engineering (1)
- security risk assessment (1)
- self-adaptive multiprocessing system (1)
- self-driving (1)
- self-healing (1)
- self-sovereign identity (1)
- self-supervised learning (1)
- sentiment (1)
- signal processing (1)
- similarity learning (1)
- similarity measures (1)
- simulation (1)
- single event upset (1)
- single-case experimental design (1)
- skew Brownian motion (1)
- skew diffusions (1)
- small files (1)
- smart contracts (1)
- smoother (1)
- social media analysis (1)
- social network analysis (1)
- social networking sites (1)
- societal effects (1)
- software selection (1)
- solar particle event (1)
- space missions (1)
- spread correction (1)
- stable matching (1)
- standardization (1)
- stochastic process (1)
- strongly stable matching (1)
- super stable matching (1)
- survey mode (1)
- systematic literature review (1)
- taxonomy (1)
- teaching (1)
- teamwork (1)
- technical notes and rapid communications (1)
- terminology (1)
- text based classification methods (1)
- tort law (1)
- training (1)
- transfer learning (1)
- trust (1)
- trust model (1)
- uncanny valley (1)
- unsupervised methods (1)
- usability (1)
- user experience (1)
- virtual collaboration (1)
- virtual groups (1)
- virtual reality (1)
- virtual teams (1)
- vocational training (1)
- vulnerabilities (1)
- weakly (1)
- web application (1)
- weight (1)
- well-being (1)
- workflow patterns (1)
- workload prediction (1)
Institute
- Institut für Informatik und Computational Science (47)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (41)
- Hasso-Plattner-Institut für Digital Engineering GmbH (19)
- Fachgruppe Betriebswirtschaftslehre (12)
- Bürgerliches Recht (10)
- Wirtschaftswissenschaften (8)
- Institut für Mathematik (6)
- Institut für Biochemie und Biologie (4)
- Institut für Physik und Astronomie (4)
- Department Erziehungswissenschaft (3)
Working from home and mobile work have, as is well known, become established at many companies as a result of the Covid-19 pandemic. In most cases, however, the instruction to work from home, or its mere toleration, rested more on factual practice than on a legal basis. Such a legal basis could, however, arise from established company practice (betriebliche Übung). This article examines the legal framework for this.
There is an increasing interest in fusing data from heterogeneous sources. Combining data sources increases the utility of existing datasets, generating new information and creating services of higher quality. A central issue in working with heterogeneous sources is data migration: in order to share and process data in different engines, resource-intensive and complex movements and transformations between computing engines, services, and stores are necessary.
Muses is a distributed, high-performance data migration engine that is able to interconnect distributed data stores by forwarding, transforming, repartitioning, or broadcasting data among distributed engines' instances in a resource-, cost-, and performance-adaptive manner. As such, it performs seamless information sharing across all participating resources in a standard, modular manner. We show an overall improvement of 30% for pipelining jobs across multiple engines, even when the overhead of Muses is included in the execution time. This performance gain implies that Muses can be used to optimise large pipelines that leverage multiple engines.
As resources are valuable assets, organizations have to decide which resources to allocate to business process tasks in a way that the process is executed not only effectively but also efficiently. Traditional role-based resource allocation leads to effective process executions, since each task is performed by a resource that has the required skills and competencies to do so. However, the resulting allocations are typically not as efficient as they could be, since optimization techniques have yet to find their way into traditional business process management scenarios. On the other hand, operations research provides a rich set of analytical methods for supporting problem-specific decisions on resource allocation. This paper provides a novel framework for creating transparency on existing tasks and resources, supporting individualized allocations for each activity in a process, and offering the possibility of integrating problem-specific analytical methods from the operations research domain. To validate the framework, the paper reports on the design and prototypical implementation of a software architecture, which extends a traditional process engine with a dedicated resource management component. This component allows us to define specific resource allocation problems at design time, and it also facilitates optimized resource allocation at run time. The framework is evaluated using a real-world parcel delivery process. The evaluation shows that the quality of the allocation results increases significantly with a technique from operations research compared to the traditionally applied rule-based approach.
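The kind of operations-research method such a framework could integrate can be illustrated with a toy task-to-resource assignment problem. The cost matrix below is invented for illustration and is not taken from the paper:

```python
from itertools import permutations

# Hypothetical cost matrix: cost[task][resource] is the expected cost
# (e.g., skill mismatch plus travel time in a parcel-delivery process)
# when that resource performs that task. All values are made up.
cost = [
    [4.0, 2.0, 8.0],
    [4.0, 3.0, 7.0],
    [3.0, 1.0, 6.0],
]

def best_assignment(cost):
    """Exhaustively solve the min-cost one-resource-per-task assignment."""
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        total = sum(cost[task][res] for task, res in enumerate(perm))
        if best is None or total < best[0]:
            best = (total, perm)
    return best

total, assignment = best_assignment(cost)
print("assignment:", assignment, "total cost:", total)
```

At realistic problem sizes, a process engine would of course delegate this to a dedicated solver (e.g., the Hungarian algorithm or an integer program) rather than enumerating permutations.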
Image feature detection is a key task in computer vision. The Scale-Invariant Feature Transform (SIFT) is a prevalent and well-known algorithm for robust feature detection. However, it is computationally demanding, and software implementations cannot achieve real-time performance. In this paper, a versatile and pipelined hardware implementation is proposed that is capable of computing keypoints and rotation-invariant descriptors on-chip. All computations are performed in single-precision floating-point format, which makes it possible to implement the original algorithm with little alteration. Various rotation resolutions and filter kernel sizes are supported for images of any resolution up to ultra-high definition. Full-high-definition images can be processed at 84 fps, and ultra-high-definition images at 21 fps.
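A quick sanity check of the reported frame rates: the pixel throughput implied by both figures is identical, which is what one would expect from a fixed-clock pipelined design.

```python
# Pixel throughput implied by the reported frame rates.
full_hd = 1920 * 1080 * 84    # full HD at 84 fps
ultra_hd = 3840 * 2160 * 21   # UHD at 21 fps
print(full_hd, ultra_hd)      # both 174,182,400 pixels/s
```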
We introduce a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing graph repairs from which a user may select a repair based on non-formalized further requirements. This incremental approach features delta preservation, as it allows us to restrict the generation of graph repairs to delta-preserving ones, which do not revert the additions and deletions of the most recent consistency-violating graph update. We specify consistency of graphs using the logic of nested graph conditions, which is equivalent to first-order logic on graphs. Technically, the incremental approach encodes whether and how the graph under repair satisfies a graph condition using the novel data structure of satisfaction trees, which are adapted incrementally according to the graph updates applied. In addition to the incremental approach, we also present two state-based graph repair algorithms, which restore consistency of a graph independently of the most recent graph update and which generate additional graph repairs using a global perspective on the graph under repair. We evaluate the developed algorithms using our prototypical implementation in the tool AutoGraph, and we illustrate our incremental approach using a case study from the graph database domain.
Author summary: The use of orally inhaled drugs for treating lung diseases is appealing since they have the potential for lung selectivity, i.e., high exposure at the site of action (the lung) without excessive side effects. However, the degree of lung selectivity depends on a large number of factors, including the physicochemical properties of the drug molecules, the patient's disease state, and the inhalation device. To predict the impact of these factors on drug exposure, and thereby to understand the characteristics of an optimal drug for inhalation, we develop a predictive mathematical framework (a "pharmacokinetic model"). In contrast to previous approaches, our model allows knowledge from different sources to be combined appropriately, and it was able to adequately predict different sets of clinical data. Finally, we compare the impact of different factors and find that the most important ones are the size of the inhaled particles, the affinity of the drug to the lung tissue, and the rate of drug dissolution in the lung. Contrary to common belief, the solubility of a drug in the lining fluids is not found to be relevant. These findings are important for understanding how inhaled drugs should be designed to achieve the best treatment results in patients.

The fate of orally inhaled drugs is determined by pulmonary pharmacokinetic processes such as particle deposition, pulmonary drug dissolution, and mucociliary clearance. Even though each single process has been systematically investigated, a quantitative understanding of the interaction of these processes remains limited, and identifying optimal drug and formulation characteristics for orally inhaled drugs is therefore still challenging. To investigate this complex interplay, the pulmonary processes can be integrated into mathematical models. However, existing modeling attempts considerably simplify these processes or are not systematically evaluated against (clinical) data.
In this work, we developed a mathematical framework based on physiologically-structured population equations to integrate all relevant pulmonary processes mechanistically. A tailored numerical resolution strategy was chosen and the mechanistic model was evaluated systematically against data from different clinical studies. Without adapting the mechanistic model or estimating kinetic parameters based on individual study data, the developed model was able to predict simultaneously (i) lung retention profiles of inhaled insoluble particles, (ii) particle size-dependent pharmacokinetics of inhaled monodisperse particles, (iii) pharmacokinetic differences between inhaled fluticasone propionate and budesonide, as well as (iv) pharmacokinetic differences between healthy volunteers and asthmatic patients. Finally, to identify the most impactful optimization criteria for orally inhaled drugs, the developed mechanistic model was applied to investigate the impact of input parameters on both the pulmonary and systemic exposure. Interestingly, the solubility of the inhaled drug did not have any relevant impact on the local and systemic pharmacokinetics. Instead, the pulmonary dissolution rate, the particle size, the tissue affinity, and the systemic clearance were the most impactful potential optimization parameters. In the future, the developed prediction framework should be considered a powerful tool for identifying optimal drug and formulation characteristics.
Graphs play an important role in many areas of Computer Science. In particular, our work is motivated by model-driven software development and by graph databases. For this reason, it is very important to have the means to express and to reason about the properties that a given graph may satisfy. With this aim, in this paper we present a visual logic that allows us to describe graph properties, including navigational properties, i.e., properties about the paths in a graph. The logic is equipped with a deductive tableau method that we have proved to be sound and complete.
Compared to their inorganic counterparts, organic semiconductors suffer from relatively low charge-carrier mobilities. Therefore, expressions derived for inorganic solar cells to correlate characteristic performance parameters with material properties are prone to fail when applied to organic devices. This is especially true for the classical Shockley equation commonly used to describe current–voltage (JV) curves, as it assumes a high electrical conductivity of the charge-transporting material. Here, an analytical expression for the JV curves of organic solar cells is derived based on a previously published analytical model. This expression, bearing a similar functional dependence as the Shockley equation, delivers a new figure of merit α to express the balance between free-charge recombination and extraction in low-mobility photoactive materials. This figure of merit is shown to determine critical device parameters such as the apparent series resistance and the fill factor.
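For reference, the classical Shockley equation referred to above, written in its common single-diode form for an illuminated solar cell (with dark saturation current density $J_0$, ideality factor $n$, and photocurrent density $J_{ph}$), reads:

```latex
J(V) = J_0 \left[ \exp\!\left( \frac{qV}{n k_B T} \right) - 1 \right] - J_{ph}
```

The paper's point is that this form presupposes highly conductive transport, an assumption that breaks down for low-mobility organic absorbers.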
A simplified run time analysis of the univariate marginal distribution algorithm on LeadingOnes
(2021)
With elementary means, we prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA) optimizing the LeadingOnes benchmark function in the desirable regime with low genetic drift. If the population size is at least quasilinear, then, with high probability, the UMDA samples the optimum in a number of iterations that is linear in the problem size divided by the logarithm of the UMDA's selection rate. This improves over the previous guarantee, obtained by Dang and Lehre (2015) via the deep level-based population method, both in terms of the run time and by demonstrating further run time gains from small selection rates. Under similar assumptions, we prove a lower bound that matches our upper bound up to constant factors.
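A minimal Python sketch of the algorithm analyzed above (not the paper's code): the UMDA keeps a vector of marginal frequencies, restricted to [1/n, 1 − 1/n] to limit genetic drift, samples a population from it, and updates the frequencies from the best μ individuals.

```python
import random

def leading_ones(x):
    """LeadingOnes benchmark: number of 1-bits before the first 0."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def umda(n, lam, mu, rng, max_iters=10_000):
    """Minimal UMDA: sample lam bit strings from a product distribution,
    select the mu best, and set the frequencies to the selected means."""
    p = [0.5] * n                       # marginal frequencies
    lo, hi = 1.0 / n, 1.0 - 1.0 / n     # borders limiting genetic drift
    for iteration in range(max_iters):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=leading_ones, reverse=True)
        if leading_ones(pop[0]) == n:
            return iteration            # optimum sampled
        for i in range(n):
            freq = sum(x[i] for x in pop[:mu]) / mu
            p[i] = min(hi, max(lo, freq))
    return max_iters

n = 30
iters = umda(n, lam=8 * n, mu=2 * n, rng=random.Random(42))
print("optimum sampled in iteration", iters)
```

With a population size that is quasilinear in n, as assumed in the paper, the optimum is typically sampled after a number of iterations roughly linear in n.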
This paper describes the implementation of a workflow model for service-oriented computing of potential areas for wind turbines in jABC. By implementing a re-executable model, the manual effort of a multi-criteria site analysis can be reduced. The aim is to assess the shift of typical geoprocessing tools of geographic information systems (GIS) from the desktop to the web. The analysis is based on a vector data set and mainly uses web services of the “Center for Spatial Information Science and Systems” (CSISS). This paper discusses the effort, benefits, and problems associated with the use of these web services.
Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established hypothesis is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with considerable heterogeneity among existing studies on the hypothesis and causal evidence still limited, a final verdict on its robustness is still pending. To contribute to this ongoing debate, we conducted a week-long randomized controlled trial with N = 381 adult Instagram users recruited via Prolific. Specifically, we tested how active SNS use, operationalized as picture posting on Instagram, affects different dimensions of well-being. The results depicted a positive effect on users' positive affect but null findings for other well-being outcomes. The findings broadly align with the recent criticism of the active use hypothesis and support the call for a more nuanced view of the impact of SNSs.

Lay summary: Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established assumption is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with great diversity among the conducted studies and a lack of causal evidence, a final verdict on its viability is still pending. To contribute to this ongoing debate, we conducted a week-long experimental investigation with 381 adult Instagram users. Specifically, we tested how posting pictures on Instagram affects different aspects of well-being. The results depicted a positive effect of posting Instagram pictures on users' experienced positive emotions but no effects on other aspects of well-being.
The findings broadly align with the recent criticism of the active use hypothesis and support the call for a more nuanced view of the impact of SNSs on users.
Algorithmic management
(2022)
Analysis of protrusion dynamics in amoeboid cell motility by means of regularized contour flows
(2021)
Amoeboid cell motility is essential for a wide range of biological processes including wound healing, embryonic morphogenesis, and cancer metastasis. It relies on complex dynamical patterns of cell shape changes that pose long-standing challenges to mathematical modeling and raise a need for automated and reproducible approaches to extract quantitative morphological features from image sequences. Here, we introduce a theoretical framework and a computational method for obtaining smooth representations of the spatiotemporal contour dynamics from stacks of segmented microscopy images. Based on a Gaussian process regression we propose a one-parameter family of regularized contour flows that allows us to continuously track reference points (virtual markers) between successive cell contours. We use this approach to define a coordinate system on the moving cell boundary and to represent different local geometric quantities in this frame of reference. In particular, we introduce the local marker dispersion as a measure to identify localized membrane expansions and provide a fully automated way to extract the properties of such expansions, including their area and growth time. The methods are available as an open-source software package called AmoePy, a Python-based toolbox for analyzing amoeboid cell motility (based on time-lapse microscopy data), including a graphical user interface and detailed documentation. Due to the mathematical rigor of our framework, we envision it to be of use for the development of novel cell motility models. We mainly use experimental data of the social amoeba Dictyostelium discoideum to illustrate and validate our approach.

Author summary: Amoeboid motion is a crawling-like mode of cell migration that plays a key role in multiple biological processes such as wound healing and cancer metastasis. This type of cell motility results from expanding and simultaneously contracting parts of the cell membrane.
From fluorescence images, we obtain a sequence of points representing the cell membrane at each time step. By applying regression analysis to these sequences, we derive smooth representations, so-called contours, of the membrane. Since the number of measurements is discrete and often limited, the question arises of how to link consecutive contours with each other. In this work, we present a novel mathematical framework in which these links are described by regularized flows allowing a certain degree of concentration or stretching of neighboring reference points on the same contour. This stretching rate, the so-called local dispersion, is used to identify expansions and contractions of the cell membrane, providing a fully automated way of extracting the properties of these cell shape changes. We applied our methods to time-lapse microscopy data of the social amoeba Dictyostelium discoideum.
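A minimal sketch of the Gaussian-process smoothing step that underlies such contour representations (synthetic 1-D data; this is not AmoePy code): noisy samples of one coordinate along the contour are replaced by the GP posterior mean.

```python
import numpy as np

def rbf(a, b, length=0.1):
    """Squared-exponential kernel matrix between 1-D sample locations."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

# Synthetic, noisy samples of one contour coordinate along arc length s.
rng = np.random.default_rng(0)
s = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * s) + 0.1 * rng.standard_normal(s.size)

noise_var = 0.1**2
s_star = np.linspace(0.0, 1.0, 100)              # dense evaluation grid
K = rbf(s, s) + noise_var * np.eye(s.size)       # kernel + observation noise
mean = rbf(s_star, s) @ np.linalg.solve(K, y)    # GP posterior mean: smoothed curve

print("max deviation from true curve:",
      float(np.max(np.abs(mean - np.sin(2 * np.pi * s_star)))))
```

The regularization parameter of the paper's one-parameter family plays a role roughly analogous to the kernel length scale here: it trades smoothness against fidelity to the raw contour points.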
Answer Set Programming is enjoying increasing popularity for problem solving in various domains. While its modeling language allows us to express many complex problems in an easy way, its solving technology enables their effective resolution. In what follows, we detail some of the key factors of its success. Answer Set Programming [ASP; Brewka et al., Commun ACM 54(12):92–103 (2011)] is seeing a rapid proliferation in academia and industry due to its easy and flexible way to model and solve knowledge-intense combinatorial (optimization) problems. To this end, ASP offers a high-level modeling language paired with high-performance solving technology. As a result, ASP systems provide out-of-the-box, general-purpose search engines that allow for enumerating (optimal) solutions. These are represented as answer sets, each being a set of atoms representing a solution. The declarative approach of ASP allows a user to concentrate on a problem's specification rather than on the computational means to solve it. This makes ASP a prime candidate for rapid prototyping and an attractive tool for teaching key AI techniques, since complex problems can be expressed in a succinct and elaboration-tolerant way. This is eased by the tuning of ASP's modeling language to knowledge representation and reasoning (KRR). The resulting impact is nicely reflected by a growing range of successful applications of ASP [Erdem et al., AI Mag 37(3):53–68, 2016; Falkner et al., Industrial applications of answer set programming, Künstliche Intelligenz (2018)].
Arbeitsschutz bei Corona (Occupational health and safety during Corona)
(2020)
The use of neural networks is considered the state of the art in the field of image classification. A large number of different networks are available for this purpose which, appropriately trained, permit a high level of classification accuracy. Typically, these networks are applied to uncompressed image data, since the corresponding training was also carried out on image data of similarly high quality. However, if the image data contain image errors, the classification accuracy deteriorates drastically. This applies in particular to coding artifacts that occur due to image and video compression. Typical application scenarios are narrowband transmission channels that require video coding even though a subsequent classification is to be carried out on the receiver side. In this paper, we present a special H.264/Advanced Video Codec (AVC)-based video codec that allows certain regions of a picture to be coded with near-constant picture quality in order to enable reliable classification using neural networks, whereas the remaining image is coded at a constant bit rate. We have combined this feature with the ability to run with very low latency, which is usually also required in remote-control application scenarios. The codec has been implemented as a fully hardwired hardware architecture capable of high-definition video and suitable for field-programmable gate arrays (FPGAs).
Argument mining on twitter
(2021)
In the last decade, the field of argument mining has grown notably. However, only relatively few studies have investigated argumentation in social media and specifically on Twitter. Here, we provide what is, to our knowledge, the first critical in-depth survey of the state of the art in tweet-based argument mining. We discuss approaches to modelling the structure of arguments in the context of tweet corpus annotation, and we review current progress in the task of detecting argument components and their relations in tweets. We also survey the intersection of argument mining and stance detection, before we conclude with an outlook.
ATIB
(2021)
Identity management is a principal component of securing online services. In the evolution of traditional identity management patterns, the identity provider has remained a Trusted Third Party (TTP). The service provider and the user need to trust a particular identity provider for correct attributes, amongst other demands. This paradigm changed with the advent of blockchain-based Self-Sovereign Identity (SSI) solutions that primarily focus on the users. SSI reduces the functional scope of the identity provider to that of an attribute provider while enabling attribute aggregation. Besides that, the development of new protocols that disregard established ones, together with a significantly fragmented landscape of SSI solutions, poses considerable challenges for adoption by service providers. We propose an Attribute Trust-enhancing Identity Broker (ATIB) to leverage the potential of SSI for trust-enhancing attribute aggregation. Furthermore, ATIB abstracts from a dedicated SSI solution and offers standard protocols. Therefore, it facilitates adoption by service providers. Despite the brokered integration approach, we show that ATIB provides a high security posture. Additionally, ATIB does not compromise the ten foundational SSI principles for the users.
Based on the performance requirements of modern spatio-temporal data mining applications, in-memory database systems are often used to store and process the data. To efficiently utilize the scarce DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional indexes). However, the selection of cost and performance balancing configurations is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions. In this paper, we introduce a novel approach to jointly optimize the compression, sorting, indexing, and tiering configuration for spatio-temporal workloads. Further, we consider horizontal data partitioning, which enables the independent application of different tuning options on a fine-grained level. We propose different linear programming (LP) models addressing cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload and memory budget. To yield maintainable and robust configurations, we extend our LP-based approach to incorporate reconfiguration costs as well as a worst-case optimization for potential workload scenarios. Further, we demonstrate on a real-world dataset that our models allow us to significantly reduce the memory footprint at equal performance, or to increase the performance at equal memory size, compared to existing tuning heuristics.
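The LP models themselves are not reproduced in this abstract; as a toy stand-in for the configuration-selection problem they solve, the following sketch exhaustively picks one tuning option per partition to minimize total latency under a memory budget. The option names and cost numbers are invented purely for illustration.

```python
from itertools import product

# Hypothetical per-partition tuning options: (memory cost, query latency).
OPTIONS = {
    "uncompressed": (100, 1.0),
    "dictionary":   (40, 1.3),
    "run-length":   (25, 1.8),
}

def best_configuration(n_partitions, memory_budget):
    """Exhaustively assign one option to each partition, minimizing the
    summed latency subject to the summed memory budget (a brute-force
    stand-in for the paper's LP formulation)."""
    best, best_latency = None, float("inf")
    for combo in product(OPTIONS, repeat=n_partitions):
        mem = sum(OPTIONS[o][0] for o in combo)
        lat = sum(OPTIONS[o][1] for o in combo)
        if mem <= memory_budget and lat < best_latency:
            best, best_latency = combo, lat
    return best, best_latency
```

With a generous budget the fastest (uncompressed) option wins everywhere; shrinking the budget forces compression onto some partitions, which is exactly the trade-off the LP models navigate at scale.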
Student teachers often struggle to keep track of everything that is happening in the classroom, and particularly to notice and respond when students cause disruptions. The complexity of the classroom environment is a potential contributing factor that has not been empirically tested. In this experimental study, we utilized a virtual reality (VR) classroom to examine whether classroom complexity affects the likelihood of student teachers noticing disruptions and how they react after noticing. Classroom complexity was operationalized as the number of disruptions and the existence of overlapping disruptions (multidimensionality) as well as the existence of parallel teaching tasks (simultaneity). Results showed that student teachers (n = 50) were less likely to notice the scripted disruptions, and also less likely to respond to the disruptions in a comprehensive and effortful manner, when facing greater complexity. These results may have implications for both teacher training and the design of VR for training or research purposes. This study contributes to the field in two ways: 1) it reveals how features of the classroom environment can affect student teachers' noticing of and reaction to disruptions; and 2) it extends the functionality of the VR environment from a teacher training tool to a testbed of fundamental classroom processes that are difficult to manipulate in real life.
CloudStrike
(2020)
Most cyber-attacks and data breaches in cloud infrastructure are due to human errors and misconfiguration vulnerabilities. Cloud customer-centric tools are imperative for mitigating these issues; however, existing cloud security models are largely unable to tackle these security challenges, so novel security mechanisms are needed. We propose Risk-driven Fault Injection (RDFI) techniques to address these challenges. RDFI applies the principles of chaos engineering to cloud security and leverages feedback loops to execute, monitor, analyze and plan security fault injection campaigns based on a knowledge base. The knowledge base consists of fault models designed from secure baselines, cloud security best practices and observations derived during iterative fault injection campaigns. These observations are helpful for identifying vulnerabilities while verifying the correctness of security attributes (integrity, confidentiality and availability). Furthermore, RDFI proactively supports risk analysis and security hardening efforts by sharing security information with security mechanisms. We have designed and implemented the RDFI strategies, including various chaos engineering algorithms, as a software tool: CloudStrike. Several evaluations have been conducted with CloudStrike against infrastructure deployed on two major public cloud infrastructures: Amazon Web Services and Google Cloud Platform. The time performance increases linearly, proportional to increasing attack rates. Also, the analysis of vulnerabilities detected via security fault injection has been used to harden the security of cloud resources, demonstrating the effectiveness of the security information provided by CloudStrike. Therefore, we opine that our approaches are suitable for overcoming contemporary cloud security issues.
Coherent network partitions
(2021)
We continue to study coherent partitions of graphs whereby the vertex set is partitioned into subsets that induce biclique spanned subgraphs. The problem of identifying the minimum number of edges to obtain biclique spanned connected components (CNP), called the coherence number, is NP-hard even on bipartite graphs. Here, we propose a graph transformation geared towards obtaining an O(log n)-approximation algorithm for the CNP on a bipartite graph with n vertices. The transformation is inspired by a new characterization of biclique spanned subgraphs. In addition, we study coherent partitions on prime graphs, and show that finding coherent partitions reduces to the problem of finding coherent partitions in a prime graph. Therefore, these results provide future directions for approximation algorithms for the coherence number of a given graph.
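As a minimal illustration, assuming the plain complete-bipartite reading of "biclique", the following hypothetical Python check tests whether two vertex sides induce a biclique in a graph given as an adjacency dictionary; it is not part of the approximation algorithm described above.

```python
def is_biclique(adj, left, right):
    """Return True iff `left` and `right` induce a complete bipartite
    subgraph of the graph `adj` (dict: vertex -> set of neighbours):
    every cross pair must be adjacent, and no side may contain an edge."""
    cross = all(v in adj[u] for u in left for v in right)
    within = any(v in adj[u] for side in (left, right)
                 for u in side for v in side if u != v)
    return cross and not within
```

For instance, the 4-cycle on vertices {0, 1} versus {2, 3} is the biclique K(2,2), while adding the edge (0, 1) destroys the property.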
Solving problems combining task and motion planning requires searching across a symbolic search space and a geometric search space. Because of the semantic gap between symbolic and geometric representations, symbolic sequences of actions are not guaranteed to be geometrically feasible. This compels us to search in the combined search space, in which frequent backtracks between symbolic and geometric levels make the search inefficient. We address this problem by guiding symbolic search with rich information extracted from the geometric level through culprit detection mechanisms.
COMMIT
(2022)
Composition and functions of microbial communities affect important traits in diverse hosts, from crops to humans. Yet, mechanistic understanding of how metabolism of individual microbes is affected by the community composition and metabolite leakage is lacking. Here, we first show that the consensus of automatically generated metabolic reconstructions improves the quality of the draft reconstructions, measured by comparison to reference models. We then devise an approach for gap filling, termed COMMIT, that considers metabolites for secretion based on their permeability and the composition of the community. By applying COMMIT with two soil communities from the Arabidopsis thaliana culture collection, we could significantly reduce the gap-filling solution in comparison to filling gaps in individual reconstructions without affecting the genomic support. Inspection of the metabolic interactions in the soil communities allows us to identify microbes with community roles of helpers and beneficiaries. Therefore, COMMIT offers a versatile fully automated solution for large-scale modelling of microbial communities for diverse biotechnological applications.

Author summary: Microbial communities are important in ecology, human health, and crop productivity. However, detailed information on the interactions within natural microbial communities is hampered by the community size, lack of detailed information on the biochemistry of single organisms, and the complexity of interactions between community members. Metabolic models comprise biochemical reaction networks based on the genome annotation, and can provide mechanistic insights into community functions. Previous analyses of microbial community models have been performed with high-quality reference models or models generated using a single reconstruction pipeline. 
However, these models do not contain information on the composition of the community that determines the metabolites exchanged between the community members. In addition, the quality of metabolic models is affected by the reconstruction approach used, with direct consequences on the inferred interactions between community members. Here, we use fully automated consensus reconstructions from four approaches to arrive at functional models with improved genomic support while considering the community composition. We applied our pipeline to two soil communities from the Arabidopsis thaliana culture collection, providing only genome sequences. Finally, we show that the obtained models have 90% genomic support and demonstrate that the derived interactions are corroborated by independent computational predictions.
Local laws on urban policy, i.e., ordinances, directly affect our daily life in various ways (health, business etc.), yet in practice, for many citizens they remain impervious and complex. This article focuses on an approach to make urban policy more accessible and comprehensible to the general public and to government officials, while also addressing pertinent social media postings. Due to the intricacies of natural language, ranging from complex legalese in ordinances to informal lingo in tweets, it is practical to harness human judgment here. To this end, we mine ordinances and tweets via reasoning based on commonsense knowledge so as to better account for pragmatics and semantics in the text. Ours is pioneering work in ordinance mining, and thus there is no prior labeled training data available for learning. This gap is filled by commonsense knowledge, a prudent choice in situations involving a lack of adequate training data. The ordinance mining can be beneficial to the public in fathoming policies and to officials in assessing policy effectiveness based on public reactions. This work contributes to smart governance, leveraging transparency in governing processes via public involvement. We focus significantly on ordinances contributing to smart cities, hence an important goal is to assess how well an urban region heads towards a smart city as per the mapping of its policies to smart city characteristics, and the corresponding public satisfaction.
Comprior
(2021)
Background
Reproducible benchmarking is important for assessing the effectiveness of novel feature selection approaches applied on gene expression data, especially for prior knowledge approaches that incorporate biological information from online knowledge bases. However, no full-fledged benchmarking system exists that is extensible, provides built-in feature selection approaches, and a comprehensive result assessment encompassing classification performance, robustness, and biological relevance. Moreover, the particular needs of prior knowledge feature selection approaches, i.e. uniform access to knowledge bases, are not addressed. As a consequence, prior knowledge approaches are not evaluated amongst each other, leaving open questions regarding their effectiveness.
Results
We present the Comprior benchmark tool, which facilitates the rapid development and effortless benchmarking of feature selection approaches, with a special focus on prior knowledge approaches. Comprior is extensible by custom approaches, offers built-in standard feature selection approaches, enables uniform access to multiple knowledge bases, and provides a customizable evaluation infrastructure to compare multiple feature selection approaches regarding their classification performance, robustness, runtime, and biological relevance.
Conclusion
Comprior allows reproducible benchmarking, especially of prior knowledge approaches, which facilitates their applicability and for the first time enables a comprehensive assessment of their effectiveness.
Training socio-emotional competencies is particularly useful for people with autism. Such training can be designed effectively with the help of a game-based application. Two mini-games, Mimikry and Emo-Mahjong, were implemented and evaluated with regard to user experience. The respective concepts and the evaluation results are presented here.
In this paper, we examine conditioning of the discretization of the Helmholtz problem. Although the discrete Helmholtz problem has been studied from different perspectives, to the best of our knowledge, there is no conditioning analysis for it. We aim to fill this gap in the literature. We propose a novel method in 1D to observe the near-zero eigenvalues of a symmetric indefinite matrix. The standard classification of ill-conditioning based on the matrix condition number does not hold for the discrete Helmholtz problem. We relate the ill-conditioning of the discretization of the Helmholtz problem with the condition number of the matrix. We carry out analytical conditioning analysis in 1D and extend our observations to 2D with numerical observations. We examine several discretizations. We find different regions in which the condition number of the problem shows different characteristics. We also explain the general behavior of the solutions in these regions.
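For the 1D model problem, the eigenvalues of the standard second-order finite-difference discretization of -u'' - k²u with Dirichlet boundaries are known in closed form, which makes the indefiniteness and the near-zero eigenvalues easy to observe numerically. The sketch below is a generic textbook computation, not the paper's method.

```python
import math

def helmholtz_eigenvalues(n, k):
    """Eigenvalues of the n x n finite-difference discretization of
    -u'' - k^2 u on (0, 1) with Dirichlet boundaries: the well-known
    Laplacian eigenvalues (2 - 2 cos(j*pi*h)) / h^2 shifted by -k^2."""
    h = 1.0 / (n + 1)
    return [(2.0 - 2.0 * math.cos(j * math.pi * h)) / h**2 - k**2
            for j in range(1, n + 1)]

def condition_number(n, k):
    """2-norm condition number of the symmetric matrix: max|eig| / min|eig|."""
    eig = helmholtz_eigenvalues(n, k)
    return max(abs(e) for e in eig) / min(abs(e) for e in eig)
```

For k = 0 the matrix is positive definite; once k² exceeds the smallest Laplacian eigenvalue the matrix becomes indefinite, and choosing k² close to any discrete eigenvalue drives an eigenvalue toward zero and the condition number toward infinity, the phenomenon the conditioning analysis addresses.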
In this project I constructed a workflow that takes a DNA sequence as input and provides a phylogenetic tree, consisting of the input sequence and other sequences which were found during a database search. In this phylogenetic tree the sequences are arranged depending on similarities. In bioinformatics, constructing phylogenetic trees is often used to explore the evolutionary relationships of genes or organisms and to understand the mechanisms of evolution itself.
Correction to: Knowledge bases and software support for variant interpretation in precision oncology
(2021)
Many important graph-theoretic notions can be encoded as counting graph homomorphism problems, such as partition functions in statistical physics, in particular independent sets and colourings. In this article, we study the complexity of #pHomsToH, the problem of counting graph homomorphisms from an input graph to a graph H modulo a prime number p. Dyer and Greenhill proved a dichotomy stating that the tractability of non-modular counting graph homomorphisms depends on the structure of the target graph. Many intractable cases in non-modular counting become tractable in modular counting due to the common phenomenon of cancellation. In subsequent studies on counting modulo 2, however, the influence of the structure of H on the tractability was shown to persist, which yields similar dichotomies.

Our main result states that for every tree H and every prime p the problem #pHomsToH is either polynomial time computable or #pP-complete. This relates to the conjecture of Faben and Jerrum stating that this dichotomy holds for every graph H when counting modulo 2. In contrast to previous results on modular counting, the tractable cases of #pHomsToH are essentially the same for all values of the modulo when H is a tree. To prove this result, we study the structural properties of a homomorphism. As an important interim result, our study yields a dichotomy for the problem of counting weighted independent sets in a bipartite graph modulo some prime p. These results are the first suggesting that such dichotomies hold not only for the modulo 2 case but also for the modular counting functions of all primes p.
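For small instances, the counting problem itself is easy to state operationally; the following brute-force sketch (illustrative only, with no bearing on the dichotomy or its proof) counts homomorphisms from G to H modulo p.

```python
from itertools import product

def count_homs_mod(G_nodes, G_edges, H_nodes, H_edges, p):
    """Brute-force count of graph homomorphisms from G to H, modulo p.
    A map f is a homomorphism iff every edge (u, v) of G is mapped to
    an edge (f(u), f(v)) of H; edges are treated as undirected."""
    H_adj = set(H_edges) | {(v, u) for (u, v) in H_edges}
    count = 0
    for assignment in product(H_nodes, repeat=len(G_nodes)):
        f = dict(zip(G_nodes, assignment))
        if all((f[u], f[v]) in H_adj for u, v in G_edges):
            count += 1
    return count % p
```

For example, a single edge maps homomorphically into the triangle K3 in 3 * 2 = 6 ways, so the count is 6 mod p; the cancellation phenomenon mentioned above is precisely about such counts vanishing or collapsing modulo p.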
CovRadar
(2022)
The ongoing pandemic caused by SARS-CoV-2 emphasizes the importance of genomic surveillance to understand the evolution of the virus, to monitor the viral population, and to plan epidemiological responses. Detailed analysis, easy visualization and intuitive filtering of the latest viral sequences are powerful for this purpose. We present CovRadar, a tool for genomic surveillance of the SARS-CoV-2 Spike protein. CovRadar consists of an analytical pipeline and a web application that enable the analysis and visualization of hundreds of thousands of sequences. First, CovRadar extracts the regions of interest using local alignment, then builds a multiple sequence alignment, infers variants and consensus and finally presents the results in an interactive app, making accessing and reporting simple, flexible and fast.
In the geoinformatics field, remote sensing data is often used for analyzing the characteristics of the current investigation area. This includes Digital Elevation Models (DEMs), which are simple raster grids containing grey scales representing the respective elevation values. The project CREADED presented in this paper aims at making these monochrome raster images more significant and more intuitively interpretable. For this purpose, an executable interactive model for creating a colored and relief-shaded DEM has been designed using the jABC framework. The process is based on standard jABC-SIBs and SIBs that provide specific GIS functions, which are available as Web services, command line tools and scripts.
Creation of topographic maps
(2014)
Location analyses are among the most common tasks while working with spatial data and geographic information systems. Automating the most frequently used procedures is therefore an important aspect of improving their usability. In this context, this project aims to design and implement a workflow providing some basic tools for a location analysis. For the implementation with jABC, the workflow was applied to the problem of finding a suitable location for placing an artificial reef. For this analysis three parameters (bathymetry, slope and grain size of the ground material) were taken into account, processed, and visualized with the Generic Mapping Tools (GMT), which were integrated into the workflow as jETI-SIBs. The implemented workflow thereby showed that the approach of combining jABC with GMT results in a user-centric yet user-friendly tool with high-quality cartographic outputs.
Social networking sites (SNS) are a rich source of latent information about individual characteristics. Crawling and analyzing this content provides a new approach for enterprises to personalize services and put forward product recommendations. In the past few years, commercial brands have made a gradual appearance on social media platforms for advertisement, customer support and public relations purposes, and by now such a presence has become a necessity across all branches. This online identity can be represented as a brand personality that reflects how a brand is perceived by its customers. We exploited recent research in text analysis and personality detection to build an automatic brand personality prediction model on top of the Five-Factor Model and Linguistic Inquiry and Word Count (LIWC) features extracted from publicly available benchmarks. Predictive evaluation on brands' accounts reveals that the Facebook platform provides a slight advantage over the Twitter platform in offering more self-disclosure for users to express their emotions, especially their demographic and psychological traits. Results also confirm the wider perspective that the same social media account carries quite similar and comparable personality scores across different social media platforms. To evaluate our prediction results on actual brands' accounts, we crawled the Facebook API and Twitter API respectively for 100k posts from the most valuable brands' pages in the USA; we visualize exemplary comparison results and present suggestions for future directions.
Bitcoin is gaining traction as an alternative store of value. Its market capitalization transcends all other cryptocurrencies in the market. But its high monetary value also makes it an attractive target for cybercriminal actors. Hacking campaigns usually target an ecosystem's weakest points. In Bitcoin, the exchange platforms are one of them. Each exchange breach is a threat not only to direct victims, but to the credibility of Bitcoin's entire ecosystem. Based on an extensive analysis of 36 breaches of Bitcoin exchanges, we show the attack patterns used to exploit Bitcoin exchange platforms, using an industry standard for reporting intelligence on cyber security breaches. On this basis we provide an overview of the most common attack vectors, showing that all except three hacks were possible due to relatively lax security. We show that while the security regimen of Bitcoin exchanges is subpar compared to other financial service providers, the use of stolen credentials, which does not require any hacking, is decreasing. We also show that the amount of BTC taken during a breach is decreasing, as is the number of exchanges that terminate after being breached. Furthermore, we show that the overall security posture has improved, but still has major flaws. To discover adversarial methods post-breach, we have analyzed two cases of BTC laundering. Through this analysis we provide insight into how exchange platforms with lax cyber security further increase the intermediary risk they introduce into the Bitcoin ecosystem.
Cyber warfare is a timely and relevant issue and one of the most controversial in international humanitarian law (IHL). The aim of IHL is to set rules and limits in terms of means and methods of warfare. In this context, a key question arises: Does digital warfare have rules or limits, and if so, how do they apply? Traditional principles, developed over a long period, are facing a new dimension of challenges due to the rise of cyber warfare. This paper argues that to overcome this new issue, it is critical that new humanity-oriented approaches are developed with regard to cyber warfare. The challenge is to establish a legal regime for cyber-attacks that successfully addresses human rights norms and standards. By clarifying this from a legal perspective, the authors redesign the sensitive equilibrium between humanity and military necessity, weighing the humanitarian aims of IHL and the protection of civilians, in combination with international human rights law and other relevant legal regimes, in a different manner than before.
That technologies such as machine learning applications or big and smart data methods absolutely require data in sufficient quantity and quality now seems a truism. Against this background, the EU legislator in particular has recently discovered a new field of activity for itself, attempting in various ways to create incentives for data sharing in order to foster innovation. Among these is a constellation rather melodiously labelled "data altruism". This article presents the relevant regulatory considerations at the supranational level and offers a first analysis.
The status determination procedure (Statusfeststellungsverfahren) makes it possible, upon application to the solely competent Deutsche Rentenversicherung Bund, to obtain a binding assessment of the often complicated and consequential distinction between self-employed activity and dependent employment. On 1 April 2022, the status determination procedure was comprehensively reformed. In practice, the amendments introduced have so far proven themselves to varying degrees.
Effective query optimization is a core feature of any database management system. While most query optimization techniques make use of simple metadata, such as cardinalities and other basic statistics, other optimization techniques are based on more advanced metadata including data dependencies, such as functional, uniqueness, order, or inclusion dependencies. This survey provides an overview, intuitive descriptions, and classifications of query optimization and execution strategies that are enabled by data dependencies. We consider the most popular types of data dependencies and focus on optimization strategies that target the optimization of relational database queries. The survey helps database vendors to identify optimization opportunities and DBMS researchers to find related work and open research questions.
Through the use of next generation sequencing (NGS) technology, a lot of newly sequenced organisms are now available. Annotating those genes is one of the most challenging tasks in sequence biology. Here, we present an automated workflow to find homologue proteins, annotate sequences according to function and create a three-dimensional model.
Teaching and learning as well as administrative processes are still undergoing intensive changes with the rise of artificial intelligence (AI) technologies and their diverse application opportunities in the context of higher education. With this, scientific interest in the topic in general, but also in specific focal points, has risen as well. However, there is no structured overview of AI in teaching and administration processes in higher education institutions that allows major research topics and trends to be identified, concretizes peculiarities, and develops recommendations for further action. To close this gap, this study systematizes the current scientific discourse on AI in teaching and administration in higher education institutions. The study identified (1) an imbalance in research on AI in educational and administrative contexts, (2) an imbalance in disciplines and a lack of interdisciplinary research, (3) inequalities in cross-national research activities, as well as (4) neglected research topics and paths. In this way, it contributes a comparative analysis of AI usage in administration and in teaching and learning processes, a systematization of the state of research, an identification of research gaps, and further research paths on AI in higher education institutions.
Data errors represent a major issue in most application workflows. Before any important task can take place, a certain data quality has to be guaranteed by eliminating a number of different errors that may appear in data. Typically, most of these errors are fixed with data preparation methods, such as whitespace removal. However, the particular error of duplicate records, where multiple records refer to the same entity, is usually eliminated independently with specialized techniques. Our work is the first to bring these two areas together by applying data preparation operations under a systematic approach prior to performing duplicate detection.

Our process workflow can be summarized as follows: It begins with the user providing as input a sample of the gold standard, the actual dataset, and optionally some constraints to domain-specific data preparations, such as address normalization. The preparation selection operates in two consecutive phases. First, to vastly reduce the search space of ineffective data preparations, decisions are made based on the improvement or worsening of pair similarities. Second, using the remaining data preparations an iterative leave-one-out classification process removes preparations one by one and determines the redundant preparations based on the achieved area under the precision-recall curve (AUC-PR). Using this workflow, we manage to improve the results of duplicate detection up to 19% in AUC-PR.
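The first selection phase, keeping a preparation only if it improves pair similarities, can be caricatured in a few lines. In the sketch below, the token-Jaccard similarity, the case/whitespace normalization, and the example record pairs are all invented stand-ins for the paper's actual similarity measures and preparations.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two record strings."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def keep_preparation(prepare, duplicate_pairs):
    """Phase-one heuristic: keep a data preparation only if it raises
    the summed similarity of known duplicate pairs from the gold standard."""
    before = sum(jaccard(a, b) for a, b in duplicate_pairs)
    after = sum(jaccard(prepare(a), prepare(b)) for a, b in duplicate_pairs)
    return after > before

# A hypothetical preparation: lowercase and collapse whitespace.
normalize = lambda s: " ".join(s.lower().split())
pairs = [("Alice  Smith", "alice smith"), ("Bob Lee", "bob  LEE")]
```

Here `normalize` lifts both duplicate pairs from similarity 0 to 1 and would be kept, while the identity preparation changes nothing and would be discarded; the paper's second phase then prunes redundant survivors via leave-one-out AUC-PR.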
Defining the metaverse
(2023)
The term Metaverse is emerging as a result of the late push by multinational technology conglomerates and a recent surge of interest in Web 3.0, Blockchain, NFT, and Cryptocurrencies. From a scientific point of view, there is no definite consensus on what the Metaverse will be like. This paper collects, analyzes, and synthesizes scientific definitions and the accompanying major characteristics of the Metaverse using the methodology of a Systematic Literature Review (SLR). Two revised definitions for the Metaverse are presented, both condensing the key attributes, where the first one is rather simplistic and holistic, describing “a three-dimensional online environment in which users represented by avatars interact with each other in virtual spaces decoupled from the real physical world”. In contrast, the second definition is specified in a more detailed manner in the paper and further discussed. These comprehensive definitions offer specialized and general scholars an application within and beyond the scientific context of the system science, information system science, computer science, and business informatics, while also introducing open research challenges. Furthermore, an outlook on the social, economic, and technical implications is given, and the preconditions that are necessary for a successful implementation are discussed.
Despite a successful vaccination campaign, a fourth wave of coronavirus infections looms after the summer. Whether it materializes depends largely on how many people decide to get vaccinated against COVID-19. Vaccine is no longer in short supply, but willingness to be vaccinated is. Many employers are therefore asking what they can do to increase the vaccination rate in their companies.
The aim of our project design space exploration with answer set programming is to develop a general framework based on Answer Set Programming (ASP) that finds valid solutions to the system design problem and simultaneously performs Design Space Exploration (DSE) to find the most favorable alternatives. We leverage recent developments in ASP solving that allow for tight integration of background theories to create a holistic framework for effective DSE.
Spreadsheets are among the most commonly used file formats for data management, distribution, and analysis. Their widespread employment makes it easy to gather large collections of data, but their flexible canvas-based structure makes automated analysis difficult without heavy preparation. One of the common problems that practitioners face is the presence of multiple, independent regions in a single spreadsheet, possibly separated by repeated empty cells. We define such files as "multiregion" files. In collections of various spreadsheets, we can observe that some share the same layout. We present the Mondrian approach to automatically identify layout templates across multiple files and systematically extract the corresponding regions. Our approach is composed of three phases: first, each file is rendered as an image and inspected for elements that could form regions; then, using a clustering algorithm, the identified elements are grouped to form regions; finally, every file layout is represented as a graph and compared with others to find layout templates. We compare our method to state-of-the-art table recognition algorithms on two corpora of real-world enterprise spreadsheets. Our approach shows the best performance in detecting reliable region boundaries within each file and can correctly identify recurring layouts across files.
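A simplified take on the element-grouping idea, assuming a plain connected-components reading rather than Mondrian's actual image-rendering and clustering phases, might look as follows; the grid encoding and bounding-box output are hypothetical.

```python
def find_regions(grid):
    """Group non-empty cells into regions via 4-neighbour connected
    components and report each region's bounding box (min_row, min_col,
    max_row, max_col) - a toy sketch of region detection."""
    filled = {(r, c) for r, row in enumerate(grid)
              for c, cell in enumerate(row) if cell}
    regions, seen = [], set()
    for start in sorted(filled):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                      # depth-first flood fill
            r, c = stack.pop()
            if (r, c) in seen:
                continue
            seen.add((r, c))
            comp.add((r, c))
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in filled and nb not in seen:
                    stack.append(nb)
        rows = [r for r, _ in comp]
        cols = [c for _, c in comp]
        regions.append((min(rows), min(cols), max(rows), max(cols)))
    return regions
```

On a sheet with two blocks of values separated by empty cells, this yields two bounding boxes; comparing such per-file region sets is the starting point for detecting shared layout templates across files.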
M-rate 0L systems are interactionless Lindenmayer systems together with a function assigning to every string a set of multisets of productions that may be applied simultaneously to the string. Some questions that have been left open in the forerunner papers are examined, and the computational power of deterministic M-rate 0L systems is investigated, where also tabled and extended variants are taken into consideration.
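The parallel rewriting underlying all 0L systems is easy to sketch for the deterministic, single-production-per-symbol special case (D0L), which M-rate 0L systems generalize with multisets of simultaneously applicable productions. The rule set below is the classic algae example, used here only for illustration.

```python
def d0l_step(word, rules):
    """One derivation step of a D0L system: apply a production to every
    symbol simultaneously; symbols without a rule are copied unchanged."""
    return "".join(rules.get(ch, ch) for ch in word)

def derive(axiom, rules, steps):
    """Iterate the parallel rewriting `steps` times from the axiom."""
    word = axiom
    for _ in range(steps):
        word = d0l_step(word, rules)
    return word

# Lindenmayer's algae system: A -> AB, B -> A.
ALGAE = {"A": "AB", "B": "A"}
```

Starting from the axiom "A", successive words are "AB", "ABA", "ABAAB", with lengths following the Fibonacci numbers; an M-rate 0L system would instead choose, at each step, among sets of multisets of productions applied simultaneously.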