004 Data processing; Computer science
Document Type
- Article (336)
- Monograph/Edited Volume (166)
- Doctoral Thesis (159)
- Conference Proceeding (54)
- Postprint (50)
- Master's Thesis (10)
- Other (7)
- Preprint (3)
- Part of a Book (2)
- Bachelor Thesis (1)
Language
- English (596)
- German (192)
- Multiple languages (2)
Keywords
- Informatik (21)
- machine learning (19)
- Didaktik (15)
- Hochschuldidaktik (14)
- Ausbildung (13)
- answer set programming (13)
- Cloud Computing (12)
- cloud computing (12)
- Hasso-Plattner-Institut (10)
- maschinelles Lernen (10)
- Forschungsprojekte (9)
- Future SOC Lab (9)
- Hasso Plattner Institute (9)
- In-Memory Technologie (9)
- Multicore Architekturen (9)
- E-Learning (8)
- Forschungskolleg (8)
- Klausurtagung (8)
- Machine Learning (8)
- Maschinelles Lernen (8)
- Service-oriented Systems Engineering (8)
- Modellierung (7)
- multicore architectures (7)
- openHPI (7)
- research projects (7)
- social media (7)
- Datenintegration (6)
- Geschäftsprozessmanagement (6)
- Prozessmodellierung (6)
- Smalltalk (6)
- Visualisierung (6)
- business process management (6)
- cloud (6)
- Antwortmengenprogrammierung (5)
- Big Data (5)
- Computer Science Education (5)
- Datenschutz (5)
- Identitätsmanagement (5)
- MOOCs (5)
- Ph.D. retreat (5)
- Verifikation (5)
- cyber-physical systems (5)
- data integration (5)
- digital education (5)
- digitale Bildung (5)
- higher education (5)
- identity management (5)
- programming (5)
- quantitative analysis (5)
- security (5)
- service-oriented systems engineering (5)
- verification (5)
- virtual machines (5)
- visualization (5)
- Betriebssysteme (4)
- Design Thinking (4)
- Digitalisierung (4)
- In-Memory technology (4)
- Informatics Education (4)
- Informatikdidaktik (4)
- Infrastruktur (4)
- Künstliche Intelligenz (4)
- Privacy (4)
- Research School (4)
- Semantic Web (4)
- Sicherheit (4)
- Virtualisierung (4)
- Virtuelle Maschinen (4)
- Vorhersage (4)
- artificial intelligence (4)
- blockchain (4)
- digitalization (4)
- education (4)
- evaluation (4)
- graph transformation (4)
- image processing (4)
- in-memory technology (4)
- innovation (4)
- künstliche Intelligenz (4)
- middleware (4)
- nested graph conditions (4)
- operating systems (4)
- privacy (4)
- probabilistic timed systems (4)
- process mining (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- research school (4)
- self-sovereign identity (4)
- smart contracts (4)
- software engineering (4)
- 3D visualization (3)
- Algorithmen (3)
- Answer Set Programming (3)
- BPMN (3)
- Bildverarbeitung (3)
- Blockchains (3)
- CityGML (3)
- Cloud (3)
- Competence Measurement (3)
- Computer Networks (3)
- Computernetzwerke (3)
- DPLL (3)
- Data profiling (3)
- Datenanalyse (3)
- Datenbank (3)
- Datenbanken (3)
- Graphtransformationen (3)
- HCI (3)
- IPv4 (3)
- IPv6 (3)
- Identität (3)
- Informatics (3)
- Informatikstudium (3)
- Informationsextraktion (3)
- Infrastructure (3)
- Innovation (3)
- Internet Protocol (3)
- Internet of Things (3)
- JSP (3)
- Klassifikation (3)
- Kompetenzen (3)
- Laufzeitmodelle (3)
- Lively Kernel (3)
- MOOC (3)
- Mensch-Computer-Interaktion (3)
- Middleware (3)
- Model Synchronisation (3)
- Model Transformation (3)
- Model-Driven Engineering (3)
- Modeling (3)
- Navigation (3)
- Network Politics (3)
- Netzpolitik (3)
- Online-Lernen (3)
- Onlinekurs (3)
- Optimization (3)
- Ph.D. Retreat (3)
- Process Mining (3)
- Process Modeling (3)
- SAT (3)
- Secondary Education (3)
- Softwareentwicklung (3)
- Tele-Lab (3)
- Tele-Teaching (3)
- Tripel-Graph-Grammatik (3)
- Werkzeuge (3)
- abstraction (3)
- algorithms (3)
- bibliometric analysis (3)
- business processes (3)
- citation analysis (3)
- clustering (3)
- collaboration (3)
- computational thinking (3)
- computer science (3)
- computer vision (3)
- conference (3)
- data preparation (3)
- data profiling (3)
- database systems (3)
- debugging (3)
- didactics (3)
- distributed systems (3)
- duplicate detection (3)
- geospatial data (3)
- graph transformation systems (3)
- informatics (3)
- model (3)
- model-driven engineering (3)
- modellgetriebene Entwicklung (3)
- models (3)
- non-photorealistic rendering (3)
- outlier detection (3)
- prediction (3)
- real-time (3)
- simulation (3)
- systems biology (3)
- tele-TASK (3)
- tools (3)
- trust (3)
- user experience (3)
- virtual reality (3)
- virtualization (3)
- virtuelle Maschinen (3)
- 3D point clouds (2)
- 3D-Geovisualisierung (2)
- 3D-Punktwolken (2)
- 3D-Stadtmodelle (2)
- 3D-Visualisierung (2)
- ACINQ (2)
- ASIC (2)
- AUTOSAR (2)
- Abstraktion (2)
- Algorithmic Game Theory (2)
- Algorithmische Spieltheorie (2)
- Algorithms (2)
- Analyse (2)
- Anomalieerkennung (2)
- Artificial Intelligence (2)
- Aspektorientierte Softwareentwicklung (2)
- Assessment (2)
- Assoziationsregeln (2)
- Asynchrone Schaltung (2)
- Augmented reality (2)
- Australian securities exchange (2)
- Authentifizierung (2)
- Automatisches Beweisen (2)
- BCCC (2)
- BPM (2)
- BTC (2)
- Bayesian networks (2)
- Bibliometrics (2)
- BitShares (2)
- Bitcoin (2)
- Bitcoin Core (2)
- Blended Learning (2)
- Blockchain (2)
- Blockchain Auth (2)
- Blockchain-Konsortium R3 (2)
- Blockkette (2)
- Blockstack (2)
- Blockstack ID (2)
- Bluemix-Plattform (2)
- Blöcke (2)
- Bounded Model Checking (2)
- Business Process Management (2)
- Byzantine Agreement (2)
- CSC (2)
- CSCW (2)
- Classification (2)
- Cloud-Sicherheit (2)
- Cloud-Speicher (2)
- Clusteranalyse (2)
- Code (2)
- Colored Coins (2)
- Competence Modelling (2)
- Computational thinking (2)
- Computer Science (2)
- Computergrafik (2)
- Computersicherheit (2)
- Computing (2)
- Constraint Solving (2)
- DAO (2)
- DPoS (2)
- Data Integration (2)
- Data Modeling (2)
- Data Privacy (2)
- Data Profiling (2)
- Databases (2)
- Datenaufbereitung (2)
- Datenbanksysteme (2)
- Datenmodellierung (2)
- Datenqualität (2)
- Deduction (2)
- Deep learning (2)
- Delegated Proof-of-Stake (2)
- Delphi study (2)
- Distributed Proof-of-Research (2)
- Diversity (2)
- Duplikaterkennung (2)
- E-Wallet (2)
- ECDSA (2)
- EEG (2)
- Echtzeit (2)
- Echtzeit-Rendering (2)
- Economics (2)
- Electronic and spintronic devices (2)
- Eris (2)
- Ether (2)
- Ethereum (2)
- European Bioinformatics Institute (2)
- European Union (2)
- Europäische Union (2)
- Evolution (2)
- Exploration (2)
- FMC (2)
- FPGA (2)
- Federated Byzantine Agreement (2)
- Feedback (2)
- Fehlende Daten (2)
- Fehlertoleranz (2)
- FollowMyVote (2)
- Fork (2)
- Formale Verifikation (2)
- GPU (2)
- Game Dynamics (2)
- General Earth and Planetary Sciences (2)
- Geodaten (2)
- Geography, Planning and Development (2)
- Graphentransformationssysteme (2)
- Graphtransformationssysteme (2)
- Gridcoin (2)
- HPI Schul-Cloud (2)
- Hard Fork (2)
- Hashed Timelock Contracts (2)
- Hauptspeicherdatenbank (2)
- Hochschullehre (2)
- ICA (2)
- ICT (2)
- ICT competencies (2)
- ISSEP (2)
- IT-Infrastruktur (2)
- IT-infrastructure (2)
- In-Memory (2)
- Informatics Modelling (2)
- Informatics System Application (2)
- Informatics System Comprehension (2)
- Internet (2)
- Internet Service Provider (2)
- Internet der Dinge (2)
- IoT (2)
- Japanese Blockchain Consortium (2)
- Japanisches Blockchain-Konsortium (2)
- Java (2)
- Kette (2)
- Key Competencies (2)
- Klausellernen (2)
- Knowledge Representation and Reasoning (2)
- Kollaborationen (2)
- Kompetenz (2)
- Konferenz (2)
- Konsensalgorithmus (2)
- Konsensprotokoll (2)
- Learning Analytics (2)
- Lightning Network (2)
- Link-Entdeckung (2)
- Live-Programmierung (2)
- Lock-Time-Parameter (2)
- Logic Programming (2)
- Logics (2)
- MERLOT (2)
- Machine learning (2)
- Measurement (2)
- Megamodell (2)
- Metaverse (2)
- Micropayment-Kanäle (2)
- Microsoft Azure (2)
- Model Synchronization (2)
- Modell (2)
- Modellprüfung (2)
- Mustererkennung (2)
- NASDAQ (2)
- NameID (2)
- Namecoin (2)
- Off-Chain-Transaktionen (2)
- Onename (2)
- Online Course (2)
- Online-Learning (2)
- Ontologie (2)
- OpenBazaar (2)
- Oracles (2)
- Orphan Block (2)
- P2P (2)
- Patterns (2)
- Peer-to-Peer Netz (2)
- Peercoin (2)
- Planning (2)
- PoB (2)
- PoS (2)
- PoW (2)
- Preis der Anarchie (2)
- Price of Anarchy (2)
- Privatsphäre (2)
- Problem Solving (2)
- Process (2)
- Programmieren (2)
- Programmierung (2)
- Proof-of-Burn (2)
- Proof-of-Stake (2)
- Proof-of-Work (2)
- Prozess (2)
- Python (2)
- RDF (2)
- Relevanz (2)
- Ressourcenoptimierung (2)
- Ripple (2)
- Runtime analysis (2)
- SCP (2)
- SHA (2)
- SPV (2)
- SQL (2)
- STG decomposition (2)
- STG-Dekomposition (2)
- Satisfiability (2)
- Schule (2)
- Schwierigkeitsgrad (2)
- Second Life (2)
- Semiconductors (2)
- Service-Oriented Architecture (2)
- Service-Orientierte Architekturen (2)
- Simplified Payment Verification (2)
- Skalierbarkeit der Blockchain (2)
- Slock.it (2)
- Social (2)
- Soft Fork (2)
- Software Engineering (2)
- Softwarearchitektur (2)
- Steemit (2)
- Stellar Consensus Protocol (2)
- Storj (2)
- Studie (2)
- SysML (2)
- Systematics (2)
- Systemsoftware (2)
- Systemstruktur (2)
- Taxonomy (2)
- Teamarbeit (2)
- Temporallogik (2)
- Texturen (2)
- The Bitfury Group (2)
- The DAO (2)
- Theorembeweisen (2)
- Theoretische Informatik (2)
- Theory (2)
- Timed Automata (2)
- Transaktion (2)
- Twitter (2)
- Two-Way-Peg (2)
- UX (2)
- Unifikation (2)
- Unspent Transaction Output (2)
- User Experience (2)
- VM (2)
- Verhalten (2)
- Verlässlichkeit (2)
- Versionsverwaltung (2)
- Verträge (2)
- Virtual Reality (2)
- Visualization (2)
- Water Science and Technology (2)
- Watson IoT (2)
- YouTube (2)
- Zielvorgabe (2)
- Zookos Dreieck (2)
- Zookos triangle (2)
- adaptive (2)
- adaptive Systeme (2)
- adaptive systems (2)
- altchain (2)
- alternative chain (2)
- anomaly detection (2)
- anxiety (2)
- architecture (2)
- atomic swap (2)
- attribute assurance (2)
- authentication (2)
- authorship attribution (2)
- batch processing (2)
- bidirectional payment channels (2)
- big data (2)
- big data services (2)
- bitcoins (2)
- blockchain consortium (2)
- blockchain-übergreifend (2)
- blocks (2)
- bluemix platform (2)
- bounded model checking (2)
- causal discovery (2)
- causal structure learning (2)
- chain (2)
- classification (2)
- cloud security (2)
- co-citation analysis (2)
- co-occurrence analysis (2)
- code (2)
- competence (2)
- competencies (2)
- complexity (2)
- comprehension (2)
- computer graphics (2)
- computer science education (2)
- computer security (2)
- confirmation period (2)
- consensus algorithm (2)
- consensus protocol (2)
- consistency (2)
- contest period (2)
- continuous integration (2)
- contracts (2)
- conversational agents (2)
- cross-chain (2)
- cyber-physische Systeme (2)
- cyberbullying (2)
- data (2)
- data analytics (2)
- data mining (2)
- data quality (2)
- data wrangling (2)
- decentralized autonomous organization (2)
- deep learning (2)
- deferred choice (2)
- dependability (2)
- depression (2)
- design thinking (2)
- dezentrale autonome Organisation (2)
- difficulty (2)
- difficulty target (2)
- digital enlightenment (2)
- digital health (2)
- digital identity (2)
- digital learning platform (2)
- digital sovereignty (2)
- digital transformation (2)
- digital whiteboard (2)
- digitale Aufklärung (2)
- digitale Lernplattform (2)
- digitale Souveränität (2)
- direct manipulation (2)
- doppelter Hashwert (2)
- double hashing (2)
- dynamic reconfiguration (2)
- e-Learning (2)
- e-learning (2)
- empathy (2)
- engagement (2)
- exploratory programming (2)
- fault tolerance (2)
- federated voting (2)
- formal semantics (2)
- functional dependencies (2)
- funktionale Abhängigkeiten (2)
- gender (2)
- geovisualization (2)
- graph constraints (2)
- hashrate (2)
- human computer interaction (2)
- identity (2)
- identity theory (2)
- image stylization (2)
- inclusion dependencies (2)
- incremental graph pattern matching (2)
- index selection (2)
- individual effects (2)
- informatics education (2)
- information extraction (2)
- intelligente Verträge (2)
- inter-chain (2)
- interactive technologies (2)
- intrusion detection (2)
- job shop scheduling (2)
- k-inductive invariant checking (2)
- kausale Entdeckung (2)
- kausales Strukturlernen (2)
- key competencies (2)
- knowledge management (2)
- knowledge representation and nonmonotonic reasoning (2)
- kontinuierliche Integration (2)
- law (2)
- learning (2)
- lebenslanges Lernen (2)
- ledger assets (2)
- lifelong learning (2)
- live programming (2)
- liveness (2)
- logic programming (2)
- longitudinal (2)
- maschinelles Sehen (2)
- memory (2)
- merged mining (2)
- merkle root (2)
- method comparison (2)
- micropayment (2)
- micropayment channels (2)
- miner (2)
- mining (2)
- mining hardware (2)
- minting (2)
- missing data (2)
- mobile (2)
- mobile mapping (2)
- model checking (2)
- model transformation (2)
- modeling (2)
- modelling (2)
- monitoring (2)
- navigation (2)
- networks (2)
- neural networks (2)
- news media (2)
- nonce (2)
- off-chain transaction (2)
- oracles (2)
- parallel processing (2)
- peer-to-peer network (2)
- pegged sidechains (2)
- perception of robots (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- production planning and control (2)
- propositional satisfiability (2)
- quorum slices (2)
- real-time systems (2)
- relevance (2)
- representation learning (2)
- rootstock (2)
- runtime models (2)
- scalability of blockchain (2)
- scarce tokens (2)
- schema discovery (2)
- search (2)
- selection (2)
- self-driving (2)
- service-oriented systems (2)
- sidechain (2)
- smalltalk (2)
- social network analysis (2)
- societal effects (2)
- software development (2)
- solver (2)
- standardization (2)
- stochastic Petri nets (2)
- stochastische Petri Netze (2)
- synchronization (2)
- systematic literature review (2)
- systems of systems (2)
- systems software (2)
- taxonomy (2)
- teacher training (2)
- teamwork (2)
- testing (2)
- text based classification methods (2)
- textures (2)
- theorem (2)
- tiefes Lernen (2)
- timed automata (2)
- topics (2)
- transaction (2)
- triple graph grammars (2)
- typed attributed graphs (2)
- usability (2)
- user interaction (2)
- user-generated content (2)
- verschachtelte Graphbedingungen (2)
- version control (2)
- virtual 3D city models (2)
- virtuelle 3D-Stadtmodelle (2)
- vocational training (2)
- wearables (2)
- workflow patterns (2)
- "Big Data"-Dienste (1)
- 'Peer To Peer' (1)
- (FPGA) (1)
- 0-day (1)
- 21st century skills (1)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3D Drucken (1)
- 3D Linsen (1)
- 3D Point Clouds (1)
- 3D Semiotik (1)
- 3D Visualisierung (1)
- 3D city model (1)
- 3D city models (1)
- 3D computer graphics (1)
- 3D geovisualisation (1)
- 3D geovisualization (1)
- 3D lenses (1)
- 3D point cloud (1)
- 3D portrayal (1)
- 3D printing (1)
- 3D semiotics (1)
- 3D-Punktwolke (1)
- 3D-Rendering (1)
- 3D-Stadtmodell (1)
- 3DCityDB (1)
- 3d city models (1)
- 47A52 (1)
- 65R20 (1)
- 65R32 (1)
- 78A46 (1)
- ABRACADABRA (1)
- ADFS (1)
- AMNET (1)
- APT (1)
- APX-hardness (1)
- ARCS Modell (1)
- Abbrecherquote (1)
- Abhängigkeiten (1)
- Abstraktion von Geschäftsprozessmodellen (1)
- Accepting Grammars (1)
- Achievement (1)
- Ackerschmalwand (1)
- Active Directory Federation Services (1)
- Active Evaluation (1)
- Activity Theory (1)
- Activity-orientated Learning (1)
- Activity-oriented Optimization (1)
- Actor (1)
- Actor model (1)
- Adaption (1)
- Adaptive hypermedia (1)
- Advanced Persistent Threats (1)
- Advanced Video Codec (AVC) (1)
- Adversarial Learning (1)
- Agile (1)
- Agilität (1)
- Aktive Evaluierung (1)
- Aktivitäten (1)
- Akzeptierende Grammatiken (1)
- Alcohol Use Disorders Identification Test (1)
- Alcohol use assessment (1)
- Algebraic methods (1)
- Algorithmenablaufplanung (1)
- Algorithmenkonfiguration (1)
- Algorithmenselektion (1)
- Alignment (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Andere Fachrichtungen (1)
- Android Security (1)
- Anerkennung (1)
- Anfrageoptimierung (1)
- Anfragepaare (1)
- Anfragesprache (1)
- Angewandte Spieltheorie (1)
- Angriffe (1)
- Angriffserkennung (1)
- Animal building (1)
- Anisotroper Kuwahara Filter (1)
- Anleitung (1)
- Anomalien (1)
- Anrechnung (1)
- Antwortmengen Programmierung (1)
- Anwendungsvirtualisierung (1)
- Application (1)
- Application Server (1)
- Applied Game Theory (1)
- Apriori (1)
- Arabidopsis thaliana (1)
- Architektur (1)
- Architekturadaptation (1)
- Archivanalyse (1)
- Arduino (1)
- Argument Mining (1)
- Artem Erkomaishvili (1)
- Artificial neural networks (1)
- Arzt-Patient-Beziehung (1)
- Aspect-Oriented Programming (1)
- Aspect-oriented Programming (1)
- Aspektorientierte Programmierung (1)
- Association Rule Mining (1)
- Asynchronous circuit (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Attribute aggregation (1)
- Attributsicherung (1)
- Audience Response Systeme (1)
- Aufzählung (1)
- Augmented Reality (1)
- Augmented and virtual reality (1)
- Ausführung von Modellen (1)
- Ausführungsgeschichte (1)
- Ausführungssemantiken (1)
- Ausreissererkennung (1)
- Ausreißererkennung (1)
- Austria (1)
- Auswirkungen (1)
- Authentication (1)
- Authorization (1)
- Autismus (1)
- Automated Theorem Proving (1)
- Automatically controlled windows (1)
- Autorisierung (1)
- BCH (1)
- BCI (1)
- BSS (1)
- Bachelor (1)
- Bachelorstudierende der Informatik (1)
- Bachelorstudium (1)
- Bahnwesen (1)
- Bank (1)
- Barrierefreiheit (1)
- Basic Service (1)
- Basic Storage Anbieter (1)
- Batchprozesse (1)
- Batchverarbeitung (1)
- Baumweite (1)
- Bayes'sche Netze (1)
- Bayessche Netze (1)
- Bean (1)
- Bedingte Inklusionsabhängigkeiten (1)
- Bedrohungserkennung (1)
- Behavior (1)
- Behavior change (1)
- Behaviour Analysis (1)
- Benutzerinteraktion (1)
- Benutzeroberfläche (1)
- Berufsausbildung (1)
- Berührungseingaben (1)
- Beschränkungen und Abhängigkeiten (1)
- Betrachtungsebenen (1)
- Beweisaufgaben (1)
- Beweistheorie (1)
- Bidirectional order dependencies (1)
- Big Five model (1)
- Bilddatenanalyse (1)
- Bildung (1)
- Binäres Entscheidungsdiagramm (1)
- Bioacoustics (1)
- Bioakustik (1)
- Biocomputing (1)
- Bioelektrisches Signal (1)
- Bioinformatik (1)
- Biometrie (1)
- Bisimulation (1)
- Blended learning (1)
- Blockheizkraftwerke (1)
- Bloom’s Taxonomy (1)
- Boolean constraint solver (1)
- Bounded Backward Model Checking (1)
- Brain Computer Interface (1)
- Brownian motion with discontinuous drift (1)
- Business Process Models (1)
- Business process modeling (1)
- Bystander (1)
- C++ tool (1)
- C-Test (1)
- CCS Concepts (1)
- CEP (1)
- COVID-19 (1)
- CS Ed Research (1)
- CS at school (1)
- CS concepts (1)
- CS curriculum (1)
- Cactus (1)
- Calibration (1)
- Canvas (1)
- Capability approach (1)
- Carrera Digital D132 (1)
- Case Management (1)
- Case management (1)
- Challenges (1)
- Change Management (1)
- Choreographien (1)
- Citymodel (1)
- Clause Learning (1)
- Clinical predictive modeling (1)
- Cloud Datenzentren (1)
- Cloud computing (1)
- Clustering (1)
- Coccinelle (1)
- Codeverständnis (1)
- Codierung (1)
- Cognitive Skills (1)
- Cographs (1)
- Coherent partition (1)
- Common Spatial Pattern (1)
- Commonsense reasoning (1)
- Community analysis (1)
- Comparing programming environments (1)
- Competences (1)
- Competencies (1)
- Complementary Circuits (1)
- Complexity (1)
- Compliance (1)
- Compliance checking (1)
- Composition (1)
- Compound Values (1)
- Computation Tree Logic (1)
- Computational Complexity (1)
- Computational Hardness (1)
- Computational Photography (1)
- Computational Thinking (1)
- Computational photography (1)
- Computer Science in Context (1)
- Computer crime (1)
- Computergestützes Training (1)
- Conceptual (1)
- Conceptual modeling (1)
- Condition number (1)
- Conditional Inclusion Dependency (1)
- Conformance Überprüfung (1)
- Consistency (1)
- Constraint (1)
- Constraint-Programmierung (1)
- Constraints (1)
- Constructive solid geometry (1)
- Contest (1)
- Context-oriented Programming (1)
- Contextualisation (1)
- Contracts (1)
- Contradictions (1)
- Controlled Derivations (1)
- Controller-Resynthese (1)
- Convolution (1)
- Course development (1)
- Course marketing (1)
- Course of Study (1)
- Courses for female students (1)
- Covariate Shift (1)
- Covid (1)
- Creative (1)
- Crime mapping (1)
- Critical pairs (1)
- Crowd-sourcing (1)
- Cryptography (1)
- Currencies (1)
- Curricula Development (1)
- Curriculum (1)
- Curriculum Development (1)
- Curriculum analysis (1)
- Customer ownership (1)
- Cyber-Physical Systems (1)
- Cyber-Physical-Systeme (1)
- Cyber-Sicherheit (1)
- Cyber-physical-systems (1)
- Cyber-physikalische Systeme (1)
- DBMS (1)
- DDoS (1)
- DNA (1)
- DNA computing (1)
- DNS (1)
- Data Analysis (1)
- Data Dependency (1)
- Data Literacy (1)
- Data Management (1)
- Data Quality (1)
- Data Science (1)
- Data Structure Optimization (1)
- Data Warehouse (1)
- Data dependencies (1)
- Data integration (1)
- Data mining (1)
- Data modeling (1)
- Data warehouse (1)
- Data-Mining (1)
- Data-Science (1)
- Data-centric (1)
- Database (1)
- Database Cost Model (1)
- Dateistruktur (1)
- Daten (1)
- Datenabhängigkeiten (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenbank-Kostenmodell (1)
- Datenbankoptimierung (1)
- Datenextraktion (1)
- Datenflusskorrektheit (1)
- Datenkorrektheit (1)
- Datenmodelle (1)
- Datenobjekte (1)
- Datenreinigung (1)
- Datensatz (1)
- Datenschutz-sicherer Einsatz in der Schule (1)
- Datensicht (1)
- Datenstrukturoptimierung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenvertraulichkeit (1)
- Datenverwaltung für Daten mit räumlich-zeitlichem Bezug (1)
- Datenvisualisierung (1)
- Datenzustände (1)
- Deadline-Verbreitung (1)
- Debugging (1)
- Decision support (1)
- Deep Learning (1)
- Defining characteristics of physical computing (1)
- Degenerationsprozesse (1)
- Dekubitus (1)
- Delphine (1)
- Delta preservation (1)
- Dempster-Shafer-Theorie (1)
- Dempster–Shafer theory (1)
- Denkweise (1)
- Dependency discovery (1)
- Description Logics (1)
- Design-Forschung (1)
- Deskriptive Logik (1)
- Deurema Modellierungssprache (1)
- Diagonalisierung (1)
- Didaktik der Informatik (1)
- Didaktische Konzepte (1)
- Dienstkomposition (1)
- Dienstplattform (1)
- Differential Privacy (1)
- Differenz von Gauss Filtern (1)
- Digital Competence (1)
- Digital Education (1)
- Digital Engineering (1)
- Digital Game Based Learning (1)
- Digital Revolution (1)
- Digital image analysis (1)
- Digitale Transformation (1)
- Digitale Whiteboards (1)
- Digitalisierung von Produktionsprozessen (1)
- Digitalization (1)
- Digitalstrategie (1)
- Direkte Manipulation (1)
- Disambiguierung (1)
- Discrimination Networks (1)
- Diskussionskultur (1)
- Dispositional learning analytics (1)
- Distanzlehre (1)
- Distributed Computing (1)
- Distributed computing (1)
- Distributed programming (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Diversität (1)
- Dolphins (1)
- Domänenspezifische Modellierung (1)
- Dreidimensionale Computergraphik (1)
- Dubletten (1)
- Duplicate Detection (1)
- Durchlässigkeit (1)
- Dynamic Programming (1)
- Dynamic Type System (1)
- Dynamic assessment (1)
- Dynamische Programmierung (1)
- Dynamische Rekonfiguration (1)
- Dynamische Typ Systeme (1)
- EHR (1)
- EPA (1)
- Early Literacy (1)
- Echtzeitanwendung (1)
- Echtzeitsysteme (1)
- Ecosystems (1)
- Educational Standards (1)
- Educational software (1)
- Effizienz (1)
- Einbruchserkennung (1)
- Eingabegenauigkeit (1)
- Eingebettete Systeme (1)
- Eisenbahnnetz (1)
- Elektroencephalographie (1)
- Elektronische Patientenakte (1)
- Embedded Systems (1)
- Emotionen (1)
- Emotionsforschung (1)
- Empfehlungen (1)
- Empirische Untersuchung (1)
- Endpunktsicherheit (1)
- Energieeffizienz (1)
- Energiesparen (1)
- Enterprise Search (1)
- Entity resolution (1)
- Entitätsverknüpfung (1)
- Entscheidungsbäume (1)
- Entscheidungsfindung (1)
- Entscheidungsmanagement (1)
- Entscheidungsmodelle (1)
- Entwicklungswerkzeuge (1)
- Entwurf (1)
- Entwurfsmuster (1)
- Entwurfsmuster für SOA-Sicherheit (1)
- Entwurfsraumexploration (1)
- Enumeration algorithm (1)
- Equilibrium logic (1)
- Ereignisabstraktion (1)
- Ereignisse (1)
- Erfahrungsbericht (1)
- Erfolgsmessung (1)
- Erfüllbarkeit einer Formel der Aussagenlogik (1)
- Erfüllbarkeitsanalyse (1)
- Erfüllbarkeitsproblem (1)
- Erkennen von Meta-Daten (1)
- Erkennung von Metadaten (1)
- Error Estimation (1)
- Error-Detection Circuits (1)
- Erweiterte Realität (1)
- Escherichia-coli (1)
- Estimation-of-distribution algorithm (1)
- Ethics (1)
- Euclid’s algorithm (1)
- Evaluation (1)
- Evidenztheorie (1)
- Evolution in MDE (1)
- Evolutionary algorithms (1)
- Execution Semantics (1)
- Explorative Datenanalyse (1)
- Exponential Time Hypothesis (1)
- Exponentialzeit Hypothese (1)
- Extract-Transform-Load (ETL) (1)
- FIDO (1)
- FMC-QE (1)
- FOSS (1)
- FRP (1)
- Facebook (1)
- Fachinformatik (1)
- Fachinformatiker (1)
- Fallmanagement (1)
- Fallstudie (1)
- Feature Combination (1)
- Feature extraction (1)
- Feature selection (1)
- Federated learning (1)
- Feedback Loop Modellierung (1)
- Feedback Loops (1)
- Fehlerbeseitigung (1)
- Fehlererkennung (1)
- Fehlerinjektion (1)
- Fehlerkorrektur (1)
- Fehlerschätzung (1)
- Fehlersuche (1)
- Fehlvorstellung (1)
- Fernerkundung (1)
- Fertigkeiten (1)
- Fertigung (1)
- Fibonacci numbers (1)
- Field programmable gate arrays (1)
- Finite automata (1)
- Fintech (1)
- Fitness-distance correlation (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- Formal modelling (1)
- Formale Sprachen und Automaten (1)
- Formative assessment (1)
- Formeln der quantifizierten Aussagenlogik (1)
- Forschung (1)
- Fredholm complexes (1)
- Function (1)
- Functional Lenses (1)
- Functional dependencies (1)
- Fundamental Ideas (1)
- Fundamental Modeling Concepts (1)
- Fußgängernavigation (1)
- GIS (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- Game-based learning (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebäudemodelle (1)
- Gehirn-Computer-Schnittstelle (1)
- Geländemodelle (1)
- Gender (1)
- Gene expression (1)
- General subject “Information” (1)
- Generalisierung (1)
- Generalized Discrimination Networks (1)
- Geometrieerzeugung (1)
- Georgian chant (1)
- Georgische liturgische Gesänge (1)
- Geovisualisierung (1)
- German schools (1)
- Geschäftsanwendungen (1)
- Geschäftsmodell (1)
- Geschäftsprozessarchitekturen (1)
- Geschäftsprozesse (1)
- Geschäftsprozessmodelle (1)
- Gesetze (1)
- Gesichtsausdruck (1)
- Gesteuerte Ableitungen (1)
- Gewinnung benannter Entitäten (1)
- GitHub (1)
- Gleichheit (1)
- Globus (1)
- GraalVM (1)
- Grammar Systems (1)
- Grammatikalische Inferenz (1)
- Grammatiksysteme (1)
- Graph algorithm (1)
- Graph databases (1)
- Graph homomorphisms (1)
- Graph logic (1)
- Graph partitions (1)
- Graph repair (1)
- Graph transformation (1)
- Graph-Constraints (1)
- Graph-Mining (1)
- Graph-basierte Suche (1)
- Graph-basiertes Ranking (1)
- Graphableitung (1)
- Graphbedingungen (1)
- Graphdatenbanken (1)
- Graphensuche (1)
- Graphfärbung (1)
- Graphreparatur (1)
- Graphtransformation (1)
- Grid (1)
- Grid Computing (1)
- Gruppierung von Prozessinstanzen (1)
- H.264 (1)
- HDI (1)
- HEI (1)
- HENSHIN (1)
- HPI Forschung (1)
- HPI research (1)
- Hardware accelerator (1)
- Hardware-Software-Co-Design (1)
- Hasserkennung (1)
- Hasso-Plattner-Institute (1)
- Hauptkomponentenanalyse (1)
- Hauptspeicher Datenmanagement (1)
- Hauptspeicher Technologie (1)
- Helmholtz problem (1)
- Herodotos (1)
- Heterogenität (1)
- Heuristic triangle estimation (1)
- Heuristiken (1)
- HiGHmed (1)
- High-Level Synthesis (1)
- Histograms (1)
- History of pattern occurrences (1)
- Hochschule (1)
- Hochschulkurse (1)
- Hochschulsystem (1)
- Homomorphe Verschlüsselung (1)
- Human (1)
- Human-robot interaction (1)
- Hyrise (1)
- Häkeln (1)
- I/O-effiziente Algorithmen (1)
- IBM 360 (1)
- ICT Competence (1)
- ICT curriculum (1)
- ICT skills (1)
- IDS (1)
- IHL (1)
- IHRL (1)
- IOPS (1)
- IT security (1)
- IT-Security (1)
- IT-Sicherheit (1)
- Ideation (1)
- Ideenfindung (1)
- Identity Management (1)
- Identity management systems (1)
- Ill-conditioning (1)
- Image (1)
- Image resolution (1)
- Image-based rendering (1)
- Impact (1)
- Imperative calculi (1)
- Implementation in Organizations (1)
- Implementierung in Organisationen (1)
- Improving classroom (1)
- In-Memory Database (1)
- In-Memory Datenbank (1)
- In-Memory-Datenbank (1)
- Inclusion dependencies (1)
- Indefinite (1)
- Index (1)
- Index Structures (1)
- Indexauswahl (1)
- Indexstrukturen (1)
- Individuen (1)
- Industries (1)
- Industry 4.0 (1)
- Inference (1)
- Infinite State (1)
- Informatik B. Sc. (1)
- Informatik für alle (1)
- Informatik-Studiengänge (1)
- Informatiksystem (1)
- Informatikunterricht (1)
- Informatikvoraussetzungen (1)
- Information Ethics (1)
- Information Extraction (1)
- Information Systems (1)
- Information Transfer Rate (1)
- Informationskompetenz (1)
- Informationssysteme (1)
- Informationsvorhaltung (1)
- Informatische Kompetenzen (1)
- Inhalte (1)
- Inhaltsanalyse (1)
- Initial conflicts (1)
- Inklusionsabhängigkeit (1)
- Inklusionsabhängigkeiten (1)
- Inkonsistenz (1)
- Inkrementelle Graphmustersuche (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Input Validation (1)
- Inquiry-based Learning (1)
- Instagram (1)
- Insurance industry (1)
- Integration (1)
- Interactive Rendering (1)
- Interactive system (1)
- Interaktionsmodel (1)
- Interaktionsmodellierung (1)
- Interaktionstechniken (1)
- Interaktives Rendering (1)
- Interaktives System (1)
- Interdisciplinary Teams (1)
- Interface design (1)
- Internet Security (1)
- Internet applications (1)
- Internet-Sicherheit (1)
- Internetanwendungen (1)
- Interpretability (1)
- Interpreter (1)
- Intersectionality (1)
- Interval Timed Automata (1)
- Interventionen (1)
- Intuition (1)
- Invariant-Checking (1)
- Invarianten (1)
- Invariants (1)
- Inverted Classroom (1)
- JCop (1)
- Java 2 Enterprise Edition (1)
- Java Security Framework (1)
- Java Virtual Machine (1)
- KI (1)
- Karten (1)
- Kartografisches Design (1)
- Kausalität (1)
- Kern-PCA (1)
- Kernel (1)
- Kernmethoden (1)
- Klassifikator-Kalibrierung (1)
- Klassifizierung (1)
- Kollaboration (1)
- Kommunikation (1)
- Kompetenzentwicklung (1)
- Kompetenzerwerb (1)
- Kompetenzmessung (1)
- Komplexität (1)
- Komplexität der Berechnung (1)
- Komplexitätsbewältigung (1)
- Komplexitätstheorie (1)
- Komposition (1)
- Konnektionskalkül (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Kontext (1)
- Konzeptionell (1)
- Kreativität (1)
- Kryptographie (1)
- Kundenverhalten (1)
- Kunstanalyse (1)
- Kybernetik (1)
- LEGO Mindstorms EV3 (1)
- LIDAR (1)
- LMS (1)
- LOD (1)
- LSTM (1)
- Landmarken (1)
- Large networks (1)
- Laser Cutten (1)
- Laserscanning (1)
- Lastverteilung (1)
- Laufzeitanalyse (1)
- Laufzeitverhalten (1)
- Leadership (1)
- Learners (1)
- Learning Fields (1)
- Learning analytics (1)
- Learning dispositions (1)
- Learning ecology (1)
- Learning interfaces development (1)
- Learning with ICT (1)
- Lebendigkeit (1)
- Lebenslanges Lernen (1)
- Lefschetz number (1)
- Leftmost Derivations (1)
- Lehr- und Lernformate (1)
- Lehramtsstudium (1)
- Lehre (1)
- Lehrer (1)
- Lehrer*innenbildung (1)
- Lehrevaluation (1)
- Leistungsfähigkeit (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Leistungsvorhersage (1)
- Lern-App (1)
- Lernerfolg (1)
- Lernmotivation (1)
- Lernsoftware (1)
- Lernzentrum (1)
- LiDAR (1)
- Licenses (1)
- Linguistisch (1)
- Lindenmayer systems (1)
- Link Discovery (1)
- Linked Data (1)
- Linked Open Data (1)
- Linksableitungen (1)
- Live-Migration (1)
- Liveness (1)
- Logarithm (1)
- Logikkalkül (1)
- Logiksynthese (1)
- Loss (1)
- Low Latency (1)
- Lower Bounds (1)
- Lower Secondary Level (1)
- Lösungsraum (1)
- Lückentext (1)
- MDE Ansatz (1)
- MDE settings (1)
- MEG (1)
- Machine-Learning (1)
- Machinelles Lernen (1)
- Magnetoencephalographie (1)
- Malware (1)
- Management (1)
- Marketing (1)
- Marktübersicht (1)
- Maschinen (1)
- Massive Open Online Courses (1)
- Matrizen-Eigenwertaufgabe (1)
- Matroids (1)
- Media in education (1)
- Megamodel (1)
- Megamodels (1)
- Mehr-Faktor-Authentifizierung (1)
- Mehrfamilienhäuser (1)
- Mehrkernsysteme (1)
- Mehrklassen-Klassifikation (1)
- Messung (1)
- Metacrate (1)
- Metadata Discovery (1)
- Metadaten (1)
- Metadatenentdeckung (1)
- Metadatenqualität (1)
- Metamodell (1)
- Migration (1)
- Mindset (1)
- Minimal hitting set (1)
- Mischmodelle (1)
- Mischung <Signalverarbeitung> (1)
- Mobil (1)
- Mobile Application Development (1)
- Mobile Mapping (1)
- Mobile learning (1)
- Mobile-Mapping (1)
- Mobiles Lernen (1)
- Mobilgeräte (1)
- Model Based Engineering (1)
- Model Checking (1)
- Model Consistency (1)
- Model Driven Architecture (1)
- Model Execution (1)
- Model Management (1)
- Model repair (1)
- Model verification (1)
- Model-driven (1)
- Model-driven SOA Security (1)
- Modeling Languages (1)
- Modell Management (1)
- Modell-driven Security (1)
- Modell-getriebene SOA-Sicherheit (1)
- Modell-getriebene Sicherheit (1)
- Modell-getriebene Softwareentwicklung (1)
- Modellbasiert (1)
- Modelle mit mehreren Versionen (1)
- Modellerzeugung (1)
- Modellgetrieben (1)
- Modellgetriebene Architektur (1)
- Modellgetriebene Entwicklung (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Modellkonsistenz (1)
- Modellreparatur (1)
- Modelltransformation (1)
- Modelltransformationen (1)
- Models at Runtime (1)
- Molekulare Bioinformatik (1)
- Monitoring (1)
- Morphic (1)
- Multi Task Learning (1)
- Multi-Class (1)
- Multi-Instanzen (1)
- Multi-Task-Lernen (1)
- Multi-objective optimization (1)
- Multi-sided platforms (1)
- Multicore architectures (1)
- Multidisciplinary Teams (1)
- Multimodal behavior (1)
- Multiprocessor (1)
- Multiprozessor (1)
- Music Technology (1)
- Muster (1)
- Musterabgleich (1)
- Mutation operators (1)
- N-of-1 trial (1)
- NUI (1)
- Nash Equilibrium (1)
- Natural Science Education (1)
- Natural ventilation (1)
- Nebenläufigkeit (1)
- Nephrology (1)
- Nested Graph Conditions (1)
- Nested graph conditions (1)
- Network Creation Game (1)
- Network clustering (1)
- Netzneutralität (1)
- Netzwerk (1)
- Netzwerke (1)
- Netzwerkprotokolle (1)
- Neuronales Netz (1)
- New On-Line Error-Detection Method (1)
- Newspeak (1)
- Next Generation Network (1)
- Nicht-photorealistisches Rendering (1)
- Nichtfotorealistische Bildsynthese (1)
- NoSQL (1)
- Non-photorealistic Rendering (1)
- Norway (1)
- Novice programmers (1)
- Nutzerinteraktion (1)
- Nutzungsinteresse (1)
- O (1)
- OAuth (1)
- Object Constraint Programming (1)
- Object-Oriented Programming (1)
- Objects (1)
- Objekt-Constraint Programmierung (1)
- Objekt-Orientiertes Programmieren (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Objekte (1)
- Objektive Schwierigkeit (1)
- Objektlebenszyklus-Synchronisation (1)
- Omega (1)
- Online Learning Environments (1)
- Onlinekurse (1)
- Onlinelehre (1)
- Ontologies (1)
- Ontology (1)
- Open Source (1)
- Open source (1)
- OpenID Connect (1)
- OpenOLAT (1)
- Opinion mining (1)
- Optimierungen (1)
- Optimierungsproblem (1)
- OptoGait (1)
- Order dependencies (1)
- Ordinances (1)
- Organisationsveränderung (1)
- Overlapping community detection (1)
- Owner-Retained Access Control (ORAC) (1)
- PAVM (1)
- PRISM Modell-Checker (1)
- PRISM model checker (1)
- PTCTL (1)
- Parallel Programming (1)
- Parallele Datenverarbeitung (1)
- Paralleles Rechnen (1)
- Parallelization (1)
- Parallelized algorithm (1)
- Parallelrechner (1)
- Parameterized Complexity (1)
- Parametrisierte Komplexität (1)
- Parsing (1)
- Patientenermündigung (1)
- Pattern Matching (1)
- Pattern Recognition (1)
- Pedagogical content knowledge (1)
- Pedagogical issues (1)
- Peer-Review (1)
- Peer-to-Peer-Netz; GRID computing; Zuverlässigkeit; Web Services; Betriebsmittelverwaltung; Migration (1)
- Performance (1)
- Performance Prediction (1)
- Personal Data (1)
- Personas (1)
- Petri net Mapping (1)
- Petri net mapping (1)
- Petrinetz (1)
- Physical Science (1)
- Plant identification (1)
- Plattform-Ökosysteme (1)
- Platzierung (1)
- Point-based rendering (1)
- Policy Enforcement (1)
- Policy Languages (1)
- Policy Sprachen (1)
- Polymerase Chain Reaction Experiment (1)
- Popular matching (1)
- Posenabschätzung (1)
- PostGIS (1)
- Pre-RS Traceability (1)
- Prediction Game (1)
- Predictive Models (1)
- Preprocessing (1)
- Primary informatics (1)
- Prime graphs (1)
- Prior knowledge (1)
- Privacy Protection (1)
- Probabilistische Modelle (1)
- Problem solving (1)
- Problem solving strategies (1)
- Probleme in der Studie (1)
- Problemlösen (1)
- Problemlösung (1)
- Process Enactment (1)
- Process Execution (1)
- Process Modelling (1)
- Process modeling (1)
- Professoren (1)
- Prognosen (1)
- Programmierabstraktionen (1)
- Programmierausbildung (1)
- Programmiererlebnis (1)
- Programmierkonzepte (1)
- Programmierwerkzeuge (1)
- Programming Languages (1)
- Programming environments for children (1)
- Programming learning (1)
- Projekte (1)
- Prolog (1)
- Proof Theory (1)
- Propagation von Aktivitätsinstanzzuständen (1)
- Protocols (1)
- Prototyping (1)
- Prozess Verbesserung (1)
- Prozess- und Datenintegration (1)
- Prozessarchitektur (1)
- Prozessausführung (1)
- Prozessautomatisierung (1)
- Prozesse (1)
- Prozesserhebung (1)
- Prozessinstanz (1)
- Prozessmodell (1)
- Prozessmodelle (1)
- Prozessmodellsuche (1)
- Prozessoren (1)
- Prozessverfeinerung (1)
- Prädiktionsspiel (1)
- Präferenzen (1)
- Präsentation (1)
- Psychotherapie (1)
- Python (1)
- Quanten-Computing (1)
- Quantenkryptographie (1)
- Quantified Boolean Formula (QBF) (1)
- Quantitative Analysen (1)
- Quantitative Modeling (1)
- Quantitative Modellierung (1)
- Query (1)
- Query execution (1)
- Query optimization (1)
- Query-Optimierung (1)
- Queuing Theory (1)
- RL (1)
- RT_PREEMPT patch (1)
- RT_PREEMPT-Patch (1)
- Rainfall-runoff (1)
- Random access memory (1)
- Re-Engineering (1)
- Real-Time Rendering (1)
- Realzeitsysteme (1)
- Recommendations for CS-Curricula in Higher Education (1)
- Reconfigurable (1)
- Region of Interest (1)
- Regressionstests (1)
- Rekonfiguration (1)
- Relational data (1)
- Rendering (1)
- Reparatur (1)
- Representationlernen (1)
- Reproducible benchmarking (1)
- Research Projects (1)
- Resource Allocation (1)
- Resource Management (1)
- Ressourcenmanagement (1)
- Reverse Engineering (1)
- Reversibility (1)
- Robot personality (1)
- Ruby (1)
- Run time analysis (1)
- Runtime Binding (1)
- Runtime improvement (1)
- Runtime-monitoring (1)
- Russia (1)
- SCED (1)
- SIEM (1)
- SMT (1)
- SOA (1)
- SOA Security (1)
- SOA Security Pattern (1)
- SOA Sicherheit (1)
- SPARQL (1)
- SSO (1)
- STEM (1)
- SWIRL (1)
- Sammlungsdatentypen (1)
- Sample Selection Bias (1)
- Savanne (1)
- Scalability (1)
- Scale-invariant feature transform (SIFT) (1)
- Scene graph systems (1)
- Schelling Process (1)
- Schelling Prozess (1)
- Schelling Segregation (1)
- Schema-Entdeckung (1)
- Schemaentdeckung (1)
- Schlüsselentdeckung (1)
- Schlüsselkompetenzen (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Scientific understanding of Information (1)
- Scrollytelling (1)
- Search Algorithms (1)
- Secure Digital Identities (1)
- Secure Enterprise SOA (1)
- Security (1)
- Security Modelling (1)
- Segmentierung (1)
- Selbst-Adaptive Software (1)
- Selektion (1)
- Selektionsbias (1)
- Self-Adaptive Software (1)
- Self-Checking Circuits (1)
- Self-Regulated Learning (1)
- Semantic Search (1)
- Semantic Web (1)
- Semantische Analyse (1)
- Semantische Suche (1)
- Seminarkonzept (1)
- Sensors (1)
- Sequential anomaly (1)
- Sequenzeigenschaften (1)
- Sequenzen von s/t-Pattern (1)
- Serialisierung (1)
- Service Creation (1)
- Service Delivery Platform (1)
- Service Provider (1)
- Service convergence (1)
- Service-oriented Architectures (1)
- Service-orientierte Systeme (1)
- Service-orientierte Systeme (1)
- Shader (1)
- Sharing (1)
- Sichere Digitale Identitäten (1)
- Sicherheitsanalyse (1)
- Sicherheitsmodellierung (1)
- Signal Processing (1)
- Signal processing (1)
- Signalflankengraph (SFG oder STG) (1)
- Signalquellentrennung (1)
- Signaltrennung (1)
- Similarity Measures (1)
- Similarity Search (1)
- Simulation (1)
- Simulations (1)
- Simultane Diagonalisierung (1)
- Single Sign On (1)
- Single Trial Analysis (1)
- Single event upsets (1)
- Single-Sign-On (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skelettberechnung (1)
- Skript-Entwicklungsumgebungen (1)
- Skriptsprachen (1)
- Small Private Online Courses (1)
- Smart cities (1)
- SoaML (1)
- Social impact (1)
- Sociotechnical Design (1)
- Software (1)
- Software architecture (1)
- Software-Evolution (1)
- Software-Testen (1)
- Software/Hardware Co-Design (1)
- Softwareanalyse (1)
- Softwareentwicklungsprozesse (1)
- Softwareproduktlinien (1)
- Softwaretechnik (1)
- Softwaretest (1)
- Softwaretests (1)
- Softwarevisualisierung (1)
- Softwarewartung (1)
- Solution Space (1)
- Soziale Medien (1)
- Sozialen Medien (1)
- Spaltenlayout (1)
- Spam (1)
- Spam Filtering (1)
- Spam-Erkennung (1)
- Spam-Filter (1)
- Spam-Filtering (1)
- Spatio-Spectral Filter (1)
- Spawning (1)
- Specification (1)
- Speicheroptimierungen (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Spieldynamik (1)
- Spieldynamiken (1)
- Sprachdesign (1)
- Sprachlernen im Limes (1)
- Sprachspezifikation (1)
- Squeak (1)
- Squeak/Smalltalk (1)
- Stable marriage (1)
- Stable matching (1)
- Stadtmodell (1)
- Stance Detection (1)
- Standardisierung (1)
- Standards (1)
- Static Analysis (1)
- Statistical Tests (1)
- Statistikprogramm R (1)
- Statistische Tests (1)
- Stilisierung (1)
- Strategie (1)
- Structuring (1)
- Strukturierung (1)
- Strukturverbesserung (1)
- Student Engagement (1)
- Studentenerwartungen (1)
- Studentenhaltungen (1)
- Studentenjobs (1)
- Studienabbrecher (1)
- Studienabbruch (1)
- Studienanfänger*innen (1)
- Studiendauer (1)
- Studieneingangsphase (1)
- Studiengestaltung (1)
- Studiengänge (1)
- Studienverläufe (1)
- Studierendenperformance (1)
- Studium (1)
- Submodular function (1)
- Submodular functions (1)
- Subset selection (1)
- Suche (1)
- Suchtberatung und -therapie (1)
- Suchverfahren (1)
- Synchronisation (1)
- Synonyme (1)
- Synthese (1)
- System Biologie (1)
- System of Systems (1)
- System structure (1)
- Systembiologie (1)
- Systeme von Systemen (1)
- Systementwurf (1)
- Systems of Systems (1)
- Systems of parallel communicating (1)
- Szenengraph (1)
- TPTP (1)
- Tableaumethode (1)
- Tasks (1)
- Teacher perceptions (1)
- Teachers (1)
- Teaching information security (1)
- Teaching problem solving strategies (1)
- Technology proficiency (1)
- Telekommunikation (1)
- Telemedizin (1)
- Temporal Logic (1)
- Temporäre Anbindung (1)
- Terminologische Logik (1)
- Terminology (1)
- Test (1)
- Test-getriebene Fehlernavigation (1)
- Testen (1)
- Testergebnisse (1)
- Testpriorisierungs (1)
- Tests (1)
- Text mining (1)
- Texterkennung (1)
- Textklassifikation (1)
- The Sharing Economy (1)
- Theoretischen Vorlesungen (1)
- Threshold Cryptography (1)
- Time Augmented Petri Nets (1)
- Time series (1)
- Tool (1)
- Tools (1)
- Traceability (1)
- Tracking (1)
- Trajectories (1)
- Trajektorien (1)
- Trajektoriendaten (1)
- Transaktionen (1)
- Transformation (1)
- Transformationsebene (1)
- Transformationssequenzen (1)
- Transversal hypergraph (1)
- Travis CI (1)
- Treewidth (1)
- Tripel-Graph-Grammatiken (1)
- Triple Graph Grammar (1)
- Triple Graph Grammars (1)
- Triple-Graph-Grammatiken (1)
- Trust Management (1)
- Type and effect systems (1)
- Umfrage (1)
- Unabhängige Komponentenanalyse (1)
- Unbegrenzter Zustandsraum (1)
- Uncanny valley (1)
- Unique column combination (1)
- Unique column combinations (1)
- Universität Bagdad (1)
- Universität Potsdam (1)
- Universitätseinstellungen (1)
- Untere Schranken (1)
- Unterricht mit digitalen Medien (1)
- Unveränderlichkeit (1)
- Unvollständigkeit (1)
- Usage Interest (1)
- VGG16 (1)
- VIL (1)
- VM Integration (1)
- VR (1)
- VUCA-World (1)
- Validation (1)
- Value network (1)
- Verbindungsnetzwerke (1)
- Verbundwerte (1)
- Verhaltensabstraktion (1)
- Verhaltensanalyse (1)
- Verhaltensbewahrung (1)
- Verhaltensverfeinerung (1)
- Verhaltensänderung (1)
- Verhaltensäquivalenz (1)
- Verification (1)
- Verletzung Auflösung (1)
- Verletzung Erklärung (1)
- Vernetzte Daten (1)
- Versionierung (1)
- Verteiltes Arbeiten (1)
- Verteiltes Rechnen (1)
- Verteilungsalgorithmen (1)
- Verteilungsalgorithmus (1)
- Verteilungsunterschied (1)
- Vertrauen (1)
- Verwaltung von Rechenzentren (1)
- Verzögerungs-Verbreitung (1)
- Veränderungsanalyse (1)
- Videoanalyse (1)
- Videometadaten (1)
- Violation Explanation (1)
- Violation Resolution (1)
- Virtual Desktop Infrastructure (1)
- Virtual Machines (1)
- Virtual machines (1)
- Virtuelle Maschine (1)
- Virtuelle Realität (1)
- Virtuelles 3D Stadtmodell (1)
- Visualisierungskonzept-Exploration (1)
- Vocabulary (1)
- Vocational Education (1)
- Vorhersagemodelle (1)
- Vorkenntnisse (1)
- Vorwissen (1)
- W[3]-Completeness (1)
- Wahrnehmung (1)
- Wahrnehmung von Arousal (1)
- Wahrnehmungsunterschiede (1)
- Warteschlangentheorie (1)
- Wartung von Graphdatenbanksichten (1)
- Wartung von Lehrveranstaltungen (1)
- Wearable (1)
- Web Services (1)
- Web Sites (1)
- Web applications (1)
- Web of Data (1)
- Web-Anwendungen (1)
- Web-Based Rendering (1)
- Webbasiertes Rendering (1)
- Webseite (1)
- Weighted clustering coefficient (1)
- Weiterbildung (1)
- Well-structuredness (1)
- Werbung (1)
- Werkzeugbau (1)
- Wertschöpfungskooperation (1)
- WhatsApp (1)
- Wicked Problems (1)
- Wikipedia (1)
- Wirtschaftsinformatik (1)
- Wissenschaftliches Arbeiten (1)
- Wissensrepräsentation und -verarbeitung (1)
- Wissensrepräsentation und Schlussfolgerung (1)
- Wohlstrukturiertheit (1)
- Wolke (1)
- Women and IT (1)
- Workflow (1)
- Wüstenbildung (1)
- X-ray imaging (1)
- XM (1)
- Young People (1)
- ZQSA (1)
- ZQSAT (1)
- Zebris (1)
- Zeitbehaftete Petri Netze (1)
- Zero-Suppressed Binary Decision Diagram (ZDD) (1)
- Zugriffskontrolle (1)
- Zuverlässigkeitsanalyse (1)
- acceptability (1)
- access control (1)
- action and change (1)
- action language (1)
- activity instance state propagation (1)
- acyclic preferences (1)
- ad hoc learning (1)
- ad hoc messaging network (1)
- adaptiv (1)
- addiction care (1)
- adoption (1)
- advanced persistent threat (1)
- advanced threats (1)
- aerosol size distribution (1)
- agil (1)
- airbnb (1)
- algorithm (1)
- algorithm configuration (1)
- algorithm schedules (1)
- algorithm scheduling (1)
- algorithm selection (1)
- analog-to-digital conversion (1)
- analogical thinking (1)
- analysis (1)
- animated PCA (1)
- animierte PCA (1)
- anisotropic Kuwahara filter (1)
- annotation (1)
- anomalies (1)
- answer (1)
- answer set (1)
- app (1)
- application virtualization (1)
- approximate joint diagonalization (1)
- approximation (1)
- apriori (1)
- apt (1)
- architectural adaptation (1)
- architecture recovery (1)
- archive analysis (1)
- argument mining (1)
- argumentation research (1)
- argumentation structure (1)
- arithmetische Prozeduren (1)
- arithmetic procedures (1)
- arousal perception (1)
- art analysis (1)
- artificial intelligence (1)
- aspect adapter (1)
- aspect oriented programming (1)
- aspect-oriented (1)
- aspects (1)
- aspectualization (1)
- asset management (1)
- assistive Technologien (1)
- assistive technologies (1)
- association rule mining (1)
- asynchronous circuit (1)
- attacks (1)
- augmented reality (1)
- ausführbare Semantiken (1)
- automata (1)
- automated planning (1)
- automated theorem proving (1)
- automatic theorem prover (1)
- automatisierter Theorembeweiser (1)
- automotive electronics (1)
- autonomous (1)
- back-in-time (1)
- balance analysis (1)
- bank (1)
- basic cloud storage services (1)
- behavior preservation (1)
- behavioral abstraction (1)
- behavioral equivalenc (1)
- behavioral refinement (1)
- behavioral specification (1)
- behaviour (1)
- behaviourally correct learning (1)
- benchmark (1)
- benutzergenerierte Inhalte (1)
- beschreibende Feldstudie (1)
- betriebliche Weiterbildungspraxis (1)
- bidirectional optimality theory (1)
- bild (1)
- bildbasiertes Rendering (1)
- binary representation (1)
- binary search (1)
- bio-computing (1)
- bioinformatics (1)
- biological network (1)
- biological network model (1)
- biological networks (1)
- biomarker detection (1)
- biometrics (1)
- bisimulation (1)
- bitcoin (1)
- blind source separation (1)
- bottom–up (1)
- bounded backward model checking (1)
- bpm (1)
- brand ambassadors (1)
- brand personality (1)
- bug tracking (1)
- building models (1)
- built-in predicates (1)
- business informatics (1)
- business models (1)
- business process architecture (1)
- business process architectures (1)
- business process model abstraction (1)
- business process modeling (1)
- bystander (1)
- cancer therapy (1)
- cartographic design (1)
- case study (1)
- categories (1)
- causal AI (1)
- causal reasoning (1)
- causality (1)
- • Computing (1)
- change detection (1)
- change management (1)
- changeability (1)
- changing the study field (1)
- changing the university (1)
- choreographies (1)
- circuits (1)
- classes of logic programs (1)
- classifier calibration (1)
- classroom language (1)
- clause elimination (1)
- clause learning (1)
- cleansing (1)
- cloud datacenter (1)
- cloud storage (1)
- cluster-analysis (1)
- code generation (1)
- coding and information theory (1)
- cogeneration units (1)
- cognition (1)
- cognitive load (1)
- cognitive load theory (1)
- cognitive modifiability (1)
- coherence-enhancing filtering (1)
- collaborations (1)
- collection types (1)
- columnar databases (1)
- combined task and motion planning (1)
- communication (1)
- community (1)
- competence development (1)
- competency (1)
- complex optimization (1)
- complexity dichotomy (1)
- composite service (1)
- compositional analysis (1)
- computational biology (1)
- computational ethnomusicology (1)
- computational methods (1)
- computational photography (1)
- computed tomography (1)
- computer science education (CSE) (1)
- computer science teachers (1)
- computer-aided design (1)
- computer-mediated therapy (1)
- computergestützte Methoden (1)
- computergestützte Musikethnologie (1)
- computervermittelte Therapie (1)
- computing (1)
- computing science education (1)
- concept of algorithm (1)
- concurrency (1)
- concurrent graph rewriting (1)
- conditions (1)
- confidentiality (1)
- conflicts and dependencies in (1)
- confluence (1)
- conformance analysis (1)
- conformance checking (1)
- connection calculus (1)
- consensus protocols (1)
- consistency restoration (1)
- consistent learning (1)
- constraint (1)
- constraint programming (1)
- constraints (1)
- constructionism (1)
- consumer behavior (1)
- context awareness (1)
- continuous testing (1)
- contract (1)
- control resynthesis (1)
- controlled experiment (1)
- convolutional neural networks (1)
- corporate nomadism (1)
- corporate takeovers (1)
- corpus study (1)
- couple reaction (1)
- coupling relationship (1)
- course timetabling (1)
- creativity (1)
- crochet (1)
- crosscutting wrappers (1)
- cryptocurrency exchanges (1)
- cryptography (1)
- cryptology (1)
- cs4fn (1)
- cscw (1)
- cultural heritage (1)
- cumulative culture (1)
- curriculum theory (1)
- cyber (1)
- cyber humanistic (1)
- cyber threat intelligence (1)
- cyber-attack (1)
- cyber-physikalische Systeme (1)
- cybersecurity (1)
- cyberwar (1)
- data assimilation (1)
- data center management (1)
- data correctness checking (1)
- data dependencies (1)
- data driven approaches (1)
- data extraction (1)
- data flow correctness (1)
- data in business processes (1)
- data migration (1)
- data modeling (1)
- data models (1)
- data objects (1)
- data pipeline (1)
- data requirements (1)
- data science (1)
- data security (1)
- data set (1)
- data sharing (1)
- data states (1)
- data structures and information theory (1)
- data synthesis (1)
- data transformation (1)
- data view (1)
- data visualization (1)
- data-driven (1)
- data-driven artifacts (1)
- database (1)
- database optimization (1)
- database technology (1)
- database tuning (1)
- datengetrieben (1)
- dbms (1)
- deadline propagation (1)
- decentral identities (1)
- decision management (1)
- decision mining (1)
- decision models (1)
- decision trees (1)
- decubitus (1)
- deductive databases (1)
- deduplication (1)
- deep Gaussian processes (1)
- definiteness (1)
- delay propagation (1)
- demografische Informationen (1)
- demographic information (1)
- dental caries classification (1)
- dependable computing (1)
- dependencies (1)
- dependency discovery (1)
- depressive symptoms (1)
- desertification (1)
- design (1)
- design research (1)
- design space exploration (1)
- design-science research (1)
- determinism (1)
- deterministic properties (1)
- deurema modeling language (1)
- development tools (1)
- developmental systems (1)
- dezentrale Identitäten (1)
- diagnosis (1)
- didaktisches Konzept (1)
- difference of Gaussians (1)
- differential gene expression (1)
- differential privacy (1)
- diffusion (1)
- digital activism (1)
- digital interventions (1)
- digital nomadism (1)
- digital nudging (1)
- digital picture archive (1)
- digital platform openness (1)
- digital strategy (1)
- digital unterstützter Unterricht (1)
- digital workplace transformation (1)
- digitale Hochschullehre (1)
- digitale Infrastruktur für den Schulunterricht (1)
- digitales Bildarchiv (1)
- digitales Whiteboard (1)
- digitally-enabled pedagogies (1)
- digitization of production processes (1)
- dimensional (1)
- direkte Manipulation (1)
- discrete-event model (1)
- discrimination networks (1)
- diskretes Ereignismodell (1)
- distributed computation (1)
- distributed ledger technology (1)
- distributed performance monitoring (1)
- distribution algorithm (1)
- divide and conquer (1)
- doctor-patient relationship (1)
- domain-specific modeling (1)
- drift theory (1)
- dropout (1)
- duale IT-Ausbildung (1)
- dynamic (1)
- dynamic typing (1)
- dynamic classification (1)
- dynamic consolidation (1)
- dynamic programming languages (1)
- dynamic systems (1)
- dynamisch (1)
- dynamische Klassifikation (1)
- dynamische Programmiersprachen (1)
- dynamische Sprachen (1)
- dynamische Systeme (1)
- dynamische Umsortierung (1)
- e-Assessment (1)
- e-learning platform (1)
- e-mentoring (1)
- education and public policy (1)
- educational programming (1)
- educational systems (1)
- educational timetabling (1)
- edutainment (1)
- efficiency (1)
- efficient deep learning (1)
- eindeutig (1)
- eingebettete Systeme (1)
- elections (1)
- electrical muscle stimulation (1)
- electronic health record (1)
- electronic tool integration (1)
- elektrische Muskelstimulation (1)
- elliptic complexes (1)
- email spam detection (1)
- embedded systems (1)
- embedded-systems (1)
- emotion (1)
- emotion representation (1)
- emotion research (1)
- emotional design (1)
- empirical studies (1)
- empirische Studien (1)
- endpoint security (1)
- energy efficiency (1)
- energy savings (1)
- engaged computing (1)
- engine (1)
- engineering (1)
- enterprise search (1)
- entity alignment (1)
- entity linking (1)
- entity resolution (1)
- enumeration (1)
- environments (1)
- epistemic logic programs (1)
- epistemic specifications (1)
- equality (1)
- erfahrbare Medien (1)
- error correction (1)
- error detection (1)
- erzeugende gegnerische Netzwerke (1)
- ethics (1)
- event abstraction (1)
- events (1)
- evidence theory (1)
- evolution (1)
- evolution in MDE (1)
- evolutionary computation (1)
- evolving systems (1)
- exact simulation methods (1)
- executable semantics (1)
- experience (1)
- experience report (1)
- experiment (1)
- explainability (1)
- explainability-accuracy trade-off (1)
- explainable AI (1)
- explicit knowledge (1)
- explicit negation (1)
- exploration (1)
- exploratives Programmieren (1)
- exponentiation (1)
- expression (1)
- extend (1)
- extensions of logic programs (1)
- external knowledge bases (1)
- external memory algorithms (1)
- fMRI (1)
- face tracking (1)
- facial expression (1)
- failure model (1)
- fatty acid amide hydrolase (1)
- fault injection (1)
- federated industrial platform ecosystems (1)
- feedback loop modeling (1)
- feedback loops (1)
- fehlende Daten (1)
- field-programmable gate array (1)
- file structure (1)
- flow-based bilateral filter (1)
- font engineering (1)
- font rendering (1)
- forecasts (1)
- forensics (1)
- formal framework (1)
- formal languages (1)
- formal verification (1)
- formal verification methods (1)
- formale Verifikation (1)
- formales Framework (1)
- formalism (1)
- forschendes Lernen (1)
- forschungsorientiertes Lernen (1)
- fortschrittliche Angriffe (1)
- forward / backward chaining (1)
- freie Daten (1)
- freie Software (1)
- fun (1)
- function symbols (1)
- functional dependency (1)
- functional lenses (1)
- functional programming (1)
- functions (1)
- funktionale Abhängigkeit (1)
- funktionale Programmierung (1)
- future SOC lab (1)
- fächerverbindend (1)
- gait analysis algorithm (1)
- ganzheitlich (1)
- ganzzahlige lineare Optimierung (1)
- gefaltete neuronale Netze (1)
- gemischte Daten (1)
- gene (1)
- gene expression matrix (1)
- gene selection (1)
- general (1)
- general education in computer science (1)
- general secondary education (1)
- generalization (1)
- generalized discrimination networks (1)
- generalized logic programs (1)
- generative adversarial networks (1)
- genome annotation (1)
- geometry generation (1)
- geovirtual environments (1)
- geovirtuelle Umgebungen (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- gesture (1)
- getypte attributierte Graphen (1)
- gewerkschaftlich unterstützte Weiterbildungspraxis (1)
- global constraints (1)
- global model management (1)
- globale Constraints (1)
- globales Modellmanagement (1)
- grammar inference (1)
- grammars (1)
- graph clustering (1)
- graph databases (1)
- graph inference (1)
- graph languages (1)
- graph mining (1)
- graph pattern matching (1)
- graph queries (1)
- graph repair (1)
- graph transformations (1)
- graph-based ranking (1)
- graph-search (1)
- graph-transformations (1)
- hardware accelerator (1)
- hardware architecture (1)
- hardware-software-codesign (1)
- hate speech detection (1)
- health care (1)
- healthcare (1)
- heterogeneity (1)
- heterogeneous computing (1)
- heterogeneous tissue (1)
- heterogenes Rechnen (1)
- heuristics (1)
- high school (1)
- higher (1)
- history-aware runtime models (1)
- holistic (1)
- home office (1)
- homogeneous cell population (1)
- homomorphic encryption (1)
- human-centered (1)
- human–computer interaction (1)
- hybrid graph-transformation-systems (1)
- hybrid systems (1)
- hybride Graph-Transformations-Systeme (1)
- hyrise (1)
- identity broker (1)
- image (1)
- image captioning (1)
- image data analysis (1)
- image-based rendering (1)
- imdb (1)
- immediacy (1)
- immersion (1)
- immutable values (1)
- in-memory (1)
- in-memory data management (1)
- in-memory database (1)
- inclusion dependency (1)
- incompleteness (1)
- inconsistency (1)
- incremental graph query evaluation (1)
- incumbent (1)
- independent component analysis (1)
- index (1)
- individuals (1)
- individuelle Lernwege (1)
- inductive invariant checking (1)
- induktives Invariant Checking (1)
- inertial measurement unit (1)
- inference (1)
- informal and formal learning (1)
- informatics curricula (1)
- informatics in upper secondary education (1)
- information diffusion (1)
- informatische Allgemeinbildung (1)
- informatische Grundkompetenzen (1)
- infrastructure (1)
- inkrementelle Ausführung von Graphanfragen (1)
- inkrementelles Graph Pattern Matching (1)
- innovation capabilities (1)
- innovation management (1)
- input accuracy (1)
- instruction (1)
- integer linear programming (1)
- integral equation (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- interaction (1)
- interaction modeling (1)
- interaction techniques (1)
- interactive course (1)
- interactive media (1)
- interactive simulation (1)
- interactive workshop (1)
- interaktive Medien (1)
- interconnect (1)
- interdisziplinäre Teams (1)
- interface (1)
- international comparison (1)
- international human rights (1)
- international humanitarian law (1)
- international study (1)
- interpretable machine learning (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intransitivity (1)
- intuition (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invariant checking (1)
- invasive aspects (1)
- invention (1)
- invention mechanism (1)
- inverse ill-posed problem (1)
- inverse scattering (1)
- iteration method (1)
- iterative regularization (1)
- job-shop scheduling (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariants (1)
- k-induktive Invarianten (1)
- k-induktive Invariantenprüfung (1)
- k-induktives Invariant-Checking (1)
- kausale KI (1)
- kausale Schlussfolgerung (1)
- kernel PCA (1)
- kernel methods (1)
- key competences in physical computing (1)
- key discovery (1)
- kinaesthetic teaching (1)
- klinisch-praktischer Unterricht (1)
- knowledge building (1)
- knowledge discovery (1)
- knowledge engineering (1)
- knowledge management system (1)
- knowledge representation (1)
- knowledge transfer (1)
- knowledge work (1)
- kompositionale Analyse (1)
- konsistentes Lernen (1)
- kontinuierliches Testen (1)
- kontrolliertes Experiment (1)
- konvergente Dienste (1)
- kulturelles Erbe (1)
- labour union education (1)
- landmarks (1)
- language design (1)
- language learning in the limit (1)
- language specification (1)
- laser remote sensing (1)
- laserscanning (1)
- law and technology (1)
- leadership (1)
- leanCoP (1)
- learner characteristics (1)
- learning factory (1)
- lebenszentriert (1)
- left recursion (1)
- lesson (1)
- level-replacement systems (1)
- life-centered (1)
- linear code (1)
- linear programming problem (1)
- linearer Code (1)
- linguistic (1)
- link discovery (1)
- linked data (1)
- literature review (1)
- live migration (1)
- lively kernel (1)
- load balancing (1)
- localization (1)
- location-based (1)
- logic (1)
- logic programming methodology and applications (1)
- logic synthesis (1)
- logical calculus (1)
- logical signaling networks (1)
- logische Ergänzung (1)
- logische Programmierung (1)
- logische Signalnetzwerke (1)
- long-term interaction (1)
- loop formulas (1)
- machine (1)
- machine learning algorithms (1)
- machines (1)
- main memory computing (1)
- malware detection (1)
- management (1)
- mandatory computer science foundations (1)
- manipulation planning (1)
- manufacturing (1)
- many-core (1)
- map reduce (1)
- map/reduce (1)
- maps (1)
- market study (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- maschinelles Lernen (1)
- matrices (1)
- media (1)
- mediated conversation (1)
- mediated learning experience (1)
- medical (1)
- medical documentation (1)
- medical malpractice (1)
- medizinisch (1)
- medizinische Dokumentation (1)
- mehrdimensionale Belangtrennung (1)
- mehrsprachige Ausführungsumgebungen (1)
- memory optimization (1)
- menschenzentriert (1)
- meta model (1)
- meta-programming (1)
- metabolic network (1)
- metabolite profile (1)
- metacrate (1)
- metadata (1)
- metadata detection (1)
- metadata discovery (1)
- metadata quality (1)
- methodologie (1)
- methods (1)
- metric learning (1)
- metric temporal logic (1)
- metric temporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- microcredential (1)
- microdissection (1)
- migration (1)
- misconception (1)
- misconceptions (1)
- mixed data (1)
- mixture models (1)
- mmdb (1)
- mobile application (1)
- mobile applications (1)
- mobile devices (1)
- mobile learning (1)
- mobile technologies and apps (1)
- mobiles Lernen (1)
- model generation (1)
- model repair (1)
- model-based (1)
- model-based prototyping (1)
- model-driven (1)
- model-driven architecture (1)
- model-driven software engineering (1)
- modelgetriebene Entwicklung (1)
- modellgetriebene Softwaretechnik (1)
- modular counting (1)
- modularity (1)
- molecular network (1)
- molecular networks (1)
- molecular tumor board (1)
- molekulare Netzwerke (1)
- monetary incentive delay task (1)
- mood (1)
- morphic (1)
- morphological analysis (1)
- multi core data processing (1)
- multi factor authentication (1)
- multi-class classification (1)
- multi-core (1)
- multi-dimensional separation of concerns (1)
- multi-instances (1)
- multi-valued logic (1)
- multi-version models (1)
- multi-family residential buildings (1)
- multidisziplinäre Teams (1)
- multimedia learning (1)
- multimodal representations (1)
- multiuser (1)
- musical scales (1)
- musikalische Tonleitern (1)
- multi-task learning (1)
- mutual gaze (1)
- mutual information (1)
- named entity mining (1)
- natural language processing (1)
- nested application conditions (1)
- nested expressions (1)
- network (1)
- network protocols (1)
- networks-on-chip (1)
- neue Online-Fehlererkennungsmethode (1)
- neural (1)
- new technologies (1)
- nicht-parametrische bedingte Unabhängigkeitstests (1)
- nichtlineare ICA (1)
- nichtlineare PCA (NLPCA) (1)
- nichtlineare Projektionen (1)
- non-monotonic reasoning (1)
- non-parametric conditional independence testing (1)
- non-photorealistic rendering (NPR) (1)
- nonlinear ICA (1)
- nonlinear PCA (NLPCA) (1)
- nonlinear projections (1)
- notation (1)
- novelty detection (1)
- nutzergenerierte Inhalte (1)
- nvm (1)
- object life cycle synchronization (1)
- object-constraint programming (1)
- object-oriented programming (1)
- objective difficulty (1)
- objektorientiertes Programmieren (1)
- omega (1)
- on-chip (1)
- online assistance (1)
- online course (1)
- online course creation (1)
- online course design (1)
- online learning (1)
- online photographs (1)
- online-learning (1)
- open innovation (1)
- open learning (1)
- open science (1)
- open science practices in information systems research (1)
- open source (1)
- open source software (1)
- operating system (1)
- optical character recognition (1)
- optimal transport (1)
- optimizations (1)
- order dependencies (1)
- organisational evolution (1)
- organizational change (1)
- orts-basiert (1)
- overcomplete ICA (1)
- packrat parsing (1)
- paper prototyping (1)
- paraconsistency (1)
- parallel (1)
- parallel and sequential independence (1)
- parallel computing (1)
- parallel execution (1)
- parallel rewriting (1)
- parallel solving (1)
- parallele Verarbeitung (1)
- parallele und sequentielle Unabhängigkeit (1)
- paralleles Lösen (1)
- paralleles Rechnen (1)
- parameter (1)
- parsing (1)
- parsing expression grammars (1)
- partial application conditions (1)
- partial correlation (1)
- partial replication (1)
- partielle Anwendungsbedingungen (1)
- partielle Replikation (1)
- patent (1)
- pathways (1)
- patient empowerment (1)
- pattern recognition (1)
- pedagogy (1)
- pedestrian navigation (1)
- perception (1)
- perception differences (1)
- performance (1)
- performance models of virtual machines (1)
- periodic tasks (1)
- periodische Aufgaben (1)
- personal (1)
- personal response systems (1)
- personality prediction (1)
- personalization principle (1)
- personalized medicine (1)
- persönliche Informationen (1)
- pervasive learning (1)
- petri net (1)
- philosophical foundation of informatics pedagogy (1)
- phone (1)
- physical computing tools (1)
- placement (1)
- planning (1)
- platform ecosystems (1)
- platypus (1)
- policy evaluation (1)
- polyglot execution environments (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- portfolio-based solving (1)
- portrait (1)
- pose estimation (1)
- poset (1)
- power-law (1)
- pre-primary level (1)
- predictive models (1)
- preference handling (1)
- preferences (1)
- prefetching (1)
- preprocessing (1)
- presentation (1)
- primary education (1)
- primary healthcare (1)
- primary level (1)
- primary school (1)
- prime pair (1)
- primer pair design (1)
- prior knowledge (1)
- priorities (1)
- probabilistic machine learning (1)
- probabilistic models (1)
- probabilistic timed automata (1)
- probabilistische zeitbehaftete Automaten (1)
- probabilistisches maschinelles Lernen (1)
- problem-solving (1)
- process (1)
- process and data integration (1)
- process automation (1)
- process elicitation (1)
- process improvement (1)
- process instance (1)
- process instance grouping (1)
- process model (1)
- process model search (1)
- process modeling languages (1)
- process modelling (1)
- process models (1)
- process refinement (1)
- process scheduling (1)
- processes (1)
- processing (1)
- processor hardware (1)
- professional development (1)
- professors (1)
- profiling (1)
- program (1)
- program analysis (1)
- programming abstraction (1)
- programming experience (1)
- programming in context (1)
- programming language (1)
- programming skills (1)
- programming tools (1)
- programs (1)
- prototyping (1)
- proving (1)
- psychotherapy (1)
- public cloud storage services (1)
- public dataset (1)
- qualitative model (1)
- qualitatives Modell (1)
- quantification protocol (1)
- quantified logics (1)
- quantile normalization (1)
- quantum computing (1)
- quantum cryptography (1)
- query matching (1)
- query optimization (1)
- querying (1)
- railway network (1)
- railways (1)
- random I (1)
- random graphs (1)
- randomized control trial (1)
- rapid prototyping (1)
- raum-zeitlich (1)
- raumbezogene Straftatenanalyse (1)
- reactive (1)
- reaktive Programmierung (1)
- real-time application (1)
- real-time rendering (1)
- rechnerunterstütztes Konstruieren (1)
- recognition (1)
- recommendation (1)
- reconfigurable systems (1)
- reconfiguration (1)
- reconstruction (1)
- record linkage (1)
- recursive tuning (1)
- reflection (1)
- regression testing (1)
- regulatory networks (1)
- reinforcement learning (1)
- rekonfigurierbar (1)
- relational model transformation (1)
- relationale Modelltransformationen (1)
- reliability (1)
- reliability assessment (1)
- remodularization (1)
- remote collaboration (1)
- remote sensing (1)
- remote-first (1)
- repair (1)
- reputation management (1)
- requirements engineering (1)
- research data management (1)
- resilient architectures (1)
- resource management (1)
- resource optimization (1)
- rest service (1)
- restoration (1)
- restricted parallelism (1)
- reusable aspects (1)
- reverse engineering (1)
- reversible reaction (1)
- review (1)
- reward system (1)
- robust ICA (1)
- robuste ICA (1)
- robustness (1)
- runtime adaptations (1)
- runtime behavior (1)
- runtime monitoring (1)
- räumliche Geodaten (1)
- s/t-pattern sequences (1)
- sat (1)
- satisfiability solving (1)
- savanna (1)
- scheduling (1)
- school (1)
- schwach überwachtes maschinelles Lernen (1)
- science (1)
- scm (1)
- screening tools (1)
- scripting environments (1)
- scripting languages (1)
- scrollytelling (1)
- search plan generation (1)
- secondary computer science education (1)
- secondary education (1)
- security analytics (1)
- security chaos engineering (1)
- security policies (1)
- security risk assessment (1)
- segmentation (1)
- selbst-souveräne Identitäten (1)
- selbstbestimmte Identitäten (1)
- selbstprüfende Schaltungen (1)
- selbstüberwachtes Lernen (1)
- self-adaptive multiprocessing system (1)
- self-adaptive software (1)
- self-disclosure (1)
- self-efficacy (1)
- self-healing (1)
- self-supervised learning (1)
- semantic analysis (1)
- semantic classification (1)
- semantic web services (1)
- semantics (1)
- semantics preservation (1)
- semantische Klassifizierung (1)
- semantisches Netz (1)
- sentiment (1)
- sentiment analysis (1)
- sequence properties (1)
- serialization (1)
- series (1)
- serverseitiges 3D-Rendering (1)
- serverside 3D rendering (1)
- service mediation (1)
- service orchestration (1)
- service-oriented (1)
- service-oriented architectures (1)
- serviceorientierte Architekturen (1)
- sets (1)
- shader (1)
- sharing economy (1)
- sign language (1)
- signal processing (1)
- signal transition graph (1)
- significant edge (1)
- similarity (1)
- similarity learning (1)
- similarity measures (1)
- single event upset (1)
- single-case experimental design (1)
- situated learning (1)
- situational awareness (1)
- skeletonization (1)
- skew Brownian motion (1)
- skew diffusions (1)
- small files (1)
- small talk (1)
- smartphone (1)
- smoother (1)
- social attraction (1)
- social media analysis (1)
- social networking (1)
- social networking sites (1)
- software (1)
- software analysis (1)
- software architecture (1)
- software development processes (1)
- software evolution (1)
- software maintenance (1)
- software product lines (1)
- software selection (1)
- software testing (1)
- software tests (1)
- software visualization (1)
- software/hardware co-design (1)
- solar particle event (1)
- sorting (1)
- space missions (1)
- spaltenorientierte Datenbanken (1)
- spatio-temporal (1)
- spatio-temporal data management (1)
- spatio-temporal sensor data (1)
- specific prime pair (1)
- specification of timed graph transformations (1)
- speed independence (1)
- speed independent (1)
- spread correction (1)
- spreadsheets (1)
- squeak (1)
- stable matching (1)
- stable model semantics (1)
- standards (1)
- stark verhaltenskorrekt sperrend (1)
- static analysis (1)
- static source-code analysis (1)
- statische Analyse (1)
- statische Quellcodeanalyse (1)
- statistics program R (1)
- stochastic process (1)
- stratification (1)
- strong and uniform equivalence (1)
- strongly behaviourally correct locking (1)
- strongly stable matching (1)
- structured output prediction (1)
- strukturierte Vorhersage (1)
- student activation (1)
- student experience (1)
- student perceptions (1)
- studentische Forschung (1)
- students’ conceptions (1)
- students’ knowledge (1)
- study (1)
- study problems (1)
- style transfer (1)
- stylization (1)
- super stable matching (1)
- survey mode (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synonym discovery (1)
- system of systems (1)
- systems (1)
- t.BPM (1)
- tabellarische Dateien (1)
- tableau method (1)
- tabular data (1)
- tacit knowledge (1)
- tangible media (1)
- teacher (1)
- teacher competencies (1)
- teacher education (1)
- teachers (1)
- teaching (1)
- teaching informatics in general education (1)
- teaching material (1)
- technical notes and rapid communications (1)
- technische Rahmenbedingungen (1)
- technologies (1)
- technology (1)
- tele-lab (1)
- tele-teaching (1)
- telemedicine (1)
- temporal graph queries (1)
- temporal logic (1)
- temporale Graphanfragen (1)
- temporary binding (1)
- terminology (1)
- terrain models (1)
- test (1)
- test case prioritization (1)
- test items (1)
- test results (1)
- test-driven fault navigation (1)
- text classification (1)
- text mining (1)
- the bright and dark side of social media in the marginalized contexts (1)
- theory (1)
- threat detection (1)
- threshold cryptography (1)
- tiefe Gauß-Prozesse (1)
- tiering (1)
- tool building (1)
- top-down (1)
- tort law (1)
- touch input (1)
- tptp (1)
- tracing (1)
- traditional Georgian music (1)
- traditionelle Georgische Musik (1)
- traditionelle Unternehmen (1)
- training (1)
- trajectories (1)
- trajectory data (1)
- transduction (1)
- transfer learning (1)
- transformation (1)
- transformation level (1)
- transformation sequences (1)
- trust model (1)
- tuple spaces (1)
- tutorial section (1)
- typed graph transformation systems (1)
- typisierte attributierte Graphen (1)
- uncanny valley (1)
- inferring cellular networks (1)
- unfounded sets (1)
- unification (1)
- unique (1)
- unique column combinations (1)
- unsupervised (1)
- unsupervised learning (1)
- unsupervised methods (1)
- user interfaces (1)
- user-centred (1)
- value co-creation (1)
- variables (1)
- variation (1)
- variational inference (1)
- variationelle Inferenz (1)
- various applications (1)
- ventral striatum (1)
- verhaltenskorrektes Lernen (1)
- verifiable credentials (1)
- verschachtelte Anwendungsbedingungen (2)
- versioning (1)
- verteilte Berechnung (1)
- verteilte Datenbanken (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- video analysis (1)
- video metadata (1)
- view maintenance (1)
- views (1)
- virtual (1)
- virtual 3D city model (1)
- virtual collaboration (1)
- virtual desktop infrastructure (1)
- virtual groups (1)
- virtual learning environments (1)
- virtual machine (1)
- virtual mobility (1)
- virtual teams (1)
- virtualisierte IT-Infrastruktur (1)
- virtuell (1)
- virtuelle Realität (1)
- visual analytics (1)
- visual language (1)
- visual languages (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- visuelle Sprachen (1)
- vulnerabilities (1)
- weak supervision (1)
- weakly (1)
- web application (1)
- web services (1)
- web-applications (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- weight (1)
- well-being (1)
- wissenschaftliches Arbeiten (1)
- wissenschaftliches Schreiben (1)
- word order freezing (1)
- word sense disambiguation (1)
- workload prediction (1)
- zero-day (1)
- zuverlässige Datenverarbeitung (2)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
- Änderbarkeit (1)
- Übereinstimmungsanalyse (1)
- Überwachung (1)
- öffentliche Cloud Speicherdienste (1)
- überbestimmte ICA (1)
- überprüfbare Nachweise (1)
- ‘unplugged’ computing (1)
Institute
- Institut für Informatik und Computational Science (271)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (215)
- Hasso-Plattner-Institut für Digital Engineering GmbH (134)
- Extern (65)
- Fachgruppe Betriebswirtschaftslehre (29)
- Mathematisch-Naturwissenschaftliche Fakultät (24)
- Wirtschaftswissenschaften (19)
- Institut für Mathematik (16)
- Bürgerliches Recht (12)
- Institut für Physik und Astronomie (8)
A deterministic cyclic scheduling of partitions at the operating system level is assumed for a multiprocessor system. In this paper, we propose a tool for generating such schedules. We use constraint-based programming and develop methods and concepts for a combined interactive and automatic partition scheduling system. This paper is also devoted to basic methods and techniques for modeling and solving this partition scheduling problem. Initial application of our partition scheduling tool has proved successful and demonstrated the suitability of the methods used.
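The modeling style can be sketched as follows; this is a minimal illustration assuming SWI-Prolog with library(clpfd) and hypothetical partition durations, not the constraint model of the tool itself:

```prolog
% A minimal sketch, assuming SWI-Prolog with library(clpfd) and
% hypothetical partition durations; not the tool's actual model.
:- use_module(library(clpfd)).

% Each partition must finish within the 20-tick major frame.
fits_in_frame(Start, Duration) :- Start + Duration #=< 20.

% schedule(-Starts): start times for three partitions with durations
% 4, 6 and 5 ticks, packed without overlap onto one processor.
schedule(Starts) :-
    Starts = [S1, S2, S3],
    Durations = [4, 6, 5],
    Starts ins 0..20,
    serialized(Starts, Durations),            % partitions never overlap
    maplist(fits_in_frame, Starts, Durations),
    labeling([min(S1 + S2 + S3)], Starts).    % prefer an early packing
```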
This thesis presents methods for automated synthesis of flexible chip multiprocessor systems from parallel programs targeted at FPGAs to exploit both task-level parallelism and architecture customization. Automated synthesis is necessitated by the complexity of the design space. A detailed description of the design space is provided in order to determine which parameters should be modeled to facilitate automated synthesis by optimizing a cost function, the emphasis being placed on inclusive modeling of parameters from the application, architectural, and physical subspaces, as well as their joint coverage in order to avoid pre-constraining the design space. Given a parallel program and an IP library, the automated synthesis problem is to simultaneously (i) select processors, (ii) map and schedule tasks to them, and (iii) select one or several networks for inter-task communications such that design constraints and optimization objectives are met. The research objective in this thesis is to find a suitable model for automated synthesis, and to evaluate methods of using the model for architectural optimizations. Our contributions are a holistic approach for the design of such systems, corresponding models to facilitate automated synthesis, evaluation of optimization methods using state-of-the-art integer linear and answer set programming, as well as the development of synthesis heuristics to solve runtime challenges.
An important characteristic of Service-Oriented Architectures is that clients do not depend on the service implementation's internal assignment of methods to objects. It is perhaps the most important technical characteristic that differentiates them from more common object-oriented solutions. This characteristic makes clients and services malleable, allowing them to be rearranged at run-time as circumstances change. That improvement in malleability is impaired by requiring clients to direct service requests to particular services. Ideally, the clients are totally oblivious to the service structure, as they are to aspect structure in aspect-oriented software. Removing knowledge of a method implementation's location, whether in object or service, requires re-defining the boundary line between programming language and middleware, making clearer specification of dependence on protocols, and bringing the transaction-like concept of failure scopes into language semantics as well. This paper explores consequences and advantages of a transition from object-request brokering to service-request brokering, including the potential to improve our ability to write more parallel software.
A constraint programming system combines two essential components: a constraint solver and a search engine. The constraint solver reasons about satisfiability of conjunctions of constraints, and the search engine controls the search for solutions by iteratively exploring a disjunctive search tree defined by the constraint program. The Monadic Constraint Programming framework gives a monadic definition of constraint programming where the solver is defined as a monad threaded through the monadic search tree. Search and search strategies can then be defined as first-class objects that can themselves be built or extended by composable search transformers. Search transformers give a powerful and unifying approach to viewing search in constraint programming, and the resulting constraint programming system is first class and extremely flexible.
Pattern matching is a well-established concept in the functional programming community. It provides the means for concisely identifying and destructuring values of interest. This enables a clean separation of data structures and respective functionality, as well as dispatching functionality based on more than a single value. Unfortunately, expressive pattern matching facilities are seldom incorporated in present object-oriented programming languages. We present a seamless integration of pattern matching facilities in an object-oriented and dynamically typed programming language: Newspeak. We describe language extensions to improve the practicability and integrate our additions with the existing programming environment for Newspeak. This report is based on the first author’s master’s thesis.
Background: For heterogeneous tissues, such as blood, measurements of gene expression are confounded by relative proportions of cell types involved. Conclusions have to rely on estimation of gene expression signals for homogeneous cell populations, e.g. by applying micro-dissection, fluorescence activated cell sorting, or in-silico deconfounding. We studied feasibility and validity of a non-negative matrix decomposition algorithm using experimental gene expression data for blood and sorted cells from the same donor samples. Our objective was to optimize the algorithm regarding detection of differentially expressed genes and to enable its use for classification in the difficult scenario of reversely regulated genes. This would be of importance for the identification of candidate biomarkers in heterogeneous tissues.
Results: Experimental data and simulation studies involving noise parameters estimated from these data revealed that for valid detection of differential gene expression, quantile normalization and use of non-log data are optimal. We demonstrate the feasibility of predicting proportions of constituting cell types from gene expression data of single samples, as a prerequisite for a deconfounding-based classification approach. Classification cross-validation errors with and without using deconfounding results are reported as well as sample-size dependencies. Implementation of the algorithm, simulation and analysis scripts are available.
Conclusions: The deconfounding algorithm without decorrelation using quantile normalization on non-log data is proposed for biomarkers that are difficult to detect, and for cases where confounding by varying proportions of cell types is the suspected reason. In this case, a deconfounding ranking approach can be used as a powerful alternative to, or complement of, other statistical learning approaches to define candidate biomarkers for molecular diagnosis and prediction in biomedicine, in realistically noisy conditions and with moderate sample sizes.
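For orientation, the generic mixing model behind such deconfounding approaches, stated here in our own notation rather than the paper's, is

$$x_{gj} = \sum_{t=1}^{T} s_{gt}\, c_{tj}, \qquad s_{gt} \ge 0,\quad c_{tj} \ge 0,\quad \sum_{t=1}^{T} c_{tj} = 1,$$

where $x_{gj}$ is the measured expression of gene $g$ in mixed sample $j$, $s_{gt}$ the cell-type-specific expression signal, and $c_{tj}$ the proportion of cell type $t$ in sample $j$. Quantile normalization, as recommended above, replaces the value of rank $r$ in sample $j$ by the mean of the rank-$r$ values across all $m$ samples: $\tilde{x}_{(r),j} = \frac{1}{m} \sum_{k=1}^{m} x_{(r),k}$.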
The evaluation of courses has a long tradition at many educational institutions. In these classic evaluation scenarios, questionnaires are handed out to the students once per semester and subsequently evaluated manually. The results are then usually available at the end of the lecture period and provide a selective insight into the quality of the course up to the time of the evaluation. In this article, we present the concept of rapid feedback, its possible uses in university courses, and a prototypical integration into a co-active learning and working environment.
Roughly every third Wikipedia article contains an infobox - a table that displays important facts about the subject in attribute-value form. The schema of an infobox, i.e., the attributes that can be expressed for a concept, is defined by an infobox template. Often, authors do not specify all template attributes, resulting in incomplete infoboxes. With iPopulator, we introduce a system that automatically populates infoboxes of Wikipedia articles by extracting attribute values from the article's text. In contrast to prior work, iPopulator detects and exploits the structure of attribute values for independently extracting value parts. We have tested iPopulator on the entire set of infobox templates and provide a detailed analysis of its effectiveness. For instance, we achieve an average extraction precision of 91% for 1,727 distinct infobox template attributes.
STG decomposition is a promising approach to tackle the complexity problems arising in logic synthesis of speed independent circuits, a robust asynchronous (i.e. clockless) circuit type. Unfortunately, STG decomposition can result in components that in isolation have irreducible CSC conflicts. Generalising earlier work, it is shown how to resolve such conflicts by introducing internal communication between the components via structural techniques only.
Aspect-oriented programming, component models, and design patterns are modern and actively evolving techniques for improving the modularization of complex software. In particular, these techniques hold great promise for the development of "systems infrastructure" software, e.g., application servers, middleware, virtual machines, compilers, operating systems, and other software that provides general services for higher-level applications. The developers of infrastructure software are faced with increasing demands from application programmers needing higher-level support for application development. Meeting these demands requires careful use of software modularization techniques, since infrastructural concerns are notoriously hard to modularize. Aspects, components, and patterns provide very different means to deal with infrastructure software, but despite their differences, they have much in common. For instance, component models try to free the developer from the need to deal directly with services like security or transactions. These are primary examples of crosscutting concerns, and modularizing such concerns is the main target of aspect-oriented languages. Similarly, design patterns like Visitor and Interceptor facilitate the clean modularization of otherwise tangled concerns. Building on the ACP4IS meetings at AOSD 2002-2009, this workshop aims to provide a highly interactive forum for researchers and developers to discuss the application of and relationships between aspects, components, and patterns within modern infrastructure software. The goal is to put aspects, components, and patterns into a common reference frame and to build connections between the software engineering and systems communities.
Enforcing security policies on distributed systems is difficult, in particular when a system contains untrusted components. We designed AspectKE*, a distributed AOP language based on a tuple space, to tackle this issue. In AspectKE*, aspects can enforce access control policies that depend on future behavior of running processes. One of the key language features is a set of predicates and functions that extract results of static program analysis, which are useful for defining security aspects that have to know about the future behavior of a program. AspectKE* also provides a novel variable binding mechanism for pointcuts, so that pointcuts can uniformly specify join points based on both static and dynamic information about the program. Our implementation strategy performs fundamental static analysis at load time, so as to keep runtime overheads minimal. We implemented a compiler for AspectKE*, and demonstrate the usefulness of AspectKE* through a security aspect for a distributed chat system.
Component-based software development (CBSD) and aspect-oriented software development (AOSD) are two complementary approaches. However, existing proposals for integrating aspects into component models are direct transpositions of object-oriented AOSD techniques to components. In this article, we propose a new approach based on views. Our proposal introduces crosscutting components quite naturally and can be integrated into different component models.
The interest in extensions of the logic programming paradigm beyond the class of normal logic programs is motivated by the need for an adequate representation and processing of knowledge. One of the most difficult problems in this area is to find an adequate declarative semantics for logic programs. In the present paper, a general preference criterion is proposed that selects the ‘intended’ partial models of generalized logic programs; it is a conservative extension of the stationary semantics for normal logic programs of [Prz91]. The presented preference criterion defines a partial model of a generalized logic program as intended if it is generated by a stationary chain. It turns out that the stationary generated models coincide with the stationary models on the class of normal logic programs. The general well-founded semantics of such a program is defined as the set-theoretical intersection of its stationary generated models. For normal logic programs, the general well-founded semantics equals the well-founded semantics.
Different properties of programs implemented in Constraint Handling Rules (CHR) have already been investigated. Proving these properties in CHR is considerably simpler than proving them in any type of imperative programming language, which triggered the proposal of a methodology to map imperative programs into equivalent CHR programs. The equivalence of both programs implies that if a property is satisfied for one, then it is satisfied for the other. The mapping methodology could be put to other beneficial uses. One such use is the automatic generation of global constraints, in an attempt to demonstrate the benefits of having a rule-based implementation for constraint solvers.
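For readers unfamiliar with CHR, the following classic rule set (the standard leq example, not output of the paper's mapping) shows the rule style; it runs in SWI-Prolog with library(chr):

```prolog
% Partial-order constraint leq/2 in CHR (textbook example).
:- use_module(library(chr)).
:- chr_constraint leq/2.

reflexivity  @ leq(X, X) <=> true.              % drop trivial constraints
antisymmetry @ leq(X, Y), leq(Y, X) <=> X = Y.  % collapse cycles
idempotence  @ leq(X, Y) \ leq(X, Y) <=> true.  % remove duplicates
transitivity @ leq(X, Y), leq(Y, Z) ==> leq(X, Z).
```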
In the most abstract definition of its operational semantics, the declarative and concurrent programming language CHR is trivially non-terminating for a significant class of programs. Common refinements of this definition, in closing the gap to real-world implementations, compromise on declarativity and/or concurrency. Building on recent work and the notion of persistent constraints, we introduce an operational semantics avoiding trivial non-termination without compromising on its essential features.
We introduce a simple approach extending the input language of Answer Set Programming (ASP) systems by multi-valued propositions. Our approach is implemented as a (prototypical) preprocessor translating logic programs with multi-valued propositions into logic programs with Boolean propositions only. Our translation is modular and heavily benefits from the expressive input language of ASP. The resulting approach, along with its implementation, allows for solving interesting constraint satisfaction problems in ASP, showing a good performance.
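One standard shape such a translation can take, shown here as a sketch of the general idea rather than the preprocessor's concrete encoding, replaces a multi-valued proposition $p$ ranging over $\{d_1, \dots, d_n\}$ by Boolean atoms $p_{d_1}, \dots, p_{d_n}$ together with an exactly-one constraint:

$$p_{d_1} \vee \dots \vee p_{d_n}, \qquad \neg\,(p_{d_i} \wedge p_{d_j}) \quad \text{for all } i \neq j.$$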
We present the tool Kato which is, to the best of our knowledge, the first tool for plagiarism detection that is directly tailored for answer-set programming (ASP). Kato aims at finding similarities between (segments of) logic programs to help detect cases of plagiarism. Currently, the tool is realised for DLV programs but it is designed to handle various logic-programming syntax versions. We review basic features and the underlying methodology of the tool.
In this talk, I would like to share my experiences gained from participating in four CSP solver competitions and the second ASP solver competition. In particular, I’ll talk about how various programming techniques can make huge differences in solving some of the benchmark problems used in the competitions. These techniques include global constraints, table constraints, and problem-specific propagators and labeling strategies for selecting variables and values. I’ll present these techniques with experimental results from B-Prolog and other CLP(FD) systems.
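As a small illustration of the kind of techniques mentioned, the classic SEND+MORE=MONEY puzzle below combines the global constraint all_distinct/1 with first-fail labeling; it is a textbook example in SWI-Prolog's library(clpfd), not one of the competition benchmarks:

```prolog
:- use_module(library(clpfd)).

% send_more_money(-Digits): solve the cryptarithmetic puzzle with a
% global constraint and a first-fail variable selection strategy.
send_more_money([S,E,N,D,M,O,R,Y]) :-
    Vars = [S,E,N,D,M,O,R,Y],
    Vars ins 0..9,
    all_distinct(Vars),                    % global constraint
    S #\= 0, M #\= 0,                      % no leading zeros
              1000*S + 100*E + 10*N + D
            + 1000*M + 100*O + 10*R + E
    #= 10000*M + 1000*O + 100*N + 10*E + Y,
    labeling([ff], Vars).                  % first-fail labeling
```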
We describe a framework to support the implementation of web-based systems to manipulate data stored in relational databases. Since the conceptual model of a relational database is often specified as an entity-relationship (ER) model, we propose to use the ER model to generate a complete implementation in the declarative programming language Curry. This implementation contains operations to create and manipulate entities of the data model, supports authentication, authorization, session handling, and the composition of individual operations to user processes. Furthermore and most important, the implementation ensures the consistency of the database w.r.t. the data dependencies specified in the ER model, i.e., updates initiated by the user cannot lead to an inconsistent state of the database. In order to generate a high-level declarative implementation that can be easily adapted to individual customer requirements, the framework exploits previous works on declarative database programming and web user interface construction in Curry.
Data obtained from foreign data sources often come with only superficial structural information, such as relation names and attribute names. Other types of metadata that are important for effective integration and meaningful querying of such data sets are missing. In particular, relationships among attributes, such as foreign keys, are crucial metadata for understanding the structure of an unknown database. The discovery of such relationships is difficult, because in principle for each pair of attributes in the database each pair of data values must be compared. A precondition for a foreign key is an inclusion dependency (IND) between the key and the foreign key attributes. We present Spider, an algorithm that efficiently finds all INDs in a given relational database. It leverages the sorting facilities of the DBMS but performs the actual comparisons outside of the database to save computation. Spider analyzes very large databases up to an order of magnitude faster than previous approaches. We also evaluate in detail the effectiveness of several heuristics to reduce the number of necessary comparisons. Furthermore, we generalize Spider to find composite INDs covering multiple attributes, and partial INDs, which are true INDs for all but a certain number of values. This last type is particularly relevant when integrating dirty data as is often the case in the life sciences domain - our driving motivation.
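The unary IND condition itself is easy to state. The following sketch tests it in plain Prolog over extracted value lists, assuming library(ordsets); it deliberately ignores Spider's sort-merge machinery and its handling of composite and partial INDs:

```prolog
:- use_module(library(ordsets)).

% ind(+ValuesA, +ValuesB): attribute A is included in attribute B
% if every value of A also occurs among the values of B.
ind(ValuesA, ValuesB) :-
    sort(ValuesA, SetA),        % sort/2 also removes duplicates
    sort(ValuesB, SetB),
    ord_subset(SetA, SetB).
```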
Companies develop process models to explicitly describe their business operations. At the same time, these business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations, e.g., the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment. In other cases, non-adherence to compliance leads to loss of competitive advantage and thus loss of market share. Unlike the classical domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time. New requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are imposed or enforced by different entities that have different objectives behind these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process of modeling compliance rules and checking them against business processes automatically is needed. This thesis provides a formal approach to support process design-time compliance checking. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow, and conditional flow rules. Each pattern is mapped into a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. Also, the thesis contributes an approach to automatically check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user. The feedback takes the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing automated remedies for the violation.
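As an illustrative instance of such a pattern-to-formula mapping, using our own notation rather than the thesis's concrete pattern syntax, the control-flow rule "every execution of activity A must eventually be followed by activity B" corresponds to the LTL formula

$$\mathbf{G}\,\bigl(\mathit{exec}(A) \rightarrow \mathbf{F}\,\mathit{exec}(B)\bigr),$$

which a model checker can then verify against the state space of the process model.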
This thesis presents methods, techniques and tools for developing three-dimensional representations of tactical intelligence assessments. Techniques from GIScience are combined with crime mapping methods. The range of methods applied in this study spans spatio-temporal GIS analysis as well as 3D geovisualisation and GIS programming. The work presents methods to enhance digital three-dimensional city models with application-specific thematic information. This information facilitates further geovisual analysis, for instance, estimations of urban risk exposure. Specific methods and workflows are developed to facilitate the integration of spatio-temporal crime scene analysis results into 3D tactical intelligence assessments. Analysis comprises hotspot identification with kernel density estimation techniques (KDE), LISA-based verification of KDE hotspots as well as geospatial hotspot area characterisation and repeat victimisation analysis. To visualise the findings of such extensive geospatial analysis, three-dimensional geovirtual environments are created. Workflows are developed to integrate analysis results into these environments and to combine them with additional geospatial data. The resulting 3D visualisations allow for an efficient communication of complex findings of geospatial crime scene analysis.
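For orientation, one common form of the planar kernel density estimator used for such hotspot maps, with bandwidth $h$, kernel $K$, and incident locations $u_1, \dots, u_n$, is

$$\hat{f}(u) = \frac{1}{n h^{2}} \sum_{i=1}^{n} K\!\left(\frac{\lVert u - u_i \rVert}{h}\right);$$

the thesis may use a different kernel, bandwidth selection, or edge correction.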
The correctness of model transformations is a crucial element for the model-driven engineering of high-quality software. A prerequisite for verifying model transformations at the level of the model transformation specification is that an unambiguous formal semantics exists and that the employed implementation of the model transformation language adheres to this semantics. However, for existing relational model transformation approaches it is usually unclear under which constraints particular implementations actually conform to the formal semantics. In this paper, we bridge this gap for the formal semantics of triple graph grammars (TGG) and an existing efficient implementation. Whereas the formal semantics assumes backtracking and ignores non-determinism, practical implementations do not support backtracking, require rule sets that ensure determinism, and include further optimizations. Therefore, we capture how the considered TGG implementation realizes the transformation by means of operational rules, define required criteria, and show conformance to the formal semantics if these criteria are fulfilled. We further outline how static analysis can be employed to guarantee these criteria.
Abstract: Game-based learning and edutainment are current buzzwords in higher education. At first glance, they suggest the integration of a culture of play and fun into conventional forms of teaching such as lectures, exercises, labs, and seminars. The following remarks pursue a closer analysis of the terms and examine whether game-based learning and edutainment actually require novel forms of teaching or bring new didactic considerations into existing teaching practice, or whether in some places it is simply a case of „old wine in new skins".
We present an approach that provides automatic or semi-automatic support for evolution and change management in heterogeneous legacy landscapes where (1) legacy heterogeneous, possibly distributed platforms are integrated in a service oriented fashion, (2) the coordination of functionality is provided at the service level, through orchestration, (3) compliance and correctness are provided through policies and business rules, (4) evolution and correctness-by-design are supported by the eXtreme Model Driven Development paradigm (XMDD) offered by the jABC (Margaria and Steffen in Annu. Rev. Commun. 57, 2004)—the model-driven service oriented development platform we use here for integration, design, evolution, and governance. The artifacts are here semantically enriched, so that automatic synthesis plugins can field the vision of Enterprise Physics: knowledge driven business process development for the end user.
We demonstrate this vision along a concrete case study that has become, over the past three years, a benchmark for Semantic Web Service discovery and mediation. We enhance the Mediation Scenario of the Semantic Web Service Challenge along the two central evolution paradigms that occur in practice: (a) platform migration: substitution of a legacy system by an ERP system, and (b) backend extension: extension of the legacy Customer Relationship Management (CRM) and Order Management System (OMS) backends via an additional ERP layer.
Preface
(2010)
The workshops on (constraint) logic programming (WLP) are the annual meeting of the Society of Logic Programming (GLP e.V.) and bring together researchers interested in logic programming, constraint programming, and related areas like databases, artificial intelligence and operations research. In this decade, previous workshops took place in Dresden (2008), Würzburg (2007), Vienna (2006), Ulm (2005), Potsdam (2004), Dresden (2002), Kiel (2001), and Würzburg (2000). Contributions to workshops deal with all theoretical, experimental, and application aspects of constraint programming (CP) and logic programming (LP), including foundations of constraint/logic programming. Some of the special topics are constraint solving and optimization, extensions of functional logic programming, deductive databases, data mining, nonmonotonic reasoning, interaction of CP/LP with other formalisms like agents, XML, JAVA, program analysis, program transformation, program verification, meta programming, parallelism and concurrency, answer set programming, implementation and software techniques (e.g., types, modularity, design patterns), applications (e.g., in production, environment, education, internet), constraint/logic programming for semantic web systems and applications, reasoning on the semantic web, data modelling for the web, semistructured data, and web query languages.
The workshops on (constraint) logic programming (WLP) are the annual meeting of the Society of Logic Programming (GLP e.V.) and bring together researchers interested in logic programming, constraint programming, and related areas like databases, artificial intelligence and operations research. The 23rd WLP was held in Potsdam on September 15–16, 2009. The topics of the presentations of WLP2009 were grouped into the major areas: Databases, Answer Set Programming, Theory and Practice of Logic Programming as well as Constraints and Constraint Handling Rules.
Aspect-oriented middleware is a promising technology for the realisation of dynamic reconfiguration in heterogeneous distributed systems. However, like other dynamic reconfiguration approaches, AO-middleware-based reconfiguration requires that the consistency of the system is maintained across reconfigurations. AO-middleware-based reconfiguration is an ongoing research topic and several consistency approaches have been proposed. However, most of these approaches tend to be targeted at specific contexts, whereas for distributed systems it is crucial to cover a wide range of operating conditions. In this paper we propose an approach that offers distributed, dynamic reconfiguration in a consistent manner and features a flexible framework-based consistency management scheme to cover a wide range of operating conditions. We evaluate this approach by investigating its configurability and transparency, and we also quantify the performance overheads of the associated consistency mechanisms.
In this paper we consider a simple syntactic extension of Answer Set Programming (ASP) for dealing with (nested) existential quantifiers and double negation in rule bodies, closely following the recent proposal RASPL-1. The semantics for this extension just resorts to Equilibrium Logic (or, equivalently, to the General Theory of Stable Models), which provides a logic-programming interpretation for any arbitrary theory in the syntax of Predicate Calculus. We present a translation of this syntactic class into standard logic programs with variables (either disjunctive or normal, depending on the input rule heads), as those allowed by current ASP solvers. The translation relies on the introduction of auxiliary predicates and the main result shows that it preserves strong equivalence modulo the original signature.
The difference-list technique is described in the literature as an effective method for extending lists to the right without calls of append/3. There exist some proposals for the automatic transformation of list programs into difference-list programs. However, we are interested in the construction of difference-list programs by the programmer, avoiding the need for a transformation step. In [GG09] it was demonstrated how left-recursive procedures with a dangling call of append/3 can be transformed into right recursion using the unfolding technique. To simplify the writing of difference-list programs, a new cons/2 procedure was introduced. In the present paper, we investigate how efficiency is influenced by using cons/2. We measure the efficiency of procedures using the accumulator technique, cons/2, DCGs, and difference lists, and compute the resulting speedup with respect to the simple procedure definition using append/3. Four Prolog systems were investigated, and we found different behaviour concerning the speedup achieved by difference lists. One result of our investigations is that a piece of advice often given in the literature, namely to avoid calls of append/3, could not be confirmed in this strong formulation.
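To illustrate the technique under discussion, the following textbook comparison (not one of the paper's measured programs) flattens a binary tree once with append/3 and once with a difference list threaded through the recursion:

```prolog
% Naive version: append/3 re-traverses the left result at every node.
flatten_app(leaf(X), [X]).
flatten_app(node(L, R), Xs) :-
    flatten_app(L, Ls),
    flatten_app(R, Rs),
    append(Ls, Rs, Xs).

% Difference-list version: the open tail (Hole) is threaded through
% the recursion, so concatenation costs nothing.
flatten_dl(Tree, Xs) :- flatten_dl(Tree, Xs, []).

flatten_dl(leaf(X), [X|Hole], Hole).
flatten_dl(node(L, R), Xs, Hole) :-
    flatten_dl(L, Xs, Mid),
    flatten_dl(R, Mid, Hole).
```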
We propose a paraconsistent declarative semantics of possibly inconsistent generalized logic programs which allows for arbitrary formulas in the body and in the head of a rule (i.e. does not depend on the presence of any specific connective, such as negation(-as-failure), nor on any specific syntax of rules). For consistent generalized logic programs this semantics coincides with the stable generated models introduced in [HW97], and for normal logic programs it yields the stable models in the sense of [GL88].
A wide range of additional forward-chaining applications could be realized with deductive databases if their rule formalism, their immediate consequence operator, and their fixpoint iteration process were more flexible. Deductive databases normally represent knowledge using stratified Datalog programs with default negation. But many practical applications of forward chaining require an extensible set of user-defined built-in predicates. Moreover, they often need function symbols for building complex data structures, and the stratified fixpoint iteration has to be extended by aggregation operations. We present a new language Datalog*, which extends Datalog by stratified meta-predicates (including default negation), function symbols, and user-defined built-in predicates, which are implemented and evaluated top-down in Prolog. All predicates are subject to the same backtracking mechanism. The bottom-up fixpoint iteration can aggregate the derived facts after each iteration based on user-defined Prolog predicates.
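The flavour of bottom-up fixpoint iteration with rule bodies evaluated by Prolog can be sketched as a toy naive evaluator; this is our own illustration, not the Datalog* implementation:

```prolog
:- dynamic fact/1.

% Toy program: transitive closure of edge/2, stored as rule/2 facts.
rule(path(X, Y), [edge(X, Y)]).
rule(path(X, Z), [edge(X, Y), path(Y, Z)]).

fact(edge(a, b)).
fact(edge(b, c)).

% tp_step(-New): facts derivable by one application of the
% immediate-consequence operator that are not yet known.
tp_step(New) :-
    findall(H,
            ( rule(H, Body), maplist(fact, Body), \+ fact(H) ),
            News),
    sort(News, New).

% fixpoint/0: iterate tp_step/1 until nothing new is derived.
fixpoint :-
    tp_step(New),
    (   New == []
    ->  true
    ;   forall(member(F, New), assertz(fact(F))),
        fixpoint
    ).
```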
Deductive databases need general formulas in rule bodies, not only conjunctions of literals. This has been well known since the work of Lloyd and Topor on extended logic programming. Of course, formulas must be restricted in such a way that they can be effectively evaluated in finite time and produce only a finite number of new tuples (in each iteration of the TP-operator; the fixpoint can still be infinite). It is also necessary to respect the binding restrictions of built-in predicates: many of these predicates can be executed only when certain arguments are ground. Whereas for standard logic programming rules, questions of safety, allowedness, and range-restriction are relatively easy and well understood, the situation for general formulas is a bit more complicated. We give a syntactic analysis of formulas that guarantees the necessary properties.
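A two-rule illustration of the distinction (our example, not taken from the paper):

```prolog
price(book, 12).
price(laptop, 900).

% Safe: P is bound by the positive body literal price/2 before the
% built-in comparison is evaluated.
cheap(X) :- price(X, P), P < 100.

% Unsafe (violates range-restriction): Y occurs only under negation,
% so no finite set of bindings can be enumerated for it.
% odd(Y) :- \+ even(Y).
```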
First-year computer science students in Germany arrive with very different levels of prior programming knowledge. This repeatedly causes difficulties in pitching the introductory courses. Since the winter semester 2008/2009, TU München has been offering a new kind of preparatory course. In only 2.5 days, the participants create a small object-oriented program, working largely on their own and supported by a student tutor. This paper presents the concept of these so-called „Vorprojekte" (preliminary projects) as well as first research approaches.
The possibilities for informing oneself and taking part in the lives of the many others have grown beyond measure through the Internet, with its tweets, Google services, and social networks such as Facebook. At the same time, many users feel overwhelmed and believe they are drowning in the sea of information. Frank Schirrmacher, for instance, confesses in his book Payback that he is no longer up to the intellectual demands of our time: his head can no longer keep up; he is unfocused, forgetful, and constantly distracted. What has become a problem for many is viewed rather pragmatically by many students. At universities, knowledge acquisition in times of the Internet and e-learning often proceeds according to the Helene Hegemann method: first, the students read up on a topic on Wikipedia, for example in the context of a term paper, and an entry point is made. This knowledge is then enriched with Google, providing overview knowledge. With skilful copy-and-paste composition, a „work" can already be produced from this. Some students are satisfied with this kind of knowledge acquisition and break off their learning process at this point. In the end, of course, every student is responsible for his or her own knowledge acquisition. The evidently unsatisfactory situation should nevertheless challenge universities to try out the Internet in lectures and seminars and to develop sensible applications. Examples certainly exist. Under the metaphor of e-learning, an extensive research focus has developed at universities. A few examples of many: the Osnabrück computer science professor Oliver Vornberger has put his lectures online as videos, and via RSS it is possible to load sequences onto an iPod. The usual lecturers' fear of then facing empty benches appears unfounded; the recordings are used by students above all for exam preparation. How can the Internet, which has become an all-displacing universal medium for the young generation, be integrated didactically into university teaching? How, concretely, should these challenges be handled? This is what the following is concerned with.
When looking for ways to expand continuing education for computer science teachers, the use of virtual learning spaces suggests itself. This paper reports on a project in which an exemplary virtual learning space for collaborative learning in the continuing education of computer science teachers was designed, trialled, and evaluated in a theory-guided way. The results obtained on usage behaviour can be helpful for further e-learning projects in teacher education. The focus of this paper is on the design of the learning space with due regard to the special situation of computer science teachers, not on the didactic preparation of the learning unit concerned.
There are many ideas, but as yet no adequate solutions, concerning the integration of highly qualified migrants and the demand for them on the German labour market. This article describes a practical solution for implementing a qualification concept for academic migrants, using a computer science study programme at the Universität Oldenburg as an example.
KoProV
(2010)
In university teaching, the guiding principle is shifting from a qualification-oriented to a competence-oriented education. The notion of competence can be roughly divided into subject-specific and generic competences. The teaching of key qualifications in particular has found its way into science curricula only insufficiently. While the classical lecture format aims at the acquisition of subject competence, purely project-oriented courses quickly reach their limits with regard to the number of participants or the scope of the learning content. To enable the acquisition of generic competences in a suitable way, new didactic concepts are needed that link classical lectures more closely with project-oriented learning. In this spirit, the approach of the coordinated project lecture (KoProV) sketched here attempts to combine knowledge transfer in lecture units with coordinated practical phases in subgroups. For a successful implementation and completion of the accompanying practical project by several subgroups, organisational and technical boundary conditions have to be taken into account.
In recent years, the number of students passing the first-year computer science examinations in various degree programmes at Óbuda University has dropped sharply. This concerns examinations in the subject areas of computer architecture, operation of peripheral devices, binary coding and logical operations, computer viruses, computer networks and the Internet, steganography and cryptography, and operating systems. More than half of the students were unable to pass the examinations of the first semesters. The analysis of study performance presented here aims to identify reasons for this development, to reduce the number of dropouts, and to improve student performance. The analysis shows that students download the required teaching materials from the server only one or two days before, or even on the day of, the examinations, so that they no longer have sufficient time to learn. This tendency is apparent in all subject areas of the programme. A lack of continuous engagement appears to be one of the reasons for early failure. Furthermore, it becomes evident that computer science courses need to ensure continuous communication with students and feedback on current course content. This can be achieved through measures that motivate participation in the exercises or through short weekly written tests.
On 24 and 25 June 2010, the 3rd German IPv6 Summit 2010 took place at the Hasso-Plattner-Institut für Softwaresystemtechnik GmbH in Potsdam; the present technical report serves as its documentation. As the national arm of the worldwide IPv6 Forum, the German IPv6 Council promotes the transition process to the new Internet generation and, in this context, brought together national and international experts from industry, academia, and public administration to raise awareness of IPv6 as a topic of the future and to take stock of the progress achieved so far. The limits of the old Internet protocol IPv4 have become more apparent than ever over the past two years: while 11% of all assignable IPv4 addresses were still available at last year's 2nd IPv6 Summit, this figure has meanwhile shrunk to a mere 6%. This year's guest of honour was the "European father" of the Internet, Prof. Peter T. Kirstein of University College London, whose keynote was complemented by further contributions from high-ranking representatives of politics, academia, and industry.
Data in business processes
(2011)
Processes and data are equally important for business process management. Process data are particularly relevant in the context of business process automation, process controlling, and the representation of an organisation's assets. Many process modelling languages exist, each of which enables the representation of data through a fixed set of modelling constructs. However, these representations, and thus the degree of data modelling, differ considerably from one another. This report evaluates several process modelling languages with respect to their support for data modelling. As a common basis, we develop a framework that systematically organises process- and data-relevant aspects, with the criteria placing the main emphasis on the data-relevant aspects. After introducing the framework, we compare twelve process modelling languages against it. We generalise the findings from these comparisons and identify clusters with respect to the degree of data modelling, into which the individual languages are grouped.
The modeling and evaluation calculus FMC-QE, the Fundamental Modeling Concepts for Quantitative Evaluation [1], extends the Fundamental Modeling Concepts (FMC) for performance modeling and prediction. In this new methodology, hierarchical service requests are the main focus, because they are the origin of every service provisioning process. Similar to physics, these service requests are a tuple of value and unit, which enables hierarchical service request transformations at the hierarchical borders and therefore hierarchical modeling. By reducing model complexity through decomposing the system into different hierarchical views, distinguishing between operational and control states, and calculating the performance values under the steady-state assumption, FMC-QE is scalably applicable to complex systems. According to FMC, the system is modeled in a 3-dimensional hierarchical representation space, where system performance parameters are described in three arbitrarily fine-grained hierarchical bipartite diagrams. The hierarchical service request structures are modeled in Entity Relationship Diagrams. The static server structures, divided into logical and real servers, are described as Block Diagrams. The dynamic behavior and the control structures are specified as Petri Nets, more precisely Colored Time Augmented Petri Nets. From the structures and parameters of the performance model, a hierarchical set of equations is derived. The calculation of the performance values is done under the assumption of stationary processes and is based on fundamental laws of performance analysis: Little's Law and the Forced Traffic Flow Law. Little's Law is used within the different hierarchical levels (horizontal), and the Forced Traffic Flow Law is the key to the dependencies among the hierarchical levels (vertical). This calculation is suitable for complex models and allows a fast (re-)calculation of different performance scenarios in order to support development and configuration decisions. Within the Research Group Zorn at the Hasso Plattner Institute, this work is embedded in broader research on the development of FMC-QE. While this work concentrates on the theoretical background, description, and definition of the methodology as well as the extension and validation of its applicability, other topics concern the development of an FMC-QE modeling and evaluation tool and the usage of FMC-QE in the design of an adaptive transport layer in order to fulfill Quality of Service and Service Level Agreements in volatile service-based environments. This thesis contains a state-of-the-art survey, the description of FMC-QE, and extensions of FMC-QE in representative general models and case studies. In the state-of-the-art part of the thesis in chapter 2, an overview of existing Queueing Theory and Time Augmented Petri Net models and other quantitative modeling and evaluation languages and methodologies is given; other hierarchical quantitative modeling frameworks are also considered. The description of FMC-QE in chapter 3 consists of a summary of the foundations of FMC-QE, basic definitions, the graphical notations, the FMC-QE Calculus, and the modeling of open queueing networks as an introductory example. The extensions of FMC-QE in chapter 4 consist of the integration of the summation method in order to support the handling of closed networks and the modeling of multiclass and semaphore scenarios.
Furthermore, FMC-QE is compared to other performance modeling and evaluation approaches. In the case study part in chapter 5, proof-of-concept examples are provided, such as the modeling of a service-based search portal, a service-based SAP NetWeaver application, and the Axis2 Web service framework. Finally, conclusions are given in chapter 6 by a summary of contributions and an outlook on future work. [1] Werner Zorn. FMC-QE - A New Approach in Quantitative Modeling. In Hamid R. Arabnia, editor, Proceedings of the International Conference on Modeling, Simulation and Visualization Methods (MSV 2007) within WorldComp '07, pages 280-287, Las Vegas, NV, USA, June 2007. CSREA Press. ISBN 1-60132-029-9.
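The two laws the FMC-QE calculus builds on can be sketched in a few lines of Python. This is textbook operational analysis with made-up numbers, not the FMC-QE tool or its hierarchical equation system:

    def littles_law(arrival_rate, response_time):
        # Little's Law: N = lambda * R (mean population at a station/level).
        return arrival_rate * response_time

    def forced_flow_law(system_throughput, visit_ratio):
        # Forced (Traffic) Flow Law: X_k = V_k * X, linking hierarchy levels.
        return system_throughput * visit_ratio

    X = 10.0                     # system throughput in requests/s (assumed)
    levels = {                   # per level: (visit ratio, per-visit response time in s)
        "portal":   (1.0, 0.050),
        "service":  (3.0, 0.020),
        "database": (9.0, 0.005),
    }
    for name, (v, r) in levels.items():
        x_k = forced_flow_law(X, v)   # level throughput (vertical dependency)
        n_k = littles_law(x_k, r)     # mean request population (horizontal)
        print(f"{name}: X_k={x_k:.1f}/s, N_k={n_k:.3f}")

As in FMC-QE, the Forced Flow Law propagates the load across levels, while Little's Law yields the population within each level.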
For several years, public administration has been using e-government application systems in order to support its administrative processes more intensively with modern information technology. Since public administration is bound to law and statute to a particular degree in its actions, the connection between laws and legal regulations on the one hand and the information technology used to support administrative tasks on the other is intensifying and spreading. From the perspective of software engineering, this connection is a special form of requirements traceability, so-called pre-requirements specification traceability (pre-RS traceability), since it concerns aspects that are relevant before the requirements have entered a specification (the origins of requirements). The approach of this thesis contributes to the pre-requirements specification traceability of e-government application systems. To this end, it combines current developments and standards (in particular of the World Wide Web Consortium and the Object Management Group) from the areas of requirements traceability, Semantic Web, ontology languages, and model-driven software development. The solution comprises a special ontology of administrative action, which is used with the techniques, methods, and tools of the Semantic Web to annotate relevant origins of requirements in the texts of legal regulations with a defined semantics. Building on this, the Ontology Definition Metamodel (ODM) is used to map the annotations, as special individuals of an ontology, onto elements of the Unified Modeling Language (UML). This gives rise to a new model type, the Pre-Requirements Model (PRM), which formalises the stage preceding the requirements specification. Models of this type can also be used to formalise aspects that do not, or do not fully, follow from the text of the legal regulation. Furthermore, the model offers a connection to model-driven software development. The thesis therefore proposes an extension of the Model Driven Architecture (MDA): in addition to the established model types Computation Independent Model (CIM), Platform Independent Model (PIM), and Platform Specific Model (PSM), the use of the PRM could bring advantages for traceability. If the MDA is extended with the PRM to cover the stage preceding the requirements specification, the PRM can be transformed into a CIM serving as an initial requirements specification by employing the MOF Query View Transformation standard (QVT). As part of the QVT standard, the recording of traceability information during model transformations is mandatory. To bridge the semantic gap between PRM and CIM, special auxiliary models are employed, analogously to the use of the platform model (PM) in the PIM-to-PSM transformation. For this purpose, the reference models developed in the project "E-LoGo" at the Universität Potsdam are used. By recording the mapping of annotated text elements onto elements of the PRM and the transformation of PRM elements into CIM elements, end-to-end traceability in the stage preceding the requirements specification can be achieved.
The approach is based on a so-called traceability documentation in the form of linked hypertext documents, which are generated by means of XSL stylesheets and are connected to the graphical representation of the diagram (e.g. UML use case or class diagrams). The approach comprehensively supports horizontal traceability between elements of different models, both forwards and backwards. It also offers vertical traceability, which relates elements of the same model and of different model versions to one another. Beyond the obvious benefits of end-to-end traceability in the stage preceding the requirements specification (e.g. analysing the impact of a change in legislation, or taking the complete context of a requirement into account when prioritising it), this thesis offers a first starting point for a feedback loop in the legislative process. If, for example, several equivalent design options for a law are available, the impact of each option can be analysed and the effort of implementing it in e-government applications can be taken into account as a selection criterion. The amendment of the NKRG that came into force on 16 March 2011 already makes such an analysis of the so-called "Erfüllungsaufwand" (compliance cost) mandatory for parts of administrative action. For this analysis, the present thesis can offer an approach for arriving at well-founded statements about the modification effort of the e-government application systems in use.
Biology has made great progress in identifying and measuring the building blocks of life. The availability of high-throughput methods in molecular biology has dramatically accelerated the growth of biological knowledge for various organisms. The advancements in genomic, proteomic, and metabolomic technologies allow for constructing complex models of biological systems, and an increasing number of biological repositories is available on the web, incorporating thousands of biochemical reactions and genetic regulations. Systems Biology is a recent research trend in life science which fosters a systemic view on biology. In Systems Biology one is interested in integrating the knowledge from all these different sources into models that capture the interaction of these entities. By studying these models one wants to understand the emerging properties of the whole system, such as robustness. However, both measurements and biological networks are prone to considerable incompleteness, heterogeneity, and mutual inconsistency, which makes it highly non-trivial to draw biologically meaningful conclusions in an automated way. Therefore, we promote Answer Set Programming (ASP) as a tool for discrete modeling in Systems Biology. ASP is a declarative problem solving paradigm in which a problem is encoded as a logic program such that its answer sets represent solutions to the problem. ASP has intrinsic features to cope with incompleteness, offers a rich modeling language, and comes with highly efficient solving technology. We present ASP solutions for the analysis of genetic regulatory networks, determining consistency with observed measurements and identifying minimal causes for inconsistency. We extend this approach to computing minimal repairs on model and data that restore consistency. This method allows for predicting unobserved data even in case of inconsistency. Further, we present an ASP approach to metabolic network expansion. This approach exploits the easy characterization of reachability in ASP and its various reasoning methods to explore the biosynthetic capabilities of metabolic reaction networks and to generate hypotheses for extending the network. Finally, we present the BioASP library, a Python library which encapsulates our ASP solutions into the imperative programming paradigm. The library allows for an easy integration of ASP solutions into rich software environments, as they exist in Systems Biology.
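To give a flavour of the reachability characterization mentioned above, here is a minimal Python sketch that hands a toy metabolic-expansion encoding to an ASP solver. It assumes the clingo Python module and an invented three-metabolite network; it is not BioASP's actual API:

    import clingo

    PROGRAM = """
    reaction(r1). produces(r1,b). consumes(r1,a).
    reaction(r2). produces(r2,c). consumes(r2,b).
    seed(a).
    % a metabolite is reachable if it is a seed, or produced by a
    % reaction all of whose inputs are reachable
    reachable(M) :- seed(M).
    reachable(M) :- produces(R,M), reaction(R), reachable(N) : consumes(R,N).
    #show reachable/1.
    """

    ctl = clingo.Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: print("Answer set:", m))  # reachable(a) reachable(b) reachable(c)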
We explore the parsability of several grammar formalisms that also generate non-context-free languages. Chomsky grammars, Lindenmayer systems, grammars with controlled derivations, and grammar systems are treated. Formal properties of these mechanisms are investigated when they are used as language acceptors. Furthermore, cooperating distributed grammar systems are restricted so that efficient deterministic parsing without backtracking becomes possible. For this class of grammar systems, the parsing algorithm is presented, and the feature of leftmost derivations is investigated in detail.
CSOM/PL is a software product line (SPL) derived from applying multi-dimensional separation of concerns (MDSOC) techniques to the domain of high-level language virtual machine (VM) implementations. For CSOM/PL, we modularised CSOM, a Smalltalk VM implemented in C, using VMADL (virtual machine architecture description language). Several features of the original CSOM were encapsulated in VMADL modules and composed in various combinations. In an evaluation of our approach, we show that applying MDSOC and SPL principles to a domain as complex as that of VMs is not only feasible but beneficial, as it improves understandability, maintainability, and configurability of VM implementations without harming performance.
From the perspective of an overhead camera, this instructional video shows the fictitious high-performing computer scientist Tom working on a difficult colouring problem. One can observe the sketches he continually produces and follow his train of thought precisely, because this problem solver thinks aloud, i.e. he speaks all of his thoughts out loud. One can watch how Tom first analyses the task and then puts the insights gained to profitable use in the subsequent problem solving. The viewer is not left alone in this: at salient points the video is interrupted, and Tom's preceding activities are explained in depth with animated picture sequences. Weak problem solvers can thus deepen the knowledge of computational problem-solving methods taught in class or in lectures and witness their exemplary application by a strong problem solver. This video emerged from a comparative study with strong and weak problem solvers. The efficient methods of the high performers were didactically processed and assembled into a model problem-solving process. The scientific background of the instructional video is conveyed by a framing narrative told as a picture story. First-semester computer science students to whom this video was shown for evaluation strongly approved of the concept; the tenor: entertaining and instructive at the same time.
Dutch allows for variation as to whether the first position in the sentence is occupied by the subject or by some other constituent, such as the direct object. In particular situations, however, this commonly observed variation in word order is ‘frozen’ and only the subject appears in first position. We hypothesize that this partial freezing of word order in Dutch can be explained from the dependence of the speaker’s choice of word order on the hearer’s interpretation of this word order. A formal model of this interaction between the speaker’s perspective and the hearer’s perspective is presented in terms of bidirectional Optimality Theory. Empirical predictions of this model regarding the interaction between word order and definiteness are confirmed by a quantitative corpus study.
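The core of bidirectional optimisation can be sketched compactly: a form-meaning pair survives only if it is optimal from both the speaker's and the hearer's perspective. The forms, meanings, and cost table below are invented for illustration and do not reproduce the paper's constraints or corpus data:

    from itertools import product

    forms = ["SUBJ_first", "OBJ_first"]
    meanings = ["subject_reading", "object_reading"]

    # Hypothetical constraint profile collapsed into a single cost (lower is better).
    COST = {
        ("SUBJ_first", "subject_reading"): 0,
        ("SUBJ_first", "object_reading"):  2,
        ("OBJ_first",  "subject_reading"): 3,
        ("OBJ_first",  "object_reading"):  1,
    }

    def bidirectionally_optimal(f, m):
        # (f, m) wins iff no better form expresses m (speaker perspective)
        # and no better meaning is assigned to f (hearer perspective).
        speaker_ok = all(COST[(f, m)] <= COST[(f2, m)] for f2 in forms)
        hearer_ok = all(COST[(f, m)] <= COST[(f, m2)] for m2 in meanings)
        return speaker_ok and hearer_ok

    for f, m in product(forms, meanings):
        if bidirectionally_optimal(f, m):
            print(f, "<->", m)   # both orders survive, each with its own reading

With this cost profile both word orders survive, each paired with its own reading; a frozen word order corresponds to a profile in which only the subject-first pairs win.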
Image processing applications place special demands on the executing computer system. On the one hand, high computing power is required; on the other hand, high flexibility is an advantage, since development tends to be an experimental and interactive process. For new applications, developers tend to choose a computing architecture they know well instead of using the architecture that fits the application best. Image processing algorithms are inherently parallel, yet conventional embedded image processing systems are mostly based on sequentially operating processors. In contrast to this mismatch, highly efficient systems can be built from a targeted synergy of software and hardware components. The construction of such systems is, however, complex, and many solutions, such as coarse-grained architectures or application-specific programming languages, are often too academic for industrial use. The present thesis aims to contribute to reducing the complexity of hardware-software systems and thereby to make the development of high-performance on-chip systems in the field of image processing simpler and more economical. Care was taken to keep the effort for familiarisation, development, and extensions low. A design flow was conceived and implemented that enables the software developer to accelerate computations with hardware components and to completely prototype the underlying embedded system. Complex image processing applications that require an operating system are considered, for example distributed camera sensor networks. The software employed is based on Linux and the image processing library OpenCV. The distribution of the computations across software and hardware components and the resulting scheduling and generation of the computing architecture are carried out automatically. A design space exploration based on answer set programming yields advantages in modelling and extension. The system software is synthesised with OpenEmbedded/Bitbake, and the generated on-chip architectures are realised on FPGAs.
In current practice, business process modeling is done by trained method experts. Domain experts are interviewed to elicit their process information but are not involved in modeling. We created a haptic toolkit for process modeling that can be used in process elicitation sessions with domain experts. We hypothesize that this leads to more effective process elicitation. This paper breaks down "effective elicitation" into 14 operationalized hypotheses. They are assessed in a controlled experiment using questionnaires, process model feedback tests, and video analysis. The experiment compares our approach to structured interviews in a repeated measurement design. We executed the experiment with 17 student clerks from a trade school, who represent potential users of the tool. Six out of fourteen hypotheses showed significant differences due to the method applied. Subjects reported more fun and more insights into process modeling with tangible media. Video analysis showed significantly more reviews and corrections applied during process elicitation. Moreover, people take more time to talk and think about their processes. We conclude that tangible media creates a different working mode for people in process elicitation, with fun, new insights, and instant feedback on preliminary results.
Most of the microelectronic circuits fabricated today are synchronous, i.e. they are driven by one or several clock signals. Synchronous circuit design faces several fundamental challenges such as high-speed clock distribution, integration of multiple cores operating at different clock rates, reduction of power consumption, and dealing with voltage, temperature, manufacturing, and runtime variations. Asynchronous or clockless design plays a key role in alleviating these challenges; however, the design and test of asynchronous circuits is much more difficult in comparison to their synchronous counterparts. A driving force for a widespread use of asynchronous technology is the availability of mature EDA (Electronic Design Automation) tools which provide an entirely automated design flow, starting from an HDL (Hardware Description Language) specification and yielding the final circuit layout. Even though there has been much progress in developing such EDA tools for asynchronous circuit design during the last two decades, their maturity level as well as their acceptance is still not comparable with tools for synchronous circuit design. In particular, logic synthesis (which implies the application of Boolean minimisation techniques) for the entire system's control path can significantly improve the efficiency of the resulting asynchronous implementation, e.g. in terms of chip area and performance. However, logic synthesis, in particular for asynchronous circuits, suffers from complexity problems. Signal Transition Graphs (STGs) are labelled Petri nets which are widely used to specify the interface behaviour of speed independent (SI) circuits, a robust subclass of asynchronous circuits. STG decomposition is a promising approach to tackle complexity problems like state space explosion in logic synthesis of SI circuits. The (structural) decomposition of STGs is guided by a partition of the output signals and generates a usually much smaller component STG for each partition member, i.e. a component STG with a much smaller state space than the initial specification. However, decomposition can result in component STGs that in isolation have so-called irreducible CSC conflicts (i.e. these components are no longer SI synthesisable) even if the specification has none of them. A new approach is presented to avoid such conflicts by introducing internal communication between the components. So far, STG decompositions have been guided by the finest output partitions, i.e. one output per component. However, this might not yield optimal circuit implementations. Efficient heuristics are presented to determine coarser partitions leading to improved circuits in terms of chip area. Correctness proofs are given for the new algorithms, and their implementations are incorporated into the decomposition tool DESIJ. The presented techniques are successfully applied to several benchmarks, including 'real-life' specifications arising in the context of control resynthesis, which delivered promising results.
At the centre of this thesis are virtual 3D city models, which represent objects, phenomena, and processes in urban spaces in digital form. They have developed into a core topic of geographic information systems and form a central component of geovirtual 3D worlds. Virtual 3D city models are used not only as tools for experts in fields such as urban planning, radio network planning, or noise analysis, but also by general users who explore realistically rendered virtual cities in areas such as citizen participation, tourism, or entertainment, e.g. intuitively exploring a spatial environment in applications such as GoogleEarth and extending it with their own 3D models or additional information. The creation and rendering of virtual 3D city models consists of a multitude of process steps, two of which are examined more closely in the present thesis: texturing and visualisation. In the area of texturing, concepts and methods are developed for the automatic derivation of photo textures from georeferenced oblique aerial images and for the storage of surface-bound data in virtual 3D city models. In the area of visualisation, concepts and methods are presented for multi-perspective views as well as for the high-quality rendering of non-linear projections of virtual 3D city models in interactive systems. The automatic derivation of photo textures from georeferenced oblique aerial images enables the refinement of existing virtual 3D city models. Oblique aerial images lend themselves to texturing since they capture a large proportion of a city's surfaces, in particular building facades, with high redundancy. The method extracts all views of a surface from the available image material and merges them with pixel precision into a single texture. By applying it to all surfaces, the virtual 3D city model is textured comprehensively. The described approach was tested on the official 3D city model of Berlin and on the inner city of Munich as integrated in GoogleEarth. The storage of surface-bound data, which includes textures, was investigated in the context of CityGML, an internationally standardised data model and exchange format for virtual 3D city models. A data model based on computer graphics concepts is designed and integrated into the CityGML standard. This data model is oriented towards practical use cases and can be used across domains. The interactive multi-perspective rendering of virtual 3D city models seamlessly complements the familiar perspective view with a second perspective, with the aim of increasing the information content of the depiction. This kind of depiction is inspired by the panorama maps of H. C. Berann; the main problem is transferring the multi-perspective principle to an interactive system. The thesis presents a technical realisation of this depiction on 3D graphics hardware and demonstrates the extension of bird's-eye and pedestrian perspectives. The high-quality rendering of non-linear projections describes their realisation on 3D graphics hardware, where, besides the frame rate, image quality is the essential development criterion.
In particular, the two presented methods, dynamic geometry refinement and piecewise perspective projections, permit the unrestricted use of all quality-enhancing functions available in hardware, such as screen-space gradients or anisotropic texture filtering. Both methods are generic and support various projection types. They enable the use of common computer graphics effects, such as stylisation techniques or procedural textures, for non-linear projections without adaptation and at optimal image quality. The present thesis describes essential technologies for processing virtual 3D city models: on the one hand, the results of this work make it possible to produce textures for virtual 3D city models automatically and to insert them into the virtual 3D city model as independent attributes; this work thus contributes to improving the production and maintenance of textured virtual 3D city models. On the other hand, the thesis presents variants and technical solutions for novel projection types for virtual 3D city models in interactive visualisations. Such non-linear projections constitute key building blocks for enabling novel user interfaces for, and forms of interaction with, virtual 3D city models, in particular for mobile devices and immersive environments.
Business process models are abstractions of concrete operational procedures that occur in the daily business of organizations. To cope with the complexity of these models, business process model abstraction has been introduced recently. Its goal is to derive from a detailed process model several abstract models that provide a high-level understanding of the process. While techniques for constructing abstract models are reported in the literature, little is known about the relationships between process instances and abstract models. In this paper we show how the state of an abstract activity can be calculated from the states of related, detailed process activities as they happen. The approach uses activity state propagation. With state uniqueness and state transition correctness we introduce formal properties that improve the understanding of state propagation. Algorithms to check these properties are devised. Finally, we use behavioral profiles to identify and classify behavioral inconsistencies in abstract process models that might occur, once activity state propagation is used.
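The propagation idea can be illustrated with a short Python sketch: the state of an abstract activity is derived from the states of the detailed activities mapped to it. The aggregation rule below is a plausible simplification for illustration, not the paper's exact propagation function, and all activity names are invented:

    STATES = ("init", "running", "terminated")

    def abstract_state(detailed_states):
        # init until any child starts; terminated once all children finished;
        # running in between.
        if all(s == "init" for s in detailed_states):
            return "init"
        if all(s == "terminated" for s in detailed_states):
            return "terminated"
        return "running"

    # hypothetical alignment of one abstract activity to detailed activities
    mapping = {"Handle order": ["Check stock", "Pick items", "Ship"]}
    detailed = {"Check stock": "terminated", "Pick items": "running", "Ship": "init"}

    for abstr, children in mapping.items():
        print(abstr, "->", abstract_state([detailed[c] for c in children]))  # running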
Preference handling and optimization are indispensable means for addressing nontrivial applications in Answer Set Programming (ASP). However, their implementation becomes difficult whenever they bring about a significant increase in computational complexity. As a consequence, existing ASP systems do not offer complex optimization capacities, supporting, for instance, inclusion-based minimization or Pareto efficiency. Rather, such complex criteria are typically addressed by resorting to dedicated modeling techniques, like saturation. Unlike the ease of common ASP modeling, however, these techniques are rather involved and hardly usable by ASP laymen. We address this problem by developing a general implementation technique by means of meta-programming, thus reusing existing ASP systems to capture various forms of qualitative preferences among answer sets. In this way, complex preferences and optimization capacities become readily available for ASP applications.
Business Process Management (BPM) emerged as a means to control, analyse, and optimise business operations. Conceptual models are of central importance for BPM. Most prominently, process models define the behaviour that is performed to achieve a business value. In essence, a process model is a mapping of properties of the original business process to the model, created for a purpose. Different modelling purposes, therefore, result in different models of a business process. Against this background, the misalignment of process models often observed in the field of BPM is no surprise. Even if the same business scenario is considered, models created for strategic decision making differ in content significantly from models created for process automation. Despite their differences, process models that refer to the same business process should be consistent, i.e., free of contradictions. Apparently, there is a trade-off between strictness of a notion of consistency and appropriateness of process models serving different purposes. Existing work on consistency analysis builds upon behaviour equivalences and hierarchical refinements between process models. Hence, these approaches are computationally hard and do not offer the flexibility to gradually relax consistency requirements towards a certain setting. This thesis presents a framework for the analysis of behaviour consistency that takes a fundamentally different approach. As a first step, an alignment between corresponding elements of related process models is constructed. Then, this thesis conducts behavioural analysis grounded on a relational abstraction of the behaviour of a process model, its behavioural profile. Different variants of these profiles are proposed, along with efficient computation techniques for a broad class of process models. Using behavioural profiles, consistency of an alignment between process models is judged by different notions and measures. The consistency measures are also adjusted to assess conformance of process logs that capture the observed execution of a process. Further, this thesis proposes various complementary techniques to support consistency management. It elaborates on how to implement consistent change propagation between process models, addresses the exploration of behavioural commonalities and differences, and proposes a model synthesis for behavioural profiles.
Business process models are used within a range of organizational initiatives, where every stakeholder has a unique perspective on a process and demands the respective model. As a consequence, multiple process models capturing the very same business process coexist. Keeping such models in sync is a challenge within an ever changing business environment: once a process is changed, all its models have to be updated. Due to a large number of models and their complex relations, model maintenance becomes error-prone and expensive. Against this background, business process model abstraction emerged as an operation reducing the number of stored process models and facilitating model management. Business process model abstraction is an operation preserving essential process properties and leaving out insignificant details in order to retain information relevant for a particular purpose. Process model abstraction has been addressed by several researchers. The focus of their studies has been on particular use cases and model transformations supporting these use cases. This thesis systematically approaches the problem of business process model abstraction, shaping the outcome into a framework. We investigate the current industry demand in abstraction, summarizing it in a catalog of business process model abstraction use cases. The thesis focuses on one prominent use case where the user demands a model with coarse-grained activities and overall process ordering constraints. We develop model transformations that support this use case, starting with transformations based on process model structure analysis. Further, abstraction methods considering the semantics of process model elements are investigated. First, we suggest how semantically related activities can be discovered in process models, a barely researched challenge. The thesis validates the designed abstraction methods against sets of industrial process models and discusses the method implementation aspects. Second, we develop a novel model transformation, which combined with the related activity discovery allows flexible non-hierarchical abstraction. In this way this thesis advocates novel model transformations that facilitate business process model management and provides the foundations for innovative tool support.
Answer Set Programming (ASP) is an emerging paradigm for declarative programming, in which a computational problem is specified by a logic program such that particular models, called answer sets, match solutions. ASP faces a growing range of applications, demanding high-performance tools able to solve complex problems. ASP integrates ideas from a variety of neighboring fields. In particular, automated techniques to search for answer sets are inspired by Boolean Satisfiability (SAT) solving approaches. While the latter have firm proof-theoretic foundations, ASP lacks formal frameworks for characterizing and comparing solving methods. Furthermore, sophisticated search patterns of modern SAT solvers, successfully applied in areas such as model checking and verification, are not yet established in ASP solving. We address these deficiencies by, for one, providing proof-theoretic frameworks that allow for characterizing, comparing, and analyzing approaches to answer set computation. For another, we devise modern ASP solving algorithms that integrate and extend state-of-the-art techniques for Boolean constraint solving. We thus contribute to the understanding of existing ASP solving approaches and their interconnections as well as to their enhancement by incorporating sophisticated search patterns. The central idea of our approach is to identify atomic as well as composite constituents of a propositional logic program with Boolean variables. This enables us to describe fundamental inference steps and to selectively combine them in proof-theoretic characterizations of various ASP solving methods. In particular, we show that different concepts of case analyses applied by existing ASP solvers implicate mutual exponential separations regarding their best-case complexities. We also develop a generic proof-theoretic framework amenable to language extensions, and we point out that exponential separations can likewise be obtained due to case analyses on them. We further exploit fundamental inference steps to derive Boolean constraints characterizing answer sets. They enable the conception of ASP solving algorithms including search patterns of modern SAT solvers, while also allowing for direct technology transfers between the areas of ASP and SAT solving. Beyond the search for one answer set of a logic program, we address the enumeration of answer sets and their projections to a subvocabulary, respectively. The algorithms we develop enable repetition-free enumeration in polynomial space without being intrusive, i.e., they do not necessitate any modifications of computations before an answer set is found. Our approach to ASP solving is implemented in clasp, a state-of-the-art Boolean constraint solver that has successfully participated in recent solver competitions. Although we do not address here the implementation techniques of clasp or all of its features, we present the principles of its success in the context of ASP solving.
The World Wide Web is becoming increasingly important as an application platform. However, the development of Web applications is often more complex than for the desktop. Web-based development environments like Lively Webwerkstatt can mitigate this problem by making the development process more interactive and direct. By moving the development environment into the Web, applications can be developed collaboratively in a wiki-like manner. This report documents the results of the project seminar on Web-based Development Environments 2010. In this seminar, participants extended the Web-based development environment Lively Webwerkstatt. They worked in small teams on current research topics from the field of Web development and tool support for programmers, and implemented their results in the Webwerkstatt environment.
Version Control Systems (VCS) allow developers to manage changes to software artifacts. Developers interact with VCSs through a variety of client programs, such as graphical front-ends or command line tools. It is desirable to use the same version control client program against different VCSs. Unfortunately, no established abstraction over VCS concepts exists. Instead, VCS client programs implement ad-hoc solutions to support interaction with multiple VCSs. This thesis presents Pur, an abstraction over version control concepts that allows building rich client programs that can interact with multiple VCSs. We provide an implementation of this abstraction and validate it by implementing a client application.
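The shape of such an abstraction layer can be sketched as a single client-facing interface with interchangeable backends. All class and method names below are invented for illustration; Pur's real interface is defined in the thesis:

    from abc import ABC, abstractmethod

    class Repository(ABC):
        # Minimal common denominator a client program codes against.

        @abstractmethod
        def log(self, limit=10):
            """Return the most recent commit identifiers."""

        @abstractmethod
        def commit(self, message, paths):
            """Record the given paths as a new version."""

    class GitBackend(Repository):
        def log(self, limit=10):
            # stand-in for invoking `git log`; a real backend would shell out
            # or use a library binding
            return [f"git-commit-{i}" for i in range(limit)]

        def commit(self, message, paths):
            print("git add/commit:", message, paths)

    def show_history(repo: Repository):
        # Client code that works unchanged against any backend.
        for c in repo.log(limit=3):
            print(c)

    show_history(GitBackend())

A second backend (e.g. for Subversion or Mercurial) would implement the same interface, letting the client program remain VCS-agnostic.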
In cooperation with industry partners, the Hasso-Plattner-Institut (HPI) is establishing an "HPI Future SOC Lab", which provides a complete infrastructure of highly complex on-demand systems on the latest massively parallel (multi-/many-core) hardware, not yet available on the market, with enormous main memory capacities and software designed for it. The HPI Future SOC Lab features prototypical 4- and 8-way Intel 64-bit server systems from Fujitsu and Hewlett-Packard with 32 and 64 cores, respectively, and 1-2 TB of main memory. High-performance storage systems from EMC² and virtualisation solutions from VMware are also used. SAP provides its latest Business by Design (ByD) software, and complex real-world company data are also available and can be accessed for research purposes. Interested scientists from university and non-university research institutions can use the HPI Future SOC Lab to investigate future highly complex IT systems, develop new ideas, data structures, and algorithms, and pursue them up to practical testing. This technical report presents first results of the research projects started at the opening of the Future SOC Lab in June 2010. Selected projects presented their results on 27 October 2010 at the Future SOC Lab Day event.
"Forschung meets Business" - diese Kombination hat in den vergangenen Jahren immer wieder zu zahlreichen interessanten und fruchtbaren Diskussionen geführt. Mit dem Symposium "Sicherheit in Service-orientierten Architekturen" führt das Hasso-Plattner-Institut diese Tradition fort und lud alle Interessenten zu einem zweitägigen Symposium nach Potsdam ein, um gemeinsam mit Fachvertretern aus der Forschung und Industrie über die aktuellen Entwicklungen im Bereich Sicherheit von SOA zu diskutieren. Die im Rahmen dieses Symposiums vorgestellten Beiträge fokussieren sich auf die Sicherheitsthemen "Sichere Digitale Identitäten und Identitätsmanagement", "Trust Management", "Modell-getriebene SOA-Sicherheit", "Datenschutz und Privatsphäre", "Sichere Enterprise SOA", und "Sichere IT-Infrastrukturen".
Service-oriented Architectures (SOA) facilitate the provision and orchestration of business services to enable a faster adaptation to changing business demands. Web Services provide a technical foundation to implement this paradigm on the basis of XML messaging. However, the enhanced flexibility of message-based systems comes along with new threats and risks. To face these issues, a variety of security mechanisms and approaches is supported by the Web Service specifications. The usage of these security mechanisms and protocols is configured by stating security requirements in security policies. However, security policy languages for SOA are complex, and policies are difficult to create due to the expressiveness of these languages. To facilitate and simplify the creation of security policies, this thesis presents a model-driven approach that enables the generation of complex security policies on the basis of simple security intentions. SOA architects can specify these intentions in system design models and are not required to deal with complex technical security concepts. The approach introduced in this thesis enables the enhancement of any system design modelling language (for example FMC or BPMN) with security modelling elements. The syntax, semantics, and notation of these elements are defined by our security modelling language SecureSOA. The metamodel of this language provides extension points to enable the integration into system design modelling languages. In particular, this thesis demonstrates the enhancement of FMC block diagrams with SecureSOA. To enable the model-driven generation of security policies, a domain-independent policy model is introduced in this thesis. This model provides an abstraction layer for security policies. Mappings are used to perform the transformation from our model to security policy languages. However, expert knowledge is required to generate instances of this model on the basis of simple security intentions: appropriate security mechanisms, protocols, and options must be chosen and combined to fulfil these security intentions. In this thesis, a formalised system of security patterns is used to represent this knowledge and to enable an automated transformation process. Moreover, a domain-specific language is introduced to state security patterns in an accessible way. On the basis of this language, a system of security configuration patterns is provided to transform security intentions related to data protection and identity management. The formal semantics of the security pattern language enable the verification of the transformation process introduced in this thesis and prove the correctness of the pattern application. Finally, our SOA Security LAB is presented, which demonstrates the application of our model-driven approach to facilitate a dynamic creation, configuration, and execution of secure Web Service-based composed applications.
IT systems for healthcare are a complex and exciting field. On the one hand, there is a vast number of improvements and work alleviations that computers can bring to everyday healthcare; some forms of treatment, diagnosis, and organisational tasks were even made possible by computer usage in the first place. On the other hand, there are many factors that encumber computer usage and make the development of IT systems for healthcare a challenging, sometimes even frustrating task. These factors are not solely technology-related, but just as often social or economic conditions. This report describes some of the idiosyncrasies of IT systems in the healthcare domain, with a special focus on legal regulations, standards, and security.
Unique column combinations of a relational database table are sets of columns that contain only unique values. Discovering such combinations is a fundamental research problem and has many different data management and knowledge discovery applications. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and "Apriori-based" algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-GORDIAN, combines the advantages of GORDIAN and our new algorithm HCA, and it significantly outperforms all previous work in many situations.
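The Apriori-style, level-wise search underlying such algorithms can be sketched briefly in Python. The pruning here is only the basic superset rule (a superset of a unique combination is never minimal), not the statistics-based pruning or the HCA/GORDIAN hybrid of the paper:

    def is_unique(rows, cols):
        # A column combination is unique if its projection has no duplicates.
        seen = set()
        for row in rows:
            key = tuple(row[c] for c in cols)
            if key in seen:
                return False
            seen.add(key)
        return True

    def find_uccs(rows, columns):
        minimal, candidates = [], [(c,) for c in sorted(columns)]
        while candidates:
            next_level = set()
            for combo in candidates:
                if any(set(u) <= set(combo) for u in minimal):
                    continue                    # superset of a known UCC: prune
                if is_unique(rows, combo):
                    minimal.append(combo)       # minimal by construction
                else:
                    for c in columns:           # grow non-unique combos only
                        if c > combo[-1]:
                            next_level.add(combo + (c,))
            candidates = sorted(next_level)
        return minimal

    rows = [{"id": 1, "name": "a", "city": "B"},
            {"id": 2, "name": "a", "city": "P"},
            {"id": 3, "name": "b", "city": "P"}]
    print(find_uccs(rows, ["city", "id", "name"]))  # [('id',), ('city', 'name')]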
We introduce an approach to detecting inconsistencies in large biological networks by using answer set programming. To this end, we build upon a recently proposed notion of consistency between biochemical/genetic reactions and high-throughput profiles of cell activity. We then present an approach based on answer set programming to check the consistency of large-scale data sets. Moreover, we extend this methodology to provide explanations for inconsistencies by determining minimal representations of conflicts. In practice, this can be used to identify unreliable data or to indicate missing reactions.
Building biological models by inferring functional dependencies from experimental data is an important issue in Molecular Biology. To relieve the biologist from this traditionally manual process, various approaches have been proposed to increase the degree of automation. However, available approaches often yield only a single model, rely on specific assumptions, and/or use dedicated, heuristic algorithms that are intolerant to changing circumstances or requirements in view of the rapid progress made in Biotechnology. Our aim is to provide a declarative solution to the problem by appealing to Answer Set Programming (ASP), overcoming these difficulties. We build upon an existing approach to Automatic Network Reconstruction proposed by some of the authors. This approach has firm mathematical foundations and is well suited for ASP due to its combinatorial flavor, providing a characterization of all models explaining a set of experiments. The usage of ASP has several benefits over the existing heuristic algorithms. First, it is declarative and thus transparent for biological experts. Second, it is elaboration tolerant and thus allows for an easy exploration and incorporation of biological constraints. Third, it allows for exploring the entire space of possible models. Finally, our approach offers excellent performance, matching existing, special-purpose systems.
Using the notion of an elementary loop, Gebser and Schaub (2005. Proceedings of the Eighth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’05 ), 53–65) refined the theorem on loop formulas attributable to Lin and Zhao (2004) by considering loop formulas of elementary loops only. In this paper, we reformulate the definition of an elementary loop, extend it to disjunctive programs, and study several properties of elementary loops, including how maximal elementary loops are related to minimal unfounded sets. The results provide useful insights into the stable model semantics in terms of elementary loops. For a nondisjunctive program, using a graph-theoretic characterization of an elementary loop, we show that the problem of recognizing an elementary loop is tractable. On the other hand, we also show that the corresponding problem is coNP-complete for a disjunctive program. Based on the notion of an elementary loop, we present the class of Head-Elementary-loop-Free (HEF) programs, which strictly generalizes the class of Head-Cycle-Free (HCF) programs attributable to Ben-Eliyahu and Dechter (1994. Annals of Mathematics and Artificial Intelligence 12, 53–87). Like an HCF program, an HEF program can be turned into an equivalent nondisjunctive program in polynomial time by shifting head atoms into the body.
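The graph-theoretic flavour of these definitions can be illustrated with a small Python sketch. It checks only the basic loop condition (the atom set induces a strongly connected subgraph of the program's positive dependency graph), not the additional "elementary" property studied in the paper; the rule representation is hypothetical:

    def positive_dependency_graph(rules):
        # rules: list of (head_atom, positive_body_atoms) pairs
        return {(head, b) for head, body in rules for b in body}

    def is_loop(atoms, edges):
        # Under the reformulated definition, every singleton counts as a loop,
        # which this check also returns True for.
        atoms = set(atoms)
        if not atoms:
            return False
        induced = {(x, y) for (x, y) in edges if x in atoms and y in atoms}
        reversed_ = {(y, x) for (x, y) in induced}

        def reach(start, es):
            seen, stack = set(), [start]
            while stack:
                a = stack.pop()
                if a in seen:
                    continue
                seen.add(a)
                stack.extend(y for (x, y) in es if x == a)
            return seen

        start = next(iter(atoms))
        return reach(start, induced) == atoms and reach(start, reversed_) == atoms

    rules = [("a", ["b"]), ("b", ["a"]), ("c", ["a"])]
    edges = positive_dependency_graph(rules)
    print(is_loop({"a", "b"}, edges))       # True: a and b depend on each other
    print(is_loop({"a", "b", "c"}, edges))  # False: no edge back into c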
The exponential growth in the number of web sites and Internet users has made the WWW the most important global information resource. From information publishing and electronic commerce to entertainment and social networking, the Web allows inexpensive and efficient access to the services provided by individuals and institutions. The basic units for distributing these services are the web sites scattered throughout the world. However, the extreme fragility of web services and content, the strong competition between similar services supplied by different sites, and the wide geographic distribution of web users create an urgent need for web managers to track and understand the usage interests of their web customers. This thesis, "X-tracking the Usage Interest on Web Sites", aims to fulfil this requirement. "X" carries two meanings: first, usage interest differs across web sites; second, usage interest is depicted from multiple aspects: internal and external, structural and conceptual, objective and subjective. "Tracking" indicates that our focus is on locating and measuring the differences and changes among usage patterns. This thesis presents methodologies for discovering usage interest on three kinds of web sites: public information portal sites, e-learning sites that provide various streaming lectures, and social sites that host public discussions on IT issues. On the different kinds of sites we concentrate on different issues related to mining usage interest. Educational information portal sites were the first implementation scenario for discovering usage patterns and optimising the organisation of web services. In such cases, usage patterns are modelled as frequent page sets, navigation paths, navigation structures, or graphs. A necessary prerequisite, however, is to rebuild individual behaviours from the usage history, and we give a systematic study of how to do so. In addition, this thesis presents a new strategy for building content clusters based on pair browsing retrieved from usage logs. The difference between such clusters and the original web structure reveals the distance between the destinations on the usage side and the expectations on the design side. Moreover, we study the problem of tracking the changes of usage patterns over their life cycles. The changes are described internally, integrating conceptual and structural features, and externally, covering the physical features; they are described locally, measuring the difference between two time spans, and globally, showing the change tendency along the life cycle. A platform, Web-Cares, was developed to discover usage interest, to measure the difference between usage interest and site expectation, and to track the changes of usage patterns. E-learning sites provide teaching materials such as slides, recorded lecture videos, and exercise sheets. We focus on discovering the learning interest in streaming lectures, such as RealMedia, MP4, and Flash clips. Compared to an information portal site, the usage of streaming lectures encapsulates variables such as viewing time and actions during the learning process. The learning interest is discovered by answering six questions, which cover finding the relations between pieces of lectures and the preferences among different forms of lectures. We focus in particular on detecting changes of learning interest in the same course across different semesters.
The differences in content and structure between two courses leverage the changes in learning interest. We give an algorithm for measuring the difference in learning interest, integrated with a similarity comparison between courses. A search engine, TASK-Moniminer, was created to help teachers query the learning interest in their streaming lectures on the tele-TASK site. A social site acts as an online community attracting web users to discuss common topics and share interesting information. Compared to public information portal sites and e-learning web sites, the rich interactions among users and web content bring a wider range of content quality, but on the other hand provide more possibilities to express and model usage interest. We propose a framework for finding and recommending high-reputation articles on a social site. We observed that reputation falls into global and local categories, and that the quality of articles with high reputation is related to content features. Based on these observations, our framework first finds the articles having global or local reputation, then clusters articles based on their content relations, and finally selects and recommends articles from each cluster based on their reputation ranks.
Virtualization and cloud computing are currently among the most important buzzwords for operators of IT infrastructures. There is a multitude of different technologies, products, and business models for completely different application scenarios. This study first gives a detailed overview of current developments in the concepts and technologies of virtualization, from classical server virtualization through infrastructures for virtual desktops to application virtualization, and attempts a classification of the virtualization variants. In examining the cloud computing concept, its fundamentals as well as various architectural variants and use cases are introduced. A thorough analysis of the advantages of cloud computing, as well as of possible concerns that must be considered when using cloud resources in an enterprise, shows that cloud computing offers great opportunities but is not suitable for every application or for every legal and economic setting. The subsequent market overview of virtualization technology shows that the major vendors, Citrix, Microsoft, and VMware, each offer products for almost all virtualization variants, and highlights decisive differences and the respective vendors' strengths. For example, Citrix's solution for virtual desktop infrastructures is very mature, whereas Microsoft can only fall back on standard technology in this area. VMware, as the market leader, has achieved the widest adoption in data centers and is the only vendor offering true fault tolerance. Microsoft, in turn, scores with the seamless integration of its virtualization products into existing Windows infrastructures. In the area of cloud computing systems, several open-source software projects are emerging that are indeed suitable for the productive operation of so-called private clouds.
Duplicate detection is the task of identifying all groups of records within a data set that each represent the same real-world entity. This task is difficult, because (i) representations might differ slightly, so some similarity measure must be defined to compare pairs of records, and (ii) data sets might have a high volume, making a pair-wise comparison of all records infeasible. To tackle the second problem, many algorithms have been suggested that partition the data set and compare all record pairs only within each partition. One well-known such approach is the Sorted Neighborhood Method (SNM), which sorts the data according to some key and then advances a window over the data, comparing only records that appear within the same window. We propose several variations of SNM that have in common a varying window size and advancement. The general intuition of such adaptive windows is that there might be regions of high similarity suggesting a larger window size and regions of lower similarity suggesting a smaller window size. We propose and thoroughly evaluate several adaptation strategies, some of which are provably better than the original SNM in terms of efficiency (same results with fewer comparisons).
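To make the adaptive-window idea concrete, here is a minimal Python sketch of SNM with one simple adaptation heuristic (grow the window while duplicates are found, shrink it otherwise); this illustrates the general intuition rather than any of the specific strategies evaluated in the paper:

def adaptive_snm(records, key, similar, min_window=2, max_window=20):
    ordered = sorted(records, key=key)  # sort by the chosen key
    duplicates, w = set(), min_window
    i = 0
    while i < len(ordered) - 1:
        hit = False
        for j in range(i + 1, min(i + w, len(ordered))):
            if similar(ordered[i], ordered[j]):
                duplicates.add((ordered[i], ordered[j]))
                hit = True
        # Adapt: enlarge the window in regions of high similarity,
        # shrink it again where no duplicates are found.
        w = min(w + 1, max_window) if hit else max(w - 1, min_window)
        i += 1
    return duplicates

names = ["Jon Smith", "John Smith", "J. Smith", "Mary Major", "M. Major"]
sim = lambda a, b: a.split()[-1] == b.split()[-1]  # toy similarity
print(adaptive_snm(names, key=str.lower, similar=sim))

The efficiency claim of the paper concerns exactly this trade-off: a well-chosen adaptation finds the same duplicate pairs with fewer similarity comparisons than a fixed window.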
In-Memory Data Management
(2012)
Nach 50 Jahren erfolgreicher Entwicklunghat die Business-IT einen neuenWendepunkt erreicht. Hier zeigen die Autoren erstmalig, wieIn-Memory Computing dieUnternehmensprozesse künftig verändern wird. Bisher wurden Unternehmensdaten aus Performance-Gründen auf verschiedene Datenbanken verteilt: Analytische Datenresidieren in Data Warehouses und werden regelmäßig mithilfe transaktionaler Systeme synchronisiert. Diese Aufspaltung macht flexibles Echtzeit-Reporting aktueller Daten unmöglich. Doch dank leistungsfähigerMulti-Core-CPUs, großer Hauptspeicher, Cloud Computing und immerbesserer mobiler Endgeräte lassen die Unternehmen dieses restriktive Modell zunehmend hinter sich. Die Autoren stellen Techniken vor, die eine analytische und transaktionale Verarbeitung in Echtzeit erlauben und so dem Geschäftsleben neue Wege bahnen.
In many applications one is faced with the problem of inferring some functional relation between input and output variables from given data. Consider, for instance, the task of email spam filtering, where one seeks to find a model which automatically assigns new, previously unseen emails to the class spam or non-spam. Building such a predictive model based on observed training inputs (e.g., emails) with corresponding outputs (e.g., spam labels) is a major goal of machine learning. Many learning methods assume that these training data are governed by the same distribution as the test data which the predictive model will be exposed to at application time. That assumption is violated when the test data are generated in response to the presence of a predictive model. This becomes apparent, for instance, in the above example of email spam filtering: email service providers employ spam filters, and spam senders engineer campaign templates so as to achieve a high rate of successful deliveries despite any filters. Most of the existing work casts such situations as learning robust models which are insusceptible to small changes of the data generation process. The models are constructed under the worst-case assumption that these changes are performed so as to produce the highest possible adverse effect on the performance of the predictive model. However, this approach is not capable of realistically modeling the true dependency between the model-building process and the process of generating future data. We therefore establish the concept of prediction games: we model the interaction between a learner, who builds the predictive model, and a data generator, who controls the process of data generation, as a one-shot game. The game-theoretic framework enables us to explicitly model the players' interests, their possible actions, their level of knowledge about each other, and the order in which they decide on an action. We model the players' interests as minimizing their own cost functions, each of which depends on both players' actions. The learner's action is to choose the model parameters, and the data generator's action is to perturb the training data, which reflects the modification of the data generation process with respect to the past data. We extensively study three instances of prediction games which differ regarding the order in which the players decide on their actions. We first assume that both players choose their actions simultaneously, that is, without knowledge of their opponent's decision. We identify conditions under which this Nash prediction game has a meaningful solution, that is, a unique Nash equilibrium, and derive algorithms that find the equilibrial prediction model. As a second case, we consider a data generator who is potentially fully informed about the move of the learner. This setting establishes a Stackelberg competition. We derive a relaxed optimization criterion to determine the solution of this game and show that this Stackelberg prediction game generalizes existing prediction models. Finally, we study the setting where the learner observes the data generator's action, that is, the (unlabeled) test data, before building the predictive model. As the test data and the training data may be governed by differing probability distributions, this scenario reduces to learning under covariate shift. We derive a new integrated as well as a two-stage method to account for this data set shift.
In case studies on email spam filtering we empirically explore the properties of all derived models as well as several existing baseline methods. We show that spam filters resulting from the Nash prediction game as well as the Stackelberg prediction game outperform the baseline methods in the majority of cases.
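Schematically, with symbols chosen for this summary rather than taken from the thesis (cost functions theta_l and theta_g, learner action w, generator action g), the simultaneous-move and sequential-move games can be written in LaTeX as:

% Schematic one-shot prediction games (notation chosen for this summary).
% Nash prediction game: both players move simultaneously; a pair
% (\hat{w}, \hat{g}) satisfying both conditions is a Nash equilibrium.
\[
  \hat{w} \in \operatorname*{arg\,min}_{w}\; \theta_{\ell}(w, \hat{g}),
  \qquad
  \hat{g} \in \operatorname*{arg\,min}_{g}\; \theta_{g}(\hat{w}, g).
\]
% Stackelberg prediction game: the fully informed data generator moves
% second, reacting optimally to the learner's choice of w.
\[
  \hat{w} \in \operatorname*{arg\,min}_{w}\;
    \theta_{\ell}\bigl(w,\, g^{*}(w)\bigr)
  \quad\text{where}\quad
  g^{*}(w) \in \operatorname*{arg\,min}_{g}\; \theta_{g}(w, g).
\]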
One of the key challenges in service-oriented systems engineering is the prediction and assurance of non-functional properties, such as the reliability and the availability of composite interorganizational services. Such systems are often characterized by a variety of inherent uncertainties, which must be addressed in the modeling and the analysis approach. The different relevant types of uncertainties can be categorized into (1) epistemic uncertainties due to incomplete knowledge and (2) randomization as explicitly used in protocols or as a result of physical processes. In this report, we study a probabilistic timed model which allows us to quantitatively reason about non-functional properties for a restricted class of service-oriented real-time systems using formal methods. To properly motivate the choice of the used approach, we devise a requirements catalogue for the modeling and the analysis of probabilistic real-time systems with uncertainties and provide evidence that the uncertainties of type (1) and (2) in the targeted systems have a major impact on the used models and require distinguished analysis approaches. The formal model we use in this report is that of Interval Probabilistic Timed Automata (IPTA). Based on the outlined requirements, we give evidence that this model provides both enough expressiveness for a realistic and modular specification of the targeted class of systems and suitable formal methods for analyzing properties, such as safety and reliability properties, in a quantitative manner. As technical means for the quantitative analysis, we build on probabilistic model checking, specifically on probabilistic time-bounded reachability analysis and the computation of expected reachability rewards and costs. To carry out the quantitative analysis using probabilistic model checking, we developed an extension of the Prism tool for modeling and analyzing IPTA. Our extension of Prism introduces a means for modeling probabilistic uncertainty in the form of probability intervals, as required for IPTA. For analyzing IPTA, our Prism extension moreover adds support for probabilistic reachability checking and the computation of expected rewards and costs. We discuss the performance of our extended version of Prism and compare the interval-based IPTA approach to models with fixed probabilities.
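For readers unfamiliar with Prism, the following fragment shows a small probabilistic timed automaton in standard Prism syntax; the interval notation in the trailing comment is our own hypothetical rendering of what the report's IPTA extension might accept, not documented syntax:

// A small probabilistic timed automaton in standard Prism syntax.
pta

module service_call
  s : [0..2] init 0;      // 0 = idle, 1 = delivered, 2 = lost
  x : clock;

  invariant (s=0 => x<=2) endinvariant

  // Standard Prism: a fixed probability of message loss.
  [send] s=0 & x>=1 -> 0.9:(s'=1)&(x'=0) + 0.1:(s'=2)&(x'=0);
  // Hypothetical IPTA-style probability intervals, as the report's
  // extension might allow (our own invented notation):
  // [send] s=0 & x>=1 -> [0.85,0.95]:(s'=1) + [0.05,0.15]:(s'=2);
endmodule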
During the overall development of complex engineering systems, different modeling notations are employed. For example, in the domain of automotive systems, system engineering models are employed quite early to capture the requirements and basic structuring of the entire system, while software engineering models are used later on to describe the concrete software architecture. Each model helps in addressing the specific design issue with appropriate notations and at a suitable level of abstraction. However, when stepping forward from system design to software design, the engineers have to ensure that all decisions captured in the system design model are correctly transferred to the software engineering model. Even worse, when changes occur later on in either model, today the consistency has to be reestablished in a cumbersome manual step. In this report, we present, in an extended version of [Holger Giese, Stefan Neumann, and Stephan Hildebrandt. Model Synchronization at Work: Keeping SysML and AUTOSAR Models Consistent. In Gregor Engels, Claus Lewerentz, Wilhelm Schäfer, Andy Schürr, and B. Westfechtel, editors, Graph Transformations and Model-Driven Engineering - Essays Dedicated to Manfred Nagl on the Occasion of his 65th Birthday, volume 5765 of Lecture Notes in Computer Science, pages 555–579. Springer Berlin / Heidelberg, 2010.], how model synchronization and consistency rules can be applied to automate this task and ensure that the different models are kept consistent. We also introduce a general approach for model synchronization. Besides synchronization, the approach consists of tool adapters as well as consistency rules covering the overlap between the synchronized parts of a model and the rest. We present the model synchronization algorithm based on triple graph grammars in detail and further exemplify the general approach by means of a model synchronization solution between system engineering models in SysML and software engineering models in AUTOSAR, which has been developed for an industrial partner. In the appendix, as an extension to [19], the meta-models and all TGG rules for the SysML-to-AUTOSAR model synchronization are documented.
The field of machine learning studies algorithms that infer predictive models from data. Predictive models are applicable for many practical tasks such as spam filtering, face and handwritten digit recognition, and personalized product recommendation. In general, they are used to predict a target label for a given data instance. In order to make an informed decision about the deployment of a predictive model, it is crucial to know the model’s approximate performance. To evaluate performance, a set of labeled test instances is required that is drawn from the distribution the model will be exposed to at application time. In many practical scenarios, unlabeled test instances are readily available, but the process of labeling them can be a time- and cost-intensive task and may involve a human expert. This thesis addresses the problem of evaluating a given predictive model accurately with minimal labeling effort. We study an active model evaluation process that selects certain instances of the data according to an instrumental sampling distribution and queries their labels. We derive sampling distributions that minimize estimation error with respect to different performance measures such as error rate, mean squared error, and F-measures. An analysis of the distribution that governs the estimator leads to confidence intervals, which indicate how precise the error estimation is. Labeling costs may vary across different instances depending on certain characteristics of the data. For instance, documents differ in their length, comprehensibility, and technical requirements; these attributes affect the time a human labeler needs to judge relevance or to assign topics. To address this, the sampling distribution is extended to incorporate instance-specific costs. We empirically study conditions under which the active evaluation processes are more accurate than a standard estimate that draws equally many instances from the test distribution. We also address the problem of comparing the risks of two predictive models. The standard approach would be to draw instances according to the test distribution, label the selected instances, and apply statistical tests to identify significant differences. Drawing instances according to an instrumental distribution affects the power of a statistical test. We derive a sampling procedure that maximizes test power when used to select instances, and thereby minimizes the likelihood of choosing the inferior model. Furthermore, we investigate the task of comparing several alternative models; the objective of an evaluation could be to rank the models according to the risk that they incur or to identify the model with lowest risk. An experimental study shows that the active procedure leads to higher test power than the standard test in many application domains. Finally, we study the problem of evaluating the performance of ranking functions, which are used for example for web search. In practice, ranking performance is estimated by applying a given ranking model to a representative set of test queries and manually assessing the relevance of all retrieved items for each query. We apply the concepts of active evaluation and active comparison to ranking functions and derive optimal sampling distributions for the commonly used performance measures Discounted Cumulative Gain and Expected Reciprocal Rank. Experiments on web search engine data illustrate significant reductions in labeling costs.
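The core of active evaluation, sampling test instances from an instrumental distribution and correcting with importance weights, can be sketched in a few lines of Python; this is a generic illustration of the estimator family, not the thesis's exact derivations (the model, oracle, and weights below are made up):

import random

def active_error_estimate(pool, q_weights, label_oracle, n_queries, seed=0):
    """pool: unlabeled instances; q_weights: instrumental distribution
    (unnormalized); label_oracle(x): queries the true label (costly)."""
    rng = random.Random(seed)
    total_q = sum(q_weights)
    q = [w / total_q for w in q_weights]
    # Sample instances according to q and query their labels.
    idx = rng.choices(range(len(pool)), weights=q, k=n_queries)
    # Importance-weighted estimate of the error rate under the uniform
    # test distribution p(x) = 1/|pool|: weight each loss by p(x)/q(x).
    n = len(pool)
    est = 0.0
    for i in idx:
        loss = 1.0 if predict(pool[i]) != label_oracle(pool[i]) else 0.0
        est += (1.0 / n) / q[i] * loss
    return est / n_queries

# Toy model and data (hypothetical): classify integers by parity.
predict = lambda x: x % 2              # the model under evaluation
oracle = lambda x: 0 if x < 7 else 1   # ground-truth labels
pool = list(range(10))
# Sample uncertain-looking instances more often (made-up weights).
weights = [abs(x - 5) + 1 for x in pool]
print(active_error_estimate(pool, weights, oracle, n_queries=200))

The estimator is unbiased for any strictly positive q; the thesis's contribution lies in deriving the q that minimizes its variance for each performance measure.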
This document presents an axiom selection technique for classical first-order theorem proving based on the relevance of axioms for the proof of a conjecture. It is based on the unifiability of predicates and does not need statistical information like symbol frequency. The aim of the technique is to reduce the set of axioms and to increase the number of provable conjectures in a given time. Since the technique generates a subset of the axiom set, it can be used as a preprocessor for automated theorem proving. This technical report describes the conception, implementation, and evaluation of ARDE. The selection method, which is based on a breadth-first graph search by unifiability of predicates, is a weakened form of the connection calculus and uses specialised variants of unifiability to speed up the selection. The implementation of the concept is evaluated by comparison with the results of the world championship of theorem provers of the year 2012 (CASC J6). It is shown that both the theorem prover leanCoP, which uses the connection calculus, and E, which uses equality reasoning, can benefit from the selection approach. Also, the evaluation shows that the concept is applicable to theorem-proving problems with thousands of formulae and that the selection is independent of the calculus used by the theorem prover.
F2C2
(2012)
Background: Flux coupling analysis (FCA) has become a useful tool in the constraint-based analysis of genome-scale metabolic networks. FCA allows detecting dependencies between reaction fluxes of metabolic networks at steady-state. On the one hand, this can help in the curation of reconstructed metabolic networks by verifying whether the coupling between reactions is in agreement with the experimental findings. On the other hand, FCA can aid in defining intervention strategies to knock out target reactions.
Results: We present a new method F2C2 for FCA, which is orders of magnitude faster than previous approaches. As a consequence, FCA of genome-scale metabolic networks can now be performed in a routine manner.
Conclusions: We propose F2C2 as a fast tool for the computation of flux coupling in genome-scale metabolic networks. F2C2 is freely available for non-commercial use at https://sourceforge.net/projects/f2c2/files/.
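The LP test underlying flux coupling can be sketched as follows; this shows the generic coupling test, not F2C2's accelerated procedure (the toy network and bounds are invented for illustration):

import numpy as np
from scipy.optimize import linprog

def max_flux_when_blocked(S, i, j, lb, ub):
    """Reaction i is directionally coupled to j if blocking j (v_j = 0)
    forces v_i = 0 at steady state S v = 0 within the flux bounds."""
    n = S.shape[1]
    bounds = [(lb[k], ub[k]) for k in range(n)]
    bounds[j] = (0.0, 0.0)              # block reaction j
    c = np.zeros(n)
    c[i] = -1.0                         # maximize v_i (minimize -v_i)
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=bounds, method="highs")
    return -res.fun if res.success else 0.0

# Toy irreversible network: ->A, A->B, B-> (hypothetical data).
S = np.array([[ 1, -1,  0],    # metabolite A balance
              [ 0,  1, -1]])   # metabolite B balance
lb, ub = [0, 0, 0], [10, 10, 10]
# Blocking reaction 2 forces reaction 0 to zero: the two are coupled.
print(max_flux_when_blocked(S, i=0, j=2, lb=lb, ub=ub))  # -> 0.0

F2C2's speedup comes from avoiding most of these pairwise LPs; the naive approach above needs one LP per ordered reaction pair.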
Cyber-physical systems achieve sophisticated system behavior exploring the tight interconnection of physical coupling present in classical engineering systems and information technology based coupling. A particular challenging case are systems where these cyber-physical systems are formed ad hoc according to the specific local topology, the available networking capabilities, and the goals and constraints of the subsystems captured by the information processing part. In this paper we present a formalism that permits to model the sketched class of cyber-physical systems. The ad hoc formation of tightly coupled subsystems of arbitrary size are specified using a UML-based graph transformation system approach. Differential equations are employed to define the resulting tightly coupled behavior. Together, both form hybrid graph transformation systems where the graph transformation rules define the discrete steps where the topology or modes may change, while the differential equations capture the continuous behavior in between such discrete changes. In addition, we demonstrate that automated analysis techniques known for timed graph transformation systems for inductive invariants can be extended to also cover the hybrid case for an expressive case of hybrid models where the formed tightly coupled subsystems are restricted to smaller local networks.
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems require much time and manual effort. A key problem is understanding the meaning of unfamiliar attribute labels in source and target databases and in ETL transformations. Hard-to-understand attribute labels lead to frustration and to time spent developing and understanding ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
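As a toy illustration of the recommender-like idea, the following Python sketch expands each abbreviation token to the most frequent full word it prefixes among words harvested from already-mapped labels; the paper's technique is considerably more refined:

from collections import Counter

def decrypt(label, vocabulary):
    """vocabulary: Counter of full words harvested from mapped labels."""
    expanded = []
    for token in label.split("_"):
        candidates = [w for w in vocabulary if w.startswith(token)]
        best = max(candidates, key=lambda w: vocabulary[w], default=token)
        expanded.append(best)
    return "_".join(expanded)

# Word frequencies as they might be harvested from existing mappings.
vocab = Counter({"UNPAID": 12, "PENALTY": 9, "INTEREST": 15,
                 "UNIT": 4, "PENDING": 2})
print(decrypt("UNP_PEN_INT", vocab))  # -> UNPAID_PENALTY_INTEREST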
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes. Only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Further, we define quality measures for conditions, inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
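Our reading of such precision- and recall-inspired measures can be sketched as follows (a simplified illustration with invented data, not the paper's exact definitions):

def condition_quality(tuples, condition, included):
    """tuples: rows of the dependent relation; condition(t): bool;
    included(t): does t's value appear in the referenced relation?
    Precision-like: of the tuples selected by the condition, how many
    satisfy the inclusion. Recall-like: of all included tuples, how
    many does the condition cover."""
    selected = [t for t in tuples if condition(t)]
    covered = [t for t in selected if included(t)]
    all_included = [t for t in tuples if included(t)]
    precision = len(covered) / len(selected) if selected else 0.0
    recall = len(covered) / len(all_included) if all_included else 0.0
    return precision, recall

rows = [{"type": "book", "isbn": "1"}, {"type": "book", "isbn": "2"},
        {"type": "cd", "isbn": None}, {"type": "book", "isbn": None}]
ref = {"1", "2"}  # ISBNs present in the referenced relation
cond = lambda t: t["type"] == "book"
inc = lambda t: t["isbn"] in ref
print(condition_quality(rows, cond, inc))  # -> (0.666..., 1.0)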
Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet its representation is not sufficiently supported by programming languages. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In general, COP could be used to manage any context-specific behavior. However, its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectures, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, these redesigned implementations are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications. However, a complete language specification has not been presented so far. This report presents the entire JCop language including the syntax and semantics of its new language constructs.
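JCop's concrete constructs are defined in the report itself; as a language-neutral sketch of the underlying COP idea, the following Python fragment emulates dynamically activated layers whose behavioral variations shadow base behavior for the dynamic extent of a block:

from contextlib import contextmanager

_active_layers = []

@contextmanager
def with_layer(layer):
    _active_layers.append(layer)   # activate for the dynamic extent
    try:
        yield
    finally:
        _active_layers.pop()

def layered(base_fn):
    """Dispatch to the most recently activated variation, else the base."""
    def wrapper(*args):
        for layer in reversed(_active_layers):
            if base_fn.__name__ in layer:
                return layer[base_fn.__name__](*args)
        return base_fn(*args)
    return wrapper

@layered
def status():
    return "online mode"

# A layer groups context-dependent variations (here just one method).
offline_layer = {"status": lambda: "offline mode (cached data)"}

print(status())                      # -> online mode
with with_layer(offline_layer):
    print(status())                  # -> offline mode (cached data)

JCop goes beyond this scoped activation: its aspect language lets programmers declare, separately from the layers themselves, when and where adaptations take effect.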
Pedestrian traffic takes place throughout public space and enables seamless door-to-door connections. Before setting off, every person faces the questions "Where am I?", "Where is my destination?", and "How do I get there?". Most pedestrian navigation systems on the market are stripped-down variants of car navigation systems and are based on 2D map displays or depict reality as a three-dimensional model. Navigation problems arise when users fail to relate the information in an instruction to reality and to act on it. One possible reason lies in how the navigation instruction is visualized. Human spatial perception proceeds from a particular viewpoint and expresses the position of objects and their relations to one another. Augmented reality matches the appearance of human perception and is a natural and at the same time familiar form of view. In contrast to cartographic visualization, augmented reality does not model the environment but depicts it faithfully and augments it. The goal of this thesis is a navigation method that does justice to the natural locomotion and perspective of pedestrians. The concept is based on combining reality and virtual reality into an augmented view. Since no form of representation is better suited to describing a route than the route itself, reality is augmented with a virtual route. The perspective adaptation of the route display requires sensing the position and orientation of the viewpoint. The data model underlying the navigation remains hidden from the viewer and is visible only in the form of the augmented reality. The prototype developed in this thesis is called RealityView. It is based on a free and open-source navigation system that was extended in a modular fashion for pedestrian navigation. The result is a smartphone-based navigation prototype that combines a two-dimensional screen map in plan view with an augmented-reality display in elevation view. The evaluation of the prototype confirms the hypothesis that the use of augmented reality for pedestrian navigation is feasible and is accepted by the user group. Furthermore, scientists confirmed the conceptual approach and the prototypical implementation of RealityView in expert interviews. The analysis of an eye-tracking pilot study showed that pedestrians relate navigation instructions to salient objects in the environment, whose selection is favored by the use of augmented reality.
The constantly growing capacity of reconfigurable devices allows the simultaneous execution of complex applications on those devices. The sheer diversity of applications makes it impossible to design an interconnection network matching the requirements of every possible application perfectly, leading to suboptimal performance in many cases. However, the architecture of the interconnection network is not the only aspect affecting communication performance. The resource manager places applications on the device and therefore influences the latency between communicating partners and the overall network load. Communication protocols affect performance by introducing data and processing overhead, putting higher load on the network and increasing resource demand. A holistic approach to communication considers not only the architecture of the interconnect but also communication-aware resource management, communication protocols, and resource usage. Considering the different parts of a reconfigurable system at design time and runtime and optimizing them with respect to communication demand results in more resource-efficient communication. Extensive evaluation shows enhanced performance and flexibility when communication on reconfigurable devices is treated in such a holistic fashion.
In continuation of a successful series of events, the 4th Many-core Applications Research Community (MARC) symposium took place at the HPI in Potsdam on December 8th and 9th 2011. Over 60 researchers from different fields presented their work on many-core hardware architectures, their programming models, and the resulting research questions for the upcoming generation of heterogeneous parallel systems.
MDE techniques are increasingly used in practice. However, there is currently a lack of detailed reports about how different MDE techniques are integrated into development and combined with each other. To learn more about such MDE settings, we performed a descriptive and exploratory field study with SAP, a worldwide operating company with around 50,000 employees that builds enterprise software applications. This technical report describes the insights we gained during this study. For example, we identified that MDE settings are subject to evolution. Finally, this report outlines directions for future research to provide practical advice for the application of MDE settings.
ASP modulo CSP
(2012)
We present the hybrid ASP solver clingcon, combining the simple modeling language and the high-performance Boolean solving capabilities of Answer Set Programming (ASP) with techniques for using non-Boolean constraints from the area of Constraint Programming (CP). The new clingcon system features an extended syntax supporting global constraints and optimize statements for constraint variables. The major technical innovation improves the interaction between the ASP and CP solvers through elaborate learning techniques based on irreducible inconsistent sets. A broad empirical evaluation shows that these techniques yield a performance improvement of an order of magnitude.
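For flavor, a small constraint logic program in clingcon-style syntax is sketched below; the $-prefixed constraint operators and global constraints are approximated from the clingcon 2 literature and may not match a given release exactly:

% Approximated clingcon-style syntax (illustrative sketch only).
$domain(1..10).                      % range of all constraint variables

task(a). task(b). task(c).
% every task gets a start time; starts must be pairwise distinct
$distinct { start(T) : task(T) }.
% task a (duration 3) must finish before task b starts
start(a) $+ 3 $<= start(b).
% prefer early completion of task c (optimize statement)
$minimize { start(c) }.

The division of labor is exactly as the abstract describes: the ASP solver handles the Boolean structure, while the CP solver propagates the $-constraints over the integer variables, with learned nogoods flowing back from irreducible inconsistent sets.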
Virtual 3D city and landscape models are the main subject investigated in this thesis. They digitally represent urban space and have many applications in different domains, e.g., simulation, cadastral management, and city planning. Visualization is an elementary component of these applications. Photo-realistic visualization with an increasingly high degree of detail leads to fundamental problems for comprehensible visualization. A large number of highly detailed and textured objects within a virtual 3D city model may create visual noise and overload the users with information. Objects are subject to perspective foreshortening and may be occluded or, being too small, not displayed in a meaningful way. In this thesis we present abstraction techniques that automatically process virtual 3D city and landscape models to derive abstracted representations. These have a reduced degree of detail, while essential characteristics are preserved. After introducing definitions for model, scale, and multi-scale representations, we discuss the fundamentals of map generalization as well as techniques for 3D generalization. The first presented technique is a cell-based generalization of virtual 3D city models. It creates abstract representations that have a highly reduced level of detail while maintaining essential structures, e.g., the infrastructure network, landmark buildings, and free spaces. The technique automatically partitions the input virtual 3D city model into cells based on the infrastructure network. The single building models contained in each cell are aggregated to abstracted cell blocks. Using weighted infrastructure elements, cell blocks can be computed on different hierarchical levels, storing the hierarchy relation between the cell blocks. Furthermore, we identify initial landmark buildings within a cell by comparing the properties of individual buildings with the aggregated properties of the cell. For each block, the identified landmark building models are subtracted using Boolean operations and integrated in a photo-realistic way. Finally, for the interactive 3D visualization, we discuss the creation of the virtual 3D geometry and its appearance styling through colors, labeling, and transparency. We demonstrate the technique with example data sets. Additionally, we discuss applications of generalization lenses and transitions between abstract representations. The second technique is a real-time rendering technique for the geometric enhancement of landmark objects within a virtual 3D city model. Depending on the virtual camera distance, landmark objects are scaled to ensure their visibility within a specific distance interval while deforming their environment. First, in a preprocessing step, a landmark hierarchy is computed; this is then used to derive distance intervals for the interactive rendering. At runtime, using the virtual camera distance, a scaling factor is computed and applied to each landmark. The scaling factor is interpolated smoothly at the interval boundaries using cubic Bézier splines. Non-landmark geometry that is near landmark objects is deformed with respect to a limited number of landmarks. We demonstrate the technique by applying it to a highly detailed virtual 3D city model and a generalized 3D city model. In addition, we discuss an adaptation of the technique for non-linear projections and mobile devices. The third technique is a real-time rendering technique to create abstract 3D isocontour visualizations of virtual 3D terrain models.
The virtual 3D terrain model is visualized as a layered or stepped relief. The technique works without preprocessing and, as it is implemented using programmable graphics hardware, can be integrated with minimal changes into common terrain rendering techniques. Consequently, the computation is done in the rendering pipeline for each vertex, primitive, i.e., triangle, and fragment. For each vertex, the height is quantized to the nearest isovalue. For each triangle, the vertex configuration with respect to their isovalues is determined first. Using the configuration, the triangle is then subdivided. The subdivision forms a partial step geometry aligned with the triangle. For each fragment, the surface appearance is determined, e.g., depending on the surface texture, shading, and height-color-mapping. Flexible usage of the technique is demonstrated with applications from focus+context visualization, out-of-core terrain rendering, and information visualization. This thesis presents components for the creation of abstract representations of virtual 3D city and landscape models. Re-using visual language from cartography, the techniques enable users to build on their experience with maps when interpreting these representations. Simultaneously, characteristics of 3D geovirtual environments are taken into account by addressing and discussing, e.g., continuous scale, interaction, and perspective.
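The per-vertex and per-triangle logic can be sketched on the CPU as follows (on the GPU this work happens in the programmable vertex and geometry shader stages, as described above):

def quantize(height, interval):
    # Per-vertex step: snap the height to the nearest isovalue.
    return round(height / interval) * interval

def triangle_configuration(heights, interval):
    # Per-triangle step: classify the triangle by how many distinct
    # isovalues its three vertices fall on.
    iso = [quantize(h, interval) for h in heights]
    distinct = len(set(iso))
    # 1: the triangle lies on a single step, no subdivision needed;
    # 2 or 3: the triangle straddles step boundaries and is subdivided
    # into partial step geometry aligned with the isolines.
    return iso, distinct

print(triangle_configuration([12.3, 14.1, 19.8], interval=5.0))
# -> ([10.0, 15.0, 20.0], 3): subdivide across two step boundaries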
Nowadays, model-driven engineering (MDE) promises to ease software development by decreasing the inherent complexity of classical software development. In order to deliver on this promise, MDE increases the level of abstraction and automation, through a consideration of domain-specific models (DSMs) and model operations (e.g. model transformations or code generations). DSMs conform to domain-specific modeling languages (DSMLs), which increase the level of abstraction, and model operations are first-class entities of software development because they increase the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, which are basically caused by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity comprised of the implementation or selection of DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity is concerned with applying MDE for actual software development. Applying MDE is challenging because a collection of DSMs, which conform to potentially heterogeneous DSMLs, are required to completely specify a complex software system. A single DSML can only be used to describe a specific aspect of a software system at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but instead have inherent interdependencies, reflecting (partial) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies are applications of various model operations, which are necessary to keep the degree of automation high. This becomes even worse when addressing the first dimension of complexity. Due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is considered as a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs. The approach is considered as a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. In addition, the approach is considered as a comprehensive model management. Since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis concerns itself with providing a method for the specification of decoupled yet still highly cohesive complex compositions of heterogeneous model operations. The approach supports two different kinds of compositions - data-flow compositions and context compositions. Data-flow composition is used to define a network of heterogeneous model operations coupled by sharing input and output DSMs alone. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail. 
In this thesis, context composition provides the ability to use a collection of dependencies as context for the composition of other dependencies, including model operations. In addition, the actual implementations of the model operations that are going to be composed do not need to address any composition concerns. The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
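A minimal sketch of data-flow composition, our own simplification rather than the thesis's formalism: model operations declare input and output models, and a scheduler (re-)applies them in dependency order (the operation names and models below are hypothetical):

from graphlib import TopologicalSorter

operations = {  # name -> (inputs, outputs, callable)
    "sysml2autosar": ({"sysml"}, {"autosar"}, lambda m: m["sysml"] + ">aut"),
    "autosar2code":  ({"autosar"}, {"code"},  lambda m: m["autosar"] + ">c"),
}

def run(models):
    # An operation depends on the producers of its input models.
    producers = {out: op for op, (_, outs, _) in operations.items()
                 for out in outs}
    deps = {op: {producers[i] for i in ins if i in producers}
            for op, (ins, _, _) in operations.items()}
    # Execute (or re-execute after a change) in topological order.
    for op in TopologicalSorter(deps).static_order():
        ins, outs, fn = operations[op]
        for out in outs:
            models[out] = fn(models)   # (re-)apply the model operation
    return models

print(run({"sysml": "M"}))
# -> {'sysml': 'M', 'autosar': 'M>aut', 'code': 'M>aut>c'}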
Structuring process models
(2012)
One can fairly adopt the ideas of Donald E. Knuth to conclude that process modeling is both a science and an art. Process modeling does have an aesthetic sense. Similar to composing an opera or writing a novel, process modeling is carried out by humans who undergo creative practices when engineering a process model. Therefore, the very same process can be modeled in a myriad of ways. Once modeled, processes can be analyzed by employing scientific methods. Usually, process models are formalized as directed graphs, with nodes representing tasks and decisions, and directed arcs describing temporal constraints between the nodes. Common process definition languages, such as Business Process Model and Notation (BPMN) and Event-driven Process Chain (EPC), allow process analysts to define models with arbitrarily complex topologies. The absence of structural constraints supports creativity and productivity, as there is no need to force ideas into a limited amount of available structural patterns. Nevertheless, it is often preferable that models follow certain structural rules. A well-known structural property of process models is (well-)structuredness. A process model is (well-)structured if and only if every node with multiple outgoing arcs (a split) has a corresponding node with multiple incoming arcs (a join), and vice versa, such that the set of nodes between the split and the join induces a single-entry-single-exit (SESE) region; otherwise the process model is unstructured. The motivations for well-structured process models are manifold: (i) Well-structured process models are easier to lay out for visual representation, as their formalizations are planar graphs. (ii) Well-structured process models are easier for humans to comprehend. (iii) Well-structured process models tend to have fewer errors than unstructured ones, and it is less probable to introduce new errors when modifying a well-structured process model. (iv) Well-structured process models are better suited for analysis, as many existing formal techniques are applicable only to well-structured process models. (v) Well-structured process models are better suited for efficient execution and optimization, e.g., when discovering independent regions of a process model that can be executed concurrently. Consequently, there are process modeling languages that encourage well-structured modeling, e.g., Business Process Execution Language (BPEL) and ADEPT. However, well-structured process modeling implies some limitations: (i) There exist processes that cannot be formalized as well-structured process models. (ii) There exist processes that, when formalized as well-structured process models, require a considerable duplication of modeling constructs. Rather than expecting well-structured modeling from the start, we advocate the absence of structural constraints when modeling. Afterwards, automated methods can suggest, upon request and whenever possible, alternative formalizations that are "better" structured, preferably well-structured. In this thesis, we study the problem of automatically transforming process models into equivalent well-structured models. The developed transformations are performed under a strong notion of behavioral equivalence which preserves concurrency. The findings are implemented in a tool, which is publicly available.
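A necessary condition for well-structuredness on acyclic models can be checked with dominators and post-dominators, as in the following simplified Python sketch (the thesis handles the general case, including cycles and behavior-preserving transformations):

def dominators(nodes, edges, source):
    # Standard iterative dominator computation.
    preds = {n: {a for a, b in edges if b == n} for n in nodes}
    dom = {n: set(nodes) for n in nodes}
    dom[source] = {source}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == source:
                continue
            new = {n} | (set.intersection(*(dom[p] for p in preds[n]))
                         if preds[n] else set())
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

def looks_well_structured(nodes, edges, source, sink):
    # Necessary condition: every split s has a join j with s dominating j
    # and j post-dominating s, delimiting a SESE region.
    succ = {n: {b for a, b in edges if a == n} for n in nodes}
    dom = dominators(nodes, edges, source)
    pdom = dominators(nodes, [(b, a) for a, b in edges], sink)
    splits = [n for n in nodes if len(succ[n]) > 1]
    joins = [n for n in nodes if len({a for a, b in edges if b == n}) > 1]
    return all(any(s in dom[j] and j in pdom[s] for j in joins)
               for s in splits)

nodes = ["s", "x", "a", "b", "j", "t"]
edges = [("s","x"), ("x","a"), ("x","b"), ("a","j"), ("b","j"), ("j","t")]
print(looks_well_structured(nodes, edges, "s", "t"))  # -> True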
We present the new multi-threaded version of the state-of-the-art answer set solver clasp. We detail its component and communication architecture and illustrate how they support the principal functionalities of clasp. Also, we provide some insights into the data representation used for different constraint types handled by clasp. All this is accompanied by an extensive experimental analysis of the major features related to multi-threading in clasp.
On December 1 and 2, 2011, the 4th German IPv6 Summit 2011 took place at the Hasso-Plattner-Institut für Softwaresystemtechnik GmbH in Potsdam; this technical report serves as its documentation. As with the previous national IPv6 summits, the German IPv6 Council pursued with the 4th summit, held under the motto "Online on the Road - The New Standard IPv6 as a Driver of Mobile Communication", the goal of providing insights into current developments around the deployment of IPv6, this time with a focus on automotive networking. At the same time, the summit aimed to push forward the efficient and comprehensive transition to IPv6, to exchange experiences with the migration to and the use of IPv6, to encourage and motivate industry and public administration to adopt IPv6-based solutions, and to raise public awareness of the necessity of the transition to IPv6. This year's guest of honor was the EU Commissioner for the Digital Agenda, Neelie Kroes, whose talk was complemented by further contributions from high-ranking representatives of politics, science, and industry.
The correction of software failures tends to be very cost-intensive because debugging them is an often time-consuming development activity. During this activity, developers largely attempt to understand what causes a failure: starting with a test case that reproduces the observable failure, they have to follow the failure causes along the infection chain back to the root cause (the defect). This idealized procedure requires deep knowledge of the system and its behavior, because failures and defects can be far apart from each other. Unfortunately, common debugging tools are inadequate for systematically investigating such infection chains in detail. Thus, developers have to rely primarily on their intuition, and the localization of failure causes is not time-efficient. To prevent debugging by disorganized trial and error, experienced developers apply the scientific method and its systematic hypothesis testing. However, even when using the scientific method, the search for failure causes can still be a laborious task. First, lacking expertise about the system makes it hard to understand incorrect behavior and to create reasonable hypotheses. Second, contemporary debugging approaches provide no or only partial support for the scientific method. In this dissertation, we present test-driven fault navigation as a debugging guide for localizing reproducible failures with the scientific method. Based on the analysis of passing and failing test cases, we reveal anomalies and integrate them into a breadth-first search that leads developers to defects. This systematic search consists of four specific navigation techniques that together support the creation, evaluation, and refinement of failure-cause hypotheses for the scientific method. First, structure navigation localizes suspicious system parts and restricts the initial search space. Second, team navigation recommends experienced developers for helping with failures. Third, behavior navigation allows developers to follow emphasized infection chains back to root causes. Fourth, state navigation identifies corrupted state and reveals parts of the infection chain automatically. We implement test-driven fault navigation in our Path Tools framework for the Squeak/Smalltalk development environment and limit its computation cost with the help of our incremental dynamic analysis. This lightweight dynamic analysis ensures an immediate debugging experience with our tools by splitting the run-time overhead over multiple test runs depending on developers' needs. Hence, our test-driven fault navigation in combination with our incremental dynamic analysis answers important questions in a short time: where to start debugging, who understands failure causes best, what happened before failures, and which state properties are infected.
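The idea of revealing anomalies from passing and failing test cases is related to generic spectrum-based fault localization; the following Python sketch scores methods with the well-known Ochiai metric as an illustration (the Path Tools' concrete metrics and navigation views go beyond this):

from math import sqrt

def ochiai(coverage, outcomes):
    """coverage: {test: set of methods executed}; outcomes: {test: passed?}.
    Methods executed mostly by failing tests score as most suspicious."""
    failed = [t for t, ok in outcomes.items() if not ok]
    methods = set().union(*coverage.values())
    scores = {}
    for m in methods:
        ef = sum(1 for t in failed if m in coverage[t])
        ep = sum(1 for t, ok in outcomes.items() if ok and m in coverage[t])
        denom = sqrt(len(failed) * (ef + ep))
        scores[m] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

cov = {"t1": {"parse", "eval"}, "t2": {"parse", "print"}, "t3": {"eval"}}
outcome = {"t1": False, "t2": True, "t3": False}
print(ochiai(cov, outcome))  # 'eval' ranks as most suspicious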
A method is presented by which learners acquire the principles of three sorting algorithms through developing interactive applications in Excel.
Problem solving is one of the central activities performed by computer scientists as well as by computer science learners. Whereas the teaching of algorithms and programming languages is usually well structured within a curriculum, the development of learners' problem-solving skills is largely implicit and less structured. Students at all levels often face difficulties in problem analysis and solution construction. The basic assumption of the workshop is that, without some formal instruction on effective strategies, even the most inventive learner may resort to unproductive trial-and-error problem-solving processes. Hence, it is important to teach problem-solving strategies and to guide teachers on how to teach their pupils this cognitive tool. Computer science educators should be aware of the difficulties and acquire appropriate pedagogical tools to help their learners gain and experience problem-solving skills.
.NET Gadgeteer Workshop
(2013)
The challenge is to provide teachers with the resources they need to strengthen their instruction and better prepare students for the jobs of the 21st century. Technology can help meet this challenge. Teachers TryScience is a non-commercial offering, developed by the New York Hall of Science, TeachEngineering, the National Board for Professional Teaching Standards, and IBM Citizenship, to provide teachers with such resources. The workshop provides deeper insight into this tool and discusses how to support the teaching of informatics in schools.