Refine
Has Fulltext
- yes (221)
Year of publication
Document Type
- Monograph/Edited Volume (221)
Language
- English (221)
Is part of the Bibliography
- yes (221)
Keywords
- Hasso-Plattner-Institut (9)
- Forschungskolleg (8)
- Hasso Plattner Institute (8)
- Klausurtagung (8)
- Service-oriented Systems Engineering (8)
- Forschungsprojekte (5)
- Future SOC Lab (5)
- In-Memory Technologie (5)
- Modellierung (5)
- Multicore Architekturen (5)
- Ph.D. retreat (5)
- cloud computing (5)
- cyber-physical systems (5)
- quantitative analysis (5)
- service-oriented systems engineering (5)
- Cloud Computing (4)
- Research School (4)
- multicore architectures (4)
- nested graph conditions (4)
- probabilistic timed systems (4)
- qualitative Analyse (4)
- qualitative analysis (4)
- quantitative Analyse (4)
- research projects (4)
- research school (4)
- Curriculum Framework (3)
- Datenintegration (3)
- European values education (3)
- Europäische Werteerziehung (3)
- Graphtransformationen (3)
- In-Memory technology (3)
- Lehrevaluation (3)
- Model Synchronisation (3)
- Model Transformation (3)
- Ph.D. Retreat (3)
- Sicherheit (3)
- Smalltalk (3)
- Studierendenaustausch (3)
- Tripel-Graph-Grammatik (3)
- Unterrichtseinheiten (3)
- Verifikation (3)
- Virtualisierung (3)
- curriculum framework (3)
- graph transformation (3)
- graph transformation systems (3)
- lesson evaluation (3)
- machine learning (3)
- maschinelles Lernen (3)
- openHPI (3)
- privacy (3)
- security (3)
- student exchange (3)
- teaching units (3)
- virtual machines (3)
- AUTOSAR (2)
- BPMN (2)
- Betriebssysteme (2)
- Bounded Model Checking (2)
- Cloud-Sicherheit (2)
- Cloud-Speicher (2)
- Data Integration (2)
- Design Thinking (2)
- Digitalisierung (2)
- Governance (2)
- Graphentransformationssysteme (2)
- Graphtransformationssysteme (2)
- Identitätsmanagement (2)
- Innovation (2)
- Java (2)
- Live-Programmierung (2)
- Lively Kernel (2)
- Model Synchronization (2)
- Model-Driven Engineering (2)
- Modeling (2)
- Modellprüfung (2)
- Privacy (2)
- Process Modeling (2)
- Prozessmodellierung (2)
- Psycholinguistik (2)
- Ressourcenoptimierung (2)
- Slumming (2)
- Syntax (2)
- SysML (2)
- Telekommunikation (2)
- Versionsverwaltung (2)
- Verwaltung (2)
- Virtuelle Maschinen (2)
- Werkzeuge (2)
- artificial intelligence (2)
- bounded model checking (2)
- cloud (2)
- continuous integration (2)
- cyber-physische Systeme (2)
- data profiling (2)
- debugging (2)
- digitalization (2)
- graph constraints (2)
- identity management (2)
- in-memory technology (2)
- incremental graph pattern matching (2)
- information structure (2)
- k-inductive invariant checking (2)
- kontinuierliche Integration (2)
- live programming (2)
- model checking (2)
- modeling (2)
- modellgetriebene Entwicklung (2)
- operating systems (2)
- probabilistische gezeitete Systeme (2)
- probabilistische zeitgesteuerte Systeme (2)
- slumming (2)
- smalltalk (2)
- syntax (2)
- township tourism (2)
- triple graph grammars (2)
- typed attributed graphs (2)
- verification (2)
- verschachtelte Graphbedingungen (2)
- version control (2)
- virtualization (2)
- virtuelle Maschinen (2)
- ACINQ (1)
- ARCH (1)
- ARIMA Models (1)
- ARMA Processes (1)
- ASIC (1)
- Abhängigkeiten (1)
- Absolute Advantage (1)
- Absoluter Kostenvorteil (1)
- Abstraktion von Geschäftsprozessmodellen (1)
- Adam Smith (1)
- Agile (1)
- Agilität (1)
- Akan (1)
- Aktienmarkt (1)
- Aktivitäten (1)
- Albania (1)
- Alterung (1)
- Ambiguity (1)
- Ambiguität (1)
- Analog-zu-Digital-Konvertierung (1)
- Andrei Konchalovsky (1)
- Andrej Končalovskij (1)
- Apriori (1)
- Arbeitsethik (1)
- Architektur (1)
- Armed Conflicts (1)
- Armenia; Human Rights; Minorities; Education (1)
- Armut (1)
- Artem Erkomaishvili (1)
- Aspect-oriented Programming (1)
- Aspektorientierte Softwareentwicklung (1)
- Association Rule Mining (1)
- Assoziationsregeln (1)
- Asynchrone Schaltung (1)
- Asynchronous circuit (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Aufgabenerfüllung (1)
- Ausführung von Modellen (1)
- Australian securities exchange (1)
- Auswirkungen (1)
- Authentizität (1)
- Autocorrelation (1)
- Autokorrelation (1)
- Außenhandel (1)
- BCCC (1)
- BPM (1)
- BTC (1)
- Bahnwesen (1)
- Batchprozesse (1)
- Bayes'sche Netze (1)
- Bayesian networks (1)
- Bedingte Inklusionsabhängigkeiten (1)
- Behavior change (1)
- Beschränkungen und Abhängigkeiten (1)
- Beveridge-Nelson Decomposition (1)
- Bisimulation (1)
- BitShares (1)
- Bitcoin (1)
- Bitcoin Core (1)
- Blockchain (1)
- Blockchain Auth (1)
- Blockchain-Konsortium R3 (1)
- Blockkette (1)
- Blockstack (1)
- Blockstack ID (1)
- Blumix-Plattform (1)
- Blöcke (1)
- Bottom-up (1)
- Bounded Backward Model Checking (1)
- Byzantine Agreement (1)
- Bürokratisierung (1)
- CEP (1)
- CSC (1)
- CSCW (1)
- Cape Town (1)
- Case studies (1)
- Celtic languages (1)
- Central Banking Policy (1)
- Change Management (1)
- Cloud (1)
- Co-Regulation (1)
- Colombia (1)
- Colored Coins (1)
- Comparative Advantage (1)
- Conditional Inclusion Dependency (1)
- Conformance Überprüfung (1)
- Constraints (1)
- Consumption (1)
- Context-oriented Programming (1)
- Continental Celtic (1)
- Contracts (1)
- Controller-Resynthese (1)
- Creative (1)
- Cyber-Physical Systems (1)
- Cyber-Physical-Systeme (1)
- Cyber-physical-systems (1)
- Cyber-physikalische Systeme (1)
- DAO (1)
- DPoS (1)
- DaZ (1)
- Data Dependency (1)
- Data Modeling (1)
- Data Profiling (1)
- Data Quality (1)
- Data Warehouse (1)
- Database Cost Model (1)
- Datenabhängigkeiten (1)
- Datenanalyse (1)
- Datenbank-Kostenmodell (1)
- Datenflusskorrektheit (1)
- Datenmodellierung (1)
- Datenqualität (1)
- Datensatz (1)
- Datenvertraulichkeit (1)
- Datenvisualisierung (1)
- Deadline-Verbreitung (1)
- Debugging (1)
- Decentralization (1)
- Decentralization in government (1)
- Dekubitus (1)
- Delegated Proof-of-Stake (1)
- Demokratisierung (1)
- Denkweise (1)
- Deutsch (1)
- Deutschunterricht (1)
- Dezentralisation (1)
- Dezentralisierung (1)
- Differential Privacy (1)
- Discrimination Networks (1)
- Distributed Proof-of-Research (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Domänen (1)
- Duplicate Detection (1)
- Duplikaterkennung (1)
- Dynamic Type System (1)
- Dynamische Typ Systeme (1)
- E-Learning (1)
- E-Wallet (1)
- ECDSA (1)
- EEG (1)
- EHR (1)
- EPA (1)
- EU (1)
- Echtzeit (1)
- Echtzeitsysteme (1)
- Effectiveness (1)
- Effektivität (1)
- Einnahmenautonomie (1)
- Elektronische Patientenakte (1)
- Energie (1)
- Ereignisse (1)
- Erfüllbarkeitsanalyse (1)
- Eris (1)
- Erkennen von Meta-Daten (1)
- Estonia (1)
- Ether (1)
- Ethereum (1)
- European Union (1)
- Europäische Union (1)
- Evolution (1)
- Evolution in MDE (1)
- Expenditure Assignment (1)
- Experimentelle Linguistik (1)
- Extract-Transform-Load (ETL) (1)
- FRP (1)
- Fallstudie (1)
- Familie (1)
- Family (1)
- Federated Byzantine Agreement (1)
- Feedback Loops (1)
- Fehlersuche (1)
- Fehlertoleranz (1)
- Festschrift (1)
- Filmgeschichte (1)
- Finanzielle Performance (1)
- Finanzzuweisungen (1)
- Fiscal Federalism (1)
- Fiskalischer Föderalismus (1)
- FollowMyVote (1)
- Fork (1)
- Formale Verifikation (1)
- Frequenz (1)
- Friedensforschung (1)
- Functional Lenses (1)
- GARCH (1)
- Gas (1)
- Generalized Discrimination Networks (1)
- Genre Eastern (1)
- Georgian chant (1)
- Georgische liturgische Gesänge (1)
- German (1)
- German lessons (1)
- Germany (1)
- Geschichte der Sprachwissenschaft (1)
- Geschäftsprozesse (1)
- Geschäftsprozessmanagement (1)
- Gesetze (1)
- GitHub (1)
- Graph-Constraints (1)
- Graph-basierte Suche (1)
- Graphbedingungen (1)
- Graphdatenbanken (1)
- Graphreparatur (1)
- Graphtransformation (1)
- Gridcoin (1)
- HENSHIN (1)
- HPI Schul-Cloud (1)
- Hard Fork (1)
- Hashed Timelock Contracts (1)
- Hasso-Plattner-Institute (1)
- Hauptspeicherdatenbank (1)
- Hebrew (1)
- Heuristiken (1)
- Homomorphe Verschlüsselung (1)
- Human Rights (1)
- Häkeln (1)
- Ideation (1)
- Ideenfindung (1)
- Ideologien und Kino (1)
- Impact (1)
- Implementation in Organizations (1)
- Implementierung in Organisationen (1)
- In-Memory Database (1)
- In-Memory Datenbank (1)
- In-Memory-Datenbank (1)
- Individuen (1)
- Infinite State (1)
- Inflation (1)
- Information Extraction (1)
- Information Systems (1)
- Informationsextraktion (1)
- Informationsstruktur (1)
- Informationssysteme (1)
- Inkrementelle Graphmustersuche (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Inselkeltisch (1)
- Insular Celtic (1)
- Interdisciplinary Teams (1)
- Intergovernmental transfers (1)
- International trade (1)
- Internet Content Regulation (1)
- Internet der Dinge (1)
- Internet of Things (1)
- Interpreter (1)
- Interval Timed Automata (1)
- Invariant-Checking (1)
- Invarianten (1)
- Invariants (1)
- Investment (1)
- IoT (1)
- Iroquoian languages (1)
- JCop (1)
- Japanese Blockchain Consortium (1)
- Japanisches Blockchain-Konsortium (1)
- Kapstadt (1)
- Katutura (1)
- Kausalität (1)
- Kette (1)
- Kind (1)
- Ko-Regulierung (1)
- Kognitionswissenschaften (1)
- Kollaborationen (1)
- Kolumbien (1)
- Komparativer Kostenvorteil (1)
- Konsensalgorithmus (1)
- Konsensprotokoll (1)
- Konsensprotokolle (1)
- Konsistenzrestauration (1)
- Kontinentalkeltisch (1)
- Konvergenzrate (1)
- Kooperation (1)
- Korpuslinguistik (1)
- Kreativität (1)
- Kunstanalyse (1)
- Kunststofflichtwellenleiter (1)
- Künstliche Intelligenz (1)
- Langzeitverhalten (1)
- Lateinamerika (1)
- Latin America (1)
- Laufzeitanalyse (1)
- Laufzeitmodelle (1)
- Leadership (1)
- Leistungsmodelle von virtuellen Maschinen (1)
- Lesen (1)
- Lichtwellenleiter (1)
- Lightning Network (1)
- Linguistik (1)
- Link Discovery (1)
- Link-Entdeckung (1)
- Linked Data (1)
- Linked Open Data (1)
- Lock-Time-Parameter (1)
- Länder (1)
- Lösungsraum (1)
- MDE Ansatz (1)
- MDE settings (1)
- MERLOT (1)
- MOOC (1)
- MOOCs (1)
- Management (1)
- Mandarin (1)
- Markov processes (1)
- Markovprozesse (1)
- Measurement (1)
- Megamodell (1)
- Megamodels (1)
- Mehrkernsysteme (1)
- Mehrsprachigkeit (1)
- Membran (1)
- Menschenrechte (1)
- Messung (1)
- Metadata Discovery (1)
- Metadatenentdeckung (1)
- Metadatenqualität (1)
- Micropayment-Kanäle (1)
- Microsoft Azure (1)
- Middleware (1)
- Mikrofinanzinstitutionen (1)
- Mikrokredite (1)
- Mindset (1)
- Ministerialverwaltung (1)
- Mission-Drift (1)
- Missionarsgrammatik (1)
- Mobile Application Development (1)
- Model Execution (1)
- Modeling Languages (1)
- Modell-getriebene Softwareentwicklung (1)
- Modelle mit mehreren Versionen (1)
- Modellerzeugung (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Modellreparatur (1)
- Modelltransformationen (1)
- Models at Runtime (1)
- Money Supply (1)
- Morphic (1)
- Morphologie (1)
- Multi-Instanzen (1)
- Multicore architectures (1)
- Multidisciplinary Teams (1)
- Muster (1)
- Musterabgleich (1)
- NASDAQ (1)
- NameID (1)
- Namecoin (1)
- Namibia (1)
- Nested Graph Conditions (1)
- Network Enforcement Act (1)
- Netzwerkdurchsetzungsgesetz (1)
- Netzwerkprotokolle (1)
- Newspeak (1)
- Ngizim (1)
- Object Constraint Programming (1)
- Object-Oriented Programming (1)
- Objekt-Constraint Programmierung (1)
- Objekt-Orientiertes Programmieren (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Objektlebenszyklus-Synchronisation (1)
- Off-Chain-Transaktionen (1)
- Onename (1)
- Online Course (1)
- Online-Learning (1)
- Online-Lernen (1)
- Onlinekurs (1)
- OpenBazaar (1)
- Oracles (1)
- Organisationsveränderung (1)
- Orphan Block (1)
- P2P (1)
- POF (1)
- PRISM Modell-Checker (1)
- PRISM model checker (1)
- PTCTL (1)
- Pattern Matching (1)
- Patterns (1)
- Peace Studies (1)
- Peer-to-Peer Netz (1)
- Peercoin (1)
- Persistenz (1)
- Petri net Mapping (1)
- Petri net mapping (1)
- Petrinetz (1)
- Phonologie (1)
- PoB (1)
- PoS (1)
- PoW (1)
- Polen (1)
- Politisierung (1)
- Posenabschätzung (1)
- Principal agent relation (1)
- Privatsphäre (1)
- Problem Solving (1)
- Problemlösung (1)
- Process (1)
- Process Enactment (1)
- Process Mining (1)
- Programmiererlebnis (1)
- Programmierung (1)
- Programming Languages (1)
- Proof-of-Burn (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Propagation von Aktivitätsinstanzzuständen (1)
- Property Prices (1)
- Prototyping (1)
- Prozess (1)
- Prozessausführung (1)
- Prozesserhebung (1)
- Prozessinstanz (1)
- Prozessoren (1)
- Psychologie (1)
- Psychology (1)
- Public Debt (1)
- Public-Private Partnerships (1)
- Python (1)
- Quanten-Computing (1)
- Quantitative Analysen (1)
- Raumkonstruktion (1)
- Regenerierung (1)
- Regressionstests (1)
- Regulierung (1)
- Regulierung von Internetinhalten (1)
- Religion (1)
- Research Projects (1)
- Revenue Autonomy (1)
- Reverse Engineering (1)
- Ripple (1)
- Rohstoffe (1)
- Ruby (1)
- Runtime Binding (1)
- Runtime-monitoring (1)
- Russia (1)
- Russland (1)
- SCP (1)
- SHA (1)
- SPV (1)
- SQL (1)
- STG decomposition (1)
- STG-Dekomposition (1)
- Sammlungsdatentypen (1)
- Savanne (1)
- Schemaentdeckung (1)
- Schlüsselentdeckung (1)
- Schreiben (1)
- Schreibfähigkeit (1)
- Schriftartgestaltung (1)
- Schriftrendering (1)
- Schriftsprache (1)
- Schriftspracherwerb (1)
- Schwierigkeitsgrad (1)
- Scrollytelling (1)
- Self-Adaptive Software (1)
- Sequenzeigenschaften (1)
- Sequenzen von s/t-Pattern (1)
- Serialisierung (1)
- Service-Oriented Architecture (1)
- Service-Orientierte Architekturen (1)
- Service-orientierte Systeme (1)
- Sierra Leone (1)
- Signalflankengraph (SFG oder STG) (1)
- Simplified Payment Verification (1)
- Simulation (1)
- Skalierbarkeit der Blockchain (1)
- Slock.it (1)
- Slovakia (1)
- Slowakei (1)
- Slumtourismus (1)
- SoaML (1)
- Soft Fork (1)
- Software/Hardware Co-Design (1)
- Softwarearchitektur (1)
- Softwareproduktlinien (1)
- Softwaretests (1)
- Solution Space (1)
- South Africa (1)
- Soziale Ziele (1)
- Sozialen Medien (1)
- Spectral Density (1)
- Speicheroptimierungen (1)
- Spektraldichte (1)
- Spezifikation von gezeiteten Graph Transformationen (1)
- Spitzenbeamte (1)
- Sprachförderung (1)
- Sprachkontakt (1)
- Sprachspezifikation (1)
- Sprachverarbeitung (1)
- Squeak (1)
- Standardisierung (1)
- Standards (1)
- Stationary Stochastic Processes (1)
- Stationärer Prozess (1)
- Steemit (1)
- Stellar Consensus Protocol (1)
- Stimulus-Onset Asynchrony (1)
- Stock Prices (1)
- Storj (1)
- Streuung (1)
- Studie (1)
- Synchronisation (1)
- System of Systems (1)
- Systemsoftware (1)
- Südafrika (1)
- Tableaumethode (1)
- Tele-Lab (1)
- Tele-Teaching (1)
- Telemedizin (1)
- Temperatur (1)
- Temporallogik (1)
- Testergebnisse (1)
- Testpriorisierung (1)
- The Bitfury Group (1)
- The DAO (1)
- Threshold Cryptography (1)
- Time Series Analysis (1)
- Timed Automata (1)
- Tools (1)
- Top-down (1)
- Tourismus (1)
- Township (1)
- Township Tourismus (1)
- Townshiptourismus (1)
- Trajektorien (1)
- Transaktion (1)
- Transformation (1)
- Transformationsebene (1)
- Transformationssequenzen (1)
- Transitional Justice (1)
- Transitions (1)
- Transparency (1)
- Transparenz (1)
- Transitional Justice (1)
- Travis CI (1)
- Tripel-Graph-Grammatiken (1)
- Triple Graph Grammar (1)
- Triple Graph Grammars (1)
- Triple-Graph-Grammatiken (1)
- Turkish (1)
- Two-Way-Peg (1)
- Türkisch (1)
- Unbegrenzter Zustandsraum (1)
- United Nations (1)
- Unspent Transaction Output (1)
- Unveränderlichkeit (1)
- VUCA-World (1)
- Vector Error Correction Model (1)
- Verbindungsnetzwerke (1)
- Verbkomplexe (1)
- Verbsyntax (1)
- Verbzweit (1)
- Vereinte Nationen (1)
- Vergleich Filmkulturen (1)
- Verhaltensabstraktion (1)
- Verhaltensbewahrung (1)
- Verhaltensverfeinerung (1)
- Verhaltensänderung (1)
- Verhaltensäquivalenz (1)
- Verification (1)
- Verlässlichkeit (1)
- Versöhnung (1)
- Verteilungsalgorithmen (1)
- Verteilungsalgorithmus (1)
- Verträge (1)
- Verwaltungsreform (1)
- Verzögerungs-Verbreitung (1)
- Vesikel (1)
- Vietnamese (1)
- Virtual machines (1)
- Visualisierung (1)
- Visualisierungskonzept-Exploration (1)
- Vorhersagbarkeit (1)
- Warlpiri (1)
- Wartung von Graphdatenbanksichten (1)
- Watson IoT (1)
- Web applications (1)
- Web-Anwendungen (1)
- Wicked Problems (1)
- Wikipedia (1)
- Windhoek (1)
- Wissenszirkulation (1)
- Worterkennung (1)
- Wüstenbildung (1)
- Zeitreihenanalyse (1)
- Zielvorgabe (1)
- Zookos Dreieck (1)
- Zookos triangle (1)
- Zugriffskontrolle (1)
- Zweitsprache (1)
- academic leadership (1)
- acceptability study (1)
- access control (1)
- activity instance state propagation (1)
- adaptive Systeme (1)
- adaptive systems (1)
- administration (1)
- adoption (1)
- agil (1)
- agricultural policy (1)
- altchain (1)
- alternative chain (1)
- alumni work (1)
- analog-to-digital conversion (1)
- apriori (1)
- architecture (1)
- art analysis (1)
- asset management (1)
- atomic swap (1)
- ausführbare Semantiken (1)
- authenticity (1)
- batch processing (1)
- behavior preservation (1)
- behavioral abstraction (1)
- behavioral equivalence (1)
- behavioral refinement (1)
- benutzergenerierte Inhalte (1)
- beschreibende Feldstudie (1)
- bewaffnete Konflikte (1)
- bidirectional payment channels (1)
- big data services (1)
- bilingualism (1)
- bisimulation (1)
- bitcoin (1)
- bitcoins (1)
- blockchain (1)
- blockchain consortium (1)
- blockchain-übergreifend (1)
- blocks (1)
- blumix platform (1)
- bottom-up (1)
- bounded backward model checking (1)
- bureaucratisation (1)
- business process management (1)
- business process model abstraction (1)
- business processes (1)
- capacity building (1)
- case study (1)
- causality (1)
- chain (1)
- change management (1)
- child (1)
- circulation of knowledge (1)
- cloud security (1)
- cloud storage (1)
- cognitive sciences (1)
- collaboration (1)
- collection types (1)
- comparative urban studies (1)
- compositional analysis (1)
- computational ethnomusicology (1)
- computer vision (1)
- computer-aided design (1)
- computergestützte Musikethnologie (1)
- confidentiality (1)
- confirmation period (1)
- conflict management (1)
- conformance checking (1)
- congruence (1)
- consensus algorithm (1)
- consensus protocol (1)
- consensus protocols (1)
- consistency restoration (1)
- contest period (1)
- continuous testing (1)
- contracts (1)
- control resynthesis (1)
- controlled experiment (1)
- convolutional neural networks (1)
- cooperation (1)
- core executive (1)
- corporate governance (1)
- crochet (1)
- cross-chain (1)
- cultural heritage (1)
- cyber-physikalische Systeme (1)
- data center management (1)
- data flow correctness (1)
- data integration (1)
- data set (1)
- data visualization (1)
- deadline propagation (1)
- decentral identities (1)
- decentralized autonomous organization (1)
- decubitus (1)
- deep learning (1)
- delay propagation (1)
- demografische Informationen (1)
- demographic information (1)
- dependability (1)
- dependable computing (1)
- dependencies (1)
- desertification (1)
- design thinking (1)
- dezentrale Identitäten (1)
- dezentrale autonome Organisation (1)
- differential privacy (1)
- difficulty (1)
- difficulty target (1)
- diffusion (1)
- digital education (1)
- digital enlightenment (1)
- digital learning platform (1)
- digital picture archive (1)
- digital sovereignty (1)
- digitale Aufklärung (1)
- digitale Bildung (1)
- digitale Lernplattform (1)
- digitale Souveränität (1)
- digitales Bildarchiv (1)
- direct manipulation (1)
- direkte Manipulation (1)
- discrete-event model (1)
- discrimination networks (1)
- diskretes Ereignismodell (1)
- distributed performance monitoring (1)
- distribution algorithm (1)
- domains (1)
- doppelsemigroup (1)
- doppelter Hashwert (1)
- double hashing (1)
- dynamic typing (1)
- dynamic programming languages (1)
- dynamic systems (1)
- dynamische Programmiersprachen (1)
- dynamische Sprachen (1)
- dynamische Systeme (1)
- early modern manuscript culture (1)
- econometric modelling (1)
- economic disparities (1)
- economy (1)
- eindeutig (1)
- electronic health record (1)
- energy (1)
- ereigniskorrelierte Potentiale (1)
- erfahrbare Medien (1)
- ergodic rates (1)
- event-related potentials (1)
- events (1)
- evolution in MDE (1)
- evolutionary economics (1)
- executable semantics (1)
- exploratives Programmieren (1)
- exploratory programming (1)
- fault tolerance (1)
- federated voting (1)
- feedback loops (1)
- fehlende Daten (1)
- festschrift (1)
- film cultures in comparison (1)
- financial performance (1)
- firm behaviour (1)
- fiscal policy (1)
- fluctuation (1)
- focus particle (1)
- focus sensitive expressions (1)
- font engineering (1)
- font rendering (1)
- foreign direct investment (1)
- formal semantics (1)
- formal verification (1)
- formal verification methods (1)
- formale Verifikation (1)
- free algebra (1)
- frequency (1)
- frühneuzeitliche Manuskriptkultur (1)
- functional dependency (1)
- functional lenses (1)
- functional programming (1)
- funktionale Abhängigkeit (1)
- funktionale Programmierung (1)
- future SOC lab (1)
- gas (1)
- gefaltete neuronale Netze (1)
- generalized discrimination networks (1)
- gesture (1)
- getypte Attributierte Graphen (1)
- global model management (1)
- globales Modellmanagement (1)
- governance (1)
- government (1)
- grammaticalization (1)
- graph databases (1)
- graph queries (1)
- graph repair (1)
- graph transformations (1)
- hashrate (1)
- heuristics (1)
- higher education management (1)
- history of cinema (1)
- history of linguistics (1)
- homomorphic encryption (1)
- human language processing (1)
- human-centered (1)
- hybrid graph-transformation-systems (1)
- hybride Graph-Transformations-Systeme (1)
- ideologies and cinema (1)
- immutable values (1)
- in-memory database (1)
- individuals (1)
- inductive invariant checking (1)
- induktives Invariant Checking (1)
- inkrementelles Graph Pattern Matching (1)
- innovation (1)
- innovation capabilities (1)
- innovation management (1)
- integrated development environments (1)
- integrierte Entwicklungsumgebungen (1)
- intelligente Verträge (1)
- inter-chain (1)
- interactive media (1)
- interaktive Medien (1)
- interassociativity (1)
- interconnect (1)
- interdisziplinäre Teams (1)
- interest group (1)
- interface between grammar and information structure (1)
- interpreters (1)
- interval probabilistic timed systems (1)
- interval probabilistische zeitgesteuerte Systeme (1)
- interval timed automata (1)
- intuitive Benutzeroberflächen (1)
- intuitive interfaces (1)
- invariant checking (1)
- investment climate (1)
- irokesische Sprachen (1)
- juridical recording (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariants (1)
- k-induktive Invarianten (1)
- k-induktive Invariantenprüfung (1)
- k-induktives Invariant-Checking (1)
- keltische Sprachen (1)
- key discovery (1)
- kompositionale Analyse (1)
- kontinuierliches Testen (1)
- kontrolliertes Experiment (1)
- kulturelles Erbe (1)
- künstliche Intelligenz (1)
- language contact (1)
- language specification (1)
- law (1)
- leadership (1)
- lebenslanges Lernen (1)
- lebenszentriert (1)
- ledger assets (1)
- left periphery (1)
- left recursion (1)
- lexical databases (1)
- lexikalische Datenbanken (1)
- life-centered (1)
- lifelong learning (1)
- linguistics (1)
- local jurisdictions (1)
- location-based (1)
- long-time behaviour (1)
- management (1)
- many-core (1)
- maschinelles Sehen (1)
- mehrdimensionale Belangtrennung (1)
- mehrsprachige Ausführungsumgebungen (1)
- membrane (1)
- memory optimization (1)
- menschenzentriert (1)
- menschliche Sprachverarbeitung (1)
- merged mining (1)
- merkle root (1)
- metadata discovery (1)
- metadata quality (1)
- methodology (1)
- metric temporal logic (1)
- metric temporal graph logic (1)
- metrisch temporale Graph Logic (1)
- metrische Temporallogik (1)
- micro loans (1)
- microfinance institutions (1)
- micropayment (1)
- micropayment channels (1)
- middleware (1)
- migration (1)
- miner (1)
- mining (1)
- mining hardware (1)
- ministry of agriculture (1)
- minting (1)
- missing data (1)
- mission drift (1)
- missionary grammar (1)
- modality (1)
- model generation (1)
- model repair (1)
- model transformation (1)
- model-driven engineering (1)
- modelling optical fibres waveguides pof scattering temperature aging ageing (1)
- monitoring (1)
- morphic (1)
- morphology (1)
- multi-core (1)
- multi-dimensional separation of concerns (1)
- multi-instances (1)
- multi-version models (1)
- multidisziplinäre Teams (1)
- multiethnolect (1)
- multilingualism (1)
- musical scales (1)
- musikalische Tonleitern (1)
- nested application conditions (1)
- network protocols (1)
- neue Institutionentheorie (1)
- new institutional theory (1)
- non-manuals (1)
- nonce (1)
- object life cycle synchronization (1)
- object-constraint programming (1)
- object-oriented programming (1)
- objektorientiertes Programmieren (1)
- off-chain transaction (1)
- optische Fasern (1)
- organizational change (1)
- orts-basiert (1)
- packrat parsing (1)
- parallel and sequential independence (1)
- parallel computing (1)
- parallele und Sequentielle Unabhängigkeit (1)
- paralleles Rechnen (1)
- parliament (1)
- parsing expression grammars (1)
- partial application conditions (1)
- partielle Anwendungsbedingungen (1)
- peer-to-peer network (1)
- pegged sidechains (1)
- performance models of virtual machines (1)
- periodic tasks (1)
- periodische Aufgaben (1)
- persistence (1)
- petri net (1)
- planning (1)
- policy (1)
- political opportunism (1)
- politicisation (1)
- polyglot execution environments (1)
- pose estimation (1)
- poverty (1)
- predictability (1)
- probabilistic timed automata (1)
- probabilistische zeitbehaftete Automaten (1)
- process elicitation (1)
- process instance (1)
- process mining (1)
- processor hardware (1)
- professionalization (1)
- profiling (1)
- programming (1)
- programming experience (1)
- prototyping (1)
- psycholinguistics (1)
- qualitative model (1)
- qualitatives Modell (1)
- quantum computing (1)
- quorum slices (1)
- railways (1)
- random point processes (1)
- reactive (1)
- reading (1)
- reaktive Programmierung (1)
- real-time (1)
- real-time systems (1)
- rechnerunterstütztes Konstruieren (1)
- reconciliation (1)
- regional development (1)
- regional integration (1)
- regions (1)
- regression testing (1)
- relational model transformation (1)
- relationale Modelltransformationen (1)
- religion (1)
- resource optimization (1)
- resources (1)
- reverse engineering (1)
- rootstock (1)
- runtime adaptations (1)
- runtime monitoring (1)
- s/t-pattern sequences (1)
- satisfiability solving (1)
- savanna (1)
- scalability of blockchain (1)
- scarce tokens (1)
- schema discovery (1)
- schrumpfende Städte (1)
- scrollytelling (1)
- second language (1)
- selbstbestimmte Identitäten (1)
- self-sovereign identity (1)
- semantic fieldwork (1)
- semantics preservation (1)
- semigroup (1)
- sequence properties (1)
- serialization (1)
- service-oriented systems (1)
- shareholders (1)
- sidechain (1)
- sign languages (1)
- signal transition graph (1)
- simulation (1)
- slum tourism (1)
- small talk (1)
- smart contracts (1)
- social goals (1)
- software architecture (1)
- software product lines (1)
- software tests (1)
- software/hardware co-design (1)
- spatial construction (1)
- specification of timed graph transformations (1)
- speed independent (1)
- squeak (1)
- standardization (1)
- standards (1)
- state and local budgets (1)
- static analysis (1)
- static source-code analysis (1)
- statische Analyse (1)
- statische Quellcodeanalyse (1)
- statistical mechanics (1)
- stimulus-onset asynchrony (1)
- stochastic Petri nets (1)
- stochastic analysis (1)
- stochastische Petri Netze (1)
- stock market (1)
- study (1)
- sustainable development (1)
- symbolic analysis (1)
- symbolic graphs (1)
- symbolische Analyse (1)
- symbolische Graphen (1)
- synchronization (1)
- system of property rights (1)
- system of systems (1)
- systems software (1)
- t.BPM (1)
- tableau method (1)
- tangible media (1)
- tax distribution (1)
- tele-TASK (1)
- telemedicine (1)
- temporal logic (1)
- test case prioritization (1)
- test results (1)
- threshold cryptography (1)
- tiefes Lernen (1)
- timed automata (1)
- tools (1)
- top bureaucrats (1)
- top-down (1)
- tourism (1)
- township (1)
- traditional Georgian music (1)
- traditionelle Georgische Musik (1)
- trajectories (1)
- transaction (1)
- transaction costs (1)
- transformation (1)
- transformation level (1)
- transformation sequences (1)
- transformative justice (1)
- transitional justice (1)
- typed graph transformation systems (1)
- typisierte attributierte Graphen (1)
- typology (1)
- understudied languages (1)
- unique (1)
- urban decline (1)
- urban regeneration (1)
- user-generated content (1)
- vergleichende Stadtforschung (1)
- verifiable credentials (1)
- verschachtelte Anwendungsbedingungen (1)
- verschachtelte Anwendungsbedingungen (1)
- verteilte Leistungsüberwachung (1)
- verzwickte Probleme (1)
- vesicle (1)
- view maintenance (1)
- visual language (1)
- visual word recognition (1)
- visualization (1)
- visualization concept exploration (1)
- visuelle Sprache (1)
- visuelle Worterkennung (1)
- web-applications (1)
- web-based development (1)
- web-based development environment (1)
- web-basierte Entwicklungsumgebung (1)
- webbasierte Entwicklung (1)
- word recognition (1)
- work ethics (1)
- writing ability (1)
- written language acquisition (1)
- zuverlässige Datenverarbeitung (1)
- zuverlässigen Datenverarbeitung (1)
- Öffentlich-Private Partnerschaften (1)
- Übergangsjustiz (1)
- Übergangsprozesse (1)
- Überwachung (1)
- überprüfbare Nachweise (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering gGmbH (82)
- Wirtschaftswissenschaften (44)
- Hasso-Plattner-Institut für Digital Engineering GmbH (33)
- Extern (13)
- Department Linguistik (12)
- Sozialwissenschaften (10)
- Institut für Umweltwissenschaften und Geographie (8)
- Institut für Mathematik (4)
- Sonderforschungsbereich 632 - Informationsstruktur (4)
- Department Psychologie (3)
Proceedings of KogWis 2010 : 10th Biannual Meeting of the German Society for Cognitive Science
(2010)
As the latest biannual meeting of the German Society for Cognitive Science (Gesellschaft für Kognitionswissenschaft, GK), KogWis 2010 at Potsdam University reflects the current trends in a fascinating domain of research concerned with human and artificial cognition and the interaction of mind and brain. The plenary talks provide a venue for questions of numerical capacities and human arithmetic (Brian Butterworth), of the theoretical development of cognitive architectures and intelligent virtual agents (Pat Langley), of categorizations induced by linguistic constructions (Claudia Maienborn), and of a cross-level account of the “Self as a complex system” (Paul Thagard). KogWis 2010 integrates a wealth of experimental research, cognitive modelling, and conceptual analysis in 5 invited symposia, over 150 individual talks, 6 symposia, and more than 40 poster contributions. Some of the invited symposia reflect local and regional strengths of research in the Berlin-Brandenburg area: the two largest research fields of the Cognitive Sciences Area of Excellence at Potsdam University are represented by an invited symposium on “Information Structure”, organized by the Special Research Area 632 (“Sonderforschungsbereich”, SFB) of the same name at Potsdam University and Humboldt-University Berlin, and by a satellite conference of the research group “Mind and Brain Dynamics”. The Berlin School of Mind and Brain at Humboldt-University Berlin takes part with an invited symposium on “Decision Making” from the perspective of cognitive neuroscience and philosophy, and the DFG Cluster of Excellence “Languages of Emotion” of Free University Berlin presents interdisciplinary research results in an invited symposium on “Symbolising Emotions”.
Unique column combinations of a relational database table are sets of columns that contain only unique value combinations. Discovering such combinations is a fundamental research problem with many applications in data management and knowledge discovery. Existing discovery algorithms are either brute force or have a high memory load and can thus be applied only to small datasets or samples. In this paper, the well-known GORDIAN algorithm and Apriori-based algorithms are compared and analyzed for further optimization. We greatly improve the Apriori algorithms through efficient candidate generation and statistics-based pruning methods. A hybrid solution, HCA-GORDIAN, combines the advantages of GORDIAN and our new algorithm HCA, and it significantly outperforms all previous work in many situations.
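The pruning idea behind Apriori-style discovery can be sketched as follows: any superset of a unique column combination is itself unique, so a level-wise search over the column lattice only needs to extend non-unique combinations and can discard candidates that contain an already-found minimal unique. The sketch below is a minimal illustration of that lattice traversal, not the HCA or GORDIAN algorithm from the paper; the function name and table are hypothetical.

```python
def find_minimal_uniques(rows, num_cols):
    """Level-wise (Apriori-style) search for minimal unique column combinations.

    A column combination is 'unique' if no two rows agree on all of its
    columns. Since every superset of a unique combination is also unique,
    the lattice is pruned as soon as a unique combination is found.
    """
    minimal_uniques = []
    candidates = [(i,) for i in range(num_cols)]  # level 1: single columns
    while candidates:
        non_uniques = []
        for combo in candidates:
            projection = [tuple(row[i] for i in combo) for row in rows]
            if len(set(projection)) == len(projection):
                minimal_uniques.append(combo)  # unique: supersets are pruned
            else:
                non_uniques.append(combo)
        # Generate the next level only from non-unique combinations.
        next_level = set()
        for combo in non_uniques:
            for i in range(num_cols):
                if i > combo[-1]:
                    next_level.add(combo + (i,))
        # Drop candidates that contain a known minimal unique as a subset.
        candidates = [c for c in sorted(next_level)
                      if not any(set(u) <= set(c) for u in minimal_uniques)]
    return minimal_uniques

table = [
    ("Alice", "Smith", 1980),
    ("Bob",   "Smith", 1980),
    ("Alice", "Jones", 1975),
]
print(find_minimal_uniques(table, 3))  # → [(0, 1), (0, 2)]
```

Here no single column is unique, but first name together with either last name or birth year is, so the search stops at level 2 without ever testing the full column set.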
Technical report
(2019)
The design and implementation of service-oriented architectures impose a large number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, and SOAP. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member with the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and ETL transformations. Hard-to-understand attribute labels lead to frustration and to time spent developing and understanding ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
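The core intuition behind such a recommender-like approach can be sketched very simply: learn token-level expansions from label pairs that are already mapped in existing workflows, then reuse them on unseen labels. The following is only an illustrative toy, not the paper's technique; all names and the example pairs are invented:

```python
def learn_abbreviations(mapped_pairs):
    """Build a token expansion table from attribute labels that are
    already mapped in existing ETL workflows, e.g. UNP -> UNPAID."""
    table = {}
    for cryptic, readable in mapped_pairs:
        # naive positional alignment of underscore-separated tokens
        for abbr, full in zip(cryptic.split("_"), readable.split("_")):
            table.setdefault(abbr, full)
    return table

def decrypt(label, table):
    """Expand every token with a learned expansion; leave the rest as-is."""
    return "_".join(table.get(tok, tok) for tok in label.split("_"))
```

Given mappings such as `UNP_AMT -> UNPAID_AMOUNT` and `PEN_RATE -> PENALTY_RATE`, the sketch already yields the paper's example decryption `UNP_PEN_INT -> UNPAID_PENALTY_INTEREST`; the real approach must additionally handle ambiguity, ranking, and non-positional alignment.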
Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet its representation is not sufficiently supported by programming languages. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In general, COP could be used to manage any context-specific behavior. However, its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectures, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, these redesigned implementations are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications. However, a complete language specification has not been presented so far. This report presents the entire JCop language including the syntax and semantics of its new language constructs.
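JCop itself extends Java, and its constructs are defined in the report; as a language-neutral sketch of the underlying COP idea only (scoped layer activation selecting behavioral variations at runtime), consider this Python analogue, in which every name is made up:

```python
from contextlib import contextmanager

_active = []  # stack of currently active layer names

@contextmanager
def layer(name):
    """Scoped layer activation: variations attached to this layer are
    in effect only inside the with-block, as in COP's dynamic scoping."""
    _active.append(name)
    try:
        yield
    finally:
        _active.pop()

def cop_method(base, **variations):
    """Return a dispatcher that picks the innermost active layer's
    variation and falls back to the base behavior otherwise."""
    def dispatch(*args, **kwargs):
        for name in reversed(_active):
            if name in variations:
                # a partial variation receives the base behavior to proceed to
                return variations[name](base, *args, **kwargs)
        return base(*args, **kwargs)
    return dispatch

# Base behavior and one context-dependent variation
def plain_greeting(who):
    return f"Hello, {who}"

def mobile_greeting(proceed, who):
    return proceed(who) + " [compact UI]"

greet = cop_method(plain_greeting, mobile=mobile_greeting)
```

Calling `greet("world")` yields the base behavior, while the same call inside `with layer("mobile"):` yields the adapted variant; JCop's contribution is precisely to make such activation declarative via its aspect language rather than scattering it through the control flow.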
Experimental and quantitative research in the field of human language processing and production strongly depends on the quality of the underlying language material: besides its size, representativeness, variety and balance have been discussed as important factors which influence the design, analysis and interpretation of experiments and their results. This volume brings together creators and users of both general purpose and specialized lexical resources which are used in psychology, psycholinguistics, neurolinguistics and cognitive research. It aims to be a forum to report experiences and results, review problems and discuss perspectives of any linguistic data used in the field.
After promising beginnings towards transformation in 1991, the Bulgarian economy fell into deep crisis in the period from 1995 to 1997. Social policy, already overstrained by the demands of transition, was unable to cope effectively with the rapidly spreading state of emergency. The following essay analyses the development of social indicators and instruments of social security in the years 1990 to 1998. In addition to unemployment and unemployment insurance, the issues of pensions and poverty will also be examined.
Privatisation in Central and Eastern Europe can be defined as the transfer of property rights from the State to private owners. The transfers are carried out so as to vest the new private owners with the full property rights of use and disposal over their property, these rights being guaranteed by the legal framework established by the rule of law. In Bulgaria, one can distinguish between three main stages in the process of privatisation. Each was shaped by the conflicting resolutions of frequently changing governments and meant to serve different political goals. The first stage (1990-1993) is characterised by the blockade of legal privatisation, as ‘spontaneous privatisation’ was accorded high priority. As in other former socialist countries, great emphasis was placed on the so-called commercialisation of state-owned enterprises. This did not involve the actual transfer of State property into private hands, but rather the independent transformation of state-owned enterprises into joint-stock companies, as well as the establishment of subsidiary companies. The goals of introducing more efficient structures and applying modern methods of production by transferring property to a more suitable management were not achieved. The second stage (1993-1995) is a cash privatisation, which laid the foundation for an employee/management buy-out, aided by the legal provisions granting concessions in the payment of instalments. The most important factor in the third stage of the process of privatisation in Bulgaria was the adoption of the mass privatisation model as an alternative method of procedure. In 1996, legal regulations for mass privatisation were introduced and a privatisation fund was established. In the meantime, the process has evolved into its fourth stage, during which a strategy of privatisation has been formulated under the supervision of a monetary council, and various agreements with the IMF and the World Bank are being adhered to.
Privatisation is the decisive factor in the structural reforms of East European countries. The problem of converting State property into more effective forms of property management has been exacerbated by the additional demand of carrying out the far-reaching structural changes as swiftly as possible. The expectation that a large part of State property would be privatised within a short time in Bulgaria has not been met for a number of reasons. When the reforms began, the private sector was too weakly developed to become a catalyst for structural changes. Until 1995 there were no laws regulating the stock exchange or securities and bonds - the capital market was practically non-existent. Moreover, the various political parties could not agree upon the various models and objectives of privatisation. The population itself had no capital. The restitution of private ownership, which will not be discussed in further detail, was limited to the smallest businesses, traders and workshops. Furthermore, the Privatisation Agency and the State authorities employed to initiate the privatisation process lacked experience. Another problem hindering privatisation was that the laws passed lacked precision and were constantly subject to change.
Contents: Basic considerations on the development of guiding visions (Leitbilder) - Guiding visions in the context of a city marketing concept - A model for the development of guiding visions - The guiding vision as an element in developing a city marketing concept - Functions of guiding visions - Requirements for guiding visions - Examples of guiding-vision development for the cities of Hennigsdorf and Potsdam
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, who want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause, as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they represent, as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would benefit greatly from interactions for exploring the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
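The core concept described above - one dot per individual, with direct selection mapping back to the underlying records - can be sketched as a minimal bidirectional mapping. This is only an illustration of the idea, not our Lively4 prototypes; all class and field names are invented:

```python
class DotPlot:
    """One dot per individual, with a bidirectional mapping between
    dot positions and the records they represent."""

    def __init__(self, records, x_key, y_key):
        self.records = records
        # forward mapping: record index -> dot position
        self.positions = [(r[x_key], r[y_key]) for r in records]

    def select_rect(self, x0, y0, x1, y1):
        """Dragging a selection rectangle returns the underlying
        records, not just screen coordinates - the backward mapping."""
        return [self.records[i]
                for i, (x, y) in enumerate(self.positions)
                if x0 <= x <= x1 and y0 <= y <= y1]
```

Keeping the index-based backward mapping explicit is what allows filtering, highlighting, and coloring to operate on records rather than on pixels, which is the property the prototypes rely on.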
Modular and incremental global model management with extended generalized discrimination networks
(2023)
Complex projects developed under the model-driven engineering paradigm nowadays often involve several interrelated models, which are automatically processed via a multitude of model operations. Modular and incremental construction and execution of such networks of models and model operations are required to accommodate efficient development with potentially large-scale models. The underlying problem is also called Global Model Management.
In this report, we propose an approach to modular and incremental Global Model Management via an extension to the existing technique of Generalized Discrimination Networks (GDNs). In addition to further generalizing the notion of query operations employed in GDNs, we adapt the previously query-only mechanism to operations with side effects to integrate model transformation and model synchronization. We provide incremental algorithms for the execution of the resulting extended Generalized Discrimination Networks (eGDNs), as well as a prototypical implementation for a number of example eGDN operations.
Based on this prototypical implementation, we experiment with an application scenario from the software development domain to empirically evaluate our approach with respect to scalability and conceptually demonstrate its applicability in a typical scenario. Initial results confirm that the presented approach can indeed be employed to realize efficient Global Model Management in the considered scenario.
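The report's extended GDNs are considerably richer, but the basic incremental-evaluation idea - cache each network node's result and recompute only what an input change invalidates - can be sketched as follows; the class and method names are ours, not the report's:

```python
class Node:
    """A network node caches its result and recomputes it only when
    one of its inputs has changed (a much simplified GDN-style node)."""

    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs
        self.dirty, self.cache = True, None
        self.dependents = []
        for i in inputs:
            i.dependents.append(self)

    def invalidate(self):
        self.dirty = True
        for d in self.dependents:
            d.invalidate()

    def value(self):
        if self.dirty:
            self.cache = self.op(*(i.value() for i in self.inputs))
            self.dirty = False
        return self.cache

class Source(Node):
    """A leaf holding model data; changing it dirties dependents only."""

    def __init__(self, data):
        super().__init__(None)
        self.cache, self.dirty = data, False

    def set(self, data):
        self.cache = data
        for d in self.dependents:
            d.invalidate()
```

A change to a source thus propagates a dirty flag along the network, and only the affected downstream operations re-execute on the next read - the essential contrast to re-running the whole operation network from scratch.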
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, importantly allowing living with temporary inconsistencies. In the case of model-driven software engineering, employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
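The multi-version encoding itself is only named, not defined, in this abstract; as a loose illustration of the general idea - a single structure holding all versions at once, from which any single version can be projected - consider the following sketch, whose representation and names are our own simplification:

```python
class MultiVersionModel:
    """Encode a model's whole version history in one structure:
    each element carries the set of versions containing it."""

    def __init__(self):
        self.elements = {}  # element id -> set of version numbers

    def add(self, elem, versions):
        self.elements[elem] = set(versions)

    def project(self, version):
        """Recover an ordinary single-version model."""
        return {e for e, vs in self.elements.items() if version in vs}
```

A transformation operating directly on such a structure can handle all versions jointly instead of transforming each projected version separately, which is the source of the compactness and performance benefit the report evaluates.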
In recent years, computer vision algorithms based on machine learning have seen rapid development. In the past, research mostly focused on solving computer vision problems such as image classification or object detection on images displaying natural scenes. Nowadays other fields, such as the field of cultural heritage, where an abundance of data is available, also come into the focus of research. In line with current research endeavours, we collaborated with the Getty Research Institute, which provided us with a challenging dataset containing images of paintings and drawings. In this technical report, we present the results of the seminar "Deep Learning for Computer Vision". In this seminar, students of the Hasso Plattner Institute evaluated state-of-the-art approaches for image classification, object detection and image recognition on the dataset of the Getty Research Institute. The main challenge when applying modern computer vision methods to the available data is the availability of annotated training data, as the dataset provided by the Getty Research Institute does not contain a sufficient number of annotated samples for training deep neural networks. However, throughout the report we show that it is possible to achieve satisfying to very good results when using further publicly available datasets, such as the WikiArt dataset, for training machine learning models.
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes. Only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Further, we define quality measures for conditions, inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
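The paper defines its quality measures formally; the following hedged sketch only illustrates how precision- and recall-like measures for one candidate condition might be computed over a table, under simplifications and names that are ours:

```python
def condition_quality(rows, cond, dep_attr, ref_values):
    """Precision-like and recall-like quality of a condition for a
    (simplified) conditional inclusion dependency: precision is the
    share of condition-selected tuples whose dep_attr value is included
    in the reference values; recall is the share of included tuples
    that the condition selects."""
    selected = [r for r in rows if cond(r)]
    included = [r for r in rows if r[dep_attr] in ref_values]
    good = [r for r in selected if r[dep_attr] in ref_values]
    precision = len(good) / len(selected) if selected else 0.0
    recall = len(good) / len(included) if included else 0.0
    return precision, recall
```

A covering condition would be one with sufficiently high precision, a completeness condition one with sufficiently high recall; the paper's algorithms search over condition attributes and values to meet such thresholds efficiently rather than testing one fixed condition as here.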
Data obtained from foreign data sources often come with only superficial structural information, such as relation names and attribute names. Other types of metadata that are important for effective integration and meaningful querying of such data sets are missing. In particular, relationships among attributes, such as foreign keys, are crucial metadata for understanding the structure of an unknown database. The discovery of such relationships is difficult, because in principle for each pair of attributes in the database each pair of data values must be compared. A precondition for a foreign key is an inclusion dependency (IND) between the key and the foreign key attributes. We present Spider, an algorithm that efficiently finds all INDs in a given relational database. It leverages the sorting facilities of the DBMS but performs the actual comparisons outside of the database to save computation. Spider analyzes very large databases up to an order of magnitude faster than previous approaches. We also evaluate in detail the effectiveness of several heuristics to reduce the number of necessary comparisons. Furthermore, we generalize Spider to find composite INDs covering multiple attributes, and partial INDs, which are true INDs for all but a certain number of values. This last type is particularly relevant when integrating dirty data, as is often the case in the life sciences domain - our driving motivation.
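Spider itself merges sorted value streams to avoid materializing and pairwise-comparing full value sets; purely as the brute-force baseline it improves upon, unary IND discovery can be sketched as follows (names are illustrative; the sketch assumes non-empty tables whose rows share one schema):

```python
def find_inds(tables):
    """Naive unary IND discovery: an IND A <= B holds if every distinct
    value of column A also appears in column B. This materializes all
    value sets and compares every column pair, which Spider avoids."""
    cols = {(t, c): {row[c] for row in rows}
            for t, rows in tables.items()
            for c in rows[0]}  # assumes rows[0] carries the schema
    return [(a, b) for a in cols for b in cols
            if a != b and cols[a] <= cols[b]]
```

Every discovered IND is then a foreign-key candidate, e.g. an orders table's customer column being included in the customers table's key column.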
Cyber-physical systems achieve sophisticated system behavior by exploiting the tight interconnection of the physical coupling present in classical engineering systems and information-technology-based coupling. A particularly challenging case are systems where these cyber-physical systems are formed ad hoc according to the specific local topology, the available networking capabilities, and the goals and constraints of the subsystems captured by the information processing part. In this paper we present a formalism that permits modeling the sketched class of cyber-physical systems. The ad hoc formation of tightly coupled subsystems of arbitrary size is specified using a UML-based graph transformation system approach. Differential equations are employed to define the resulting tightly coupled behavior. Together, both form hybrid graph transformation systems where the graph transformation rules define the discrete steps in which the topology or modes may change, while the differential equations capture the continuous behavior in between such discrete changes. In addition, we demonstrate that automated analysis techniques for inductive invariants known from timed graph transformation systems can be extended to also cover the hybrid case for an expressive class of hybrid models in which the formed tightly coupled subsystems are restricted to smaller local networks.
Service-oriented modeling employs collaborations to capture the coordination of multiple roles in the form of service contracts. In the case of dynamic collaborations, the roles may join and leave the collaboration at runtime, and therefore complex structural dynamics can result, which makes it very hard to ensure their correct and safe operation. We present in this paper our approach for modeling and verifying such dynamic collaborations. Modeling is supported using a well-defined subset of UML class diagrams, behavioral rules for the structural dynamics, and UML state machines for the role behavior. To also be able to verify the resulting service-oriented systems, we extended our former results for the automated verification of systems with structural dynamics [7, 8] and developed a compositional reasoning scheme, which enables the reuse of verification results. We outline our approach using the example of autonomous vehicles that use such dynamic collaborations via ad-hoc networking to coordinate and optimize their joint behavior.
Creating fonts is a complex task that requires expert knowledge in a variety of domains. Often, this knowledge is not held by a single person, but spread across a number of domain experts. A central concept needed for designing fonts is the glyph, an elemental symbol representing a readable character. Required domains include designing glyph shapes, engineering rules to combine glyphs for complex scripts and checking legibility. This process is most often iterative and requires communication in all directions. This report outlines a platform that aims to enhance the means of communication, describes our prototyping process, discusses complex font rendering and editing in a live environment and an approach to generate code based on a user’s live-edits.
Industrial policy and social strategy at the corporate level in Poland : questionnaire results
(1999)
This paper presents results from a survey of industrial policy of the state and the social security system at the corporate level in Poland. Previous reports in this area indicated preferable directions of research to be taken in order to prove various hypotheses of the purposefulness of an integral approach to industrial policy and social security in the analysis of economic processes in transition (see Weikard 1997). This paper summarises the results and draws conclusions from a questionnaire study on subsidies, social benefits and economic policy in Polish firms during the process of transformation. Our results and conclusions show the scope and character of the processes in the area of industrial and social policy in the period 1994 to 1997. The paper is divided into five parts. The first part concerns the aims and methodology of the questionnaire; it also gives a brief description of the sample. The second part shows how enterprises dealt with the issues of employment and wages in this period. The third part characterises industrial policy at the corporate level, while the next presents results from the survey of various social schemes pursued. The final part aims at an integral approach in the analysis of various processes taking place in Polish enterprises. The survey was conducted in the period April to June 1998. Its aim was to observe certain phenomena occurring at the corporate level. The questionnaire was distributed among the managers, directors and presidents of large-size enterprises, which had been selected to satisfy the following three criteria. Firstly, the number of employees had to be considerable (over 300 workers). This criterion was applied following the consideration that certain social phenomena are more conspicuous in enterprises with large manpower. Secondly, only operating enterprises were selected; the enterprises which had closed down were disregarded.
Finally, for the purposes of the survey the units differed as regards their legal situation and form of ownership. Out of over 1,800 enterprises, 370 units were drawn, to which we sent the questionnaire. Unfortunately, as many as 51.9% of the respondents refused to co-operate, which to a certain extent puts the representativeness of the sample in question. In the end, 178 questionnaires were completed and returned for analysis. However, not all of these questionnaires included full answers to all of the 75 questions; therefore, while discussing the results of the survey we have indicated the number of relevant answers we received.
Graph queries have lately gained increased interest due to application areas such as social networks, biological networks, or model queries. For the relational database case, the relational algebra and generalized discrimination networks have been studied to find appropriate decompositions into subqueries and orderings of these subqueries for query evaluation or incremental updates of query results. For graph database queries, however, there is no formal underpinning yet that allows us to find such suitable operationalizations. Consequently, we suggest a simple operational concept for the decomposition of arbitrarily complex queries into simpler subqueries and the ordering of these subqueries in the form of generalized discrimination networks for graph queries, inspired by the relational case. The approach employs graph transformation rules for the nodes of the network, and thus we can employ the underlying theory. We further show that the proposed generalized discrimination networks have the same expressive power as nested graph conditions.
Graph databases provide a natural way of storing and querying graph data. In contrast to relational databases, queries over graph databases can refer directly to the graph structure of such graph data. For example, graph pattern matching can be employed to formulate queries over graph data.
However, as with relational databases, running complex queries can be very time-consuming and ruin the interactivity with the database. One possible approach to dealing with this performance issue is to employ database views that consist of pre-computed answers to common and frequently stated queries. But to ensure that database views yield consistent query results with respect to the data from which they are derived, these views must be updated before queries make use of them. Such maintenance of database views must be performed efficiently; otherwise, the effort to create and maintain views may not pay off compared to processing the queries directly on the underlying data.
At the time of writing, graph databases do not support database views and are limited to graph indexes that index nodes and edges of the graph data for fast query evaluation, but cannot maintain pre-computed answers to complex queries over graph data. Moreover, the maintenance of database views in graph databases becomes even more challenging when negation and recursion have to be supported, as in deductive relational databases.
In this technical report, we present an approach for efficient and scalable incremental graph view maintenance for deductive graph databases. The main concept of our approach is a generalized discrimination network that makes it possible to model nested graph conditions, including negative application conditions and recursion, which specify the content of graph views derived from the graph data stored by graph databases. The discrimination network makes it possible to automatically derive generic maintenance rules, using graph transformations, for maintaining graph views when the graph data from which the graph views are derived changes. We evaluate our approach in terms of a case study using multiple data sets derived from open source projects.
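As a minimal illustration of incremental view maintenance in general (not the report's GDN-based technique, which also handles negation and recursion), a materialized view of the two-edge path pattern a->b->c can be kept up to date on edge insertion by touching only matches involving the new edge; all names below are illustrative:

```python
class PathView:
    """Materialized answers to the graph pattern a->b->c, maintained
    incrementally on edge insertion instead of being recomputed."""

    def __init__(self):
        self.out = {}      # node -> set of successor nodes
        self.inn = {}      # node -> set of predecessor nodes
        self.matches = set()

    def add_edge(self, a, b):
        self.out.setdefault(a, set()).add(b)
        self.inn.setdefault(b, set()).add(a)
        # only matches that use the new edge (a, b) can appear
        for c in self.out.get(b, ()):
            self.matches.add((a, b, c))
        for p in self.inn.get(a, ()):
            self.matches.add((p, a, b))
```

The maintenance cost is proportional to the neighborhood of the changed edge rather than to the whole graph, which is the property that makes view maintenance pay off compared to re-evaluating the query.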
This paper presents a critical overview of the state of research on multiple wh-constructions in Slavic. Its aim is to demonstrate how unclear the data situation is and how contradictory the theories based on such "unclear" data are. Contents: Historical background (Wachowicz 1974) - Some older approaches - The high point: Rudin's (1988) influential work - Problems: the reliability of the data; the relevance of the data - "Hard" facts: strict superiority effects in Bulgarian; obligatory wh-fronting in Slavic - More recent approaches: "qualitative" approaches; "quantitative" approaches; alternative approaches
TripleA is a workshop series founded by linguists from the University of Tübingen and the University of Potsdam. Its aim is to provide a forum for semanticists doing fieldwork on understudied languages, and its focus is on languages from Africa, Asia, Australia and Oceania. The second TripleA workshop was held at the University of Potsdam, June 3-5, 2015.
Public debate about energy relations between the EU and Russia is distorted. These distortions present considerable obstacles to the development of true partnership. At the core of the conflict is a struggle for resource rents between energy producing, energy consuming and transit countries. Supposedly secondary aspects, however, are also of great importance. They comprise geopolitics, market access, economic development and state sovereignty. The European Union, having engaged in energy market liberalisation, faces a widening gap between declining domestic resources and continuously growing energy demand. Diverse interests inside the EU prevent the definition of a coherent and respected energy policy. Russia, for its part, is no longer willing to subsidise its neighbouring economies by cheap energy exports. The Russian government engages in assertive policies pursuing Russian interests. In this respect, it opts for a different globalisation approach, refusing the role of mere energy exporter. In view of the intensifying struggle for global resources, Russia, with its large energy potential, appears to be a very favourable option for European energy supplies, if not the best one. However, several outcomes of the strategic game between the two partners can be imagined. Engaging in non-cooperative strategies will in the end leave all stakeholders worse off. The European Union should therefore concentrate on securing its partnership with Russia instead of damaging it. Stable cooperation would require accepting that the partner may pursue his own goals, which might differ from one's own interests. The question is: how can a sustainable compromise be found? This thesis finds that a mix of continued dialogue, a tit-for-tat approach bolstered by an international institutional framework, and increased integration efforts appears to be a preferable solution.
Developing rich Web applications can be a complex job - especially when it comes to mobile device support. Web-based environments such as Lively Webwerkstatt can help developers implement such applications by making the development process more direct and interactive. Furthermore, software development is a collaborative process, which requires the development environment to offer collaboration facilities. This report describes extensions of the web-based development environment Lively Webwerkstatt that allow it to be used in a mobile environment. The extensions are collaboration mechanisms and user interface adaptations, but also event processing and performance measuring on mobile devices.
New survey data for a panel of Polish firms is used to estimate employment and wage adjustments under various forms of ownership (insider vs. outsider) and asymmetric response to exogenous shocks. In contrast to earlier studies, dynamic panel data estimators (GMM) allow for endogeneity of observed variables and partial adjustment to shocks. Results differ from other findings in the transition literature: wages have little effect on dynamic labor demand and the firm-size wage effect is confirmed. Firms that expand employment have to pay significantly larger wage increases and rising sales add little to employment, suggesting labor hoarding. Declining sales, however, significantly reduce employment and privatization (or anticipation thereof) has the expected benefits.
Privatisation and ownership: the impact on firms in transition. Survey evidence from Bulgaria
(1999)
Previous papers in this Special Series have described in detail the theoretical background and development patterns, along with some empirical results, for the privatisation processes in Bulgaria and Poland. A range of issues have been raised which demand closer empirical investigation. For this purpose, the research group has developed questionnaire studies for Bulgaria and Poland. In Bulgaria, the National Statistical Institute (NSI) carried out the case studies between February and April 1998. The problems of the questionnaire set-up were identified in a pre-test study, but unlike the Polish case, they led to only minor differentiation. Since financial limitations prevented a larger sample, 61 mid-sized and large Bulgarian enterprises were selected. Failure to respond was not a serious problem, unlike with the Polish questionnaire; this is because the NSI has maintained good links to the enterprise sector and management were prepared to give detailed answers, even on questions of their firms' financial status. However, as the Polish experience suggests, it has become obvious that the privatisation process is also associated with management's increasing reluctance to answer comparatively 'intimate' questions. Thus, future questionnaire studies must take a much higher rate of refusals into consideration. The pre-selection procedure in Bulgaria was determined by the project target, which sought to analyse the effects of the privatisation process on firms' behaviour during the transition process, and hence only firms which had already existed before the changes were included. For small and medium-size enterprises (SME's), most of which were founded after the changes, partly due to the legal processes of spontaneous privatisation, some empirical, as well as analytical, studies were carried out. Thus, the research group limited the scope of investigation to enterprises with more than 250 employees.
The underlying hypothesis is that employment problems are concentrated in larger firms, in particular amongst those still (partly) state owned. Because of the former ownership structures and relatively slower capacity for management change, the assumption is that state-owned enterprises (SOE's) which have only been recently privatised might still have traditional links to government even after privatisation. On the one hand, the SME's are obviously more prone to, and linked with, market processes. As a result, they don't have the financial potential and incentives to follow job-hoarding strategies. On the other hand, there are almost no SME's which are still state-owned. Hence, the prevailing opinion in the literature is that 'larger industrial firms were apt to be least efficient, most often producing inadequate and non-competitive products, with a high degree of under-utilisation of labour and most inflexible to change' (Jones & Nikolov 1997, p. 252). Thus, as mentioned above, though there may be some limitations with regard to firm representation, our sample characterises a number of enterprises that offer fertile ground for the analysis of firms' adjustment to the newly established market realities in a transition economy. Our study is unique in the sense that existing empirical studies on privatisation and enterprise restructuring generally cover the time period just before and after the initial stages of transition, e.g. 1988/89 to 1992. In those studies, samples of firms in the Czech Republic, Poland, Hungary and Bulgaria recognise that behavioural adaptations at the enterprise level had taken place just before the actual privatisation process materialised. Therefore, almost all of the firms under examination were still state-owned. The firms were usually divided according to their performance as 'good', 'average' and 'bad' enterprises.
The main findings of those early studies have shown that the macroeconomic adaptations (i.e., macro-level changes which induced micro-level adjustment by the firms), as well as emerging market structures, have created enormous pressures which in turn have influenced firms' economic behaviour, reallocation of resources and consequent restructuring. This evidence supports the hypothesis that the SOE's started restructuring and adjusting their behaviour and performance, in response to the harsh realities of more open markets, before privatisation actually started. In this paper, we seek to present some results on these developments in Bulgaria, at the later stages of transition and privatisation (1992-1996). The aim of our questionnaire study is therefore to show the effects of the privatisation process and ownership on the behavioural adaptations of firms which had once been state-owned or continue to be owned by the state. The period under investigation is 1992 to 1996. For 1990 and 1991, the number of missing values is relatively high and, where relevant, we partly exclude these observations from our analysis. The paper contains seven sections. Section II outlines the macroeconomic environment in which our sample firms operate, provides some specifics of the Bulgarian privatisation process, and discusses data quality. Section III concentrates on the analysis of privatisation, the specific forms of ownership that resulted from it, and firm size. In Section IV, we describe the trends of the main economic variables within firms (such as employment, wages, labour productivity, etc), and a number of proxies of firm viability, while Section V presents some regression results to corroborate the discussion of the previous section. Section VI gives an overview of survey results of the impact of enterprise determined wage policy, trade union activity and membership, government control, and social benefits on enterprise restructuring. Section VII is a summary of our findings.
In socialist economies firms have provided various social benefits, like child care, health care, food subsidies, housing etc. Using panel data from Bulgarian and Polish firms, this paper attempts to explain firm-specific provision of social benefits in the process of transition. We investigate empirically with the help of qualitative response models, how ownership type and structure, firm size, profitability, change in management, foreign direct investment, wage and employment policies, union involvement and employee power have impacted the state of non-wage benefits provision.
This article examines the successive governments of independent Estonia since 1992 with regard to their stability. Confronted with the immense problems of democratic transition, Estonia's multi-party governments have changed comparatively often. Following the elections of March 2003, the ninth government since 1992 was formed. A detailed examination of government stability, using the example of Estonia, is accordingly warranted, given that the country is seen as the most successful Central Eastern European transition country in spite of its frequent changes of government. Furthermore, this article asks whether or not internal government stability can exist in a situation where the government changes frequently. What does stability of government mean, and what are the varying, multi-faceted depths of the term? Before it can be analysed, the term has to be clarified and defined. It is presumed that government stability is composed of multiple variables influencing one another. Data about the average tenure of a government are not very conclusive. Rather, the deeper political causes for governmental change need to be examined. Therefore, this article first discusses the conceptual and theoretical basics of governmental stability. Secondly, it discusses the Estonian situation in detail up to the elections of 2003, including a short review of the ninth government since independence. In the conclusion, the author explains whether or not the governments of Estonia are stable. In the appendix, the reader finds all election results as well as a list of all previous ministers of Estonian governments (all data as of July 2002).
In reading, word frequency is commonly regarded as the major bottom-up determinant for the speed of lexical access. Moreover, language processing depends on top-down information, such as the predictability of a word from a previous context. Yet the exact role of top-down predictions in visual word recognition is poorly understood: they may rapidly affect lexical processes, or alternatively, influence only late post-lexical stages. To add evidence about the nature of top-down processes and their relation to bottom-up information in the timeline of word recognition, we examined influences of frequency and predictability on event-related potentials (ERPs) in several sentence reading studies. The results were related to eye movements from natural reading as well as to models of word recognition. As a first and major finding, interactions of frequency and predictability on ERP amplitudes consistently revealed top-down influences on lexical levels of word processing (Chapters 2 and 4). Second, frequency and predictability mediated relations between N400 amplitudes and fixation durations, pointing to their sensitivity to a common stage of word recognition; further, larger N400 amplitudes entailed longer fixation durations on the next word, a result providing evidence for ongoing processing beyond a fixation (Chapter 3). Third, influences of presentation rate on ERP frequency and predictability effects demonstrated that the time available for word processing critically co-determines the course of bottom-up and top-down influences (Chapter 4). Fourth, at a near-normal reading speed, an early predictability effect suggested the rapid comparison of top-down hypotheses with the actual visual input (Chapter 5). The present results are compatible with interactive models of word recognition assuming that early lexical processes depend on the concerted impact of bottom-up and top-down information.
We offered a framework that reconciles the findings on a timeline of word recognition taking into account influences of frequency, predictability, and presentation rate (Chapter 4).
This paper analyses the macroeconomic developments which have taken place in the Bulgarian economy in the period 1993-1997. The paper also looks at the institutional arrangements and the process of economic policy-making in the country. In this context the problems the Bulgarian economy has experienced in the transition process towards a market-oriented economy are also studied. The paper proceeds as follows: Section 2 looks at the institutional arrangements and the process of economic policy-making through 1995. Section 3 studies the deep economic crisis in 1996 and points out what went wrong in that period. Section 4 continues studying the economic crisis of the Bulgarian economy as well as the problems in the transition process during the first half of 1997. Section 5 looks at the economic developments during the second half of 1997 and points to the prospects for growth in 1998. Section 6 deals with the Bulgarian financial institutions and the existing institutional arrangements. Finally, Section 7 concludes the paper.
Duplicate detection is the task of identifying all groups of records within a data set such that each group represents the same real-world entity. This task is difficult, because (i) representations might differ slightly, so some similarity measure must be defined to compare pairs of records and (ii) data sets might have a high volume, making a pair-wise comparison of all records infeasible. To tackle the second problem, many algorithms have been suggested that partition the data set and compare all record pairs only within each partition. One well-known such approach is the Sorted Neighborhood Method (SNM), which sorts the data according to some key and then advances a window over the data, comparing only records that appear within the same window. We propose several variations of SNM that have in common a varying window size and advancement. The general intuition of such adaptive windows is that there might be regions of high similarity suggesting a larger window size and regions of lower similarity suggesting a smaller window size. We propose and thoroughly evaluate several adaptation strategies, some of which are provably better than the original SNM in terms of efficiency (same results with fewer comparisons).
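The windowing idea behind SNM can be sketched in a few lines of Python. The records, sorting key, and similarity predicate below are illustrative assumptions, and the adaptive-window variants proposed in the report (which grow or shrink the window based on observed similarity) are not shown; this is the fixed-window baseline they improve on.

```python
# Sketch of the basic Sorted Neighborhood Method (SNM) for duplicate detection.
# Records, key, and similarity predicate are toy stand-ins for illustration.

def sorted_neighborhood(records, key, similar, window=3):
    """Sort records by `key`, then compare each record only with the
    records that fall inside the same sliding window of size `window`."""
    ordered = sorted(records, key=key)
    duplicates = []
    for i, rec in enumerate(ordered):
        # compare with the next (window - 1) records only
        for other in ordered[i + 1 : i + window]:
            if similar(rec, other):
                duplicates.append((rec, other))
    return duplicates

people = ["Jon Smith", "John Smith", "Mary Jones", "Marie Jones", "Zed Zane"]
pairs = sorted_neighborhood(
    people,
    key=lambda s: s.split()[-1] + s[0],                   # toy sorting key
    similar=lambda a, b: a.split()[-1] == b.split()[-1],  # toy similarity
)
```

With window size w, this performs roughly n·(w−1) comparisons instead of the n·(n−1)/2 of an exhaustive pass, which is the efficiency the adaptive variants aim to preserve while also catching duplicates that a fixed window would miss.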
This book deals with the inner life of the capitalist firm. There we find numerous conflicts, the most important of which concerns the individual employment relationship which is understood as a principal-agent problem between the manager, the principal, who issues orders that are to be followed by the employee, the agent. Whereas economic theory traditionally analyses this relationship from a (normative) perspective of the firm in order to support the manager in finding ways to influence the behavior of the employees, such that the latter – ideally – act on behalf of their superior, this book takes a neutral stance. It focusses on explaining individual behavioral patterns and the resulting interactions between the actors in the firm by taking sociological, institutional, and above all, psychological research into consideration. In doing so, insights are gained which challenge many assertions economists take for granted.
While offering significant expressive power, graph transformation systems often come with rather limited capabilities for automated analysis, particularly if systems with many possible initial graphs and large or infinite state spaces are concerned. One approach that tries to overcome these limitations is inductive invariant checking. However, the verification of inductive invariants often requires extensive knowledge about the system in question and faces the approach-inherent challenges of locality and lack of context.
To address that, this report discusses k-inductive invariant checking for graph transformation systems as a generalization of inductive invariants. The additional context acquired by taking multiple (k) steps into account is the key difference to inductive invariant checking and is often enough to establish the desired invariants without requiring the iterative development of additional properties.
To analyze possibly infinite systems in a finite fashion, we introduce a symbolic encoding for transformation traces using a restricted form of nested application conditions. As its central contribution, this report then presents a formal approach and algorithm to verify graph constraints as k-inductive invariants. We prove the approach's correctness and demonstrate its applicability by means of several examples evaluated with a prototypical implementation of our algorithm.
Graph transformation systems are a powerful formal model to capture model transformations or systems with infinite state space, among others. However, this expressive power comes at the cost of rather limited automated analysis capabilities. The general case of unboundedly many initial graphs or infinite state spaces is only supported by approaches with rather limited scalability or expressiveness. In this report we improve an existing approach for the automated verification of inductive invariants for graph transformation systems. By employing partial negative application conditions to represent and check many alternative conditions in a more compact manner, we can check examples with rules and constraints of substantially higher complexity. We also substantially extend the expressive power by supporting more complex negative application conditions and provide higher accuracy by employing advanced implication checks. The improvements are evaluated and compared with another applicable tool by considering three case studies.
The correctness of model transformations is a crucial element for the model-driven engineering of high-quality software. In particular, behavior preservation is the most important correctness property, as it avoids the introduction of semantic errors during the model-driven engineering process. Behavior preservation verification techniques either show that specific properties are preserved or, more generally and at greater complexity, show some kind of behavioral equivalence or refinement between the source and target model of the transformation. Both kinds of behavior preservation verification goals have been presented with automatic tool support for the instance level, i.e. for a given source and target model specified by the model transformation. However, up until now no automatic verification approach has been available at the transformation level, i.e. for all source and target models specified by the model transformation.
In this report, we extend our results presented in [27] and outline a new sophisticated approach for the automatic verification of behavior preservation captured by bisimulation resp. simulation for model transformations specified by triple graph grammars and semantic definitions given by graph transformation rules. In particular, we show that the behavior preservation problem can be reduced to invariant checking for graph transformation and that the resulting checking problem can be addressed by our own invariant checker even for a complex example where a sequence chart is transformed into communicating automata. We further discuss today's limitations of invariant checking for graph transformation and motivate further lines of future work in this direction.
Learning from failure
(2022)
Regression testing is a widespread practice in today's software industry to ensure software product quality. Developers derive a set of test cases, and execute them frequently to ensure that their change did not adversely affect existing functionality. As the software product and its test suite grow, the time to feedback during regression test sessions increases, and impedes programmer productivity: developers wait longer for tests to complete, and delays in fault detection render fault removal increasingly difficult.
Test case prioritization addresses the problem of long feedback loops by reordering test cases, such that test cases of high failure probability run first and test case failures become actionable early in the testing process. We ask: given test execution schedules reconstructed from publicly available data, to what extent can their fault detection efficiency be improved, and which technique yields the most efficient test schedules with respect to APFD?
To this end, we recover 6200 regression test sessions from the build log files of Travis CI, a popular continuous integration service, and gather 62000 accompanying changelists. We evaluate the efficiency of current test schedules, and examine the prioritization results of state-of-the-art lightweight, history-based heuristics. We propose and evaluate a novel set of prioritization algorithms, which connect software changes and test failures in a matrix-like data structure.
Our studies indicate that the optimization potential is substantial, because the existing test plans score only 30% APFD. The predictive power of past test failures proves to be outstanding: simple heuristics, such as repeating tests with failures in recent sessions, result in efficiency scores of 95% APFD. The best-performing matrix-based heuristic achieves a similar score of 92.5% APFD. In contrast to prior approaches, we argue that matrix-based techniques are useful beyond the scope of effective prioritization, and enable a number of use cases involving software maintenance.
We validate our findings from continuous integration processes by extending a continuous testing tool within development environments with means of test prioritization, and pose further research questions. We think that our findings are suited to propel adoption of (continuous) testing practices, and that programmers' toolboxes should contain test prioritization as an essential productivity tool.
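The APFD scores quoted above reward schedules that reveal faults early. A minimal sketch of the metric's computation, using the standard formula APFD = 1 − (ΣTFᵢ)/(n·m) + 1/(2n), where TFᵢ is the position of the first test revealing fault i, n the number of tests, and m the number of faults; the tests and faults below are invented toy data, not the Travis CI study's data.

```python
def apfd(schedule, faults_detected_by):
    """Average Percentage of Faults Detected for an ordered test schedule.

    `schedule` is an ordered list of test names; `faults_detected_by` maps
    each test to the set of fault ids it reveals. Assumes every fault is
    eventually revealed by some test in the schedule."""
    n = len(schedule)
    m = len(set().union(*faults_detected_by.values()))
    # 1-based position of the first test that reveals each fault (TF_i)
    first_reveal = {}
    for pos, test in enumerate(schedule, start=1):
        for fault in faults_detected_by.get(test, ()):
            first_reveal.setdefault(fault, pos)
    return 1 - sum(first_reveal.values()) / (n * m) + 1 / (2 * n)

detects = {"t1": {"f1"}, "t2": set(), "t3": {"f2"}}
late = apfd(["t2", "t1", "t3"], detects)   # failing tests run late
early = apfd(["t1", "t3", "t2"], detects)  # failing tests run first
```

Reordering the same three tests so that the fault-revealing ones run first raises APFD from 1/3 to 2/3, which is the kind of gain the history-based heuristics above achieve at scale.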
Language developers who design domain-specific languages or new language features need a way to make fast changes to language definitions. Those fast changes require immediate feedback. Also, it should be possible to parse the developed languages quickly to handle extensive sets of code.
Parsing expression grammars provide an easy-to-understand method for language definitions. Packrat parsing is a method to parse grammars of this kind, but it is unable to handle left-recursion properly. Existing solutions either rewrite some left-recursive rules while forbidding others, or use complex extensions to packrat parsing that are hard to understand and cost-intensive. We investigated methods to make parsing as fast as possible, using easy-to-follow algorithms while not losing the ability to make fast changes to grammars.
We focused our efforts on two approaches.
One is to start from an existing technique for limited left-recursion rewriting and enhance it to work for general left-recursive grammars. The second approach is to design a grammar compilation process to find left-recursion before parsing, and in this way, reduce computational costs wherever possible and generate ready to use parser classes.
Rewriting parsing expression grammars is a task that, if done in a general way, uncovers so many cases that any rewriting algorithm surpasses the complexity of other left-recursion parsing algorithms. Lookahead operators introduce this complexity. However, most languages have only small portions that are left-recursive and, in virtually all cases, have no indirect or hidden left-recursion. This means that distinguishing the left-recursive parts of grammars from the components that are non-left-recursive holds great improvement potential for existing parsers.
In this report, we list all the required steps for grammar rewriting to handle left-recursion, including grammar analysis, grammar rewriting itself, and syntax tree restructuring. Also, we describe the implementation of a parsing expression grammar framework in Squeak/Smalltalk and the possible interactions with the already existing parser Ohm/S. We quantitatively benchmarked this framework, directing our focus on parsing time and the ability to use it in a live programming context. Compared with Ohm, we achieved massive parsing time improvements while preserving the ability to use our parser as a live programming tool.
The work is essential because, for one, we outlined the difficulties and complexity that come with grammar rewriting. Also, we removed the existing limitations that came with left-recursion by eliminating them before parsing.
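The core of packrat parsing is memoizing parse results per (rule, position). The following toy Python sketch (grammar and code are illustrative, not the Squeak/Smalltalk framework described above) parses the iterative grammar `Sum <- Num ('+' Num)*`. Note that a directly left-recursive formulation such as `Sum <- Sum '+' Num` would make the same memoized descent recurse forever on `Sum` at position 0, which is exactly the limitation that rewriting left-recursion away before parsing removes.

```python
from functools import lru_cache

# Toy packrat-style parser for:
#   Sum <- Num ('+' Num)*
#   Num <- [0-9]+
# Memoizing each (rule, position) result is what gives packrat parsing
# its linear-time guarantee.

def parse_sum(text):
    @lru_cache(maxsize=None)  # packrat memo table, keyed by position
    def num(pos):
        end = pos
        while end < len(text) and text[end].isdigit():
            end += 1
        # success returns (value, next position); failure returns None
        return (int(text[pos:end]), end) if end > pos else None

    result = num(0)
    if result is None:
        return None
    total, pos = result
    while pos < len(text) and text[pos] == "+":
        nxt = num(pos + 1)
        if nxt is None:
            return None
        total, pos = total + nxt[0], nxt[1]
    return total if pos == len(text) else None
```

Here `parse_sum("1+2+30")` yields 33, while malformed input such as `"1+"` fails and yields None.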
Business processes are instrumental to manage work in organisations. To study the interdependencies between business processes, Business Process Architectures have been introduced. These express trigger and message flow relations between business processes. When we investigate real-world Business Process Architectures, we find complex interdependencies, involving multiple process instances. These aspects have not been studied in detail so far, especially concerning correctness properties. In this paper, we propose a modular transformation of BPAs to open nets for the analysis of behavior involving multiple business processes with multiplicities. For this purpose we introduce intermediary nets to portray the semantics of multiplicity specifications. We evaluate our approach on a use case from the public sector.
This paper presents in the first section a methodological introduction concerning statistics of consumer prices in Georgia. The second section gives a general idea of the development of consumer prices from January 1994 till September 1999. A detailed regional analysis is added in section 3. The fourth section analyses the development of consumer prices for the eight main groups included in the total CPI. Section 5 compares the changes in the Georgian CPI with the movements of foreign exchange rates of the Georgian Lari. The paper ends with a summary, including a short outlook for the coming years.
Constraints allow developers to specify desired properties of systems in a number of domains, and have those properties be maintained automatically. This results in compact, declarative code, avoiding scattered code to check and imperatively re-satisfy invariants. Despite these advantages, constraint programming is not yet widespread, with standard imperative programming still the norm. There is a long history of research on integrating constraint programming with the imperative paradigm. However, this integration typically does not unify the constructs for encapsulation and abstraction from both paradigms. This impedes re-use of modules, as client code written in one paradigm can only use modules written to support that paradigm. Modules require redundant definitions if they are to be used in both paradigms. We present a language – Babelsberg – that unifies the constructs for encapsulation and abstraction by using only object-oriented method definitions for both declarative and imperative code. Our prototype – Babelsberg/R – is an extension to Ruby, and continues to support Ruby’s object-oriented semantics. It allows programmers to add constraints to existing Ruby programs in incremental steps by placing them on the results of normal object-oriented message sends. It is implemented by modifying a state-of-the-art Ruby virtual machine. The performance of standard object-oriented code without constraints is only modestly impacted, with typically less than 10% overhead compared with the unmodified virtual machine. Furthermore, our architecture for adding multiple constraint solvers allows Babelsberg to deal with constraints in a variety of domains. We argue that our approach provides a useful step toward making constraint solving a generic tool for object-oriented programmers.
We also provide example applications, written in our Ruby-based implementation, which use constraints in a variety of application domains, including interactive graphics, circuit simulations, data streaming with both hard and soft constraints on performance, and configuration file management.
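The object-constraint style can be illustrated with a toy multi-directional constraint, sketched here in plain Python rather than Babelsberg/R's Ruby syntax; the class name and its property-based repair strategy are illustrative assumptions, not Babelsberg's actual API or solver machinery.

```python
class Temperature:
    """Toy illustration of declarative constraint maintenance: the invariant
    `fahrenheit == celsius * 1.8 + 32` stays satisfied no matter which of the
    two properties is assigned, instead of scattering conversion code at every
    call site. (Hand-written repair; a real object-constraint language would
    derive this from a single declarative constraint.)"""

    def __init__(self, celsius=0.0):
        self._c = celsius

    @property
    def celsius(self):
        return self._c

    @celsius.setter
    def celsius(self, value):
        self._c = value

    @property
    def fahrenheit(self):
        # derived from the canonical representation, so the invariant holds
        return self._c * 1.8 + 32

    @fahrenheit.setter
    def fahrenheit(self, value):
        # re-satisfy the invariant by solving for celsius
        self._c = (value - 32) / 1.8
```

Assigning either property automatically re-satisfies the invariant, which is the effect Babelsberg achieves declaratively by placing a constraint on the result of an ordinary message send.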
Babelsberg/RML
(2015)
New programming language designs are often evaluated on concrete implementations. However, in order to draw conclusions about the language design from the evaluation of concrete programming languages, these implementations need to be verified against the formalism of the design. Beyond that, we also have to ensure that the design actually meets its stated goals. A useful tool for the latter has been to create an executable semantics from the formalism, one that can run a test suite of examples. However, this mechanism has so far not allowed an implementation to be verified against the design.
Babelsberg is a new design for a family of object-constraint languages. Recently, we have developed a formal semantics to clarify some issues in the design of those languages. Supplementing this work, we report here on how this formalism is turned into an executable operational semantics using the RML system. Furthermore, we show how we extended the executable semantics to create a framework that can generate test suites for the concrete Babelsberg implementations that provide traceability from the design to the language. Finally, we discuss how these test suites helped us find and correct mistakes in the Babelsberg implementation for JavaScript.
Between 2002 and 2006 the Colombian government of Álvaro Uribe enjoyed broad international support in handling a demobilization process of right-wing paramilitary groups, along with the implementation of transitional justice policies such as penal prosecutions and the creation of a National Commission for Reparation and Reconciliation (NCRR) to address justice, truth and reparation for victims of paramilitary violence. The demobilization process began when in 2002 the United Self Defence Forces of Colombia (Autodefensas Unidas de Colombia, AUC) agreed to participate in a government-sponsored demobilization process. Paramilitary groups were responsible for the vast majority of human rights violations over a period of more than 30 years. The government designed a special legal framework that envisaged great leniency for paramilitaries who committed serious crimes and reparations for victims of paramilitary violence. More than 30,000 paramilitaries demobilized under this process between January 2003 and August 2006. Law 975, also known as the “Justice and Peace Law”, and Decree 128 have served as the legal framework for the demobilization and prosecutions of paramilitaries. It has offered the prospect of reduced sentences to demobilized paramilitaries who committed crimes against humanity in exchange for full confessions of crimes, restitution of illegally obtained assets, the release of child soldiers, and the release of kidnapped victims, and has also provided reparations for victims of paramilitary violence. The Colombian demobilization process presents an atypical case of transitional justice. Many observers have even questioned whether Colombia can be considered a case of transitional justice. Transitional justice measures are often taken up after the change of an authoritarian regime or at a post-conflict stage. However, the particularity of the Colombian case is that transitional justice policies were introduced while the conflict still raged.
In this sense, the Colombian case expresses one of the key elements to be addressed: the tension between offering incentives to perpetrators to disarm and demobilize to prevent future crimes and providing an adequate response to the human rights violations perpetrated throughout the course of an internal conflict. In particular, disarmament, demobilization and reintegration processes require a fine balance between the immunity guarantees offered to ex-combatants and the pursuit of accountability for their crimes. International law provides the legal framework defining the rights to justice, truth and reparations for victims and the corresponding obligations of the State, but peace negotiations and conflicted political structures do not always allow for the fulfillment of those rights. Thus, the aim of this article is to analyze what kind of transition may be occurring in Colombia by focusing on the role that transitional justice mechanisms may play in political negotiations between the Colombian government and paramilitary groups. In particular, it seeks to address to what extent such processes contribute to or hinder the achievement of the balance between peacebuilding and accountability, and thus facilitate a real transitional process.
Modeling and Formal Analysis of Meta-Ecosystems with Dynamic Structure using Graph Transformation
(2022)
The dynamics of ecosystems is of crucial importance. Various model-based approaches exist to understand and analyze their internal effects. In this paper, we model the space structure dynamics and ecological dynamics of meta-ecosystems using the formal technique of Graph Transformation (short GT). We build GT models to describe how a meta-ecosystem (modeled as a graph) can evolve over time (modeled by GT rules) and to analyze these GT models with respect to qualitative properties such as the existence of structural stabilities. As a case study, we build three GT models describing the space structure dynamics and ecological dynamics of three different savanna meta-ecosystems. The first GT model considers a savanna meta-ecosystem that is limited in space to two ecosystem patches, whereas the other two GT models consider two savanna meta-ecosystems that are unlimited in the number of ecosystem patches and only differ in one GT rule describing how the space structure of the meta-ecosystem grows. In the first two GT models, the space structure dynamics and ecological dynamics of the meta-ecosystem show two main structural stabilities: the first one based on grassland-savanna-woodland transitions and the second one based on grassland-desert transitions. The transition between these two structural stabilities is driven by high-intensity fires affecting the tree components. In the third GT model, the GT rule for savanna regeneration induces desertification and therefore a collapse of the meta-ecosystem. We believe that GT models provide a complementary avenue to that of existing approaches to rigorously study ecological phenomena.
This book features four essays that illuminate the relationship between American and Soviet film cultures in the 20th century.
The first essay emphasizes the structural similarities and dissimilarities of the two cultures. Both wanted to reach the masses. However, the goal in Hollywood was to entertain (and educate a little) and in Moscow to educate (and entertain a little).
Some films in the Soviet Union as well as in the United States were conceived as clear competition to one another – as the second essay demonstrates – and the ideological opponent was not shown from its most advantageous side.
The third essay shows how, in the 1980s, the different film cultures made it difficult for the Soviet director Andrei Konchalovsky to establish himself in the US, but nevertheless allowed him to succeed.
In the 1960s, a genre became popular that tells the story of the Russian Civil War using stylistic features of the Western: The Eastern. Its rise and decline are analyzed in the fourth essay.
This study analyses the transformation of the Slovak administration in the telecommunications sector between 1989 and 2004. The dynamic telecom sector is a good example of the transition problems of post-socialist administration, with special regard to the change of the regulation regime. After briefly describing the role of the telecom sector within the economy, the Slovak sectoral policy is analysed. The focus lies on telecom legislation (including the regulation framework), the liberalization of the telecom market and the privatisation of the former state-owned telecom operator. The transformation of the organizational structure of the "Slovak telecommunication administration" is analysed in particular at the level of the ministry and the regulating agency.
Agricultural policy in the transition states of Central Eastern Europe is a very complex issue, ranging from the privatisation of farm land and the establishment of agricultural markets to detailed questions of veterinary care, plant health and animal nutrition. Its main elements are the introduction of market liberalization, farm restructuring, privatisation, the reform of the sector and the creation of supporting market institutions and services. In this process, central state agricultural administration plays a decisive role. This paper sums up the author's research on Slovak agricultural administration between 2002 and 2004. This work was part of a DFG-funded research project on “Genesis, Organization and Efficiency of the central-state Ministerial Administration in Central and Eastern Europe”. The project analysed the processes, results and efficiency of administrative structures at the central-state level in Estonia, Poland and Slovakia with reference to public administration in the policy fields of agriculture and telecommunications. The paper reflects the situation in the sector and its administration at the beginning of 2004. First, an overview of the past and present role of the agricultural sector in the Slovak economy is provided (section I). Against this background, the development of agricultural policy in the different periods since 1989 is analysed, mainly with regard to privatisation, accession to the EU and subsidy policy (section II). A detailed study of the developments in agricultural administration forms the next part of the paper (section III), i.e. the changes taking place in the ministry of agriculture and in the other institutions responsible for the implementation of agricultural policy. The role of interest groups in agriculture is briefly analysed (section IV). In the conclusions, two different scenarios for the further development of Slovak agricultural administration are outlined.
Pictures are a medium that helps make the past tangible and preserve memories. Without context, they are not able to do so. Pictures are brought to life by their associated stories. However, the older pictures become, the fewer contemporary witnesses can tell these stories.
Especially for large, analog picture archives, knowledge and memories are spread over many people. This creates several challenges: First, the pictures must be digitized to save them from decaying and make them available to the public. Since a simple listing of all the pictures is confusing, the pictures should be structured accessibly. Second, known information that makes the stories vivid needs to be added to the pictures. Users should get the opportunity to contribute their knowledge and memories. To make this usable for all interested parties, even for older, less technophile generations, the interface should be intuitive and error-tolerant.
The resulting requirements are not covered in their entirety by any existing software solution without losing the intuitive interface or the scalability of the system.
Therefore, we have developed our digital picture archive within the scope of a bachelor project in cooperation with the Bad Harzburg-Stiftung. For the implementation of this web application, we use the UI framework React in the frontend, which communicates via a GraphQL interface with the Content Management System Strapi in the backend. The use of this system enables our project partner to create an efficient process from scanning analog pictures to presenting them to visitors in an organized and annotated way. To customize the solution for both picture delivery and information contribution for our target group, we designed prototypes and evaluated them with people from Bad Harzburg. This helped us gain valuable insights into our system’s usability and future challenges as well as requirements.
Our web application is already being used daily by our project partner. During the project, we still came up with numerous ideas for additional features to further support the exchange of knowledge.
The concepts of food deficit, hunger, undernourishment and food security are discussed. Axioms and indices for the assessment of the nutrition of individuals and groups are suggested. Furthermore, a measure of food aid donor performance is developed and applied to a sample of bilateral and multilateral donors providing food aid to African countries.
This technical report presents the results of student projects which were prepared during the lecture “Operating Systems II” offered by the “Operating Systems and Middleware” group at HPI in the Summer term of 2020. The lecture covered advanced aspects of operating system implementation and architecture on topics such as Virtualization, File Systems and Input/Output Systems. In addition to attending the lecture, the participating students were encouraged to gather practical experience by completing a project on a closely related topic over the course of the semester. The results of 10 selected exceptional projects are covered in this report.
The students have completed hands-on projects on the topics of Operating System Design Concepts and Implementation, Hardware/Software Co-Design, Reverse Engineering, Quantum Computing, Static Source-Code Analysis, Operating Systems History, Application Binary Formats and more. It should be recognized that over the course of the semester all of these projects have achieved outstanding results which went far beyond the scope and the expectations of the lecture, and we would like to thank all participating students for their commitment and their effort in completing their respective projects, as well as their work on compiling this report.
Pattern matching is a well-established concept in the functional programming community. It provides the means for concisely identifying and destructuring values of interest. This enables a clean separation of data structures and the respective functionality, as well as dispatching functionality based on more than a single value. Unfortunately, expressive pattern matching facilities are seldom incorporated in present object-oriented programming languages. We present a seamless integration of pattern matching facilities in an object-oriented and dynamically typed programming language: Newspeak. We describe language extensions to improve the practicability and integrate our additions with the existing programming environment for Newspeak. This report is based on the first author’s master’s thesis.
These days design thinking is no longer a “new approach”. Among practitioners, as well as academics, interest in the topic has gathered pace over the last two decades. However, opinions are divided over the longevity of the phenomenon: whether design thinking is merely “old wine in new bottles,” a passing trend, or still evolving as it is being spread to an increasing number of organizations and industries. Despite its growing relevance and the diffusion of design thinking, knowledge on the actual status quo in organizations remains scarce. With a new study, the research team of Prof. Uebernickel and Stefanie Gerken investigates temporal developments and changes in design thinking practices in organizations over the past six years comparing the results of the 2015 “Parts without a whole” study with current practices and future developments. Companies of all sizes and from different parts of the world participated in the survey. The findings from qualitative interviews with experts, i.e., people who have years of knowledge with design thinking, were cross-checked with the results from an exploratory analysis of the survey data. This analysis uncovers significant variances and similarities in how design thinking is interpreted and applied in businesses.
The service-oriented architecture supports the dynamic assembly and runtime reconfiguration of complex open IT landscapes by means of runtime binding of service contracts, launching of new components and termination of outdated ones. Furthermore, the evolution of these IT landscapes is not restricted to exchanging components for other ones using the same service contracts, as new service contracts can be added as well. However, current approaches for modeling and verification of service-oriented architectures do not support these important capabilities to their full extent. In this report we present an extension of the current OMG proposal for service modeling with UML - SoaML - which overcomes these limitations. It permits modeling services and their service contracts at different levels of abstraction, provides a formal semantics for all modeling concepts, and enables verifying critical properties. Our compositional and incremental verification approach allows for complex properties including communication parameters and time, and covers, besides the dynamic binding of service contracts and the replacement of components, also the evolution of the systems by means of new service contracts. The modeling as well as verification capabilities of the presented approach are demonstrated by means of a supply chain example, and the verification results of a first prototype are shown.
Model-driven software development requires techniques to consistently propagate modifications between different related models to realize its full potential. For large-scale models, efficiency is essential in this respect. In this paper, we present an improved model synchronization algorithm based on triple graph grammars that is highly efficient and, therefore, can also synchronize large-scale models sufficiently fast. We show that the overall algorithm has optimal complexity if it dominates the rule matching, and we further present extensive measurements that show the efficiency of the presented model transformation and synchronization technique.
The correctness of model transformations is a crucial element for the model-driven engineering of high quality software. A prerequisite for verifying model transformations at the level of the model transformation specification is that an unambiguous formal semantics exists and that the employed implementation of the model transformation language adheres to this semantics. However, for existing relational model transformation approaches it is usually unclear under which constraints particular implementations actually conform to the formal semantics. In this paper, we bridge this gap for the formal semantics of triple graph grammars (TGG) and an existing efficient implementation. Whereas the formal semantics assumes backtracking and ignores non-determinism, practical implementations do not support backtracking, require rule sets that ensure determinism, and include further optimizations. Therefore, we capture how the considered TGG implementation realizes the transformation by means of operational rules, define the required criteria and show conformance to the formal semantics if these criteria are fulfilled. We further outline how static analysis can be employed to guarantee these criteria.
During the overall development of complex engineering systems different modeling notations are employed. For example, in the domain of automotive systems, system engineering models are employed quite early to capture the requirements and basic structuring of the entire system, while software engineering models are used later on to describe the concrete software architecture. Each model helps in addressing the specific design issue with appropriate notations and at a suitable level of abstraction. However, when we step forward from system design to software design, the engineers have to ensure that all decisions captured in the system design model are correctly transferred to the software engineering model. Even worse, when changes occur later on in either model, today the consistency has to be reestablished in a cumbersome manual step. In this report, we present in an extended version of [Holger Giese, Stefan Neumann, and Stephan Hildebrandt. Model Synchronization at Work: Keeping SysML and AUTOSAR Models Consistent. In Gregor Engels, Claus Lewerentz, Wilhelm Schäfer, Andy Schürr, and B. Westfechtel, editors, Graph Transformations and Model-Driven Engineering - Essays Dedicated to Manfred Nagl on the Occasion of his 65th Birthday, volume 5765 of Lecture Notes in Computer Science, pages 555–579. Springer Berlin / Heidelberg, 2010.] how model synchronization and consistency rules can be applied to automate this task and ensure that the different models are kept consistent. We also introduce a general approach for model synchronization. Besides synchronization, the approach consists of tool adapters as well as consistency rules covering the overlap between the synchronized parts of a model and the rest.
We present the model synchronization algorithm based on triple graph grammars in detail and further exemplify the general approach by means of a model synchronization solution between system engineering models in SysML and software engineering models in AUTOSAR which has been developed for an industrial partner. In the appendix as extension to [19] the meta-models and all TGG rules for the SysML to AUTOSAR model synchronization are documented.
Various kinds of typed attributed graphs are used to represent states of systems from a broad range of domains. For dynamic systems, established formalisms such as graph transformations provide a formal model for defining state sequences. We consider the extended case where time elapses between states and introduce a logic to reason about these sequences. With this logic we express properties on the structure and attributes of states as well as on the temporal occurrence of states that are related by their inner structure, something no existing formal logic over graphs accomplishes concisely. Firstly, we introduce graphs with history by equipping every graph element with the timestamp of its creation and, if applicable, its deletion. Secondly, we define a logic on graphs by integrating the temporal operator until into the well-established logic of nested graph conditions. Thirdly, we prove that our logic is as expressive as nested graph conditions by providing a suitable reduction. Finally, the implementation of this reduction allows for the tool-based analysis of metric temporal properties for state sequences.
Graph repair, restoring consistency of a graph, plays a prominent role in several areas of computer science and beyond: For example, in model-driven engineering, the abstract syntax of models is usually encoded using graphs. Flexible edit operations temporarily create inconsistent graphs not representing a valid model, thus requiring graph repair. Similarly, in graph databases—managing the storage and manipulation of graph data—updates may leave a database violating some of its integrity constraints, again requiring graph repair.
We present a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing repairs. In our context, we formalize consistency by so-called graph conditions, which are equivalent to first-order logic on graphs. We present two kinds of repair algorithms: State-based repair restores consistency independent of the graph update history, whereas delta-based (or incremental) repair takes this history explicitly into account. Technically, our algorithms rely on an existing model generation algorithm for graph conditions implemented in AutoGraph. Moreover, the delta-based approach uses the new concept of satisfaction (ST) trees for encoding whether and how a graph satisfies a graph condition. We then demonstrate how to manipulate these STs incrementally with respect to a graph update.
The study presents estimates and analyses of social expenditure in Poland. Changes which occurred during the transformation period are a reflection of consciously launched political transformations as well as decisions taken as a result of current needs and political pressures. This has an impact on the volume and structure of expenditures, which are under consolidation. The debate devoted to budget issues, which gets more intense every autumn, testifies to increasing problems with correcting guidelines for the distribution of expenditures. Even slight changes mean depriving a specified group of transfers, which in democratic conditions produces strong protests. A similar negative attitude to changes became evident with regard to taxation. Recommendations presented in 1998 by the Polish government [see Ministry of Finance, 1998a, 1998b] introduced substantial modifications to the current tax system (withdrawal of tax exemptions and introduction of a tax-free minimum income) and thus met with massive reluctance from major political factions. This study provides readers with information on the volume of public expenditures, the source of public revenue, that is taxes, and a thorough study of expenditures allocated to social goals. The analysis was carried out on the basis of the authors' own estimates, which employ data acquired from the Ministry of Finance and the Ministry of Labour and Social Policy.
This is the 13th issue of the working paper series Interdisciplinary Studies on Information Structure (ISIS) of the Sonderforschungsbereich (SFB) 632. It is the first part of a series of Linguistic Fieldnote issues which present data collected by members of different projects of the SFB during fieldwork on various languages or dialects spoken worldwide. This part of the Fieldnote Series is dedicated to data from African languages. It contains contributions by Mira Grubic (A5) on Ngizim, and Susanne Genzel & Frank Kügler (D5) on Akan. The papers allow insights into various aspects of the elicitation of formal correlates of focus and related phenomena in different African languages investigated by the SFB in the second funding phase, especially in the period between 2007 and 2010.
The Apache Modeling Project
(2004)
This document presents an introduction to the Apache HTTP Server, covering both an overview and implementation details. It presents results of the Apache Modelling Project done by research assistants and students of the Hasso-Plattner-Institute in 2001, 2002 and 2003. The Apache HTTP Server was used to introduce students to the application of the modeling technique FMC, a method that supports transporting knowledge about complex systems in the domain of information processing (software and hardware as well). After an introduction to HTTP servers in general, we will focus on protocols and web technology. Then we will discuss Apache, its operational environment and its extension capabilities – the module API. Finally we will guide the reader through parts of the Apache source code and explain the most important pieces.
This dissertation contains theoretical investigations on the morphology and statistical mechanics of vesicles. The shapes of homogeneous fluid vesicles and inhomogeneous vesicles with fluid and solid membrane domains are calculated. The influence of thermal fluctuations is investigated. The obtained results are valid on mesoscopic length scales and are based on a geometrical membrane model, where the vesicle membrane is described as either a static or a thermal fluctuating surface. The thesis consists of three parts. In the first part, homogeneous vesicles are considered. The focus in this part is on the thermally induced morphological transition between vesicles with prolate and oblate shape. With the help of Monte Carlo simulations, the free energy profile of these vesicles is determined. It can be shown that the shape transformation between prolate and oblate vesicles proceeds continuously and is not hampered by a free energy barrier. The second and third part deal with inhomogeneous vesicles which contain intramembrane domains. These investigations are motivated by experimental results on domain formation in single or multicomponent vesicles, where phase separation occurs and different membrane phases coexist. The resulting domains differ with regard to their membrane structure (solid, fluid). The membrane structure has a distinct effect on the form of the domain and the morphology of the vesicle. In the second part, vesicles with coexisting solid and fluid membrane domains are studied, while the third part addresses vesicles with coexisting fluid domains. The equilibrium morphology of vesicles with simple and complex domain forms, derived through minimisation of the membrane energy, is determined as a function of material parameters. The results are summarised in morphology diagrams. These diagrams show previously unknown morphological transitions between vesicles with different domain shapes. 
The impact of thermal fluctuations on the vesicle and the form of the domains is investigated by means of Monte Carlo simulations.
The fourth volume of the DIGAREC Series holds the proceedings to the conference “Logic and Structure of the Computer Game”, held at the House of Brandenburg- Prussian History in Potsdam on November 6 and 7, 2009. The conference was the first to explicitly address the medial logic and structure of the computer game. The contributions focus on the specific potential for mediation and on the unique form of mediation inherent in digital games. This includes existent, yet scattered approaches to develop a unique curriculum of game studies. In line with the concept of ‘mediality’, the notions of aesthetics, interactivity, software architecture, interface design, iconicity, spatiality, and rules are of special interest. Presentations were given by invited German scholars and were commented on by international respondents in a dialogical structure.
1. Design and Composition of 3D Geoinformation Services – Benjamin Hagedorn
2. Operating System Abstractions for Service-Based Systems – Michael Schöbel
3. A Task-oriented Approach to User-centered Design of Service-Based Enterprise Applications – Matthias Uflacker
4. A Framework for Adaptive Transport in Service-Oriented Systems based on Performance Prediction – Flavius Copaciu
5. Asynchronicity and Loose Coupling in Service-Oriented Architectures – Nikola Milanovic
Contents:
1. Introduction
2. Migration and Assimilation – Theoretical Approaches
2.1 Meaning and Definition of the Terms Migration and Migrant
2.2 Milton M. Gordon – Sub Processes of Assimilation
2.3 Hartmut Esser – Acculturation, Integration, and Assimilation
2.4 The Concept of Integration and Assimilation
2.5 Straight-line Assimilation and its Implications
2.6 Segmented Assimilation and its Implications
3. Social Inequality and Welfare – Theoretical Approaches
3.1 Dimensions of Inequality
3.2 Welfare Regimes and Social Inequality
3.3 Migration, Assimilation and Inequality
4. Research Design
4.1 Research Question and General Proceeding
4.2 Sample and Data Base
4.3 Operationalisation and Indicators
5. Migration, Welfare and Inequality in Three European Countries
6. Empirical Results
6.1 Performance of Migrants Compared With Natives
6.2 Different Trajectories of Assimilation
6.3 Trajectories of Segmented Assimilation and their Determinants
6.4 Policies, Attitudes and Assimilation – An Aggregate Analysis
6.5 Summary – What Determines the Performance of Migrants?
7. Discussion of Empirical Results in Terms of Theoretical Approaches
7.1 The Situation of Migrants in Three European Countries
7.2 Assessment of the Trajectories of Assimilation
8. Conclusion – Future Prospects of Migration in Europe
CSOM/PL is a software product line (SPL) derived from applying multi-dimensional separation of concerns (MDSOC) techniques to the domain of high-level language virtual machine (VM) implementations. For CSOM/PL, we modularised CSOM, a Smalltalk VM implemented in C, using VMADL (virtual machine architecture description language). Several features of the original CSOM were encapsulated in VMADL modules and composed in various combinations. In an evaluation of our approach, we show that applying MDSOC and SPL principles to a domain as complex as that of VMs is not only feasible but beneficial, as it improves understandability, maintainability, and configurability of VM implementations without harming performance.
MDE techniques are increasingly used in practice. However, there is currently a lack of detailed reports about how different MDE techniques are integrated into development and combined with each other. To learn more about such MDE settings, we performed a descriptive and exploratory field study with SAP, a worldwide operating company with around 50,000 employees that builds enterprise software applications. This technical report describes insights we gained during this study. For example, we identified that MDE settings are subject to evolution. Finally, this report outlines directions for future research to provide practical advice for the application of MDE settings.
Duplicate detection consists of determining the different representations of the same real-world objects in a database. Recent research has considered the use of relationships among object representations to improve duplicate detection. In the general case where relationships form a graph, research has mainly focused on duplicate detection quality/effectiveness. Scalability has been neglected so far, even though it is crucial for large real-world duplicate detection tasks. In this paper we scale up duplicate detection in graph data (DDG) to large amounts of data and pairwise comparisons, using the support of a relational database system. To this end, we first generalize the process of DDG. We then present how to scale algorithms for DDG in space (amount of data processed with limited main memory) and in time. Finally, we explore how complex similarity computation can be performed efficiently. Experiments on data an order of magnitude larger than the data considered so far in DDG clearly show that our methods scale to large amounts of data not residing in main memory.
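The pairwise-comparison core of duplicate detection can be sketched in a few lines of Python; this is a deliberately naive in-memory illustration with an assumed token-overlap (Jaccard) similarity, not the paper's relationship-aware, database-supported DDG approach.

```python
# Naive sketch of duplicate detection: compare all record pairs with a
# simple Jaccard token similarity and report pairs above a threshold.
# The threshold and the sample records are illustrative assumptions.
import re
from itertools import combinations

def jaccard(a, b):
    """Token-overlap similarity of two strings, in [0, 1]."""
    ta = set(re.findall(r"\w+", a.lower()))
    tb = set(re.findall(r"\w+", b.lower()))
    return len(ta & tb) / len(ta | tb)

def find_duplicates(records, threshold=0.5):
    """Return pairs of record ids whose similarity reaches the threshold."""
    return [(i, j) for (i, a), (j, b) in combinations(records.items(), 2)
            if jaccard(a, b) >= threshold]

records = {1: "Hasso Plattner Institute Potsdam",
           2: "Hasso-Plattner Institute, Potsdam",
           3: "University of Potsdam"}
print(find_duplicates(records))  # → [(1, 2)]
```

The quadratic number of comparisons in `combinations` is exactly what makes scalability the issue: for large data, candidate pairs must be pruned and the comparisons pushed down to a database system, as the paper proposes.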
Contents:
1. Capitalist societies as market-bargaining societies on the basis of resources of action: The idealtypical bargain between capital and labour; an alternative to Marx' theory of exploitation – Discussion of the model
2. A general typology of paths of societies in history and a characterisation of state socialism – People's capitalisms as perspective of development – What remains from Marx' ideas?
3. Variations of welfare capitalism after the decline of state socialism
3.1 National differences of welfare capitalism
3.2 Overall inequality of income and overall class consciousness
3.3 Explaining income inequality and variation in class consciousness by class and gender
3.3.1 A test of different class models in the FRG
3.3.2 Developing an international model of gendered occupational and employment status as bundles of resources of action
4. Summary
E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses the problem of immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. This report introduces the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases the accessibility of laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of the laboratory platforms are covered by the virtual machine management framework. This management framework provides the necessary monitoring and administration services to detect and recover critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused for compromising production networks, we present security management solutions to prevent misuse of laboratory resources by security isolation at the system and network levels.
This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not meant to substitute for conventional teaching in laboratories but to add practical features to e-learning. This report demonstrates the possibility of implementing hands-on security laboratories on the Internet reliably, securely, and economically.
On 20 January 1991, the Latvian people defended the Latvian political elite from the Soviet OMON troops in order to achieve independence. After this impressive sign of civil society, the people fell asleep; the level of mobilisation and the satisfaction with the functioning of democracy are therefore rather weak. The referendum of 2008 to gain the right of the people to dissolve the Parliament, initiated by the Trade Unions, can be assessed as a sign that something is on the move. This paper tries to give an impression of the situation of civil society in terms of participation in the decision-making process. The focus lies on NGOs: what is their legal base, and which problems do they face? To learn more about the situation, interviews were conducted with representatives of NGOs from different sectors such as community development, social inclusion, the advocacy of gender issues, as well as environment and sustainable development. As a result of the research, it can be said that civil society has made some steps forward but is still struggling with a high level of corruption, a lack of interest from the elite and the ordinary people, and an insecure financial situation.
Peter Huckauf
(2014)
The poems and texts by Peter Huckauf in the present volume 11 of the "Potsdamer Beiträge zur Sorabistik" are an homage to Lusatia, to its people and landscapes. In word plays, visual poetry, and prose texts, together with photographs and glimpses into his life story, the author introduces himself as a lover of this region. He lets the reader take part in a journey of discovery through the lost homeland of his childhood, which he explores anew as a returnee. His reminiscences resonate with German as well as Sorbian/Wendish readers, which makes Peter Huckauf a writer and artist of interest for Lusatia and beyond. With regard to the linguistic aspect, selected poems of the collection have been translated into Upper Sorbian and, above all, into Lower Sorbian.
“The UN Peacebuilding Commission – Lessons from Sierra Leone” by political scientist Andrea Iro is an assessment of the United Nations Peacebuilding Commission (PBC) and the United Nations Peacebuilding Fund (PBF) by analysing their performance over the last two years in Sierra Leone, one of the first PBC focus countries. The paper explores the key question of how the PBC/PBF’s mandate has been translated into operational practice in the field. It concludes that though the overall impact has been mainly positive and welcomed by the country, translating the general mandate into concrete activities remains a real challenge at the country level.
This thesis discusses theoretical and practical aspects of modelling light propagation in non-aged and aged step-index polymer optical fibres (POFs). Special attention has been paid to describing the optical characteristics of non-ideal fibres, scattering and attenuation, and to combining application-oriented and theoretical approaches. Precedence has been given to practical issues, but much effort has also been spent on the theoretical analysis of the basic mechanisms governing light propagation in cylindrical waveguides. As a result, a practically usable general POF model based on the ray-tracing approach has been developed and implemented. A systematic numerical optimisation of its parameters has been performed to obtain the best fit between simulated and measured optical characteristics of numerous non-aged and aged fibre samples. The model was validated by the good agreement obtained, especially for the non-aged fibres. The relations found between aging time and the optimal values of the model parameters contribute to a better understanding of the aging mechanisms of POFs.
Switches between political and administrative positions seem to be quite common in today’s politics, or at least no longer so unusual. Nevertheless, up-to-date empirical studies on this issue are lacking. This paper investigates the presumption that in recent years top bureaucrats have become more politicised while, at the same time, more politicians stem from a bureaucratic background, by looking at the career paths of both. For this purpose, we present new empirical evidence on the career patterns of top bureaucrats and executive politicians at both the Federal and the Länder level. The data was collected from authorized biographies published on the websites of the Federal and Länder ministries for all Ministers, Parliamentary State Secretaries, and Administrative State Secretaries who held office in June 2009.
West of Potsdam’s city center lies the Golm Campus, the largest campus of the University of Potsdam. Its different buildings tell of the numerous institutions that were established at this site over the years: From the mid-1930s, the Walther Wever Barracks were located here. From 1943, it housed the Air Intelligence Division of the German Airforce Supreme Commander. In 1951, a training institution of the Ministry of State Security moved in, which existed until 1989 under different names. In July 1991, the newly founded University of Potsdam took over the premises, which are now part of the Potsdam-Golm Science Park.
The book takes you on a historic journey of the place and invites you to take a walk across today’s campus. The book includes over 110 photos and a detailed map.
Every year, the Hasso Plattner Institute (HPI) invites guests from industry and academia to a collaborative scientific workshop on the topic "Operating the Cloud". Our goal is to provide a forum for the exchange of knowledge and experience between industry and academia. Co-located with the event is the HPI's Future SOC Lab day, which offers an additional attractive and conducive environment for scientific and industry-related discussions. "Operating the Cloud" aims to be a platform for productive interactions of innovative ideas, visions, and upcoming technologies in the field of cloud operation and administration.
On the occasion of this symposium we called for submissions of research papers and practitioner's reports. A compilation of the research papers presented at the fourth HPI cloud symposium "Operating the Cloud" 2016 is published in these proceedings. We thank the authors for their exciting presentations and insights into their current work and research.
Moreover, we look forward to more interesting submissions for the upcoming symposium later in the year.
Version Control Systems (VCS) allow developers to manage changes to software artifacts. Developers interact with VCSs through a variety of client programs, such as graphical front-ends or command line tools. It is desirable to use the same version control client program against different VCSs. Unfortunately, no established abstraction over VCS concepts exists. Instead, VCS client programs implement ad-hoc solutions to support interaction with multiple VCSs. This thesis presents Pur, an abstraction over version control concepts that allows building rich client programs that can interact with multiple VCSs. We provide an implementation of this abstraction and validate it by implementing a client application.
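The abstraction argued for above can be pictured with a minimal sketch. Note that the class and method names below (`Repository`, `branches`, `commits`, `InMemoryGitLike`) are illustrative assumptions, not the actual Pur API:

```python
from abc import ABC, abstractmethod

class Repository(ABC):
    """Hypothetical minimal abstraction over version control concepts;
    client programs are written against this interface only."""
    @abstractmethod
    def branches(self):
        """Return the names of all branches."""
    @abstractmethod
    def commits(self, branch):
        """Return the commit messages on a branch, oldest first."""

class InMemoryGitLike(Repository):
    """Toy backend standing in for a concrete VCS binding."""
    def __init__(self):
        self._data = {"main": ["init", "add feature"]}
    def branches(self):
        return list(self._data)
    def commits(self, branch):
        return list(self._data[branch])

def latest_commit(repo, branch):
    """Client code that works with any Repository implementation."""
    return repo.commits(branch)[-1]

print(latest_commit(InMemoryGitLike(), "main"))  # add feature
```

A client built this way needs no ad-hoc per-VCS code: supporting another VCS means adding another `Repository` subclass, not changing the client.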
Scrollytellings are an innovative form of web content. Combining the benefits of books, images, movies, and video games, they are a tool to tell compelling stories and provide excellent learning opportunities. Due to their multi-modality, creating high-quality scrollytellings is not an easy task. Different professions, such as content designers, graphics designers, and developers, need to collaborate to get the best out of the possibilities the scrollytelling format provides. Collaboration unlocks great potential. However, content designers cannot create scrollytellings directly and always need to consult with developers to implement their vision. This can result in misunderstandings. Often, the resulting scrollytelling will not match the designer’s vision sufficiently, causing unnecessary iterations. Our project partner Typeshift specializes in the creation of individualized scrollytellings for their clients. The existing solutions for authoring interactive content that we examined are not optimally suited for creating highly customized scrollytellings while still allowing all of their elements to be manipulated programmatically. Based on Typeshift's experience and expertise, we developed an editor for authoring scrollytellings in the lively.next live-programming environment. In this environment, a graphical user interface for content design is combined with powerful possibilities for programming behavior with the morphic system. The editor allows content designers to take on large parts of the creation process of scrollytellings on their own, such as creating the visible elements, animating content, and fine-tuning the scrollytelling. Hence, developers can focus on interactive elements such as simulations and games. Together with Typeshift, we evaluated the tool by recreating an existing scrollytelling and identified possible future enhancements. Our editor streamlines the creation process of scrollytellings. Content designers and developers can now both work on the same scrollytelling.
Because the editor lives inside the lively.next environment, both can work with a set of tools familiar to them. Thus, we mitigate unnecessary iterations and misunderstandings by enabling content designers to realize large parts of their vision of a scrollytelling on their own, while developers can add advanced and individual behavior. In this way, developers and content designers benefit from a clearer distribution of tasks while keeping the benefits of collaboration.
One of the key challenges in service-oriented systems engineering is the prediction and assurance of non-functional properties, such as the reliability and the availability of composite interorganizational services. Such systems are often characterized by a variety of inherent uncertainties, which must be addressed in the modeling and the analysis approach. The relevant types of uncertainties can be categorized into (1) epistemic uncertainties due to incomplete knowledge and (2) randomization as explicitly used in protocols or as a result of physical processes. In this report, we study a probabilistic timed model which allows us to quantitatively reason about non-functional properties for a restricted class of service-oriented real-time systems using formal methods. To properly motivate the choice of the used approach, we devise a requirements catalogue for the modeling and the analysis of probabilistic real-time systems with uncertainties and provide evidence that the uncertainties of type (1) and (2) in the targeted systems have a major impact on the used models and require distinct analysis approaches. The formal model we use in this report is Interval Probabilistic Timed Automata (IPTA). Based on the outlined requirements, we give evidence that this model provides both enough expressiveness for a realistic and modular specification of the targeted class of systems and suitable formal methods for analyzing properties, such as safety and reliability properties, in a quantitative manner. As technical means for the quantitative analysis, we build on probabilistic model checking, specifically on probabilistic time-bounded reachability analysis and the computation of expected reachability rewards and costs. To carry out the quantitative analysis using probabilistic model checking, we developed an extension of the Prism tool for modeling and analyzing IPTA.
Our extension of Prism introduces a means for modeling probabilistic uncertainty in the form of probability intervals, as required for IPTA. For analyzing IPTA, our Prism extension moreover adds support for probabilistic reachability checking and computation of expected rewards and costs. We discuss the performance of our extended version of Prism and compare the interval-based IPTA approach to models with fixed probabilities.
The present lecture notes aim to give an introduction to the ergodic behaviour of Markov processes and address graduate students, post-graduate students, and interested readers.
Different tools and methods for the study of upper bounds on uniform and weak ergodic rates of Markov processes are introduced. These techniques are then applied to study limit theorems for functionals of Markov processes.
This lecture course originates in two mini-courses held at the University of Potsdam, the Technical University of Berlin, and Humboldt University in spring 2013, and at Ritsumeikan University in summer 2013.
Alexei Kulik, Doctor of Sciences, is a leading researcher at the Institute of Mathematics of the Ukrainian National Academy of Sciences.
This paper studies the persistence of daily returns of 21 German stocks from 1960 to 2008. We apply a widely used test based upon the modified R/S method of Lo [1991]. As an extension of Lux [1996] and Carbone et al. [2004], and in analogy to moving averages or moving volatility, the statistic is calculated for moving windows of length 4, 8, and 16 years for every time series. Periods of persistence or long memory in returns can be found in some but not all time series. The robustness of the results is verified by investigating stationarity and short-memory effects.
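The statistic referred to above can be sketched as follows. This is a minimal pure-Python rendering of Lo's modified R/S statistic with Bartlett-weighted autocovariances (setting `q = 0` recovers the classical R/S), not the exact implementation used in the paper:

```python
def modified_rs(x, q):
    """Lo's (1991) modified rescaled-range statistic (sketch).

    x: sequence of returns; q: number of autocovariance lags in the
    Newey-West style variance correction (q = 0 gives classical R/S).
    """
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    # Range of the cumulative deviations from the mean
    cum, cumdev = 0.0, []
    for d in dev:
        cum += d
        cumdev.append(cum)
    r = max(cumdev) - min(cumdev)
    # Variance corrected by Bartlett-weighted autocovariances
    var = sum(d * d for d in dev) / n
    for j in range(1, q + 1):
        gamma = sum(dev[i] * dev[i - j] for i in range(j, n)) / n
        var += 2.0 * (1.0 - j / (q + 1)) * gamma
    return r / (n * var) ** 0.5

print(modified_rs([1, -1, 1, -1], 0))  # 0.5
```

In a moving-window setting like the one described above, this function would simply be evaluated on each window of the return series.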
In centrally planned economies, state subsidies were the main instrument for supporting the economic sector. Most of them also had social functions (e.g. through subsidising the consumption of households). In the period of transition, with the withdrawal of the state from the economic decisions of enterprises, new social problems appeared. The paper analyses the process of granting state support to economic units - its scope and forms - in the 1990s.
This research is about local actors' response to problems of uneven development and unemployment. Policies to combat these problems are usually connected to socio-economic regeneration in England and economic and employment promotion (Wirtschafts- und Beschäftigungsförderung) in Germany. The main result of this project is a description of those factors which support the emergence of local socio-economic initiatives aimed at job creation. Eight social and formal economy initiatives have been examined, and the ways in which their emergence has been influenced by institutional factors have been analysed. The role of local actors and forms of governance as well as wider regional and national policy frameworks has been taken into account. Socio-economic initiatives have been defined as non-routine local projects or schemes with the objective of direct job creation. Such initiatives often focus on specific local assets for the formal or the social economy. Socio-economic initiatives are grounded in ideas of local economic development and the creation of local jobs for local people. The adopted understanding of governance focuses on the processes of decision-making. Thus, governance is broadly construed to include the ways in which actors in addition to traditional government manage urban development, with a focus on 'strategic' forms of decision-making about both long-term objectives and short-term action linked to socio-economic regeneration. Four old industrial towns in North England and East Germany have been selected for case studies because of their particular socio-economic background. These towns, with between 10,000 and 70,000 inhabitants, are located outside the main agglomerations and serve central functions for their hinterland. The approach has been comparative, with a focus on examining common themes rather than gaining in-depth knowledge of a single case.
Until now, most urban governance studies have analysed the impacts of particular forms of governance, such as regeneration partnerships. This project looks at particular initiatives and asks to what extent their emergence can be understood as a result of particular forms of governance, local institutional factors, or regional and national contexts.
Roughly every third Wikipedia article contains an infobox - a table that displays important facts about the subject in attribute-value form. The schema of an infobox, i.e., the attributes that can be expressed for a concept, is defined by an infobox template. Often, authors do not specify all template attributes, resulting in incomplete infoboxes. With iPopulator, we introduce a system that automatically populates infoboxes of Wikipedia articles by extracting attribute values from the article's text. In contrast to prior work, iPopulator detects and exploits the structure of attribute values for independently extracting value parts. We have tested iPopulator on the entire set of infobox templates and provide a detailed analysis of its effectiveness. For instance, we achieve an average extraction precision of 91% for 1,727 distinct infobox template attributes.
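As a toy illustration of the task iPopulator addresses, the following sketch extracts one attribute value from article text with a hand-written pattern. The function name and pattern are our own assumptions; iPopulator itself learns the structure of attribute values rather than relying on fixed patterns like this:

```python
import re

def extract_population(text):
    """Toy infobox-style attribute extraction from article text.
    Illustrative only: a single hard-coded pattern for one attribute,
    whereas iPopulator detects and exploits attribute value structure."""
    m = re.search(r"population of ([\d,]+)", text)
    return m.group(1).replace(",", "") if m else None

print(extract_population("The city has a population of 161,468 people."))
# 161468
```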
1 Introduction
1.1 Project formulation
1.2 Our contribution
2 Pedagogical Aspect
2.1 Modern teaching
2.2 Our contribution
2.2.1 Autonomous and exploratory learning
2.2.2 Human machine interaction
2.2.3 Short multimedia clips
3 Ontology Aspect
3.1 Ontology driven expert systems
3.2 Our contribution
3.2.1 Ontology language
3.2.2 Concept taxonomy
3.2.3 Knowledge base annotation
3.2.4 Description logics
4 Natural language approach
4.1 Natural language processing in computer science
4.2 Our contribution
4.2.1 Explored strategies
4.2.2 Word equivalence
4.2.3 Semantic interpretation
4.2.4 Various problems
5 Information Retrieval Aspect
5.1 Modern information retrieval
5.2 Our contribution
5.2.1 Semantic query generation
5.2.2 Semantic relatedness
6 Implementation
6.1 Prototypes
6.2 Semantic layer architecture
6.3 Development
7 Experiments
7.1 Description of the experiments
7.2 General characteristics of the three sessions, instructions and procedure
7.3 First session
7.4 Second session
7.5 Third session
7.6 Discussion and conclusion
8 Conclusion and future work
8.1 Conclusion
8.2 Open questions
A Description logics
B Probabilistic context-free grammars
Fiscal federalism has been an important topic among public finance theorists over the last four decades. There is a series of arguments that the decentralization of government enhances growth by improving allocative efficiency. However, empirical studies have shown mixed results for industrialized and developing countries, and some have demonstrated that there might be a threshold level of economic development below which decentralization is not effective. Developing and transition countries have developed a variety of forms of fiscal decentralization as a possible strategy to achieve effective and efficient governmental structures. Owing to country-specific circumstances, a generalized principle of decentralization does not exist. Decentralization has therefore taken place in different forms in various countries at different times, and even exactly the same extent of decentralization may have had different impacts under different conditions. The purpose of this study is to investigate the current state of fiscal decentralization in Mongolia and to develop policy recommendations for an efficient and effective intergovernmental fiscal relations system for Mongolia. From this perspective, the analysis concentrates on the scope and structure of the public sector, the expenditure and revenue assignment, as well as on the design of intergovernmental transfers and sub-national borrowing. The study is based on data for the twenty-one provinces and the capital city of Mongolia for the period from 2000 to 2009. As a former socialist country, Mongolia had a highly centralized governmental sector. The analysis revealed that Mongolia introduced a number of decentralization measures over the last decade, which followed a top-down approach and were implemented slowly, without any integrated decentralization strategy. As a result, Mongolia became a de-concentrated state with fiscal centralization.
The revenue assignment lacks a very important element, namely significant revenue autonomy for sub-national governments, which is vital for efficient service delivery at the local level. Under the current assignment of expenditure and revenue responsibilities, most of the provinces are unable to provide a certain national standard of public goods supply. Hence, intergovernmental transfers from the central jurisdiction to the sub-national jurisdictions play an important role in the equalization of the vertical and horizontal imbalances in Mongolia. The critical problem associated with intergovernmental transfers is that there is no stable, predictable, and transparent system of transfer allocation. The amount of transfers to sub-national governments is determined largely by political decisions on an ad hoc basis and disregards local differences in needs and fiscal capacity. Thus, a fiscal equalization system based on the fiscal needs of the provinces should be implemented. The equalization transfers would at least partly offset the regional disparities in revenues and enable the sub-national governments to provide a national minimum standard of local public goods.
In current practice, business process modeling is done by trained method experts. Domain experts are interviewed to elicit their process information but are not involved in modeling. We created a haptic toolkit for process modeling that can be used in process elicitation sessions with domain experts. We hypothesize that this leads to more effective process elicitation. This paper breaks down "effective elicitation" into 14 operationalized hypotheses. They are assessed in a controlled experiment using questionnaires, process model feedback tests, and video analysis. The experiment compares our approach to structured interviews in a repeated-measurement design. We executed the experiment with 17 student clerks from a trade school, who represent potential users of the tool. Six out of fourteen hypotheses showed a significant difference due to the method applied. Subjects reported more fun and more insights into process modeling with tangible media. Video analysis showed significantly more reviews and corrections applied during process elicitation. Moreover, people took more time to talk and think about their processes. We conclude that tangible media creates a different working mode for people in process elicitation, with fun, new insights, and instant feedback on preliminary results.
Today, software has become an intrinsic part of complex distributed embedded real-time systems. The next generation of embedded real-time systems will interconnect today's unconnected systems via complex software parts and the service-oriented paradigm. Besides timed behavior and probabilistic behavior, this therefore also requires structure dynamics, where the architecture can be subject to changes at run time, e.g. when dynamic binding of service endpoints is employed or complex collaborations are established dynamically. However, a modeling and analysis approach that combines all these necessary aspects does not exist so far.
To fill the identified gap, we propose Probabilistic Timed Graph Transformation Systems (PTGTSs) as a high-level description language that supports all the necessary aspects of structure dynamics, timed behavior, and probabilistic behavior. We introduce the formal model of PTGTSs in this paper and present a mapping of models with finite state spaces to probabilistic timed automata (PTA) that allows us to use the PRISM model checker to analyze PTGTS models with respect to PTCTL properties.
The analysis of behavioral models is of high importance for cyber-physical systems, as these systems often encompass complex behavior based, e.g., on concurrent components with mutual exclusion or probabilistic failures on demand. The rule-based formalism of probabilistic timed graph transformation systems is a suitable choice when the models representing states of the system can be understood as graphs and when timed and probabilistic behavior is important. However, model checking PTGTSs is limited to systems with rather small state spaces.
We present an approach for the analysis of large scale systems modeled as probabilistic timed graph transformation systems by systematically decomposing their state spaces into manageable fragments. To obtain qualitative and quantitative analysis results for a large scale system, we verify that results obtained for its fragments serve as overapproximations for the corresponding results of the large scale system. Hence, our approach allows for the detection of violations of qualitative and quantitative safety properties for the large scale system under analysis. We consider a running example in which we model shuttles driving on tracks of a large scale topology and for which we verify that shuttles never collide and are unlikely to execute emergency brakes. In our evaluation, we apply an implementation of our approach to the running example.
Formal modeling and analysis are of crucial importance for software development processes following the model-based approach. We present the formalism of Interval Probabilistic Timed Graph Transformation Systems (IPTGTSs) as a high-level modeling language. This language supports structure dynamics (based on graph transformation), timed behavior (based on clocks, guards, resets, and invariants as in Timed Automata (TA)), and interval probabilistic behavior (based on discrete interval probability distributions). That is, for the probabilistic behavior, the modeler using IPTGTSs does not need to provide precise probabilities, which are often impossible to obtain, but rather provides a probability range from which a precise probability is chosen nondeterministically. In fact, this way of capturing probabilistic behavior distinguishes IPTGTSs from the Probabilistic Timed Graph Transformation Systems (PTGTSs) presented earlier.
Following earlier work on Interval Probabilistic Timed Automata (IPTA) and PTGTSs, we also provide an analysis tool chain for IPTGTSs based on inter-formalism transformations. In particular, we provide in our tool AutoGraph a translation of IPTGTSs to IPTA and rely on a mapping of IPTA to Probabilistic Timed Automata (PTA) to allow for the usage of the Prism model checker. The Prism tool can then be used to analyze the resulting PTA with respect to probabilistic real-time queries asking for worst-case and best-case probabilities of reaching a certain set of target states in a given amount of time.
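The interval-probabilistic idea can be illustrated with a small feasibility check: a set of probability intervals only makes sense if at least one precise distribution can be chosen from it. The predicate below is our own illustrative sketch, not part of the IPTGTS formalism or the AutoGraph tool:

```python
def is_realizable(intervals):
    """Check whether a discrete interval probability distribution admits
    at least one precise distribution: every interval must lie in [0, 1],
    the lower bounds must not exceed 1 in total, and the upper bounds
    must reach at least 1 in total. (Illustrative sketch only.)"""
    ok_bounds = all(0.0 <= lo <= hi <= 1.0 for lo, hi in intervals)
    lows = sum(lo for lo, hi in intervals)
    highs = sum(hi for lo, hi in intervals)
    return ok_bounds and lows <= 1.0 <= highs

# Two outcomes, each with probability somewhere in [0.3, 0.8]: realizable
print(is_realizable([(0.3, 0.8), (0.3, 0.8)]))  # True
# Lower bounds already sum above 1: no precise distribution fits
print(is_realizable([(0.6, 0.9), (0.6, 0.9)]))  # False
```

In the formalism described above, a precise probability inside each interval is then chosen nondeterministically, which is what the analysis ranges over when computing worst-case and best-case probabilities.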
Proceedings of the HPI Research School on Service-oriented Systems Engineering 2020 Fall Retreat
(2021)
The design and implementation of service-oriented architectures raise a huge number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the Research School, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.