Refine
Year of publication
Document Type
- Article (17)
- Monograph/Edited Volume (16)
- Other (10)
- Conference Proceeding (2)
- Part of a Book (1)
Is part of the Bibliography
- yes (46)
Keywords
- Cloud Computing (4)
- E-Learning (3)
- Identitätsmanagement (3)
- MOOCs (3)
- Online-Lernen (3)
- Onlinekurs (3)
- Tele-Lab (3)
- Tele-Teaching (3)
- identity management (3)
- openHPI (3)
- tele-TASK (3)
- Online Course (2)
- Online-Learning (2)
- Privacy (2)
- Security (2)
- Studie (2)
- cloud (2)
- cloud computing (2)
- digital identity (2)
- e-learning (2)
- self-sovereign identity (2)
- ACINQ (1)
- ASIC (1)
- Angriffe (1)
- Anomaly detection (1)
- Anwendungsvirtualisierung (1)
- Attention span (1)
- Attribute aggregation (1)
- Australian securities exchange (1)
- Authentication (1)
- Authentifizierung (1)
- BCCC (1)
- BTC (1)
- Basic Storage Anbieter (1)
- Biometrie (1)
- BitShares (1)
- Bitcoin Core (1)
- Blockchain (1)
- Blockchain Auth (1)
- Blockchain-Konsortium R3 (1)
- Blockchains (1)
- Blockkette (1)
- Blockstack (1)
- Blockstack ID (1)
- Blumix-Plattform (1)
- Blöcke (1)
- Byzantine Agreement (1)
- Change Management (1)
- Cloud (1)
- Cloud Native Applications (1)
- Cloud Storage Broker (1)
- Cloud access control and resource management (1)
- Cloud-Security (1)
- Colored Coins (1)
- Correlation (1)
- Crowd-Resourcing (1)
- DAO (1)
- DPoS (1)
- Data models (1)
- Datenschutz (1)
- Deep learning (1)
- Delegated Proof-of-Stake (1)
- Design Thinking (1)
- Distributed Proof-of-Research (1)
- E-Wallet (1)
- ECDSA (1)
- Electronic prescription (1)
- Energy-aware (1)
- Eris (1)
- Ether (1)
- Ethereum (1)
- Event normalization (1)
- Event processing (1)
- FIDO (1)
- Federated Byzantine Agreement (1)
- FollowMyVote (1)
- Fork (1)
- Forschungskolleg (1)
- Forschungsprojekte (1)
- Future SOC Lab (1)
- Gamification (1)
- Gridcoin (1)
- HITS (1)
- HMM (1)
- HPI Forschung (1)
- HPI research (1)
- Hard Fork (1)
- Hashed Timelock Contracts (1)
- Hasso Plattner Institute (1)
- Hasso-Plattner-Institut (1)
- IDS (1)
- IDS management (1)
- IT-Infrastruktur (1)
- IT-infrastructure (1)
- Identity Management (1)
- Identity management systems (1)
- Identität (1)
- In-Memory Technologie (1)
- In-memory (1)
- Innovation (1)
- Innovationsmanagement (1)
- Innovationsmethode (1)
- Integrity Verification (1)
- Internet der Dinge (1)
- Internet of Things (1)
- Interviews (1)
- Intrusion detection (1)
- Inventory systems (1)
- IoT (1)
- Japanese Blockchain Consortium (1)
- Japanisches Blockchain-Konsortium (1)
- Java (1)
- Kette (1)
- Klausurtagung (1)
- Konsensalgorithmus (1)
- Konsensprotokoll (1)
- LSTM (1)
- Leadership (1)
- Learning behavior (1)
- Least privilege principle (1)
- Lecture video recording (1)
- Licenses (1)
- Lightning Network (1)
- Lock-Time-Parameter (1)
- MOOC (1)
- Machine learning (1)
- Management (1)
- Marktübersicht (1)
- Massive Open Online Courses (1)
- Mehr-Faktor-Authentifizierung (1)
- Micropayment-Kanäle (1)
- Microsoft Azure (1)
- Model-driven SOA Security (1)
- Modell-getriebene SOA-Sicherheit (1)
- Multicore Architekturen (1)
- NASDAQ (1)
- NameID (1)
- Namecoin (1)
- Network graph (1)
- Network monitoring (1)
- Network topology (1)
- OAuth (1)
- Off-Chain-Transaktionen (1)
- Onename (1)
- OpenBazaar (1)
- OpenID Connect (1)
- Oracles (1)
- Organisationsveränderung (1)
- Orphan Block (1)
- Outlier detection (1)
- P2P (1)
- Peer-to-Peer Netz (1)
- Peercoin (1)
- Ph.D. Retreat (1)
- PoB (1)
- PoS (1)
- PoW (1)
- Prediction (1)
- Privilege separation concept (1)
- Programming (1)
- Proof-of-Burn (1)
- Proof-of-Stake (1)
- Proof-of-Work (1)
- Protocols (1)
- Research School (1)
- Resource description framework (1)
- Resource management (1)
- Ripple (1)
- Robust optimization (1)
- Role-based access control (1)
- SAP HANA (1)
- SCP (1)
- SHA (1)
- SOA Security (1)
- SOA Sicherheit (1)
- SPV (1)
- Schule (1)
- Schwierigkeitsgrad (1)
- Secure Digital Identities (1)
- Secure Enterprise SOA (1)
- Security-as-a-Service (1)
- Service detection (1)
- Service-oriented Systems Engineering (1)
- Sichere Digitale Identitäten (1)
- Simplified Payment Verification (1)
- Single-Sign-On (1)
- Skalierbarkeit der Blockchain (1)
- Slock.it (1)
- Soft Fork (1)
- Software (1)
- Steemit (1)
- Stellar Consensus Protocol (1)
- Storj (1)
- The Bitfury Group (1)
- The DAO (1)
- Transaktion (1)
- Trust Management (1)
- Two-Way-Peg (1)
- Unified cloud model (1)
- Unspent Transaction Output (1)
- Verträge (1)
- Virtual Desktop Infrastructure (1)
- Virtual Machine (1)
- Virtualisierung (1)
- Virtualization (1)
- Vulnerability Assessment (1)
- Watson IoT (1)
- Zielvorgabe (1)
- Zookos Dreieck (1)
- Zookos triangle (1)
- abdominal imaging (1)
- adoption (1)
- altchain (1)
- alternative chain (1)
- application virtualization (1)
- argumentation research (1)
- atomic swap (1)
- attack graph (1)
- attribute assurance (1)
- authentication (1)
- basic cloud storage services (1)
- bidirectional payment channels (1)
- biometrics (1)
- bitcoins (1)
- blockchain (1)
- blockchain consortium (1)
- blockchain-übergreifend (1)
- blocks (1)
- blumix platform (1)
- chain (1)
- change management (1)
- cloud security (1)
- cognition (1)
- collaboration (1)
- collaborative tagging (1)
- confirmation period (1)
- consensus algorithm (1)
- consensus protocol (1)
- contest period (1)
- contracts (1)
- cross-chain (1)
- cyber humanistic (1)
- decentralized autonomous organization (1)
- design thinking (1)
- dezentrale autonome Organisation (1)
- difficulty (1)
- difficulty target (1)
- diffusion (1)
- digital education (1)
- digitale Bildung (1)
- distributed ledger technology (1)
- doppelter Hashwert (1)
- double hashing (1)
- e-Learning (1)
- e-lecture (1)
- expertise (1)
- federated voting (1)
- folksonomy (1)
- generative multi-discriminative networks (1)
- hashrate (1)
- identity (1)
- identity broker (1)
- image captioning (1)
- imbalanced learning (1)
- innovation (1)
- innovation capabilities (1)
- innovation management (1)
- intelligente Verträge (1)
- inter-chain (1)
- knowledge building (1)
- knowledge management (1)
- künstliche Intelligenz (1)
- leadership (1)
- ledger assets (1)
- management (1)
- market study (1)
- maschinelles Lernen (1)
- medical identity theft (1)
- memory-based clustering (1)
- memory-based correlation (1)
- memory-based databases (1)
- merged mining (1)
- merkle root (1)
- micropayment (1)
- micropayment channels (1)
- miner (1)
- mining (1)
- mining hardware (1)
- minting (1)
- multi factor authentication (1)
- multi-core (1)
- multimodal representations (1)
- multi-task learning (1)
- nonce (1)
- off-chain transaction (1)
- one-time password (1)
- online course (1)
- online-learning (1)
- organizational change (1)
- peer-to-peer network (1)
- pegged sidechains (1)
- phishing (1)
- public cloud storage services (1)
- quorum slices (1)
- ranking (1)
- resilient architectures (1)
- rootstock (1)
- scalability of blockchain (1)
- scarce tokens (1)
- school (1)
- security chaos engineering (1)
- security risk assessment (1)
- segmentation (1)
- semantic (1)
- sidechain (1)
- smart contracts (1)
- smartphone (1)
- spamming (1)
- steganography (1)
- study (1)
- teamwork (1)
- tele-lab (1)
- tele-teaching (1)
- transaction (1)
- trust (1)
- trust model (1)
- virtual desktop infrastructure (1)
- virtual groups (1)
- virtualization (1)
- wearables (1)
- öffentliche Cloud Speicherdienste (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering gGmbH (46)
In this article, we discuss the notions of experts and expertise in resource discovery in the context of collaborative tagging systems. We propose that the level of expertise of a user with respect to a particular topic is mainly determined by two factors. First, an expert should possess a high-quality collection of resources, while the quality of a Web resource in turn depends on the expertise of the users who have assigned tags to it, forming a mutual reinforcement relationship. Second, an expert should be one who tends to identify interesting or useful resources before other users discover them, thus bringing these resources to the attention of the community of users. We propose a graph-based algorithm, SPEAR (spamming-resistant expertise analysis and ranking), which implements the above ideas for ranking users in a folksonomy. Our experiments show that our assumptions on expertise in resource discovery, and SPEAR as an implementation of these ideas, allow us to promote experts and demote spammers at the same time, with performance significantly better than the original hypertext-induced topic search algorithm and simple statistical measures currently used in most collaborative tagging systems.
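The mutual-reinforcement and discoverer-bonus ideas behind SPEAR can be sketched as a HITS-style iteration. The credit function used here (1 / rank of the tagging time among a resource's taggers) and all names are illustrative assumptions, not the paper's exact formulation:

```python
# Minimal sketch of SPEAR's mutual reinforcement between user expertise and
# resource quality. tags[(user, resource)] = timestamp of the tagging action.

def spear(tags, iterations=50):
    users = {u for u, _ in tags}
    resources = {r for _, r in tags}
    # Discoverer bonus: earlier taggers of a resource earn more credit.
    by_resource = {}
    for (u, r), t in tags.items():
        by_resource.setdefault(r, []).append((t, u))
    credit = {}
    for r, events in by_resource.items():
        for rank, (t, u) in enumerate(sorted(events), start=1):
            credit[(u, r)] = 1.0 / rank
    expertise = {u: 1.0 for u in users}
    quality = {r: 1.0 for r in resources}
    for _ in range(iterations):
        # Expertise <- credited quality of the resources a user tagged.
        new_e = {u: sum(c * quality[r] for (uu, r), c in credit.items() if uu == u)
                 for u in users}
        # Quality <- credited expertise of the users who tagged a resource.
        new_q = {r: sum(c * new_e[u] for (u, rr), c in credit.items() if rr == r)
                 for r in resources}
        # L1-normalize so scores stay bounded across iterations.
        se, sq = sum(new_e.values()) or 1.0, sum(new_q.values()) or 1.0
        expertise = {u: v / se for u, v in new_e.items()}
        quality = {r: v / sq for r, v in new_q.items()}
    return expertise, quality
```

Because spammers tag mostly their own late, low-quality resources, their credited quality stays low, while early taggers of resources later endorsed by experts rise in the ranking.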
Evaluating creativity of verbal responses or texts is a challenging task due to psychometric issues associated with subjective ratings and the peculiarities of textual data. We explore an approach to objectively assess the creativity of responses in a sentence generation task to 1) better understand what language-related aspects are valued by human raters and 2) further advance the developments toward automating creativity evaluations. Over the course of two prior studies, participants generated 989 four-word sentences based on a four-letter prompt with the instruction to be creative. We developed an algorithm that scores each sentence on eight different metrics including 1) general word infrequency, 2) word combination infrequency, 3) context-specific word uniqueness, 4) syntax uniqueness, 5) rhyme, 6) phonetic similarity, and similarity of 7) sequence spelling and 8) semantic meaning to the cue. The text metrics were then used to explain the averaged creativity ratings of eight human raters. We found six metrics to be significantly correlated with the human ratings, explaining a total of 16% of their variance. We conclude that the creative impression of sentences is partly driven by different aspects of novelty in word choice and syntax, as well as rhythm and sound, which are amenable to objective assessment.
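One of the eight metrics, general word infrequency, can be sketched against a corpus frequency table. The scoring formula below (mean negative log relative frequency) is an assumption for illustration, not the study's exact definition:

```python
import math

# Sketch of the "general word infrequency" metric: rarer word choices score
# higher. corpus_freq is an assumed word -> count table.

def word_infrequency(sentence, corpus_freq):
    total = sum(corpus_freq.values())
    scores = []
    for word in sentence.lower().split():
        count = corpus_freq.get(word, 1)  # unseen words treated as rarest
        scores.append(-math.log(count / total))
    return sum(scores) / len(scores)  # higher = more infrequent word choice
```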
Generating a novel and descriptive caption for an image is drawing increasing interest in the computer vision, natural language processing, and multimedia communities. In this work, we propose an end-to-end trainable deep bidirectional LSTM (Bi-LSTM, Long Short-Term Memory) model to address the problem. By combining a deep convolutional neural network (CNN) and two separate LSTM networks, our model is capable of learning long-term visual-language interactions by making use of history and future context information in a high-level semantic space. We also explore deep multimodal bidirectional models, in which we increase the depth of the nonlinearity transition in different ways to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale, and vertical mirroring are proposed to prevent overfitting when training deep models. To understand how our models "translate" an image into a sentence, we visualize and qualitatively analyze the evolution of the Bi-LSTM internal states over time. The effectiveness and generality of the proposed models are evaluated on four benchmark datasets: Flickr8K, Flickr30K, MSCOCO, and Pascal1K. We demonstrate that Bi-LSTM models achieve highly competitive performance on both caption generation and image-sentence retrieval, even without integrating an additional mechanism (e.g., object detection, attention model). Our experiments also show that multi-task learning is beneficial for increasing model generality and gaining performance. We further demonstrate that transfer learning with the Bi-LSTM model significantly outperforms previous methods on the Pascal1K dataset.
CloudStrike
(2020)
Most cyber-attacks and data breaches in cloud infrastructure are due to human error and misconfiguration vulnerabilities. Cloud customer-centric tools are imperative for mitigating these issues; however, existing cloud security models are largely unable to tackle these security challenges. Novel security mechanisms are therefore needed, and we propose Risk-driven Fault Injection (RDFI) techniques to address these challenges. RDFI applies the principles of chaos engineering to cloud security and leverages feedback loops to execute, monitor, analyze, and plan security fault injection campaigns based on a knowledge base. The knowledge base consists of fault models designed from secure baselines, cloud security best practices, and observations derived during iterative fault injection campaigns. These observations help identify vulnerabilities while verifying the correctness of security attributes (integrity, confidentiality, and availability). Furthermore, RDFI proactively supports risk analysis and security hardening efforts by sharing security information with security mechanisms. We have designed and implemented the RDFI strategies, including various chaos engineering algorithms, as a software tool: CloudStrike. Several evaluations have been conducted with CloudStrike against infrastructure deployed on two major public cloud platforms: Amazon Web Services and Google Cloud Platform. Time performance increases linearly, proportional to the attack rate. Furthermore, the analysis of vulnerabilities detected via security fault injection has been used to harden the security of cloud resources, demonstrating the effectiveness of the security information provided by CloudStrike. We therefore believe our approach is suitable for overcoming contemporary cloud security issues.
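The feedback loop described above (plan, inject, monitor, analyze, feed observations back into the knowledge base) can be sketched as follows. The fault models, function names, and prioritization rule are illustrative assumptions, not CloudStrike's actual API:

```python
import random

# Hypothetical sketch of an RDFI security fault injection campaign driven by
# a knowledge base of fault models.

FAULT_MODELS = [
    {"fault": "make_bucket_public", "violates": "confidentiality"},
    {"fault": "delete_backup_copy", "violates": "availability"},
    {"fault": "tamper_object_acl",  "violates": "integrity"},
]

def run_campaign(inject, detect, rounds=3, knowledge_base=None):
    kb = knowledge_base if knowledge_base is not None else list(FAULT_MODELS)
    findings = []
    for _ in range(rounds):
        fault = random.choice(kb)      # plan: pick a fault model from the KB
        inject(fault)                  # execute: inject the fault into the cloud
        detected = detect(fault)       # monitor: did security controls catch it?
        findings.append({**fault, "detected": detected})  # analyze
        if not detected:               # feed the observation back into the KB
            kb.append({**fault, "priority": "high"})
    return findings
```

In a real campaign, `inject` would call cloud provider APIs and `detect` would query monitoring and audit logs; here both are left as caller-supplied hooks.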
This paper discusses a new approach for designing and deploying Security-as-a-Service (SecaaS) applications using cloud native design patterns. Current SecaaS approaches do not efficiently handle the increasing threats to computer systems and applications. For example, requests for security assessments increase drastically after a high-risk security vulnerability is disclosed. In such scenarios, SecaaS applications are unable to scale dynamically to serve requests. A root cause of this challenge is the use of architectures not specifically fitted to cloud environments. Cloud native design patterns resolve this challenge by enabling properties such as massive scalability and resiliency via the combination of microservice patterns and cloud-focused design patterns. However, adopting these patterns is a complex process, during which several security issues are introduced. In this work, we investigate these security issues, and we redesign and deploy a monolithic SecaaS application using cloud native design patterns while applying appropriate, layered security countermeasures, i.e., at the application and cloud networking layers. Our prototype implementation outperforms traditional, monolithic applications with an average scanner time of 6 minutes, without compromising security. Our approach can be employed for designing secure, scalable, and performant SecaaS applications that effectively handle unexpected increases in security assessment requests.
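The dynamic scalability property that cloud native patterns enable can be illustrated with a simple scale-out rule for scanner replicas driven by queue depth. The thresholds and names below are assumptions for illustration, not the paper's implementation:

```python
# Sketch of an autoscaling decision for a SecaaS scanner microservice:
# enough replicas that no replica exceeds its per-replica capacity, bounded
# by configured minimum and maximum replica counts.

def desired_replicas(pending_scans, scans_per_replica=10,
                     min_replicas=1, max_replicas=50):
    # Ceiling division without importing math.
    needed = -(-pending_scans // scans_per_replica)
    return max(min_replicas, min(max_replicas, needed))
```

A real deployment would delegate this decision to the platform's autoscaler (e.g. a Kubernetes Horizontal Pod Autoscaler) rather than compute it by hand; the sketch only makes the scaling logic concrete.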
More than a dozen technologies already exist to combat the growing theft of digital identities. Each has specific drawbacks, most notably password-based authentication, but each also offers particular advantages. The authors of this study analyze how such communication standards and protocols can be combined effectively to achieve greater security. They argue for novel identity management systems that can flexibly adapt to the different roles of an individual user and are more convenient to use than existing methods. As a first step toward such an identity management platform, they describe the possibilities of an analysis based on the individual behavior of a user or a thing.
This analysis evaluates sensor data from mobile devices that users typically carry with them and use extensively, e.g. internet-enabled mobile phones, fitness trackers, and smartwatches. The researchers describe how such small computers can continuously compute a "trust level" solely from, for example, the analysis of movement patterns, position data, and network connection data. Using this computed trust level, each device can continuously state the probability that its current user is also its actual owner, whose typical behavior patterns it "knows" in detail.
If the current trust level (but not the underlying biometric data) is transmitted to an external party such as an identity provider, that provider can make the trust level available to all services that the user uses and wants to keep informed. Each service can decide for itself the trust level above which it considers a user authenticated. If the identity provider learns that the trust level has fallen below this limit, it can deny access to that service and to others.
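The continuous trust-level computation and the per-service threshold check can be sketched as follows. The features, the deviation measure, and the exponential mapping are assumptions for illustration, not the study's actual model:

```python
import math

# Sketch of a behavioral trust level: compare live sensor features with the
# owner's learned baseline and map the deviation to a probability-like score
# in [0, 1]. Only this score, never the raw sensor data, leaves the device.

def trust_level(baseline, live, sensitivity=1.0):
    # Mean absolute deviation across behavioral features (gait, location, ...).
    deviation = sum(abs(baseline[k] - live[k]) for k in baseline) / len(baseline)
    return math.exp(-sensitivity * deviation)  # 1.0 = behavior matches owner

def service_allows(trust, threshold=0.7):
    # Each service sets its own threshold above which a user counts as
    # authenticated; the identity provider relays only the current trust value.
    return trust >= threshold
```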
The particular advantage of this identity management approach is that it requires no specialized, expensive hardware to evaluate specific data, only smartphones and so-called wearables. Even things such as machines that report data about their own behavior to the internet via sensor chips can be included. The data are collected continuously in the background without requiring any attention from the user. They serve only to compute a probability score and never leave the device. When internet users sign in to a service, they do not have to recall a previously agreed secret such as a password; they merely approve the release of their current trust value with an "OK".
If usage behavior changes, for example through different movement patterns or logins to the internet from unusual locations, this is detected quickly. Unauthorized users can then immediately be denied access to the smartphone or to internet services. In the future, the evaluation of behavioral factors can be extended further, e.g. by capturing routines on weekdays, on weekends, or on vacation. Comparing these with live data then shows whether the behavior fits the usual pattern, i.e. whether the user is, with the highest probability, also the registered owner of the device.
This study provides a comprehensive overview of digital identity management techniques and the challenges associated with them. It first describes the types of attacks through which digital identities can be stolen. It then presents the different methods of proving identity. The study then gives a summary of the 15 most important protocols and technical standards for communication between the three actors involved: the service provider, the identity provider, and the user. Finally, current research on identity management at the Hasso Plattner Institute is presented.
Multiperiod robust optimization for proactive resource provisioning in virtualized data centers
(2014)
A Cloud Storage Broker (CSB) provides value-added cloud storage services for enterprise use by leveraging a multi-cloud storage architecture. However, this raises several challenges for managing resources and their access control across multiple Cloud Service Providers (CSPs) for authorized CSB stakeholders. In this paper we propose a unified cloud access control model that abstracts the CSPs' services for centralized and automated cloud resource and access control management across multiple CSPs. Our proposal offers role-based access control for CSB stakeholders to access cloud resources by assigning the necessary privileges and access control lists to cloud resources and CSB stakeholders, respectively, following the privilege separation concept and the least privilege principle. We implement our unified model in a CSB system called CloudRAID for Business (CfB); the evaluation shows that it provides system- and cloud-level security services for CfB and centralized resource and access control management across multiple CSPs.
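The combination of role-based privileges and per-resource access control lists described above can be sketched as a two-stage check. The role names and privilege sets are assumptions for illustration, not the CloudRAID for Business model:

```python
# Sketch of a unified access control check: a role grants a privilege
# (least privilege), and the resource's ACL must also list the user for
# that action (privilege separation between role and resource scope).

ROLE_PRIVILEGES = {
    "csb_admin":   {"create_bucket", "delete_bucket", "read", "write"},
    "tenant_user": {"read", "write"},
    "auditor":     {"read"},
}

def is_allowed(role, action, resource_acl, user):
    # Both checks must pass for access to be granted.
    role_ok = action in ROLE_PRIVILEGES.get(role, set())
    acl_ok = user in resource_acl.get(action, set())
    return role_ok and acl_ok
```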
Massive Open Online Courses (MOOCs) have left their mark on the face of education in recent years. At the Hasso Plattner Institute (HPI) in Potsdam, Germany, we are actively developing a MOOC platform, which provides our research with a plethora of e-learning topics, such as learning analytics, automated assessment, peer assessment, teamwork, online proctoring, and gamification. We run several instances of this platform. On openHPI, we provide our own courses from within the HPI context. Further instances are openSAP, openWHO, and mooc.HOUSE, the smallest of these platforms, targeting customers with a less extensive course portfolio. In 2013, we started to work on the gamification of our platform. By now, we have implemented about two thirds of the features that we initially evaluated as useful for our purposes. About a year ago, we activated the implemented gamification features on mooc.HOUSE. Before activating the features on openHPI as well, we examined and re-evaluated our initial considerations based on the data collected so far and the changes in other contexts of our platforms.
In cloud computing, users are able to use their own operating system (OS) image to run a virtual machine (VM) on a remote host. The VM's OS is started by the user via interfaces provided by a cloud provider in a public or private cloud. In a peer-to-peer cloud, the VM is started by the host admin. After the VM is running, the user can access it remotely to install, configure, and run services. For security reasons, the user needs to verify the integrity of the running VM, because a malicious host admin could modify the image, or even replace it with a similar one, in order to obtain sensitive data from the VM. We propose an approach to verify the integrity of a running VM on a remote host without using any specific hardware such as a Trusted Platform Module (TPM). Our approach is implemented on a Linux platform, where the kernel files (vmlinuz and initrd) can be replaced with new files while the VM is running. kexec is used to reboot the VM with the new kernel files. The new kernel contains secret codes that are used to verify whether the VM was started with the new kernel files. The new kernel is then used to further measure the integrity of the running VM.
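The verification idea, a secret baked into the freshly injected kernel proving that the VM actually rebooted into it, can be sketched as a challenge-response exchange. The protocol shape and function names below are assumptions for illustration; the paper's concrete mechanism may differ:

```python
import hashlib
import hmac

# Sketch of kernel-secret verification: only the newly injected kernel knows
# the secret, so a correct HMAC over a fresh challenge implies the VM was
# rebooted (via kexec) into the new kernel files.

def vm_respond(kernel_secret, challenge):
    # Runs inside the VM after kexec; the secret is embedded in the new kernel.
    return hmac.new(kernel_secret, challenge, hashlib.sha256).hexdigest()

def user_verify(kernel_secret, challenge, response):
    # Runs on the user's side; a fresh random challenge prevents replay.
    expected = hmac.new(kernel_secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```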