Evaluating creativity of verbal responses or texts is a challenging task due to psychometric issues associated with subjective ratings and the peculiarities of textual data. We explore an approach to objectively assess the creativity of responses in a sentence generation task to 1) better understand what language-related aspects are valued by human raters and 2) further advance the developments toward automating creativity evaluations. Over the course of two prior studies, participants generated 989 four-word sentences based on a four-letter prompt with the instruction to be creative. We developed an algorithm that scores each sentence on eight different metrics including 1) general word infrequency, 2) word combination infrequency, 3) context-specific word uniqueness, 4) syntax uniqueness, 5) rhyme, 6) phonetic similarity, and similarity of 7) sequence spelling and 8) semantic meaning to the cue. The text metrics were then used to explain the averaged creativity ratings of eight human raters. We found six metrics to be significantly correlated with the human ratings, explaining a total of 16% of their variance. We conclude that the creative impression of sentences is partly driven by different aspects of novelty in word choice and syntax, as well as rhythm and sound, which are amenable to objective assessment.
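The scoring idea can be illustrated with a small sketch of two of the eight metrics, general word infrequency and word-combination infrequency. This is a hypothetical implementation, not the authors' algorithm; the corpus frequencies and the unseen-pair heuristic are illustrative assumptions:

```python
from itertools import combinations
from math import log

# Hypothetical corpus frequencies per million words; the studies would
# compute these against a real language corpus.
CORPUS_FREQ = {"the": 50000.0, "cat": 25.0, "quietly": 8.0, "glows": 0.5}

def word_infrequency(sentence, corpus_freq, floor=0.01):
    """Metric 1 (sketch): mean negative log frequency of the words.
    Rare words raise the score; unseen words get a small floor value."""
    words = sentence.lower().split()
    return sum(-log(corpus_freq.get(w, floor) / 1e6) for w in words) / len(words)

def pair_infrequency(sentence, known_pairs):
    """Metric 2 (sketch): share of word pairs never observed together."""
    words = sentence.lower().split()
    pairs = list(combinations(words, 2))
    unseen = sum(1 for p in pairs if frozenset(p) not in known_pairs)
    return unseen / len(pairs)
```

A sentence built from rarer words scores higher on the first metric, matching the finding that novelty in word choice drives the creative impression.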
ATIB
(2021)
Identity management is a principal component of securing online services. As traditional identity management patterns evolved, the identity provider remained a Trusted Third Party (TTP): among other demands, the service provider and the user must trust a particular identity provider to supply correct attributes. This paradigm changed with the advent of blockchain-based Self-Sovereign Identity (SSI) solutions, which primarily focus on the users. SSI reduces the functional scope of the identity provider to that of an attribute provider while enabling attribute aggregation. However, the development of new protocols that disregard established ones, together with a significantly fragmented landscape of SSI solutions, poses considerable challenges for adoption by service providers. We propose an Attribute Trust-enhancing Identity Broker (ATIB) to leverage the potential of SSI for trust-enhancing attribute aggregation. Furthermore, ATIB abstracts from any dedicated SSI solution and offers standard protocols, thereby facilitating adoption by service providers. Despite the brokered integration approach, we show that ATIB provides a high security posture. Additionally, ATIB does not compromise the ten foundational SSI principles for the users.
CloudStrike
(2020)
Most cyber-attacks and data breaches in cloud infrastructure are due to human error and misconfiguration vulnerabilities. Cloud customer-centric tools are imperative for mitigating these issues; however, existing cloud security models are largely unable to tackle these security challenges. Novel security mechanisms are therefore needed, and we propose Risk-driven Fault Injection (RDFI) techniques to address them. RDFI applies the principles of chaos engineering to cloud security and leverages feedback loops to execute, monitor, analyze, and plan security fault injection campaigns based on a knowledge base. The knowledge base consists of fault models designed from secure baselines, cloud security best practices, and observations derived during iterative fault injection campaigns. These observations help identify vulnerabilities while verifying the correctness of security attributes (integrity, confidentiality, and availability). Furthermore, RDFI proactively supports risk analysis and security hardening efforts by sharing security information with security mechanisms. We have designed and implemented the RDFI strategies, including various chaos engineering algorithms, as a software tool: CloudStrike. Several evaluations have been conducted with CloudStrike against infrastructure deployed on two major public cloud platforms: Amazon Web Services and Google Cloud Platform. The time overhead increases linearly, proportional to increasing attack rates. Furthermore, the analysis of vulnerabilities detected via security fault injection has been used to harden the security of cloud resources, demonstrating the effectiveness of the security information provided by CloudStrike. We therefore believe that our approach is suitable for overcoming contemporary cloud security issues.
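The feedback loop described above (plan faults from a knowledge base, execute, monitor, analyze) can be sketched as follows. All names are illustrative assumptions, not CloudStrike's actual API:

```python
# Minimal sketch of an RDFI-style feedback loop. The knowledge base maps
# fault models to the security attribute they are expected to violate.
KNOWLEDGE_BASE = [
    {"fault": "make_bucket_public", "attribute": "confidentiality"},
    {"fault": "delete_object", "attribute": "availability"},
]

def inject(fault, system_state):
    """Hypothetical fault injector: flips the targeted attribute so the
    violated expectation surfaces as a detected vulnerability."""
    system_state[fault["attribute"]] = False
    return system_state

def run_campaign(system_state):
    findings = []
    for fault in KNOWLEDGE_BASE:                   # plan
        state = inject(fault, dict(system_state))  # execute on a copy
        if not state[fault["attribute"]]:          # monitor/analyze
            findings.append(fault)                 # feed back for hardening
    return findings
```

In a real campaign the injector would call cloud provider APIs and the findings would update the knowledge base for the next iteration.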
Generative multi-adversarial network for striking the right balance in abdominal image segmentation
(2020)
Purpose: The identification of abnormalities that are relatively rare within otherwise normal anatomy is a major challenge for deep learning in the semantic segmentation of medical images. The small number of samples of the minority classes in the training data makes the learning of optimal classification challenging, while the more frequently occurring samples of the majority class hamper the generalization of the classification boundary between infrequently occurring target objects and classes. In this paper, we developed a novel generative multi-adversarial network, called Ensemble-GAN, for mitigating this class imbalance problem in the semantic segmentation of abdominal images. Method: The Ensemble-GAN framework is composed of a single-generator and a multi-discriminator variant for handling the class imbalance problem to provide a better generalization than existing approaches. The ensemble model aggregates the estimates of multiple models by training from different initializations and losses from various subsets of the training data. The single generator network analyzes the input image as a condition to predict a corresponding semantic segmentation image by use of feedback from the ensemble of discriminator networks. To evaluate the framework, we trained it on two public datasets with different imbalance ratios and imaging modalities: CHAOS 2019 and LiTS 2017. Result: In terms of the F1 score, the accuracies of the semantic segmentation of healthy spleen, liver, and left and right kidneys were 0.93, 0.96, 0.90 and 0.94, respectively. The overall F1 scores for simultaneous segmentation of the lesions and liver were 0.83 and 0.94, respectively. Conclusion: The proposed Ensemble-GAN framework demonstrated outstanding performance in the semantic segmentation of medical images in comparison with other approaches on popular abdominal imaging benchmarks. The Ensemble-GAN has the potential to segment abdominal images more accurately than human experts.
In cloud computing, users can run a virtual machine (VM) from their own operating system (OS) image on a remote host. In a public or private cloud, the user starts the VM OS through interfaces provided by the cloud provider; in a peer-to-peer cloud, the host admin starts the VM. Once the VM is running, the user gets remote access to it to install, configure, and run services. For security reasons, the user needs to verify the integrity of the running VM, because a malicious host admin could modify the image, or even replace it with a similar image, in order to obtain sensitive data from the VM. We propose an approach to verify the integrity of a running VM on a remote host without using any specific hardware such as a Trusted Platform Module (TPM). Our approach is implemented on a Linux platform, where the kernel files (vmlinuz and initrd) can be replaced with new files while the VM is running. kexec is used to reboot the VM with the new kernel files. The new kernel contains secret codes that are used to verify whether the VM was started using the new kernel files. The new kernel is then used to further measure the integrity of the running VM.
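The measurement idea, comparing the kernel files a VM was booted with against digests the user knows, can be sketched in a few lines. This is a simplified illustration only; the paper's mechanism additionally relies on secret codes embedded in the new kernel and on the kexec reboot, which the sketch omits:

```python
import hashlib

def measure_kernel(paths):
    """Hash the given kernel files (e.g. vmlinuz and initrd) into a
    single SHA-256 digest, reading in chunks to bound memory use."""
    digest = hashlib.sha256()
    for path in paths:
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
    return digest.hexdigest()

def verify(paths, expected_hexdigest):
    """Compare the measured digest against the value the user
    computed locally before uploading the kernel files."""
    return measure_kernel(paths) == expected_hexdigest
```

A mismatch would indicate that the host admin booted the VM with different kernel files than the user supplied.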
High-dimensional data is particularly useful for data analytics research. In the healthcare domain, for instance, high-dimensional data analytics has been used successfully for drug discovery. Yet, in order to adhere to privacy legislation, data analytics service providers must guarantee anonymity for data owners. In the context of high-dimensional data, ensuring privacy is challenging because increased data dimensionality must be matched by an exponential growth in the size of the data to avoid sparse datasets. Syntactically anonymising sparse datasets with methods that rely on statistical significance makes obtaining sound and reliable results a challenge. As such, strong privacy is only achievable at the cost of high information loss, rendering the data unusable for data analytics. In this paper, we make two contributions to addressing this problem, from both the privacy and the information loss perspectives. First, we show that by identifying dependencies between attribute subsets we can eliminate privacy-violating attributes from the anonymised dataset. Second, to minimise information loss, we employ a greedy search algorithm to determine and eliminate maximal partial unique attribute combinations. Thus, one only needs to find the minimal set of identifying attributes to prevent re-identification. Experiments on a health cloud based on the SAP HANA platform, using a semi-synthetic medical history dataset comprising 109 attributes, demonstrate the effectiveness of our approach.
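The notion of unique attribute combinations and their greedy elimination can be sketched as follows. This is an illustrative heuristic, not the authors' algorithm; the suppression criterion (drop the attribute occurring in the most identifying subsets) is an assumption:

```python
from itertools import combinations

def unique_combinations(rows, attrs, max_size=2):
    """Find attribute subsets whose value combinations identify a
    single record, i.e. enable re-identification."""
    unique = []
    for size in range(1, max_size + 1):
        for subset in combinations(attrs, size):
            counts = {}
            for row in rows:
                key = tuple(row[a] for a in subset)
                counts[key] = counts.get(key, 0) + 1
            if any(c == 1 for c in counts.values()):
                unique.append(subset)
    return unique

def greedy_suppress(rows, attrs, max_size=2):
    """Greedily drop the attribute involved in the most identifying
    subsets until no unique combination remains."""
    attrs = list(attrs)
    while True:
        uniques = unique_combinations(rows, attrs, max_size)
        if not uniques:
            return attrs
        worst = max(attrs, key=lambda a: sum(a in u for u in uniques))
        attrs.remove(worst)
```

On the paper's scale (109 attributes), a practical implementation would need pruning rather than this exhaustive subset enumeration.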
Devices on the Internet of Things (IoT) are usually battery-powered and have limited resources. Hence, energy-efficient and lightweight protocols were designed for IoT devices, such as the popular Constrained Application Protocol (CoAP). Yet, CoAP itself includes no defenses against denial-of-sleep attacks, which aim to deprive victim devices of entering low-power sleep modes. For example, a denial-of-sleep attack against an IoT device that runs a CoAP server is to send it plenty of CoAP messages, thereby forcing the device to expend energy on receiving and processing them. All current security solutions for CoAP, namely Datagram Transport Layer Security (DTLS), IPsec, and OSCORE, fail to prevent such attacks. To fill this gap, Seitz et al. proposed a method for filtering out inauthentic and replayed CoAP messages "en route" on 6LoWPAN border routers. In this paper, we expand on Seitz et al.'s proposal in two ways. First, we revise Seitz et al.'s software architecture so that 6LoWPAN border routers can not only check the authenticity and freshness of CoAP messages, but can also perform a wide range of further checks. Second, we propose a couple of such further checks, which, compared to Seitz et al.'s original checks, more reliably protect IoT devices that run CoAP servers from remote denial-of-sleep attacks, as well as from remote exploits. We prototyped our solution and successfully tested its compatibility with Contiki-NG's CoAP implementation.
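An en-route freshness check of the kind described can be pictured as a sliding replay window kept on the border router. This sketch is an assumption for illustration, not Seitz et al.'s or the authors' actual check:

```python
class ReplayWindow:
    """Track the highest sequence number seen plus a bitmask of recent
    ones, so replayed or stale messages are dropped on the border
    router before they wake the battery-powered CoAP server."""

    def __init__(self, size=32):
        self.size = size
        self.highest = -1
        self.mask = 0  # bit i set => sequence (highest - i) already seen

    def accept(self, seq):
        if seq > self.highest:
            shift = seq - self.highest
            self.mask = ((self.mask << shift) | 1) & ((1 << self.size) - 1)
            self.highest = seq
            return True
        offset = self.highest - seq
        if offset >= self.size or (self.mask >> offset) & 1:
            return False  # too old or a replay: filter out en route
        self.mask |= 1 << offset
        return True
```

Combined with an authenticity check, such a window lets the router reject duplicates without any work on the constrained device itself.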
LoANs
(2019)
Recently, deep neural networks have achieved remarkable performance on the task of object detection and recognition. This success is mainly grounded in the availability of large-scale, fully annotated datasets, but creating such a dataset is a complicated and costly task. In this paper, we propose a novel method for weakly supervised object detection that simplifies the process of gathering data for training an object detector. We train an ensemble of two models that work together in a student-teacher fashion. Our student (localizer) is a model that learns to localize an object; the teacher (assessor) assesses the quality of the localization and provides feedback to the student. The student uses this feedback to learn how to localize objects and is thus entirely supervised by the teacher, as we use no labels for training the localizer. In our experiments, we show that our model is very robust to noise and reaches competitive performance compared to a state-of-the-art fully supervised approach. We also show the simplicity of creating a new dataset based on a few videos (e.g., downloaded from YouTube) and artificially generated data.
A Cloud Storage Broker (CSB) provides value-added cloud storage services for enterprise usage by leveraging a multi-cloud storage architecture. However, this raises several challenges for managing resources and their access control across multiple Cloud Service Providers (CSPs) for authorized CSB stakeholders. In this paper we propose a unified cloud access control model that abstracts CSPs' services for centralized and automated cloud resource and access control management across multiple CSPs. Our proposal offers role-based access control for CSB stakeholders to access cloud resources, assigning the necessary privileges and access control lists to cloud resources and CSB stakeholders, respectively, following the privilege separation concept and the least privilege principle. We implement our unified model in a CSB system called CloudRAID for Business (CfB); the evaluation shows that it provides system- and cloud-level security services for CfB and centralized resource and access control management in multiple CSPs.
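The combination of role-based access control, per-resource ACLs, and the least privilege principle can be sketched as follows; role names and privileges are illustrative, not CfB's actual model:

```python
# Each role carries only the privileges its stakeholders need
# (least privilege principle).
ROLE_PRIVILEGES = {
    "csb_admin":   {"create_bucket", "delete_bucket", "grant_access"},
    "tenant_user": {"read_object", "write_object"},
    "auditor":     {"read_object", "read_logs"},
}

def is_allowed(role, action, acl, resource, user):
    """Grant an action only if the role holds the privilege AND the
    resource's ACL lists the user: privilege separation means both
    the role model and the per-resource ACL must agree."""
    return action in ROLE_PRIVILEGES.get(role, set()) and \
        user in acl.get(resource, set())
```

A broker enforcing such a check centrally can then translate the abstract decision into the native IAM calls of each underlying CSP.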
Many universities record the lectures held in their facilities to preserve knowledge and to make it available to their students and, at least for some universities and classes, to the broad public. The least labor-intensive approach is to record the whole lecture, which in our case is usually 90 minutes long. This saves the labor and time of cutting and rearranging lecture scenes to provide short learning videos as known from Massive Open Online Courses (MOOCs), etc. Many lecturers fear that recording their lectures and providing them via an online platform might lead to lower attendance at the actual lecture. Many teachers also fear that the lecture recordings are not watched with the same focus and dedication as lectures in a lecture hall. In this work, we show that, in our experience, full lectures have an average watching duration of just a few minutes; we explain the reasons for this and why, in most cases, teachers do not have to worry about it.
Generating a novel and descriptive caption of an image is drawing increasing interest in the computer vision, natural language processing, and multimedia communities. In this work, we propose an end-to-end trainable deep bidirectional LSTM (Bi-LSTM; Long Short-Term Memory) model to address the problem. By combining a deep convolutional neural network (CNN) and two separate LSTM networks, our model is capable of learning long-term visual-language interactions by making use of history and future context information in a high-level semantic space. We also explore deep multimodal bidirectional models, in which we increase the depth of the nonlinearity transition in different ways to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale, and vertical mirroring are proposed to prevent over-fitting when training deep models. To understand how our models "translate" an image into a sentence, we visualize and qualitatively analyze the evolution of the Bi-LSTM internal states over time. The effectiveness and generality of the proposed models are evaluated on four benchmark datasets: Flickr8K, Flickr30K, MSCOCO, and Pascal1K. We demonstrate that Bi-LSTM models achieve highly competitive performance on both caption generation and image-sentence retrieval, even without integrating an additional mechanism (e.g., object detection, attention model). Our experiments also show that multi-task learning is beneficial for increasing model generality and gaining performance. We further demonstrate that transfer learning of the Bi-LSTM model significantly outperforms previous methods on the Pascal1K dataset.
Blockchain
(2018)
The term blockchain has recently become a buzzword, but few know what exactly lies behind it. According to a survey published in the first quarter of 2017, only 35 percent of German SMEs are familiar with the term. Yet blockchain technology, with its rapid development and its global conquest of different markets, is highly interesting for the mass media.
Many therefore see blockchain technology either as an all-purpose weapon to which only a few have access, or as a hacker technology for secret deals on the darknet. In fact, the innovation of blockchain technology lies in its successful combination of pre-existing approaches: decentralized networks, cryptography, and consensus models. This innovative concept makes it possible to exchange value in a decentralized system without requiring trust between its nodes (e.g., users).
With this study, the Hasso Plattner Institute aims to help readers form their own position on blockchain technology and to distinguish which of its properties are truly innovative and which are nothing more than hype.
The authors analyze the positive and negative properties that shape the blockchain architecture and present possible adaptations and solutions that can contribute to an efficient use of the technology. Before deciding on this technology, every company is advised to first define a clear goal for the intended application, one that can be pursued with a reasonable cost-benefit ratio. Both the possibilities and the limits of blockchain technology must be taken into account. The study concisely summarizes the relevant steps to consider in this context.
Pressing issues such as the scalability of the blockchain, the choice of a suitable consensus algorithm, and security are also addressed, including different types of possible attacks and the corresponding countermeasures. New blockchains, for example, run the risk of offering lower security, since changes to the already existing technology can lead to protection gaps and defects.
After discussing the innovative properties and the problems of blockchain technology, the study turns to its implementation. Interested companies have many implementation options available. The numerous applications are either based on a blockchain of their own or use existing, widespread blockchain systems. Numerous consortia and projects offer "Blockchain-as-a-Service" and support other companies in developing, testing, and deploying applications.
The study gives a detailed overview of numerous relevant application areas and projects in the field of blockchain technology. Because the technology is still relatively young and developing rapidly, it lacks uniform standards that would allow the different systems to interoperate and that all developers could adhere to. Currently, developers orient themselves towards the Bitcoin, Ethereum, and Hyperledger systems, which serve as the basis for many other blockchain applications.
The aim is to give readers a clear and comprehensive overview of blockchain technology and its possibilities.
Digital change is permeating our education system, yet schools are barely prepared for it: overwhelmed teachers, poorly equipped classrooms, and inadequately maintained computer networks are not uncommon. Outdated hardware and software hinder digital education in schools rather than enable it. A future-proof approach is to largely remove the computers from the schools and move educational content into a cloud.
Modern teaching requires modern technology and a future-oriented infrastructure. A Schul-Cloud (https://hpi.de/schul-cloud) can help schools master the digital transformation and enrich interdisciplinary teaching with digital content. It can open up many possibilities for students and teachers: easy access to the latest, professionally maintained applications, the networking of different learning locations, and easier lesson preparation and differentiation. The Schul-Cloud offers flexibility, promotes applicability across schools and subjects, and creates an important precondition for participating in and shaping the digital world. In addition to the technical components, this report describes selected Schul-Cloud services as examples and outlines further steps.
The Schul-Cloud concept, developed at the Hasso Plattner Institute (HPI) in collaboration with numerous experts and funded by the German Federal Ministry of Education and Research (BMBF), provides an important basis for introducing cloud-based structures and services in education. The pilot phase is now starting, together with the national excellence school network MINT-EC as a cooperation partner. Thanks to the modular, scalable approach of the Schul-Cloud, the infrastructural prototype has the long-term potential to be deployed efficiently nationwide, beyond the limited number of pilot schools.
Um den zunehmenden Diebstahl digitaler Identitäten zu bekämpfen, gibt es bereits mehr als ein Dutzend Technologien. Sie sind, vor allem bei der Authentifizierung per Passwort, mit spezifischen Nachteilen behaftet, haben andererseits aber auch jeweils besondere Vorteile. Wie solche Kommunikationsstandards und -Protokolle wirkungsvoll miteinander kombiniert werden können, um dadurch mehr Sicherheit zu erreichen, haben die Autoren dieser Studie analysiert. Sie sprechen sich für neuartige Identitätsmanagement-Systeme aus, die sich flexibel auf verschiedene Rollen eines einzelnen Nutzers einstellen können und bequemer zu nutzen sind als bisherige Verfahren. Als ersten Schritt auf dem Weg hin zu einer solchen Identitätsmanagement-Plattform beschreiben sie die Möglichkeiten einer Analyse, die sich auf das individuelle Verhalten eines Nutzers oder einer Sache stützt.
Ausgewertet werden dabei Sensordaten mobiler Geräte, welche die Nutzer häufig bei sich tragen und umfassend einsetzen, also z.B. internetfähige Mobiltelefone, Fitness-Tracker und Smart Watches. Die Wissenschaftler beschreiben, wie solche Kleincomputer allein z.B. anhand der Analyse von Bewegungsmustern, Positionsund Netzverbindungsdaten kontinuierlich ein „Vertrauens-Niveau“ errechnen können. Mit diesem ermittelten „Trust Level“ kann jedes Gerät ständig die Wahrscheinlichkeit angeben, mit der sein aktueller Benutzer auch der tatsächliche Besitzer ist, dessen typische Verhaltensmuster es genauestens „kennt“.
Wenn der aktuelle Wert des Vertrauens-Niveaus (nicht aber die biometrischen Einzeldaten) an eine externe Instanz wie einen Identitätsprovider übermittelt wird, kann dieser das Trust Level allen Diensten bereitstellen, welche der Anwender nutzt und darüber informieren will. Jeder Dienst ist in der Lage, selbst festzulegen, von welchem Vertrauens-Niveau an er einen Nutzer als authentifiziert ansieht. Erfährt er von einem unter das Limit gesunkenen Trust Level, kann der Identitätsprovider seine Nutzung und die anderer Services verweigern.
Die besonderen Vorteile dieses Identitätsmanagement-Ansatzes liegen darin, dass er keine spezifische und teure Hardware benötigt, um spezifische Daten auszuwerten, sondern lediglich Smartphones und so genannte Wearables. Selbst Dinge wie Maschinen, die Daten über ihr eigenes Verhalten per Sensor-Chip ins Internet funken, können einbezogen werden. Die Daten werden kontinuierlich im Hintergrund erhoben, ohne dass sich jemand darum kümmern muss. Sie sind nur für die Berechnung eines Wahrscheinlichkeits-Messwerts von Belang und verlassen niemals das Gerät. Meldet sich ein Internetnutzer bei einem Dienst an, muss er sich nicht zunächst an ein vorher festgelegtes Geheimnis – z.B. ein Passwort – erinnern, sondern braucht nur die Weitergabe seines aktuellen Vertrauens-Wertes mit einem „OK“ freizugeben.
Ändert sich das Nutzungsverhalten – etwa durch andere Bewegungen oder andere Orte des Einloggens ins Internet als die üblichen – wird dies schnell erkannt. Unbefugten kann dann sofort der Zugang zum Smartphone oder zu Internetdiensten gesperrt werden. Künftig kann die Auswertung von Verhaltens-Faktoren noch erweitert werden, indem z.B. Routinen an Werktagen, an Wochenenden oder im Urlaub erfasst werden. Der Vergleich mit den live erhobenen Daten zeigt dann an, ob das Verhalten in das übliche Muster passt, der Benutzer also mit höchster Wahrscheinlichkeit auch der ausgewiesene Besitzer des Geräts ist.
This study provides a comprehensive overview of the techniques of digital identity management and the challenges associated with them. It first describes the types of attack through which digital identities can be stolen. It then presents the different methods of identity verification. The study also delivers a summarising overview of the 15 most important protocols and technical standards for communication between the three actors involved: service provider, identity provider, and user. Finally, current research on identity management at the Hasso-Plattner-Institut is presented.
After almost two decades of development, modern Security Information and Event Management (SIEM) systems still face issues with the normalisation of heterogeneous data sources, a high number of false positive alerts, and long analysis times, especially in large-scale networks with high volumes of security events. In this paper, we present our own SIEM system prototype, which is capable of dealing with these issues. For efficient data processing, our system employs in-memory data storage (SAP HANA) and technologies from our previous work, such as the Object Log Format (OLF) and high-speed event normalisation. We analyse normalised data using a combination of three different approaches to security analysis: misuse detection, query-based analytics, and anomaly detection. Compared to our previous work, we have significantly improved our unsupervised anomaly detection algorithms. Most importantly, we have developed a novel hybrid outlier detection algorithm that returns ranked clusters of anomalies. It lets an operator of a SIEM system concentrate on the several top-ranked anomalies instead of digging through an unsorted bundle of suspicious events. We propose to use anomaly detection in combination with signatures and queries applied to the same data, rather than as a full replacement for misuse detection. In this case, the majority of attacks will be captured with misuse detection, whereas anomaly detection will highlight previously unknown behaviour or attacks. We also propose that only the most suspicious event clusters need to be checked by an operator, whereas other anomalies, including false positive alerts, do not need to be explicitly checked if they have a lower ranking. We have validated our concepts and algorithms on a dataset of 160 million events from a network segment of a large multinational company, and we suggest that our approach and methods are highly relevant for modern SIEM systems.
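The idea of returning ranked clusters of anomalies can be illustrated with a minimal sketch; the greedy clustering and the distance-based ranking heuristic below are simplified assumptions for illustration, not the paper's actual hybrid algorithm.

```python
# Sketch of ranking event clusters by anomalousness. The clustering
# and the scoring heuristic are illustrative assumptions only.
from math import dist

def cluster(points, radius):
    """Greedy single-pass clustering: a point joins the first cluster
    whose centroid lies within `radius`, else it starts a new cluster."""
    clusters = []  # list of lists of points
    for p in points:
        for c in clusters:
            cx = [sum(v) / len(c) for v in zip(*c)]
            if dist(p, cx) <= radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def ranked_anomalies(points, radius):
    """Rank clusters so that small clusters far from the global
    centroid (rare, unusual behaviour) come first."""
    gx = [sum(v) / len(points) for v in zip(*points)]
    clusters = cluster(points, radius)

    def score(c):
        cx = [sum(v) / len(c) for v in zip(*c)]
        return dist(cx, gx) / len(c)  # far + rare => high score

    return sorted(clusters, key=score, reverse=True)

# Dense "normal" events around (0, 0) plus two outlying events.
events = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0), (0.1, 0.1),
          (9.0, 9.0), (9.1, 9.0)]
ranking = ranked_anomalies(events, radius=1.0)
# The top-ranked cluster contains the two outliers near (9, 9), so an
# operator would inspect it first and could ignore the large cluster.
```

This mirrors the workflow the paper proposes: the operator examines only the top-ranked clusters, while lower-ranked clusters, which likely contain routine events or false positives, need no explicit review.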
The creation, collection and retention of knowledge in digital communities is an activity that currently needs to be explicitly targeted as a secure method of keeping intellectual capital growing in the digital era. In particular, we consider it relevant to analyze and evaluate the empathetic cognitive personalities and behaviors that individuals now exhibit with the change from face-to-face (F2F) communication to computer-mediated communication (CMC) online. This document proposes a cyber-humanistic approach to enhance the traditional SECI knowledge management model. A cognitive perception is added to its cyclical process, following design-thinking interaction, to improve the way in which knowledge is continuously created, converted and shared. In building a cognitive-centered model, we specifically focus on the effective identification of and response to the cognitive stimulation of individuals, as they are the intellectual generators and multipliers of knowledge in the online environment. Our target is to identify how geographically distributed digital organizations should align individuals' cognitive abilities to promote iteration and improve interaction as a reliable stimulant of collective intelligence. The new model focuses on analyzing the four stages of knowledge processing, in which individuals with sympathetic cognitive personalities can significantly boost knowledge creation in a virtual social system. For organizations, this means that multidisciplinary individuals can maximize their extensive potential by externalizing their knowledge at the correct stage of the knowledge creation process and by collaborating with their appropriate cognitively sympathetic remote peers.
Embedded Smart Home
(2017)
The popularity of MOOCs has increased considerably in recent years. A typical MOOC consists of video content, self-tests after each video, and homework, which is normally in multiple-choice format. After solving the homework for every week of a MOOC, the final exam certificate can be issued once the student has reached a sufficient score. There have also been some attempts to include practical tasks, such as programming, in MOOCs for grading. Nevertheless, until now there has been no known way to teach embedded systems programming in a MOOC in which the programming can be done in a remote lab and the tasks can additionally be graded. This embedded programming includes communication over GPIO pins to control LEDs and measure sensor values. We started a MOOC called "Embedded Smart Home" as a pilot to prove the concept of teaching real hardware programming in a MOOC environment under real-life MOOC conditions with over 6,000 students. Furthermore, students who own real hardware also have the possibility to program on their own devices and have their results graded in the MOOC. Finally, we evaluate our approach and analyze student acceptance of this way of offering a course on embedded programming. We also analyze the hardware usage and the time students spend solving tasks, in order to find out whether real hardware programming is an advantage and a motivating factor that supports students' learning success.
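To give an idea of the kind of graded GPIO task such a course describes, here is a hypothetical sketch: a simulated GPIO class stands in for real hardware (on an actual board a library such as RPi.GPIO would be used instead), and the pin numbers and sensor limit are invented assumptions, not taken from the course.

```python
# Hypothetical sketch of a gradable GPIO task: switch an LED on when a
# sensor reading exceeds a limit. SimulatedGPIO replaces real hardware
# so the logic can be tested without a board; pin numbers are assumed.

class SimulatedGPIO:
    """Minimal stand-in for a GPIO interface: pin -> high/low state."""
    def __init__(self):
        self.pins = {}

    def setup_output(self, pin):
        self.pins[pin] = False

    def write(self, pin, high):
        self.pins[pin] = high

    def read_sensor(self, pin):
        # A real board would sample an ADC here; we return a fixed value.
        return 42.0

LED_PIN = 17      # assumed wiring, as a real task sheet would specify
SENSOR_PIN = 4
LIMIT = 30.0

gpio = SimulatedGPIO()
gpio.setup_output(LED_PIN)

# Task logic: turn the LED on whenever the sensor reading exceeds LIMIT.
value = gpio.read_sensor(SENSOR_PIN)
gpio.write(LED_PIN, value > LIMIT)
```

In a remote lab, automated grading could inspect the resulting pin states in the same way the simulation does, which is what makes such tasks machine-checkable.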
The selection of initial points, the number of clusters, and finding proper cluster centers are still the main challenges in clustering processes. In this paper, we suggest a genetic-algorithm-based method that searches several solution spaces simultaneously. The solution spaces are population groups consisting of elements with a similar structure. Elements in a group have the same size, while elements in different groups have different sizes. The proposed algorithm processes the population in groups of chromosomes with one gene, two genes, up to k genes. These genes hold the corresponding information about the cluster centers. In the proposed method, the crossover and mutation operators can accept parents of different sizes; this can lead to versatility in the population and to information transfer among sub-populations. We implemented the proposed method and evaluated its performance on several random datasets as well as the Ruspini dataset. The experimental results show that the proposed method can effectively determine the appropriate number of clusters and recognize their centers. Overall, this research implies that using a heterogeneous population in the genetic algorithm can lead to better results.
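The core idea, variable-length chromosomes encoding candidate cluster centers and crossover between parents of different sizes, can be sketched as follows; the operators and parameters below are simplified assumptions for illustration, not the authors' exact design.

```python
# Sketch: chromosomes of different lengths encode candidate sets of
# 1-D cluster centers; crossover may combine parents of different
# sizes, so the number of clusters is evolved, not fixed in advance.
import random

def fitness(centers, points):
    """Lower total distance from each point to its nearest center is better."""
    return -sum(min(abs(p - c) for c in centers) for p in points)

def crossover(parent_a, parent_b, rng):
    """Child inherits a random prefix of one parent and a random suffix
    of the other; the parents may have different numbers of genes."""
    cut_a = rng.randrange(1, len(parent_a) + 1)
    cut_b = rng.randrange(0, len(parent_b) + 1)
    return parent_a[:cut_a] + parent_b[cut_b:]

def mutate(chromosome, rng, sigma=0.5):
    """Perturb one randomly chosen center."""
    i = rng.randrange(len(chromosome))
    child = list(chromosome)
    child[i] += rng.gauss(0.0, sigma)
    return child

rng = random.Random(0)
# Two clear 1-D clusters around 0 and 10.
points = [0.1, -0.2, 0.0, 9.9, 10.2, 10.0]

# Heterogeneous population: chromosomes with 1, 2 and 3 genes.
population = [[5.0], [0.5, 9.0], [1.0, 5.0, 11.0]]
for _ in range(200):
    a, b = rng.sample(population, 2)
    child = mutate(crossover(a, b, rng), rng)
    population.append(child)
    # Keep the population size constant by dropping the worst individual.
    population.remove(min(population, key=lambda c: fitness(c, points)))

best = max(population, key=lambda c: fitness(c, points))
# With enough iterations, `best` tends toward two centers near 0 and 10.
```

Because children can have any length produced by the cut points, chromosome length, and hence the number of clusters, is itself subject to selection, which is the paper's central point.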