TY - JOUR A1 - Rezaei, Mina A1 - Näppi, Janne J. A1 - Lippert, Christoph A1 - Meinel, Christoph A1 - Yoshida, Hiroyuki T1 - Generative multi-adversarial network for striking the right balance in abdominal image segmentation JF - International journal of computer assisted radiology and surgery N2 - Purpose: The identification of abnormalities that are relatively rare within otherwise normal anatomy is a major challenge for deep learning in the semantic segmentation of medical images. The small number of samples of the minority classes in the training data makes the learning of optimal classification challenging, while the more frequently occurring samples of the majority class hamper the generalization of the classification boundary between infrequently occurring target objects and classes. In this paper, we developed a novel generative multi-adversarial network, called Ensemble-GAN, for mitigating this class imbalance problem in the semantic segmentation of abdominal images. Method: The Ensemble-GAN framework is composed of a single-generator and a multi-discriminator variant for handling the class imbalance problem to provide better generalization than existing approaches. The ensemble model aggregates the estimates of multiple models by training from different initializations and losses from various subsets of the training data. The single generator network analyzes the input image as a condition to predict a corresponding semantic segmentation image by use of feedback from the ensemble of discriminator networks. To evaluate the framework, we trained it on two public datasets with different imbalance ratios and imaging modalities: Chaos 2019 and LiTS 2017. Result: In terms of the F1 score, the accuracies of the semantic segmentation of the healthy spleen, liver, and left and right kidneys were 0.93, 0.96, 0.90, and 0.94, respectively. The overall F1 scores for simultaneous segmentation of the lesions and liver were 0.83 and 0.94, respectively. Conclusion: The proposed Ensemble-GAN framework demonstrated outstanding performance in the semantic segmentation of medical images in comparison with other approaches on popular abdominal imaging benchmarks. The Ensemble-GAN has the potential to segment abdominal images more accurately than human experts. KW - imbalanced learning KW - generative multi-discriminative networks KW - semantic segmentation KW - abdominal imaging Y1 - 2020 U6 - https://doi.org/10.1007/s11548-020-02254-4 SN - 1861-6410 SN - 1861-6429 VL - 15 IS - 11 SP - 1847 EP - 1858 PB - Springer CY - Berlin ER - TY - GEN A1 - Elsaid, Mohamed Esam A1 - Shawish, Ahmed A1 - Meinel, Christoph T1 - Enhanced cost analysis of multiple virtual machines live migration in VMware environments T2 - 2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2) N2 - Live migration is an important feature in modern software-defined datacenters and cloud computing environments. Dynamic resource management, load balancing, power saving, and fault tolerance all depend on the live migration feature. Despite the importance of live migration, its cost cannot be ignored and may result in service availability degradation. Live migration cost includes the migration time, downtime, CPU overhead, and network and power consumption.
Many research articles discuss the problem of live migration cost from different perspectives, such as analyzing the cost and relating it to the parameters that control it, proposing new migration algorithms that minimize the cost, and predicting the migration cost. To the best of our knowledge, most of the papers that discuss the migration cost problem focus on open-source hypervisors. Among the articles that focus on VMware environments, none has proposed migration time, network overhead, and power consumption models for single and multiple VM live migration. In this paper, we propose empirical models for the live migration time, network overhead, and power consumption of single and multiple VM migration. The proposed models are obtained using a VMware-based testbed. Y1 - 2018 SN - 978-1-7281-0236-8 U6 - https://doi.org/10.1109/SC2.2018.00010 SP - 16 EP - 23 PB - IEEE CY - New York ER - TY - GEN A1 - Bin Tareaf, Raad A1 - Berger, Philipp A1 - Hennig, Patrick A1 - Meinel, Christoph T1 - Personality exploration system for online social networks BT - Facebook brands as a use case T2 - 2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI) N2 - User-generated content on social media platforms is a rich source of latent information about individual variables. Crawling and analyzing this content provides a new approach for enterprises to personalize services and put forward product recommendations. In the past few years, brands have made a gradual appearance on social media platforms for advertisement, customer support, and public relations purposes, and by now this presence has become a necessity across all industries. This online identity can be represented as a brand personality that reflects how a brand is perceived by its customers. We exploited recent research in text analysis and personality detection to build an automatic brand personality prediction model on top of Five-Factor Model and Linguistic Inquiry and Word Count (LIWC) features extracted from publicly available benchmarks. The proposed model achieved significant accuracy in predicting specific personality traits from brands. To evaluate our prediction results on actual brands, we crawled 100k posts from the pages of the most valuable brands in the USA via the Facebook API; we visualize exemplary comparison results and present suggestions for future directions. KW - Big Five Model KW - Brand Personality KW - Personality Prediction KW - Machine Learning KW - Social Media Analysis Y1 - 2019 SN - 978-1-5386-7325-6 U6 - https://doi.org/10.1109/WI.2018.00-76 SP - 301 EP - 309 PB - IEEE CY - New York ER - TY - BOOK A1 - Meinel, Christoph A1 - Sack, Harald T1 - Internetworking : technische Grundlagen und Anwendungen Y1 - 2012 SN - 978-3-540-92939-0 U6 - https://doi.org/10.1007/978-3-540-92940-6 PB - Springer-Verlag Berlin Heidelberg CY - Berlin, Heidelberg ER - TY - GEN A1 - Sianipar, Johannes Harungguan A1 - Willems, Christian A1 - Meinel, Christoph T1 - Virtual machine integrity verification in Crowd-Resourcing Virtual Laboratory T2 - 2018 IEEE 11th Conference on Service-Oriented Computing and Applications (SOCA) N2 - In cloud computing, users are able to use their own operating system (OS) image to run a virtual machine (VM) on a remote host. In a public or private cloud, the virtual machine OS is started by the user through interfaces provided by the cloud provider. In a peer-to-peer cloud, the VM is started by the host admin.
After the VM is running, the user can obtain remote access to the VM to install, configure, and run services. For security reasons, the user needs to verify the integrity of the running VM, because a malicious host admin could modify the image or even replace it with a similar image in order to obtain sensitive data from the VM. We propose an approach to verify the integrity of a running VM on a remote host without using any specific hardware such as a Trusted Platform Module (TPM). Our approach is implemented on a Linux platform, where the kernel files (vmlinuz and initrd) can be replaced with new files while the VM is running. kexec is used to reboot the VM with the new kernel files. The new kernel contains secret codes that are used to verify whether the VM was started with the new kernel files. The new kernel is then used to further measure the integrity of the running VM. KW - Virtual Machine KW - Integrity Verification KW - Crowd-Resourcing KW - Cloud Computing Y1 - 2019 SN - 978-1-5386-9133-5 U6 - https://doi.org/10.1109/SOCA.2018.00032 SN - 2163-2871 SP - 169 EP - 176 PB - IEEE CY - New York ER - TY - GEN A1 - Podlesny, Nikolai Jannik A1 - Kayem, Anne V. D. M. A1 - Meinel, Christoph T1 - Attribute Compartmentation and Greedy UCC Discovery for High-Dimensional Data Anonymisation T2 - Proceedings of the Ninth ACM Conference on Data and Application Security and Privacy N2 - High-dimensional data is particularly useful for data analytics research. In the healthcare domain, for instance, high-dimensional data analytics has been used successfully for drug discovery. Yet, in order to adhere to privacy legislation, data analytics service providers must guarantee anonymity for data owners. In the context of high-dimensional data, ensuring privacy is challenging because increased data dimensionality must be matched by an exponential growth in the size of the data to avoid sparse datasets. Syntactically, anonymising sparse datasets with methods that rely on statistical significance makes obtaining sound and reliable results a challenge. As such, strong privacy is only achievable at the cost of high information loss, rendering the data unusable for data analytics. In this paper, we make two contributions to addressing this problem from both the privacy and information loss perspectives. First, we show that by identifying dependencies between attribute subsets we can eliminate privacy-violating attributes from the anonymised dataset. Second, to minimise information loss, we employ a greedy search algorithm to determine and eliminate maximal partial unique attribute combinations. Thus, one only needs to find the minimal set of identifying attributes to prevent re-identification. Experiments on a health cloud based on the SAP HANA platform, using a semi-synthetic medical history dataset comprising 109 attributes, demonstrate the effectiveness of our approach.
Y1 - 2019 SN - 978-1-4503-6099-9 U6 - https://doi.org/10.1145/3292006.3300019 SP - 109 EP - 119 PB - Association for Computing Machinery CY - New York ER - TY - GEN A1 - Klieme, Eric A1 - Tietz, Christian A1 - Meinel, Christoph T1 - Beware of SMOMBIES BT - Verification of Users based on Activities while Walking T2 - The 17th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (IEEE TrustCom 2018)/the 12th IEEE International Conference on Big Data Science and Engineering (IEEE BigDataSE 2018) N2 - Several studies have evaluated a user's style of walking for the verification of a claimed identity and reported high authentication accuracies in many settings. In this paper we present a system that successfully verifies a user's identity based on many real-world smartphone placements and previously unconsidered interactions while walking. Our contribution is the division of all considered activities into three distinct subsets and a dedicated one-class Support Vector Machine per subset. Using sensor data from 30 participants collected in a semi-supervised study approach, we prove that unsupervised verification is possible with very low false-acceptance and false-rejection rates. We furthermore show that these subsets can be distinguished with high accuracy and demonstrate that this system can be deployed on off-the-shelf smartphones. KW - gait KW - authentication KW - smartphone KW - activities KW - verification KW - behavioral KW - continuous Y1 - 2018 SN - 978-1-5386-4387-7 SN - 978-1-5386-4389-1 U6 - https://doi.org/10.1109/TrustCom/BigDataSE.2018.00096 SN - 2324-9013 SP - 651 EP - 660 PB - IEEE CY - New York ER - TY - JOUR A1 - Thienen, Julia von A1 - Noweski, Christine A1 - Meinel, Christoph A1 - Lang, Sabine A1 - Nicolai, Claudia A1 - Bartz, Andreas T1 - What can design thinking learn from behavior group therapy? Y1 - 2012 SN - 978-3-642-31990-7 ER - TY - GEN A1 - Bin Tareaf, Raad A1 - Berger, Philipp A1 - Hennig, Patrick A1 - Meinel, Christoph T1 - ASEDS BT - Towards automatic social emotion detection system using facebook reactions T2 - IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS) N2 - The massive adoption of social media has provided new ways for individuals to express their opinions and emotions online. In 2016, Facebook introduced a new reactions feature that allows users to express their psychological emotions regarding published content using so-called Facebook reactions. In this paper, a framework for predicting the distribution of Facebook post reactions is presented. For this purpose, we collected a large number of Facebook posts together with their reaction labels using the proposed scalable Facebook crawler. The training process utilizes 3 million labeled posts from more than 64,000 unique Facebook pages from diverse categories. The evaluation on standard benchmarks using the proposed features shows promising results compared to previous research. The final model is able to predict the reaction distribution of Facebook posts with a recall score of 0.90 for the "Joy" emotion.
KW - Emotion Mining KW - Psychological Emotions KW - Machine Learning KW - Social Media Analysis KW - Natural Language Processing Y1 - 2018 SN - 978-1-5386-6614-2 U6 - https://doi.org/10.1109/HPCC/SmartCity/DSS.2018.00143 SP - 860 EP - 866 PB - IEEE CY - New York ER - TY - JOUR A1 - Wang, Cheng A1 - Yang, Haojin A1 - Meinel, Christoph T1 - Image Captioning with Deep Bidirectional LSTMs and Multi-Task Learning JF - ACM transactions on multimedia computing, communications, and applications N2 - Generating a novel and descriptive caption of an image is drawing increasing interest in the computer vision, natural language processing, and multimedia communities. In this work, we propose an end-to-end trainable deep bidirectional LSTM (Bi-LSTM; Long Short-Term Memory) model to address the problem. By combining a deep convolutional neural network (CNN) and two separate LSTM networks, our model is capable of learning long-term visual-language interactions by making use of history and future context information in a high-level semantic space. We also explore deep multimodal bidirectional models, in which we increase the depth of the nonlinearity transition in different ways to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale, and vertical mirroring are proposed to prevent overfitting when training deep models. To understand how our models "translate" an image into a sentence, we visualize and qualitatively analyze the evolution of the Bi-LSTM internal states over time. The effectiveness and generality of the proposed models are evaluated on four benchmark datasets: Flickr8K, Flickr30K, MSCOCO, and Pascal1K. We demonstrate that Bi-LSTM models achieve highly competitive performance on both caption generation and image-sentence retrieval, even without integrating an additional mechanism (e.g., object detection, attention model). Our experiments also show that multi-task learning is beneficial for increasing model generality and gaining performance. We also demonstrate that transfer learning of the Bi-LSTM model significantly outperforms previous methods on the Pascal1K dataset. KW - Deep learning KW - LSTM KW - multimodal representations KW - image captioning KW - multi-task learning Y1 - 2018 U6 - https://doi.org/10.1145/3115432 SN - 1551-6857 SN - 1551-6865 VL - 14 IS - 2 PB - Association for Computing Machinery CY - New York ER -