TY - JOUR A1 - Hu, Ting-Li A1 - Cheng, Feng A1 - Xu, Zhen A1 - Chen, Zhong-Zheng A1 - Yu, Lei A1 - Ban, Qian A1 - Li, Chun-Lin A1 - Pan, Tao A1 - Zhang, Bao-Wei T1 - Molecular and morphological evidence for a new species of the genus Typhlomys (Rodentia: Platacanthomyidae) JF - Zoological research : ZR = Dongwuxue-yanjiu : jikan / published by Kunming Institute of Zoology, Chinese Academy of Sciences, Zhongguo Kexueyuan Kunming Dongwu Yanjiusuo zhuban, Dongwuxue-yanjiu Bianji Weiyuanhui bianji N2 - In this study, we reassessed the taxonomic position of Typhlomys (Rodentia: Platacanthomyidae) from Huangshan, Anhui, China, based on morphological and molecular evidence. Results suggested that Typhlomys comprises up to six species, including four currently recognized species (Typhlomys cinereus, T. chapensis, T. daloushanensis, and T. nanus), one unconfirmed candidate species, and one new species (Typhlomys huangshanensis sp. nov.). Morphological analyses further supported the designation of the Huangshan specimens found at mid-elevations in the southern Huangshan Mountains (600 m to 1,200 m a.s.l.) as a new species. KW - Morphology KW - Phylogenetics KW - Species delimitation KW - Taxonomy Y1 - 2021 U6 - https://doi.org/10.24272/j.issn.2095-8137.2020.132 SN - 2095-8137 VL - 42 IS - 1 SP - 100 EP - 107 PB - Yunnan Renmin Chubanshe CY - Kunming ER - TY - JOUR A1 - Torkura, Kennedy A. A1 - Sukmana, Muhammad Ihsan Haikal A1 - Cheng, Feng A1 - Meinel, Christoph T1 - CloudStrike BT - chaos engineering for security and resiliency in cloud infrastructure JF - IEEE access : practical research, open solutions N2 - Most cyber-attacks and data breaches in cloud infrastructure are due to human errors and misconfiguration vulnerabilities. Cloud customer-centric tools are imperative for mitigating these issues; however, existing cloud security models are largely unable to tackle these security challenges.
Therefore, novel security mechanisms are imperative; we propose Risk-driven Fault Injection (RDFI) techniques to address these challenges. RDFI applies the principles of chaos engineering to cloud security and leverages feedback loops to execute, monitor, analyze, and plan security fault injection campaigns based on a knowledge base. The knowledge base consists of fault models designed from secure baselines, cloud security best practices, and observations derived during iterative fault injection campaigns. These observations are helpful for identifying vulnerabilities while verifying the correctness of security attributes (integrity, confidentiality, and availability). Furthermore, RDFI proactively supports risk analysis and security hardening efforts by sharing security information with security mechanisms. We have designed and implemented the RDFI strategies, including various chaos engineering algorithms, as a software tool: CloudStrike. Several evaluations have been conducted with CloudStrike against infrastructure deployed on two major public cloud platforms: Amazon Web Services and Google Cloud Platform. Execution time increases linearly, in proportion to the attack rate. Also, the analysis of vulnerabilities detected via security fault injection has been used to harden the security of cloud resources, demonstrating the effectiveness of the security information provided by CloudStrike. We therefore consider our approaches suitable for overcoming contemporary cloud security issues.
KW - cloud security KW - security chaos engineering KW - resilient architectures KW - security risk assessment Y1 - 2020 U6 - https://doi.org/10.1109/ACCESS.2020.3007338 SN - 2169-3536 VL - 8 SP - 123044 EP - 123060 PB - Institute of Electrical and Electronics Engineers CY - Piscataway ER - TY - JOUR A1 - Sapegin, Andrey A1 - Jaeger, David A1 - Cheng, Feng A1 - Meinel, Christoph T1 - Towards a system for complex analysis of security events in large-scale networks JF - Computers & security : the international journal devoted to the study of the technical and managerial aspects of computer security N2 - After almost two decades of development, modern Security Information and Event Management (SIEM) systems still face issues with the normalisation of heterogeneous data sources, a high number of false positive alerts, and long analysis times, especially in large-scale networks with high volumes of security events. In this paper, we present our own SIEM system prototype, which is capable of dealing with these issues. For efficient data processing, our system employs in-memory data storage (SAP HANA) and our own technologies from previous work, such as the Object Log Format (OLF) and high-speed event normalisation. We analyse normalised data using a combination of three different approaches for security analysis: misuse detection, query-based analytics, and anomaly detection. Compared to our previous work, we have significantly improved our unsupervised anomaly detection algorithms. Most importantly, we have developed a novel hybrid outlier detection algorithm that returns ranked clusters of anomalies. It lets an operator of a SIEM system concentrate on the few top-ranked anomalies instead of digging through an unsorted bundle of suspicious events. We propose to use anomaly detection in combination with signatures and queries, applied to the same data, rather than as a full replacement for misuse detection.
In this case, the majority of attacks will be captured with misuse detection, whereas anomaly detection will highlight previously unknown behaviour or attacks. We also propose that only the most suspicious event clusters need to be checked by an operator, whereas other anomalies, including false positive alerts, do not need to be explicitly checked if they have a lower ranking. We have validated our concepts and algorithms on a dataset of 160 million events from a network segment of a large multinational company and suggest that our approach and methods are highly relevant for modern SIEM systems. KW - Intrusion detection KW - SAP HANA KW - In-memory KW - Security KW - Machine learning KW - Anomaly detection KW - Outlier detection Y1 - 2017 U6 - https://doi.org/10.1016/j.cose.2017.02.001 SN - 0167-4048 SN - 1872-6208 VL - 67 SP - 16 EP - 34 PB - Elsevier Science CY - Oxford ER - TY - JOUR A1 - Azodi, Amir A1 - Cheng, Feng A1 - Meinel, Christoph T1 - Event Driven Network Topology Discovery and Inventory Listing Using REAMS JF - Wireless personal communications : an international journal N2 - Network Topology Discovery and Inventory Listing are two of the primary features of modern network monitoring systems (NMS). Current NMSs rely heavily on active scanning techniques for discovering and mapping network information. Although this approach works, it introduces some major drawbacks, such as the performance impact it can have, especially in larger network environments. As a consequence, scans are often run less frequently, which can result in stale information being presented to and used by the network monitoring system. Alternatively, some NMSs rely on agents being deployed on the hosts they monitor. In this article, we present a new approach to Network Topology Discovery and Network Inventory Listing using only passive monitoring and scanning techniques.
The proposed techniques rely solely on the event logs produced by the hosts and network devices present within a network. Finally, we discuss some of the advantages and disadvantages of our approach. KW - Network topology KW - Inventory systems KW - Network monitoring KW - Network graph KW - Service detection KW - Event processing KW - Event normalization Y1 - 2015 U6 - https://doi.org/10.1007/s11277-015-3061-3 SN - 0929-6212 SN - 1572-834X VL - 94 SP - 415 EP - 430 PB - Springer CY - New York ER - TY - JOUR A1 - Jaeger, David A1 - Graupner, Hendrik A1 - Pelchen, Chris A1 - Cheng, Feng A1 - Meinel, Christoph T1 - Fast Automated Processing and Evaluation of Identity Leaks JF - International journal of parallel programming N2 - Identity data leaks on the Internet are more relevant than ever. Almost every week we read in the news about the leakage of databases with more than a million users. Smaller but no less dangerous leaks happen multiple times a day. The public availability of such leaked data is a major threat to the victims, but it also creates the opportunity to learn not only about the security of service providers but also about the behavior of users when choosing passwords. Our goal is to analyze this data and generate knowledge that can be used to increase security awareness and security, respectively. This paper presents a novel approach to the processing and analysis of the vast majority of leaks, both large and small. We evolved from a semi-manual to a fully automated process that requires a minimum of human interaction. Our contribution is the concept and a prototype implementation of a leak processing workflow that includes the extraction of digital identities from structured and unstructured leak files, the identification of hash routines, and quality control to ensure leak authenticity.
By making use of parallel and distributed programming, we are able to make leaks available for analysis and notification almost immediately after they have been published. Based on the data collected, this paper reveals how easy it is for criminals to collect large numbers of passwords that are stored in plain text or only weakly hashed. We publish these results and hope to increase not only the security awareness of Internet users but also security on a technical level on the service provider side. KW - Identity leak KW - Data breach KW - Automated parsing KW - Parallel processing Y1 - 2018 U6 - https://doi.org/10.1007/s10766-016-0478-6 SN - 0885-7458 SN - 1573-7640 VL - 46 IS - 2 SP - 441 EP - 470 PB - Springer CY - New York ER - TY - JOUR A1 - Peng, Junjie A1 - Liu, Danxu A1 - Wang, Yingtao A1 - Zeng, Ying A1 - Cheng, Feng A1 - Zhang, Wenqiang T1 - Weight-based strategy for an I/O-intensive application at a cloud data center JF - Concurrency and computation : practice & experience N2 - Applications with different characteristics in the cloud may have different resource preferences. However, traditional resource allocation and scheduling strategies rarely take the characteristics of applications into account. Considering that I/O-intensive applications are a typical type of application and that frequent I/O accesses, especially random disk accesses by small files, may lead to inefficient use of resources and reduce the quality of service (QoS) of applications, a weight allocation strategy is proposed based on the available resources that a physical server can provide as well as the characteristics of the applications. Using the weights obtained, a resource allocation and scheduling strategy is presented based on the specific application characteristics in the data center. Extensive experiments show that the strategy is correct and can guarantee high concurrency of I/O operations per second (IOPS) in a cloud data center with high QoS.
Additionally, the strategy can efficiently improve the utilization of the disk and other resources of the data center without affecting the service quality of applications. KW - IOPS KW - process scheduling KW - random I/O KW - small files KW - weight Y1 - 2018 U6 - https://doi.org/10.1002/cpe.4648 SN - 1532-0626 SN - 1532-0634 VL - 30 IS - 19 PB - Wiley CY - Hoboken ER - TY - JOUR A1 - Roschke, Sebastian A1 - Cheng, Feng A1 - Meinel, Christoph T1 - An alert correlation platform for memory-supported techniques JF - Concurrency and computation : practice & experience N2 - Intrusion Detection Systems (IDS) have been widely deployed in practice for detecting malicious behavior on network communication and hosts. False-positive alerts are a common problem for most IDS approaches. The solution to this problem is to enhance the detection process by correlation and clustering of alerts. To meet practical requirements, this process needs to be finished quickly, which is a challenging task as the number of alerts in large-scale IDS deployments is significantly high. We identify data storage and processing algorithms to be the most important factors influencing the performance of clustering and correlation. We propose and implement a highly efficient alert correlation platform. For storage, a column-based database, an In-Memory alert storage, and memory-based index tables lead to significant improvements in performance. For processing, algorithms are designed and implemented that are optimized for In-Memory databases, e.g. an attack graph-based correlation algorithm. The platform can be distributed over multiple processing units to share memory and processing power. A standardized interface is designed to provide a unified view of result reports for end users. The efficiency of the platform is tested in practical experiments with several alert storage approaches, multiple algorithms, as well as a local and a distributed deployment.
KW - memory-based correlation KW - memory-based clustering KW - memory-based databases KW - IDS management Y1 - 2012 U6 - https://doi.org/10.1002/cpe.1750 SN - 1532-0626 VL - 24 IS - 10 SP - 1123 EP - 1136 PB - Wiley-Blackwell CY - Hoboken ER - TY - JOUR A1 - Roschke, Sebastian A1 - Cheng, Feng A1 - Meinel, Christoph T1 - High-quality attack graph-based IDS correlation JF - Logic journal of the IGPL N2 - Intrusion Detection Systems are widely deployed in computer networks. As modern attacks become more sophisticated and the number of sensors and network nodes grows, the problem of false positives and alert analysis becomes more difficult to solve. Alert correlation was proposed to analyse alerts and to decrease false positives. Knowledge about the target system or environment is usually necessary for efficient alert correlation. To represent the environment information as well as potential exploits, the existing vulnerabilities and the corresponding Attack Graph (AG) are used. It is useful for networks to generate an AG and to organize their vulnerabilities in a reasonable way. In this article, a correlation algorithm based on AGs is designed that is capable of detecting multiple attack scenarios for forensic analysis. It can be parameterized to adjust its robustness and accuracy. A formal model of the algorithm is presented, and an implementation is tested to analyse the effect of the different parameters on a real set of alerts from a local network. To improve the speed of the algorithm, a multi-core version is proposed, and an HMM-supported version can be used to further improve the quality. The parallel implementation is tested on a multi-core correlation platform, using CPUs and GPUs. KW - Correlation KW - attack graph KW - HMM KW - multi-core KW - IDS Y1 - 2013 U6 - https://doi.org/10.1093/jigpal/jzs034 SN - 1367-0751 VL - 21 IS - 4 SP - 571 EP - 591 PB - Oxford Univ. Press CY - Oxford ER -