Seismology, like many scientific fields, e.g., music information retrieval and speech signal processing, is experiencing exponential growth in the amount of data acquired by modern seismological networks. In this thesis, I take advantage of the opportunities offered by "big data" and by the methods developed in the areas of music information retrieval and machine learning to better predict the ground motion generated by earthquakes and to study the properties of the surface layers of the Earth. In order to better predict seismic ground motions, I propose two approaches based on unsupervised deep learning methods: an autoencoder network and Generative Adversarial Networks. The autoencoder technique explores a massive amount of ground motion data, evaluates the required parameters, and generates synthetic ground motion data in the Fourier amplitude spectra (FAS) domain. This method is tested on two synthetic datasets and one real dataset. The application to the real dataset shows that the substantial information contained within the FAS data can be encoded into a four- to five-dimensional manifold; consequently, only a few independent parameters are required for efficient ground motion prediction.
I also propose a method based on Conditional Generative Adversarial Networks (CGAN) for simulating ground motion records in the time-frequency and time domains. The CGAN generates time-frequency representations conditioned on the parameters magnitude, distance, and shear wave velocity in the upper 30 m (VS30). After generating the amplitude of the time-frequency representation with the CGAN model, instead of conventional methods that pair the amplitude spectrum with a random phase spectrum, the phase is recovered by minimizing the misfit between the observed and reconstructed spectrograms.
In the second part of this dissertation, I propose two methods for the monitoring and characterization of near-surface materials and for site effect analyses. I implement an autocorrelation function and an interferometry method to monitor the velocity changes of near-surface materials resulting from the Kumamoto earthquake sequence (Japan, 2016). The seismic velocity changes observed during the strong shaking are due to the non-linear response of the near-surface materials, and the results show that the velocity changes lasted for about two months after the Kumamoto mainshock. Furthermore, I use the velocity changes to evaluate the in-situ strain-stress relationship. I also propose a method for assessing the site proxy VS30 using non-invasive analysis: a dispersion curve of surface waves is inverted to estimate the shear wave velocity of the subsurface, based on Dix-like linear operators that relate the shear wave velocity to the phase velocity. The proposed method is fast, efficient, and stable. All of the methods presented in this work can be used for processing "big data" in seismology and for the analysis of weak and strong ground motion data, to predict ground shaking, and to analyze site responses by considering potential time dependencies and nonlinearities.
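As an illustration of the phase-recovery step, the classical Griffin-Lim iteration reconstructs a waveform whose spectrogram matches a target magnitude spectrogram. The sketch below uses librosa and a random placeholder array in place of an actual CGAN output; it is a standard example of this idea, not necessarily the exact optimization used in the thesis.

```python
import numpy as np
import librosa

# Hypothetical CGAN output: a magnitude spectrogram (|STFT|) conditioned on
# magnitude, distance and VS30; here just a random placeholder array.
n_fft, hop = 512, 128
S_mag = np.abs(np.random.randn(1 + n_fft // 2, 400)).astype(np.float32)

# Classical Griffin-Lim phase retrieval: iteratively re-estimate the phase so
# that the spectrogram of the reconstructed signal matches the target magnitude.
waveform = librosa.griffinlim(S_mag, n_iter=64, hop_length=hop, win_length=n_fft)
```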
Dynamic resource management is an essential requirement for private and public cloud computing environments. With dynamic resource management, physical resources are assigned to the cloud's virtual resources according to the actual needs of the applications or running services, which improves the utilization of the cloud's physical resources and reduces the cost of the offered services. In addition, virtual resources can be moved across different physical resources in the cloud environment without a noticeable impact on the running applications or services. This means that the availability of the services and applications running in the cloud is independent of failures of the hardware resources, including servers, switches and storage. This increases the reliability of using cloud services compared to classical data-center environments.
In this thesis we briefly discuss dynamic resource management and then focus in depth on live migration as the core mechanism of dynamic compute resource management. Live migration is a commonly used and essential feature in cloud and virtual data-center environments. Cloud computing load balancing, power saving and fault tolerance features all depend on live migration to optimize the usage of virtual and physical resources. As we discuss in this thesis, live migration brings many benefits to cloud and virtual data-center environments; however, its cost cannot be ignored. The cost of live migration includes the migration time, downtime, network overhead, increased power consumption and CPU overhead.
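To make these cost components concrete, the sketch below gives a minimal, simplified pre-copy migration model in Python. The function name, parameters, and stop criterion are illustrative assumptions, not the models developed in the thesis: each round retransmits the memory dirtied during the previous round until the remainder is small enough for the final stop-and-copy phase.

```python
def precopy_migration_cost(mem_bytes, dirty_rate, bandwidth, rounds=30,
                           stop_bytes=50e6):
    """Simplified pre-copy live-migration model (illustrative, not the thesis'
    exact model): each round retransmits memory dirtied during the previous round."""
    total_time = 0.0
    to_send = mem_bytes                      # first round sends the whole RAM image
    for _ in range(rounds):
        round_time = to_send / bandwidth
        total_time += round_time
        to_send = dirty_rate * round_time    # pages dirtied while copying
        if to_send <= stop_bytes:            # small enough -> stop-and-copy
            break
    downtime = to_send / bandwidth           # final stop-and-copy phase
    return total_time + downtime, downtime

# Example: 8 GiB VM, 200 MB/s dirty rate, 1 GB/s migration bandwidth
t, d = precopy_migration_cost(8 * 2**30, 200e6, 1e9)
print(f"migration time ~{t:.1f} s, downtime ~{d * 1000:.0f} ms")
```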
IT admins often run virtual machine live migrations without any estimate of the migration cost, so resource bottlenecks, higher migration cost and migration failures may occur. The first problem that we discuss in this thesis is how to model the cost of virtual machine live migration. Secondly, we investigate how machine learning techniques can help cloud admins obtain an estimate of this cost before initiating the migration of one or multiple virtual machines. We also discuss the optimal timing for live-migrating a specific virtual machine to another server. Finally, we propose practical solutions that can be integrated into cloud administration portals to answer the research questions raised above.
Our research methodology for achieving the project objectives is to propose empirical models based on VMware test-beds with different benchmark tools. We then use machine learning techniques to build a prediction approach for the cost of virtual machine live migration. Timing optimization for live migration is also proposed in this thesis, based on the cost prediction and on the prediction of data-center network utilization. Live migration with persistent memory clusters is also discussed at the end of the thesis. The cost prediction and timing optimization techniques proposed here can be integrated with the VMware vSphere cluster portal so that IT admins can use the cost prediction feature and the timing optimization option before proceeding with a virtual machine live migration.
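A minimal sketch of what such a learned cost predictor could look like, using scikit-learn's RandomForestRegressor on synthetic data; the feature set, data generation, and model choice are illustrative assumptions, whereas the thesis trains on measurements from VMware test-beds.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

# Hypothetical training set: each row is one measured migration
# (vm_memory_GB, dirty_rate_MBps, bandwidth_MBps, cpu_load_pct);
# the target is the observed migration time in seconds.
rng = np.random.default_rng(0)
X = rng.uniform([2, 10, 300, 5], [64, 400, 1250, 95], size=(500, 4))
y = X[:, 0] * 1024 / (X[:, 2] - 0.5 * X[:, 1]) + rng.normal(0, 1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAPE:", mean_absolute_percentage_error(y_te, model.predict(X_te)))
```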
Test results show that our proposed approach for VM live migration cost prediction achieves acceptable accuracy, with less than 20% prediction error, and can easily be implemented and integrated with VMware vSphere as an example of a commonly used resource management portal for virtual data-centers and private cloud environments. The results also show that our proposed migration timing optimization technique can save up to 51% of the migration time for memory-intensive workloads and up to 27% for network-intensive workloads. This timing optimization technique can help network admins save migration time by exploiting higher available network rates, with a higher probability of migration success.
At the end of this thesis, we discuss persistent memory as a new trend in server memory technology. Persistent memory modes of operation and configurations are discussed in detail to explain how live migration works between servers with different memory configurations. We then build a VMware cluster containing both servers with persistent memory and DRAM-only servers to show the difference in live migration cost between DRAM-only VMs and VMs backed by persistent memory.
Technological progress allows for producing ever more complex predictive models on the basis of increasingly big datasets. For the risk management of natural hazards, a multitude of models is needed as a basis for decision-making, e.g. in the evaluation of observational data, for the prediction of hazard scenarios, or for statistical estimates of expected damage. The question arises of how modern modelling approaches such as machine learning or data-mining can be meaningfully deployed in this field. In addition, with respect to data availability and accessibility, the trend is towards open data. The topic of this thesis is therefore to investigate the possibilities and limitations of machine learning and open geospatial data in the field of flood risk modelling in the broad sense. As this overarching topic is broad in scope, individual relevant aspects are identified and inspected in detail.
A prominent data source in the flood context is satellite-based mapping of inundated areas, made openly available, for example, by the Copernicus service of the European Union. The scientific literature places great expectations on these products, both for the immediate support of relief forces during emergency response and for use in hydrodynamic modelling or damage estimation. A focus of this work is therefore the evaluation of these flood masks. Starting from the observation that the quality of these products is insufficient in forested and built-up areas, a procedure for their subsequent improvement via machine learning was developed. This procedure is based on a classification algorithm that requires training data only from the class to be predicted, in this case flooded areas, but not from the negative class (dry areas). The application to Hurricane Harvey in Houston shows the high potential of this method, although the result depends on the quality of the initial flood mask.
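A minimal sketch of the one-class idea, assuming scikit-learn's OneClassSVM and illustrative pixel features (the thesis' actual classifier and feature set may differ): the model is trained only on pixels labelled as flooded in the initial mask and is then applied to the whole scene.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical pixel features (e.g. SAR backscatter, elevation, slope) for
# pixels labelled "flooded" in the initial satellite flood mask; the negative
# (dry) class is never needed for training.
rng = np.random.default_rng(1)
flooded_train = rng.normal(loc=[-15.0, 5.0, 1.0], scale=[2.0, 3.0, 0.5],
                           size=(2000, 3))

# One-class classifier: learns the support of the flooded class only.
clf = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, gamma="scale"))
clf.fit(flooded_train)

# Apply to all pixels of the scene (here a random placeholder) to refine the
# mask in forested and built-up areas; +1 = flooded-like, -1 = dry-like.
all_pixels = rng.normal(loc=[-10.0, 8.0, 2.0], scale=[4.0, 4.0, 1.0],
                        size=(10, 3))
print(clf.predict(all_pixels))
```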
Next, it is investigated how strongly the statistical risk predicted by a process-based model chain depends on the implemented physical process details, thereby demonstrating what a risk study based on established models can deliver. Even for fluvial flooding, however, such model chains are already quite complex, and they are hardly available for compound or cascading events comprising torrential rainfall, flash floods, and other processes. In the fourth chapter of this thesis it is therefore tested whether machine learning based on comprehensive damage data can offer a more direct path towards damage modelling, one that avoids the explicit construction of such a model chain. For that purpose, a state-collected dataset of damaged buildings from the severe El Niño event of 2017 in Peru is used. In this context, the possibilities of data-mining for extracting process knowledge are explored as well. It can be shown that various openly available geodata sources contain useful information for flood hazard and damage modelling for complex events, e.g. satellite-based rainfall measurements, topographic and hydrographic information, mapped settlement areas, as well as indicators derived from spectral data. Further, insights into the damaging processes are discovered, which are mostly in line with prior expectations. The maximum rainfall intensity, for example, has a stronger effect in cities and steep canyons, whereas the rainfall sum is more informative in low-lying river catchments and forested areas. Rural areas of Peru exhibited higher vulnerability in the presented study than urban areas. However, the general limitations of the methods and their dependence on specific datasets and algorithms also become apparent.
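The sketch below illustrates, on synthetic data, how such a data-driven damage model and the accompanying data-mining step could look. The feature names, data generation, and the choice of GradientBoostingClassifier with permutation importance are illustrative assumptions, not the thesis' exact setup.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical building-level dataset: openly available geodata as features
# (column names are illustrative, not the thesis' exact predictors) and a
# binary damage label.
rng = np.random.default_rng(2)
n = 3000
df = pd.DataFrame({
    "max_rain_intensity": rng.gamma(2.0, 10.0, n),
    "rain_sum": rng.gamma(3.0, 20.0, n),
    "elevation_above_river": rng.uniform(0, 30, n),
    "slope": rng.uniform(0, 25, n),
})
logit = 0.05 * df["max_rain_intensity"] - 0.2 * df["elevation_above_river"]
df["damaged"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="damaged"),
                                          df["damaged"], random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Data-mining step: permutation importance hints at which geodata layers
# carry information about the damaging process.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(X_te.columns, imp.importances_mean):
    print(f"{name:>22s}: {score:.3f}")
```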
In the overarching discussion, the different methods (process-based modelling, predictive machine learning, and data-mining) are evaluated with respect to the overall research questions. For hazard observation, a focus on novel algorithms appears worthwhile for future research. In the subtopic of hazard modelling, especially for river floods, the improvement of physical models and the integration of process-based and statistical procedures are suggested. For damage modelling, the large and representative datasets necessary for a broad application of machine learning are still lacking. Therefore, improving the data basis in the damage domain is currently regarded as more important than the selection of algorithms.