In today's production, fluctuations in demand, shortening product life cycles, and highly configurable products require an adaptive and robust control approach to maintain competitiveness. This approach must not only optimise desired production objectives but also cope with unforeseen machine failures, rush orders, and short-term demand changes. Previous control approaches were often implemented on a single operations layer with a standalone deep learning approach, which may not adequately address the complex organisational demands of modern manufacturing systems. To address this challenge, we propose a hyper-heuristics control model within a semi-heterarchical production system, in which multiple manufacturing and distribution agents are spread across pre-defined modules. The agents employ a deep reinforcement learning algorithm to learn a policy for selecting low-level heuristics in a situation-specific manner, thereby improving system performance and adaptability. We tested our approach in simulation and transferred it to a hybrid production environment, demonstrating its multi-objective optimisation capabilities compared to conventional approaches in terms of mean throughput time, tardiness, and processing of prioritised orders in a multi-layered production system. The modular design is promising for reducing overall system complexity and facilitates quick and seamless integration into other scenarios.
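To illustrate the hyper-heuristic idea, the sketch below pairs a tabular Q-learning agent (a deliberately simplified stand-in for the paper's deep reinforcement learning policy) with a small set of low-level dispatching heuristics. The heuristic names, the coarse state encoding, and the reward handling are illustrative assumptions, not the paper's setup.

```python
import random

# Low-level dispatching heuristics the agent can choose between
# (illustrative stand-ins; the paper's heuristic set is not specified here).
HEURISTICS = {
    "SPT": lambda queue: min(queue, key=lambda o: o["proc_time"]),  # shortest processing time
    "EDD": lambda queue: min(queue, key=lambda o: o["due_date"]),   # earliest due date
    "FIFO": lambda queue: queue[0],                                 # first in, first out
}

class HyperHeuristicAgent:
    """Tabular Q-learning stand-in for the deep RL policy that picks
    a low-level heuristic per decision point (state = coarse queue length)."""

    def __init__(self, actions=tuple(HEURISTICS), eps=0.1, alpha=0.5, gamma=0.9):
        self.q = {}  # (state, action) -> estimated value
        self.actions = actions
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def select(self, state):
        # epsilon-greedy choice over heuristics
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

# One decision step: the agent picks a heuristic, the heuristic picks the order.
queue = [{"id": 1, "proc_time": 5, "due_date": 9},
         {"id": 2, "proc_time": 2, "due_date": 20}]
agent = HyperHeuristicAgent()
chosen = agent.select(state=len(queue))
next_order = HEURISTICS[chosen](queue)
```

In a full system, the reward would combine the production objectives named above (throughput time, tardiness, prioritised orders), and each module's agent would run this loop on its own queue.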
PyFin-sentiment
(2023)
Responding to the poor performance of generic automated sentiment analysis solutions on domain-specific texts, we collect a dataset of 10,000 tweets discussing the topics of finance and investing. We manually assign each tweet its market sentiment, i.e., the investor's anticipation of a stock's future return. Using this data, we show that all existing sentiment models trained on adjacent domains struggle with accurate market sentiment analysis due to the task's specialized vocabulary. Consequently, we design, train, and deploy our own sentiment model. It outperforms all previous models (VADER, NTUSD-Fin, FinBERT, TwitterRoBERTa) when evaluated on Twitter posts. On posts from a different platform, our model performs on par with BERT-based large language models. We achieve this result at a fraction of the training and inference costs thanks to the model's simple design. We publish the artifact as a Python library to facilitate its use by future researchers and practitioners.
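To illustrate why a small domain-specific model can compete with generic ones, here is a minimal bag-of-words Naive Bayes classifier built only from the Python standard library. This is not the PyFin-sentiment model or its API; the tiny training set, labels, and vocabulary are invented for the example. The point is that such a model learns the specialized vocabulary ("bullish", "bearish", ...) directly from in-domain data.

```python
import math
from collections import Counter, defaultdict

class TinySentimentModel:
    """Minimal bag-of-words Naive Bayes sketch (NOT the PyFin-sentiment model)."""

    def fit(self, texts, labels):
        self.counts = defaultdict(Counter)  # label -> word counts
        self.priors = Counter(labels)
        for text, label in zip(texts, labels):
            self.counts[label].update(text.lower().split())
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, text):
        scores = {}
        for label in self.priors:
            total = sum(self.counts[label].values())
            score = math.log(self.priors[label] / sum(self.priors.values()))
            for word in text.lower().split():
                # Laplace smoothing over the shared vocabulary
                p = (self.counts[label][word] + 1) / (total + len(self.vocab))
                score += math.log(p)
            scores[label] = score
        return max(scores, key=scores.get)

# Invented toy training data with market-sentiment labels
model = TinySentimentModel().fit(
    ["to the moon, very bullish", "bearish, selling everything", "bullish on this stock"],
    ["positive", "negative", "positive"],
)
```

A real deployment would of course use a much larger labeled corpus and a stronger featurisation, but the overall shape (lightweight model, in-domain vocabulary) is what keeps training and inference cheap.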
Casualties and damages from urban pluvial flooding are increasing. Triggered by short, localized, and intense rainfall events, urban pluvial floods can occur anywhere, even in areas without a history of flooding, and they have relatively small temporal and spatial scales. Although cumulative losses from urban pluvial floods are comparable to those from other flood types, most flood risk management and mitigation strategies focus on fluvial and coastal flooding. Numerical physical hydrodynamic models are considered the best tool to represent the complex nature of urban pluvial floods; however, they are computationally expensive and time-consuming, which makes large-scale analysis and operational forecasting prohibitive. It is therefore crucial to evaluate and benchmark the performance of alternative methods.
The findings of this cumulative thesis are presented in three research articles. The first study evaluates two topographic-based methods to map urban pluvial flooding, fill–spill–merge (FSM) and the topographic wetness index (TWI), by comparing them against a sophisticated hydrodynamic model. The FSM method identifies flood-prone areas within topographic depressions, while the TWI method employs maximum likelihood estimation to calibrate a TWI threshold (τ) based on inundation maps from the 2D hydrodynamic model. The results show that the FSM method outperforms the TWI method. The study then highlights the advantages and limitations of both methods.
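The TWI thresholding step can be sketched as follows. The TWI formula itself, TWI = ln(a / tan β), is standard, but the simple accuracy-based calibration below is only a stand-in for the study's maximum likelihood estimation, and the toy grid values are invented.

```python
import math

def twi(upslope_area, slope_rad):
    """Topographic wetness index: TWI = ln(a / tan(beta)).
    a: specific upslope contributing area, beta: local slope in radians."""
    return math.log(upslope_area / math.tan(slope_rad))

def calibrate_tau(twi_values, observed_flood, candidates):
    """Pick the TWI threshold tau that best reproduces the reference
    inundation map (simple accuracy criterion standing in for the
    thesis's maximum likelihood estimation)."""
    def accuracy(tau):
        pred = [v >= tau for v in twi_values]
        return sum(p == o for p, o in zip(pred, observed_flood)) / len(pred)
    return max(candidates, key=accuracy)

# Toy cells: flat, converging cells get high TWI and flood in the reference map
cells = [(500.0, 0.01), (200.0, 0.02), (5.0, 0.30), (2.0, 0.45)]  # (area, slope)
values = [twi(a, b) for a, b in cells]
observed = [True, True, False, False]  # reference from a 2D hydrodynamic model
tau = calibrate_tau(values, observed, candidates=[4.0, 6.0, 8.0, 10.0])
```

Cells with TWI at or above the calibrated τ are then mapped as flood-prone.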
Data-driven models provide a promising alternative to computationally expensive hydrodynamic models. However, the literature lacks benchmarking studies that evaluate the different models' performance, advantages, and limitations. Model transferability in space is a crucial problem. Most studies focus on river flooding, likely due to the relative availability of flow and rain gauge records for training and validation. Furthermore, they treat these models as black boxes. The second study uses a flood inventory for the city of Berlin and 11 predictive features that potentially indicate an increased pluvial flooding hazard to map urban pluvial flood susceptibility using a convolutional neural network (CNN), an artificial neural network (ANN), and the benchmark machine learning models random forest (RF) and support vector machine (SVM). I investigate the influence of spatial resolution on the implemented models, the models' transferability in space, and the importance of the predictive features. The results show that all models perform well and that the RF models are superior to the other models both within and outside the training domain. The models developed at fine spatial resolutions (2 and 5 m) better identify flood-prone areas. Finally, the results show that aspect is the most important predictive feature for the CNN models, while altitude is the most important for the other models.
While flood susceptibility maps identify flood-prone areas, they do not represent flood variables such as velocity and depth, which are necessary for effective flood risk management. To address this, the third study investigates data-driven models' transferability for predicting urban pluvial floodwater depth and the models' ability to improve their predictions using transfer learning techniques. It compares the performance of RF (the best-performing model in the previous study) and CNN models using 12 predictive features and the output of a hydrodynamic model. The findings suggest that while CNN models tend to generalise and smooth the target function over the training dataset, RF models suffer from overfitting. Hence, RF models are superior for predictions inside the training domains but fail outside them, whereas CNN models limit the relative loss in performance outside the training domains. Finally, the CNN models benefit more from transfer learning techniques than the RF models, boosting their performance outside the training domains.
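The transfer-learning principle (reuse a representation learned on the data-rich source domain, then re-fit only a small output head on scarce target-domain labels) can be sketched with a toy linear model. The thesis fine-tunes CNNs; the features, dimensions, and scaling below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_head(features, depths):
    """Least-squares fit of the output head on frozen features."""
    return np.linalg.lstsq(features, depths, rcond=None)[0]

# Source domain: plenty of simulated water-depth labels
X_src = rng.normal(size=(200, 5))
w_true = np.array([0.5, -1.0, 2.0, 0.0, 0.3])
y_src = X_src @ w_true

W = np.linalg.lstsq(X_src, y_src, rcond=None)[0]  # "pretrained" weights

# Target domain: only a few labels; keep W frozen, refit a 1-D output head
X_tgt = rng.normal(size=(10, 5))
y_tgt = 1.5 * (X_tgt @ w_true)        # same physics, different scaling
feats = (X_tgt @ W).reshape(-1, 1)    # frozen representation
head = fit_head(feats, y_tgt)         # cheap target-domain adaptation
pred = feats @ head
```

The analogue in the thesis is freezing the early CNN layers (the learned spatial representation) and fine-tuning the later ones on the new domain.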
In conclusion, this thesis has evaluated both topographic-based methods and data-driven models for mapping urban pluvial flooding. Further studies remain crucial to develop methods that fully overcome the limitations of 2D hydrodynamic models.
Decubitus is one of the most relevant diseases in nursing and among the most expensive to treat. It is caused by sustained pressure on tissue and therefore particularly affects bed-bound patients. This work lays a foundation for pressure-mattress-based decubitus prophylaxis by implementing a solution to the single-frame 2D Human Pose Estimation problem.
For this, deep learning methods are employed. Two approaches are examined: a coarse-to-fine Convolutional Neural Network for direct regression of joint coordinates, and a U-Net for deriving probability-distribution heatmaps.
We conclude that training our models on a combined dataset of the publicly available Bodies at Rest and SLP data yields the best results. Furthermore, various preprocessing techniques are investigated, and a hyperparameter optimization is performed to discover an improved model architecture.
Another finding indicates that the heatmap-based approach outperforms direct regression.
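Heatmap-based pose estimation decodes each joint coordinate from its probability map. A common decoding step is the soft-argmax sketched below; this is an illustrative assumption and may differ from the post-processing actually used in the thesis.

```python
import numpy as np

def soft_argmax_2d(heatmap):
    """Recover a joint coordinate from a probability heatmap as the
    probability-weighted mean of pixel coordinates (soft-argmax)."""
    h, w = heatmap.shape
    p = heatmap / heatmap.sum()          # normalise to a distribution
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    return float((p * ys).sum()), float((p * xs).sum())

# Toy heatmap: a single peak at row 2, column 3
hm = np.zeros((5, 5))
hm[2, 3] = 1.0
```

Unlike a hard argmax, the weighted mean yields sub-pixel coordinates and is differentiable, which is one reason heatmap decoding often outperforms direct coordinate regression.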
This model achieves a mean per-joint position error of 9.11 cm for the Bodies at Rest data and 7.43 cm for the SLP data.
We find that it generalizes well on data from mattresses other than those seen during training but has difficulties detecting the arms correctly.
Additionally, we give a brief overview of the medical data annotation tool annoto we developed in the bachelor project and furthermore conclude that the Scrum framework and agile practices enhanced our development workflow.
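The reported metric, mean per-joint position error (MPJPE), is the average Euclidean distance between predicted and ground-truth joint positions. A minimal implementation, with invented toy coordinates:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance between
    predicted and ground-truth joints, over joints and frames.
    pred, gt: arrays of shape (frames, joints, dims)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy example: 2 frames, 3 joints, 2-D coordinates (cm)
gt = np.zeros((2, 3, 2))
pred = gt + np.array([3.0, 4.0])  # every joint off by a 3-4-5 triangle, i.e. 5 cm
```

Averaging over joints means a model can score well overall while still failing on specific joints, which is consistent with the arm-detection difficulties noted above.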
Search for light primordial black holes with VERITAS using γ-ray and optical observations
(2023)
The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is an array of four imaging atmospheric Cherenkov telescopes (IACTs), sensitive to very-high-energy gamma-rays in the range of 100 GeV to >30 TeV. Hypothesized primordial black holes (PBHs) are attractive targets for IACTs: if they exist, their potential cosmological impact extends beyond their candidacy as constituents of dark matter. The sublunar mass window is the largest unconstrained range of PBH masses. This thesis aims to develop novel concepts for searching for light PBHs with VERITAS. PBHs below the sublunar window lose mass due to Hawking radiation and would evaporate at the end of their lifetime, producing a short burst of gamma-rays. PBHs formed with masses of about 10^15 g would be evaporating today. Detecting these signals would not only confirm the existence of PBHs but also corroborate the theory of Hawking radiation. This thesis probes archival VERITAS data recorded between 2012 and 2021 for possible PBH signals. It presents a new automatic approach to assessing the quality of the VERITAS data: the array-trigger rate and the far-infrared temperature are well suited to identifying periods of poor data quality, which are masked by time cuts to obtain a consistent, clean dataset containing about 4222 hours of observations. PBH evaporations could occur at any location in the field of view and at any time within this data, so only a blind search can identify these short signals. This thesis implements a data-driven, deep-learning-based method to search for short transient signals with VERITAS that does not depend on modelling the effective area and radial acceptance. This work presents the first application of this method to actual observational IACT data and develops new concepts for dealing with the specifics of the data and the transient detection method, reflected in the data preparation pipeline and search strategies.
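The data-quality masking described above can be sketched as a simple cut on per-run monitoring quantities. The threshold values and record layout below are invented for illustration; they are not the thesis's calibrated cuts.

```python
def quality_mask(runs, min_trigger_rate, max_fir_temp):
    """Keep time slices whose array-trigger rate and far-infrared (FIR)
    sky temperature indicate good observing conditions. Low trigger rates
    and warm FIR readings typically signal clouds or poor atmosphere."""
    return [
        r for r in runs
        if r["trigger_rate"] >= min_trigger_rate and r["fir_temp"] <= max_fir_temp
    ]

# Invented per-run monitoring records (rate in Hz, FIR temperature in deg C)
runs = [
    {"id": 1, "trigger_rate": 420.0, "fir_temp": -45.0},  # clear sky
    {"id": 2, "trigger_rate": 150.0, "fir_temp": -20.0},  # cloudy: low rate, warm FIR
    {"id": 3, "trigger_rate": 400.0, "fir_temp": -50.0},  # clear sky
]
clean = quality_mask(runs, min_trigger_rate=300.0, max_fir_temp=-40.0)
```

In the thesis, such cuts are applied as time masks rather than whole-run rejections, so that clean stretches of a partly affected run survive.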
After correction for trial factors, no candidate PBH evaporation is found in the data. Thus, new constraints on the local rate of PBH evaporations are derived: at the 99% confidence level, it is below 1.07 × 10^5 pc^-3 yr^-1. This constraint, obtained with a new, independent analysis approach, is in the range of existing limits on the evaporation rate.
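For orientation, a classical Poisson upper limit on the number of expected events is the basic building block behind rate limits of this kind (a null result translates into an upper limit on the expected count, which is then divided by the effective volume-time exposure). This sketch is not the thesis's full statistical treatment, which also handles trial factors and the instrument response.

```python
import math

def poisson_upper_limit(n_obs, cl):
    """Classical Poisson upper limit on the mean for n_obs observed events.
    For n_obs = 0 this is -ln(1 - cl); for higher counts we solve
    P(N <= n_obs | mu) = 1 - cl for mu by bisection."""
    if n_obs == 0:
        return -math.log(1.0 - cl)
    def cdf(mu):
        return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_obs + 1))
    lo, hi = 0.0, 100.0
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cdf(mid) > 1.0 - cl else (lo, mid)
    return (lo + hi) / 2

# Zero candidate evaporations -> ~4.61 expected events excluded at 99% CL;
# dividing by the (hypothetical) effective exposure V*T gives a rate limit.
n_ul = poisson_upper_limit(0, 0.99)
```

The exposure itself depends on the effective detection volume per burst duration, which is where the transient detection method's sensitivity enters.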
This thesis also investigates an alternative, novel approach to searching for PBHs with IACTs. Above the sublunar window, the PBH abundance is constrained by optical microlensing studies. The sampling speed, which is of the order of minutes to hours for traditional optical telescopes, is a limiting factor in extending these limits to lower masses. IACTs are also powerful instruments for fast-transient optical astronomy, with sampling down to O(ns). This thesis investigates whether IACTs could constrain the sublunar window with optical microlensing observations. The study confirms that, in principle, the fast sampling speed might allow extending microlensing searches into the sublunar mass window; however, the limiting factor for IACTs is their modest sensitivity to changes in optical flux. This thesis presents the expected rate of detectable events for VERITAS as well as prospects for possible future next-generation IACTs. For VERITAS, the rate of detectable microlensing events in the sublunar range is ~10^-6 per year of observation time; for a 100 times more sensitive future instrument, the prospect is ~0.05 events per year.