CrashNet
(2021)
Destructive car crash tests are an elaborate, time-consuming, and expensive necessity of the automotive development process. Today, finite element method (FEM) simulations are used to reduce costs by simulating car crashes computationally. We propose CrashNet, an encoder-decoder deep neural network architecture that reduces costs further and models specific outcomes of car crashes with high accuracy. We achieve this by formulating car crash events as a time series prediction task enriched with a set of scalar features. Traditional sequence-to-sequence models are usually composed of convolutional neural network (CNN) and CNN transpose layers. We propose to concatenate these with an MLP that learns how to inject the given scalars into the output time series. In addition, we replace the CNN transpose layers with 2D CNN transpose layers in order to force the model to process the hidden state of the set of scalars as one time series. The proposed CrashNet model can be trained efficiently and processes scalars and time series as input in order to infer the results of crash tests. CrashNet produces results faster and at lower cost than destructive tests and FEM simulations. Moreover, it represents a novel approach in the car safety management domain.
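The scalar-injection idea can be illustrated with a minimal numpy sketch (not the paper's architecture: the strided averaging, nearest-neighbour upsampling, weight matrices, and shapes below are all hypothetical stand-ins for the CNN encoder, CNN-transpose decoder, and MLP):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(series, stride=4):
    """Toy 'CNN encoder': strided averaging stands in for strided convolutions."""
    n = len(series) // stride
    return series[: n * stride].reshape(n, stride).mean(axis=1)

def scalar_mlp(scalars, out_len, W1, W2):
    """MLP mapping the scalar features to a hidden sequence of length out_len,
    so the decoder can treat it as one more time series (the 'injection' idea)."""
    h = np.tanh(W1 @ scalars)
    return (W2 @ h)[:out_len]

def decode(hidden, stride=4):
    """Toy 'CNN-transpose decoder': nearest-neighbour upsampling."""
    return np.repeat(hidden, stride)

# Hypothetical shapes, not taken from the paper.
T, stride, n_scalars = 128, 4, 5
series = rng.normal(size=T)           # e.g. a measured crash signal
scalars = rng.normal(size=n_scalars)  # e.g. impact speed, mass, angle

W1 = rng.normal(size=(16, n_scalars))
W2 = rng.normal(size=(T // stride, 16))

h_series = encode(series, stride)                  # hidden series, length 32
h_scalar = scalar_mlp(scalars, len(h_series), W1, W2)
hidden = np.stack([h_series, h_scalar])            # two "channels", one 2D map
prediction = decode(hidden.mean(axis=0), stride)   # back to length T
print(prediction.shape)  # (128,)
```

The point of the sketch is the data flow: the scalars are turned into a sequence of the same length as the encoded signal, stacked with it into a 2D hidden state, and decoded jointly back to full resolution.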
The integration of multiple data sources is a common problem in a large variety of applications. Traditionally, handcrafted similarity measures are used to discover, merge, and integrate multiple representations of the same entity (duplicates) into a large homogeneous collection of data. Often, these similarity measures do not cope well with the heterogeneity of the underlying dataset. In addition, domain experts are needed to manually design and configure such measures, which is time-consuming and requires extensive domain expertise. We propose a deep Siamese neural network capable of learning a similarity measure that is tailored to the characteristics of a particular dataset. With the properties of deep learning methods, we are able to eliminate the manual feature engineering process and thus considerably reduce the effort required for model construction. In addition, we show that it is possible to transfer knowledge acquired during the deduplication of one dataset to another, and thus significantly reduce the amount of data required to train a similarity measure. We evaluate our method on multiple datasets and compare our approach to state-of-the-art deduplication methods. Our approach outperforms competitors by up to +26 percent F-measure, depending on the task and dataset. In addition, we show that knowledge transfer is not only feasible but, in our experiments, led to an improvement in F-measure of up to +4.7 percent.
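The shared-weight ("Siamese") comparison can be sketched as follows (a toy numpy example, not the paper's model: the bigram featurizer, the random weight matrix, the exponential distance-to-similarity mapping, and the example records are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def featurize(record, dim=64):
    """Toy featurizer: count character bigrams into a fixed-size vector
    (a stand-in for the learned representation in the paper)."""
    v = np.zeros(dim)
    for a, b in zip(record, record[1:]):
        v[(ord(a) * 31 + ord(b)) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

# A single shared weight matrix -- the defining property of a Siamese
# network: both inputs pass through the *same* encoder.
W = 0.1 * rng.normal(size=(16, 64))

def encode(record):
    return np.tanh(W @ featurize(record))

def similarity(a, b):
    """Map the distance between the two shared-weight encodings to (0, 1]."""
    return float(np.exp(-np.linalg.norm(encode(a) - encode(b))))

# Duplicate-like pair vs. clearly distinct pair (made-up records):
print(similarity("Jon Smith, 12 Oak St", "John Smith, 12 Oak Street"))
print(similarity("Jon Smith, 12 Oak St", "Maria Garcia, 9 Elm Ave"))
```

In a trained Siamese network, `W` (and the featurizer) would be learned so that duplicates score high and non-duplicates score low; transfer learning then amounts to reusing the learned encoder on a new dataset.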
We performed numerical simulations with the Kuramoto model and experiments with oscillatory nickel electrodissolution to explore the dynamical features of the transients from random initial conditions to a fully synchronized (one-cluster) state. The numerical simulations revealed that certain networks (e.g., globally coupled or dense Erdős-Rényi random networks) showed relatively simple behavior, with a monotonic increase of the Kuramoto order parameter from the random initial condition to the fully synchronized state, and that the transient times exhibited a unimodal distribution. However, we identified modular networks with bridge elements that exhibited non-monotonic variation of the order parameter, with a local maximum and/or minimum. In these networks, the histogram of the transient times became bimodal, and the mean transient time scaled well with the inverse of the magnitude of the second largest eigenvalue of the network Laplacian matrix. The non-monotonic transients increased the relative standard deviation from about 0.3 to 0.5, i.e., the transient times became more diverse. The non-monotonic transients are related to the generation of phase patterns in which the modules are synchronized but approximately anti-phase to each other. The predictions of the numerical simulations were demonstrated in a population of coupled oscillatory electrochemical reactions in global, modular, and irregular tree networks. The findings clarify the role of network structure in the generation of complex transients, which can, for example, play a role in the intermittent desynchronization of the circadian clock due to external cues or in deep brain stimulation, where long transients are required after a desynchronization stimulus.
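The simple globally coupled case can be reproduced in miniature with the mean-field form of the Kuramoto model, dθ_i/dt = ω_i + K·r·sin(ψ − θ_i), where r·e^{iψ} is the mean field (a minimal sketch; N, K, the frequency spread, and the Euler step are arbitrary illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(2)

# Globally coupled Kuramoto model in mean-field form:
# dtheta_i/dt = omega_i + K * r * sin(psi - theta_i),
# where r * exp(i*psi) = mean_j exp(i*theta_j).
N, K, dt, steps = 64, 2.0, 0.05, 2000
omega = rng.normal(0.0, 0.1, size=N)    # narrow natural-frequency spread
theta = rng.uniform(0, 2 * np.pi, N)    # random initial condition

def order_parameter(theta):
    """Kuramoto order parameter r = |<exp(i*theta)>|; r = 1 is full synchrony."""
    return float(np.abs(np.exp(1j * theta).mean()))

r0 = order_parameter(theta)
for _ in range(steps):
    z = np.exp(1j * theta).mean()  # complex mean field r * exp(i*psi)
    theta = theta + dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
r1 = order_parameter(theta)
print(r0, "->", r1)  # r grows from a small value toward 1
```

For the modular networks discussed above, the mean-field shortcut no longer applies; the sum would run over the adjacency matrix, and the order parameter can then pass through local extrema while modules lock internally before locking to each other.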