
Deep Distributional Sequence Embeddings Based on a Wasserstein Loss

• Deep metric learning employs deep neural networks to embed instances into a metric space such that distances between instances of the same class are small and distances between instances from different classes are large. In most existing deep metric learning techniques, the embedding of an instance is given by a feature vector produced by a deep neural network and Euclidean distance or cosine similarity defines distances between these vectors. This paper studies deep distributional embeddings of sequences, where the embedding of a sequence is given by the distribution of learned deep features across the sequence. The motivation for this is to better capture statistical information about the distribution of patterns within the sequence in the embedding. When embeddings are distributions rather than vectors, measuring distances between embeddings involves comparing their respective distributions. The paper therefore proposes a distance metric based on Wasserstein distances between the distributions and a corresponding loss function for metric learning, which leads to a novel end-to-end trainable embedding model. We empirically observe that distributional embeddings outperform standard vector embeddings and that training with the proposed Wasserstein metric outperforms training with other distance functions.
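The record gives only the abstract, not the paper's exact formulation. As a rough illustration of the idea described there, the sketch below (Python/NumPy, with hypothetical names; an entropic Sinkhorn approximation stands in for whatever Wasserstein solver the authors actually use) treats a sequence embedding as the empirical distribution of its per-timestep feature vectors, compares two sequences by an approximate Wasserstein transport cost, and plugs that distance into a simple margin-based metric-learning loss. The random "features" are placeholders for the learned deep features the paper trains end to end.

    # Illustrative sketch only; not the paper's implementation.
    import numpy as np

    def sinkhorn_wasserstein(X, Y, eps=0.1, n_iter=200):
        """Entropic (Sinkhorn) approximation of the Wasserstein transport cost
        between two empirical distributions given by feature matrices
        X (n x d) and Y (m x d), with uniform weights over sequence positions."""
        n, m = len(X), len(Y)
        a = np.full(n, 1.0 / n)
        b = np.full(m, 1.0 / m)
        # squared Euclidean ground cost between per-timestep features
        C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** 2
        reg = eps * C.max()              # scale regularization to the cost magnitude
        K = np.exp(-C / reg)
        u = np.ones(n)
        for _ in range(n_iter):          # Sinkhorn fixed-point iterations
            v = b / (K.T @ u)
            u = a / (K @ v)
        P = u[:, None] * K * v[None, :]  # approximate optimal transport plan
        return float(np.sum(P * C))      # transport cost under the plan

    def contrastive_wasserstein_loss(X, Y, same_class, margin=1.0):
        """Hinge-style metric-learning loss: pull same-class sequence embeddings
        together, push different-class ones at least `margin` apart."""
        d = sinkhorn_wasserstein(X, Y)
        return d if same_class else max(0.0, margin - d)

    # Toy usage: two sequences represented by hypothetical per-timestep features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 8))   # sequence 1: 30 timesteps, 8 feature dims
    Y = rng.normal(size=(25, 8))   # sequence 2: 25 timesteps, 8 feature dims
    print(contrastive_wasserstein_loss(X, Y, same_class=False))

In the actual model the feature extractor and the loss would be trained jointly, which requires a differentiable Wasserstein approximation; the Sinkhorn iteration shown here is one common choice for that, but the abstract does not state which solver or loss variant the authors use.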

Metadata
Author details: Ahmed Abdelwahab, Niels Landwehr
DOI: https://doi.org/10.1007/s11063-022-10784-y
ISSN: 1370-4621
ISSN: 1573-773X
Parent title (English): Neural Processing Letters
Publisher: Springer
Place of publication: Dordrecht
Publication type: Scholarly article
Language: English
Date of first publication: 18.03.2022
Year of publication: 2022
Release date: 22.02.2024
Free keywords / tags: Deep learning; Metric learning; Sequence embeddings
Number of pages: 21
Funding institution: German Research Foundation [LA3270/1-1]
Organizational units: Mathematisch-Naturwissenschaftliche Fakultät / Institut für Informatik und Computational Science
DDC classification: 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 000 Computer science, information & general works
Peer review: Refereed
Publication channel: Open Access / Hybrid Open Access
License: CC-BY - Attribution 4.0 International