Image Captioning with Deep Bidirectional LSTMs and Multi-Task Learning
Generating a novel and descriptive caption for an image is attracting increasing interest in the computer vision, natural language processing, and multimedia communities. In this work, we propose an end-to-end trainable deep bidirectional Long Short-Term Memory (Bi-LSTM) model to address the problem. By combining a deep convolutional neural network (CNN) with two separate LSTM networks, our model is capable of learning long-term visual-language interactions by exploiting both history and future context information in a high-level semantic space. We also explore deep multimodal bidirectional models, in which we increase the depth of the nonlinear transitions in different ways to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale, and vertical mirror are proposed to prevent over-fitting when training deep models. To understand how our models "translate" an image into a sentence, we visualize and qualitatively analyze the evolution of the Bi-LSTM internal states over time. The effectiveness and generality of the proposed models are evaluated on four benchmark datasets: Flickr8K, Flickr30K, MSCOCO, and Pascal1K. We demonstrate that Bi-LSTM models achieve highly competitive performance on both caption generation and image-sentence retrieval, even without integrating additional mechanisms (e.g., object detection or attention models). Our experiments also show that multi-task learning increases model generality and improves performance. Finally, we demonstrate that transfer learning with the Bi-LSTM model significantly outperforms previous methods on the Pascal1K dataset.
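The abstract describes the core architecture: pooled CNN features feed two separate LSTMs that read the caption in opposite directions, so each time step sees both history and future context. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; the class name `BiLSTMCaptioner`, the dimensions, and the fusion scheme are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a Bi-LSTM captioning model (assumes PyTorch).
# All names and hyperparameters are illustrative, not the paper's code.
import torch
import torch.nn as nn

class BiLSTMCaptioner(nn.Module):
    def __init__(self, vocab_size, feat_dim=2048, embed_dim=512, hidden_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, embed_dim)    # CNN feature -> embedding space
        self.embed = nn.Embedding(vocab_size, embed_dim)  # word embeddings
        # Two separate LSTMs read the sequence forward and backward,
        # giving each time step access to history and future context.
        self.fwd_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.bwd_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)  # fuse both directions

    def forward(self, cnn_feats, captions):
        # cnn_feats: (B, feat_dim) pooled CNN features; captions: (B, T) word ids.
        img = self.img_proj(cnn_feats).unsqueeze(1)       # (B, 1, embed_dim)
        words = self.embed(captions)                      # (B, T, embed_dim)
        seq = torch.cat([img, words], dim=1)              # image acts as the first token
        h_fwd, _ = self.fwd_lstm(seq)                     # left-to-right pass
        h_bwd, _ = self.bwd_lstm(torch.flip(seq, dims=[1]))
        h_bwd = torch.flip(h_bwd, dims=[1])               # realign backward states in time
        return self.out(torch.cat([h_fwd, h_bwd], dim=-1))  # (B, T+1, vocab)

# Usage with random inputs:
model = BiLSTMCaptioner(vocab_size=10000)
feats = torch.randn(4, 2048)                # e.g. pooled CNN features
caps = torch.randint(0, 10000, (4, 12))     # word-id captions
logits = model(feats, caps)                 # (4, 13, 10000)
```

The multi-crop, multi-scale, and vertical-mirror augmentation mentioned in the abstract could likewise be sketched with torchvision transforms (again an assumption about tooling, not the paper's pipeline):

```python
from torchvision import transforms

# Illustrative augmentation pipeline: random scale, crop, and vertical mirror.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),  # multi-scale + multi-crop
    transforms.RandomVerticalFlip(p=0.5),                 # vertical mirror
    transforms.ToTensor(),
])
```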
Authors: | Cheng Wang, Haojin Yang, Christoph Meinel |
---|---|
DOI: | https://doi.org/10.1145/3115432 |
ISSN (print): | 1551-6857 |
ISSN (online): | 1551-6865 |
Title of the parent work (English): | ACM Transactions on Multimedia Computing, Communications, and Applications |
Publisher: | Association for Computing Machinery |
Place of publication: | New York |
Publication type: | Scholarly article |
Language: | English |
Date of first publication: | 25 April 2018 |
Year of publication: | 2018 |
Date of release: | 1 December 2021 |
Free keywords / tags: | Deep learning; LSTM; image captioning; multimodal representations; multi-task learning |
Volume: | 14 |
Issue: | 2 |
Number of pages: | 20 |
Organizational units: | An-Institute / Hasso-Plattner-Institut für Digital Engineering gGmbH |
DDC classification: | 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 000 Computer science, information & general works |
| 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing & computer science |
Peer review: | Refereed |