Unleashing Text-to-Image Diffusion Prior for Zero-Shot Image Captioning

¹Sun Yat-sen University, ²HiDream.ai Inc.
ECCV 2024
PDF · Code · Dataset

Abstract

Recently, zero-shot image captioning (ZIC) has gained increasing attention, where only text data is available for training. The remarkable progress in text-to-image diffusion models presents the potential to resolve this task by employing synthetic image-caption pairs generated by such a pre-trained prior. Nonetheless, defective details in the salient regions of the synthetic images introduce semantic misalignment between the synthetic image and text, leading to compromised results. To address this challenge, we propose a novel Patch-wise Cross-modal feature Mix-up (PCM) mechanism that adaptively mitigates the unfaithful contents in a fine-grained manner during training and can be integrated into most encoder-decoder frameworks, yielding our PCM-Net. Specifically, for each input image, salient visual concepts are first detected based on image-text similarity in CLIP space. Next, the patch-wise visual features of the input image are selectively fused with the textual features of the salient visual concepts, producing a mixed-up feature map with fewer defective elements. Finally, a visual-semantic encoder refines the derived feature map, which is then fed into the sentence decoder for caption generation. Additionally, to facilitate model training with synthetic data, a novel CLIP-weighted cross-entropy loss is devised to prioritize high-quality image-text pairs over low-quality counterparts. Extensive experiments on the MSCOCO and Flickr30k datasets demonstrate the superiority of our PCM-Net over state-of-the-art VLM-based approaches, and the synthetic SynthImgCap dataset is released to support further research in vision-language representation learning.
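To make the idea above concrete, here is a minimal, hypothetical sketch of the patch-wise cross-modal mix-up and the CLIP-weighted cross-entropy loss, written against the Hugging Face `transformers` CLIP implementation. The concept vocabulary, the mixing rule, the similarity threshold, and the weight normalization are illustrative assumptions rather than the exact formulation used in PCM-Net.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval().to(device)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


@torch.no_grad()
def pcm_mixup(image, concept_vocab, top_k=5, mix_threshold=0.2):
    """Return a patch-level feature map in which patches that are weakly grounded
    in the detected salient concepts are blended with those concepts' text features."""
    inputs = processor(text=concept_vocab, images=image,
                       return_tensors="pt", padding=True).to(device)

    # 1) Detect salient concepts via image-text similarity in CLIP space.
    text_feat = F.normalize(clip.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]), dim=-1)  # (V, D)
    img_feat = F.normalize(clip.get_image_features(
        pixel_values=inputs["pixel_values"]), dim=-1)                                      # (1, D)
    salient = (img_feat @ text_feat.T).squeeze(0).topk(top_k).indices                      # (K,)
    concept_feat = text_feat[salient]                                                      # (K, D)

    # 2) Patch-wise visual features, projected into the shared CLIP space
    #    (applying the visual projection to patch tokens is an assumption of this sketch).
    tokens = clip.vision_model(pixel_values=inputs["pixel_values"]).last_hidden_state[:, 1:, :]
    patch_feat = F.normalize(clip.visual_projection(
        clip.vision_model.post_layernorm(tokens)), dim=-1)                                 # (1, P, D)

    # 3) Mix-up: patches poorly aligned with every salient concept are replaced by the
    #    text feature of their best-matching concept; well-aligned patches are kept.
    sim = patch_feat @ concept_feat.T                        # (1, P, K)
    best_sim, best_idx = sim.max(dim=-1)                     # (1, P)
    alpha = (best_sim < mix_threshold).float().unsqueeze(-1)
    return (1.0 - alpha) * patch_feat + alpha * concept_feat[best_idx]                     # (1, P, D)


def clip_weighted_ce(logits, targets, clip_scores, pad_id=0):
    """Token-level cross-entropy re-weighted per pair by a normalized CLIP
    image-text score, so higher-quality synthetic pairs contribute more."""
    weights = torch.softmax(clip_scores, dim=0) * clip_scores.numel()          # mean weight ~ 1
    token_ce = F.cross_entropy(logits.transpose(1, 2), targets,
                               ignore_index=pad_id, reduction="none")          # (B, T)
    return (weights.unsqueeze(1) * token_ce).sum() / (targets != pad_id).sum()
```

The mixed-up feature map returned by `pcm_mixup` would then be refined by the visual-semantic encoder before caption decoding, with `clip_weighted_ce` replacing the standard cross-entropy objective during training on synthetic pairs.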

SynthImgCap Dataset

The SynthImgCap dataset provides fine-grained, high-quality synthetic image-caption pairs for zero-shot (text-only) image captioning research, offering a cost-effective alternative to the labor-intensive, human-annotated paired data traditionally used in vision-and-language tasks. It can also serve as a benchmark and encourage the use of extensive web-scale image-text data to create even larger synthetic image-text pairs for multimodal learning. Such synthetic data mitigates the copyright issues associated with web-crawled datasets, making it more conducive to open-source contributions and academic development.

The SynthImgCap dataset was built with a text-to-image diffusion model by generating one synthetic image for each sentence in the training corpus, resulting in 542,401 and 144,541 synthetic image-text pairs for MSCOCO-SD and Flickr30k-SD, respectively (a minimal generation sketch is given below). You can access it on Google Drive or BaiduYun.

Google Drive · BaiduYun (extraction code: `vpkx`)
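
As a rough illustration of how such pairs can be produced, the snippet below generates one image per caption with an off-the-shelf Stable Diffusion checkpoint via the `diffusers` library. The checkpoint name, sampling hyperparameters, and the `captions.json` / `synth/` file layout are assumptions for illustration, not necessarily the setup used to build SynthImgCap.

```python
import json
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical input: a list of {"id": ..., "caption": ...} records, one per training sentence.
with open("captions.json") as f:
    records = json.load(f)

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

for rec in records:
    # One synthetic image per caption; step count and guidance scale are illustrative defaults.
    image = pipe(rec["caption"], num_inference_steps=50, guidance_scale=7.5).images[0]
    image.save(f"synth/{rec['id']}.jpg")
```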

Framework

PCM-Net Framework.

BibTeX

@inproceedings{luo2024unleashing,
        title = {Unleashing Text-to-Image Diffusion Prior for Zero-Shot Image Captioning},
        author = {Luo, Jianjie and Chen, Jingwen and Li, Yehao and Pan, Yingwei and Feng, Jianlin and Chao, Hongyang and Yao, Ting},
        booktitle = {European Conference on Computer Vision (ECCV)},
        year = {2024}
}