
Towards generating and evaluating iconographic image captions of artworks

Cetinić, Eva (2021) Towards generating and evaluating iconographic image captions of artworks. Journal of Imaging, 7 (8). ISSN 2313-433X

PDF - Published Version - article
Available under License Creative Commons Attribution.



Automatically generating accurate and meaningful textual descriptions of images is an ongoing research challenge. Recently, considerable progress has been made by adopting multimodal deep learning approaches that integrate vision and language. However, image captioning models are most commonly developed using datasets of natural images, while few contributions have been made in the domain of artwork images. One of the main reasons is the lack of large-scale art datasets with adequate image-text pairs. Another is that generating accurate descriptions of artwork images is particularly challenging, because descriptions of artworks are more complex and can include multiple levels of interpretation. It is therefore also especially difficult to effectively evaluate generated captions of artwork images. The aim of this work is to address some of those challenges by utilizing a large-scale dataset of artwork images annotated with concepts from the Iconclass classification system. Using this dataset, a captioning model is developed by fine-tuning a transformer-based vision-language pretrained model. Due to the complex relations between image and text pairs in the domain of artwork images, the generated captions are evaluated using several quantitative and qualitative approaches. Performance is assessed using standard image captioning metrics and a recently introduced reference-free metric. The quality of the generated captions and the model's capacity to generalize to new data are explored by applying the model to another art dataset and comparing the relation between commonly generated captions and the genre of artworks. The overall results suggest that the model can generate meaningful captions that indicate a stronger relevance to the art historical context, particularly in comparison to captions obtained from models trained only on natural image datasets.
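The standard image captioning metrics mentioned in the abstract (e.g. BLEU) are built on clipped n-gram precision between a generated caption and reference text. A minimal illustrative sketch of that core computation, using only the standard library; the function name and the example caption pair are hypothetical and not taken from the paper's evaluation pipeline:

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision of a candidate caption against one reference.

    Each candidate n-gram is credited at most as many times as it occurs
    in the reference ("clipping"), then the credits are divided by the
    total number of candidate n-grams.
    """
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    clipped = sum(min(count, ref_ngrams[g]) for g, count in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return clipped / total if total else 0.0

# Hypothetical pair: a generated caption vs. an Iconclass-derived reference.
generated = "the virgin mary holding the christ child"
reference = "the virgin mary with the christ child"
print(round(ngram_precision(generated, reference, 1), 3))  # unigram precision
```

Full BLEU additionally combines several n-gram orders and applies a brevity penalty; reference-free metrics instead score a caption directly against the image, which is what makes them attractive for artworks where many valid interpretive descriptions exist.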

Item Type: Article
Uncontrolled Keywords: image captioning ; vision-language models ; fine-tuning ; visual art
Subjects: TECHNICAL SCIENCES > Computing
Divisions: Center for Informatics and Computing
Depositing User: Eva Cetinić
Date Deposited: 04 Apr 2022 09:45
DOI: 10.3390/jimaging7080123
