A lightweight convolutional neural network for large-scale Chinese image caption

Dexin Zhao, Ruixue Yang, Shutao Guo

Optoelectronics Letters ›› 2021, Vol. 17 ›› Issue (6): 361-366. DOI: 10.1007/s11801-021-0100-z

Abstract

Image captioning is a high-level task in the area of image understanding, in which most models adopt a convolutional neural network (CNN) to extract image features and a recurrent neural network (RNN) to generate sentences. In recent years, researchers have tended to design complex networks with deeper layers to improve feature extraction. Increasing the size of the network can yield higher-quality features, but it is inefficient in terms of computational cost, and the large number of parameters introduced by such CNNs makes these models difficult to apply in everyday scenarios. To reduce the information loss of the convolutional process at lower cost, we propose a lightweight convolutional neural network named Bifurcate-CNN (B-CNN). Furthermore, while most recent work is devoted to generating captions in English, in this paper we develop an image captioning model that generates descriptions in Chinese. Compared with Inception-v3, our model is shallower, has fewer parameters, and incurs a lower computational cost. Evaluated on the AI CHALLENGER dataset, our model improves BLEU-4 from 46.1 to 49.9 and CIDEr from 142.5 to 156.6.
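
The abstract describes the standard encoder-decoder captioning pipeline: a CNN encodes the image into a feature vector and an RNN decodes that vector into a sentence. The following is a minimal PyTorch sketch of that pipeline for illustration only; it is not the paper's B-CNN, and the EncoderCNN/DecoderRNN classes, layer sizes, and vocabulary size are assumptions made for the example.

```python
import torch
import torch.nn as nn

class EncoderCNN(nn.Module):
    """Toy convolutional encoder: maps an image to one feature vector.
    Stands in for the paper's B-CNN / Inception-v3; layers are illustrative."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global average pooling
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, images):                 # images: (B, 3, H, W)
        x = self.features(images).flatten(1)   # (B, 64)
        return self.fc(x)                      # (B, embed_dim)

class DecoderRNN(nn.Module):
    """LSTM decoder: conditions on the image embedding and emits word logits."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feat, captions):     # captions: (B, T) token ids
        # Prepend the image feature as the first "token" of the sequence.
        inputs = torch.cat([img_feat.unsqueeze(1), self.embed(captions)], dim=1)
        out, _ = self.lstm(inputs)
        return self.fc(out)                    # (B, T+1, vocab_size)

# Forward pass with random data (vocabulary size 5000 is hypothetical).
encoder, decoder = EncoderCNN(), DecoderRNN(vocab_size=5000)
images = torch.randn(4, 3, 224, 224)
captions = torch.randint(0, 5000, (4, 20))
logits = decoder(encoder(images), captions)    # (4, 21, 5000)
```

In the authors' model, the encoder would be the proposed lightweight B-CNN rather than the toy convolution stack above; generating Chinese captions mainly changes the tokenization and vocabulary, not this overall encoder-decoder structure.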

Cite this article

Dexin Zhao, Ruixue Yang, Shutao Guo. A lightweight convolutional neural network for large-scale Chinese image caption. Optoelectronics Letters, 2021, 17(6): 361-366. DOI: 10.1007/s11801-021-0100-z

References

[1] Li X, Uricchio T, Ballan L, Bertini M, Snoek C G, Bimbo A D. ACM Computing Surveys, 2016, 49: 1.

[2] Vinyals O, Toshev A, Bengio S and Erhan D, Show and Tell: A Neural Image Caption Generator, IEEE Conference on Computer Vision and Pattern Recognition, 3156 (2015).

[3] Jia X, Gavves E, Fernando B and Tuytelaars T, Guiding the Long-Short Term Memory Model for Image Caption Generation, IEEE International Conference on Computer Vision, 2407 (2015).

[4] Lu J, Yang J, Batra D and Parikh D, Neural Baby Talk, IEEE Conference on Computer Vision and Pattern Recognition, 7219 (2018).

[5] Rennie S J, Marcheret E, Mroueh Y, Ross J and Goel V, Self-Critical Sequence Training for Image Captioning, IEEE Conference on Computer Vision and Pattern Recognition, 7008 (2017).

[6] Yang J, Sun Y, Liang J, Ren B, Lai S. Neurocomputing, 2019, 328: 56.

[7] Szegedy C, Vanhoucke V, Ioffe S, Shlens J and Wojna Z, Rethinking the Inception Architecture for Computer Vision, IEEE Conference on Computer Vision and Pattern Recognition, 2818 (2016).

[8] Liu Z, Ma L, Wu J, Sun L. Journal of Chinese Information Processing, 2017, 31: 162 (in Chinese).

[9] Lan W, Wang X, Yang G, Li X. Chinese Journal of Computers, 2019, 42: 136 (in Chinese).

[10] Zhao D, Chang Z, Guo S. Neurocomputing, 2019, 329: 476.

[11] Srivastava R, Greff K and Schmidhuber J, Training Very Deep Networks, Advances in Neural Information Processing Systems, 2368 (2015).

[12] Kulkarni G, Premraj V, Dhar S, Li S, Choi Y, Berg A and Berg T, Baby Talk: Understanding and Generating Simple Image Descriptions, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2891 (2014).

[13] Wu J, Zheng H, Zhao B, Li Y, Yan B, Liang R, Wang W, Zhou S, Lin G, Fu Y, Wang Y and Wang Y, Large-Scale Datasets for Going Deeper in Image Understanding, IEEE International Conference on Multimedia and Expo (ICME), 1480 (2019).

[14] He K, Zhang X, Ren S and Sun J, Deep Residual Learning for Image Recognition, IEEE Conference on Computer Vision and Pattern Recognition, 770 (2016).

[15] Szegedy C, Ioffe S, Vanhoucke V and Alemi A A, Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, AAAI Conference on Artificial Intelligence, 4278 (2017).

[16] Papineni K, Roukos S, Ward T and Zhu W, BLEU: A Method for Automatic Evaluation of Machine Translation, 40th Annual Meeting of the Association for Computational Linguistics, 311 (2002).

[17] Banerjee S and Lavie A, METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments, Meeting of the Association for Computational Linguistics, 65 (2005).

[18] Lin C, ROUGE: A Package for Automatic Evaluation of Summaries, Meeting of the Association for Computational Linguistics, 74 (2004).

[19] Vedantam R, Zitnick C L and Parikh D, CIDEr: Consensus-Based Image Description Evaluation, IEEE Conference on Computer Vision and Pattern Recognition, 4566 (2015).
