TV100: a TV series dataset that pre-trained CLIP has not seen

Da-Wei ZHOU, Zhi-Hong QI, Han-Jia YE, De-Chuan ZHAN

Front. Comput. Sci., 2024, 18(5): 185349. DOI: 10.1007/s11704-024-40217-z
Artificial Intelligence
LETTER



Acknowledgements

This work was partially supported by the National Science and Technology Major Project (2022ZD0114805), the National Natural Science Foundation of China (Grant Nos. 62376118, 62006112, 62250069, 61921006), and the Collaborative Innovation Center of Novel Software Technology and Industrialization.

Competing interests

The authors declare that they have no competing interests or financial conflicts to disclose.

RIGHTS & PERMISSIONS

2024 Higher Education Press
