Chaotic moving video quality enhancement based on deep in-loop filtering

Tong Tang , Yi Yang , Dapeng Wu , Ruyan Wang , Zhidu Li

2024, Vol. 10, Issue 6: 1708-1715. DOI: 10.1016/j.dcan.2023.09.001

Research article
Abstract

The Joint Video Experts Team (JVET) has released the latest-generation video coding standard, Versatile Video Coding (VVC, H.266). The in-loop filter in VVC inherits the De-Blocking Filter (DBF) and Sample Adaptive Offset (SAO) of High Efficiency Video Coding (HEVC, H.265) and adds an Adaptive Loop Filter (ALF) to minimize the error between the original and decoded samples. However, for chaotic moving video encoded at low bitrates, serious blocking artifacts remain even after in-loop filtering, because quantization severely distorts texture details. To tackle this problem, this paper proposes a Convolutional Neural Network (CNN)-based VVC in-loop filter for low-bitrate encoding of chaotic moving video. First, a blur-aware attention network is designed to perceive the blurring effect and restore texture details. Then, a deep in-loop filtering method built on this network is proposed to replace the VVC in-loop filter. Experimental results show that the proposed method saves 8.3% of bit consumption on average at similar subjective quality. Under comparable bitrate consumption, it reconstructs more texture information, significantly reducing blocking artifacts and improving visual quality.
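The blur-aware blending idea sketched in the abstract can be illustrated with a minimal numpy example. This is not the paper's trained network: the learned attention is replaced by a hand-crafted proxy (local Laplacian energy as a blur score), and `restored` stands in for the output of any restoration model. All function names here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def blur_attention_map(frame: np.ndarray, k: int = 5) -> np.ndarray:
    """Hand-crafted stand-in for a learned blur-attention map.

    Uses local Laplacian energy as a blur proxy: regions with little
    high-frequency energy (over-smoothed by quantization) get weights
    near 1; sharp regions get weights near 0.
    """
    lap = np.abs(
        -4.0 * frame
        + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
        + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)
    )
    pad = k // 2
    padded = np.pad(lap, pad, mode="edge")
    # mean Laplacian energy over each k x k neighborhood
    energy = sliding_window_view(padded, (k, k)).mean(axis=(2, 3))
    # low energy (blurred) -> attention weight close to 1
    return 1.0 - energy / (energy.max() + 1e-8)

def blur_aware_filter(decoded: np.ndarray, restored: np.ndarray) -> np.ndarray:
    """Blend a restoration output with the decoded frame, trusting the
    restoration most where the frame looks blurred.

    Illustrative only: the paper replaces the whole VVC in-loop filter
    with a CNN rather than blending with a hand-crafted map.
    """
    w = blur_attention_map(decoded)
    return w * restored + (1.0 - w) * decoded
```

Because the weights lie in [0, 1], each output pixel is a convex combination of the decoded and restored samples, so the blend can only move a pixel toward the restoration, never past it.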

Keywords

H.266, Versatile Video Coding / Convolutional neural network / In-loop filter

Cite this article

Tong Tang, Yi Yang, Dapeng Wu, Ruyan Wang, Zhidu Li. Chaotic moving video quality enhancement based on deep in-loop filtering. 2024, 10(6): 1708-1715. DOI: 10.1016/j.dcan.2023.09.001


