Face inpainting based on edge confrontation combined with hierarchical gated convolution

Fengwen ZHAI , Zhao ZHOU , Fanglin SUN , Jing JIN

Journal of Measurement Science and Instrumentation ›› 2024, Vol. 15 ›› Issue (1): 33-42. DOI: 10.62756/jmsi.1674-8042.2024004
Signal and image processing technology

Abstract

Aiming at the problems of edge blur and distortion in current damaged face image inpainting, a two-stage hierarchical gated convolutional network (HGCN) was proposed and combined with an edge adversarial network for face image inpainting. Firstly, the edge adversarial network was adopted to generate edge images. Secondly, the edge images, the masks and the occluded images were combined to train the generative adversarial network (GAN) model of the HGCN to generate the inpainted face images. In the HGCN, traditional convolution was replaced by gated convolution, and dilated convolution was introduced. The main structure of the HGCN consists of a coarse inpainting module and a fine inpainting module. In the coarse inpainting module, an encoder-decoder network structure was used for coarse inpainting. In the fine inpainting module, an attention mechanism was introduced to enhance the feature extraction ability and further refine the inpainting results. In the experiments, the CelebA-HQ dataset and the NVIDIA irregular mask dataset were used as training datasets, a gated convolution network and an attention-module network were adopted as comparison networks, and PSNR, SSIM and MAE were used as evaluation indicators. The experimental results demonstrated that for face images with missing areas of less than 20%, the proposed network outperforms the two comparison networks on all three indicators, and for face images with missing areas greater than 20%, it is close to the comparison networks. In terms of visual effects, the proposed method also surpasses the two comparison networks in details. The proposed network can evidently improve the inpainting effect, especially in image details.
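The abstract's key architectural choice is replacing traditional convolution with gated convolution, so that a learned soft mask can down-weight invalid (occluded) pixels instead of treating every pixel as equally valid. The following is a minimal NumPy sketch of that idea only, not the authors' implementation; the kernels and input here are illustrative placeholders, and a real HGCN would learn them and stack many such layers with dilation and attention.

```python
import numpy as np

def conv2d(x, kernel):
    # Naive 'valid' 2-D convolution (no padding, stride 1),
    # enough to illustrate the gating mechanism.
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def gated_conv(x, k_feature, k_gate):
    # Gated convolution: one branch computes features, the other
    # computes a sigmoid gate in (0, 1) that scales the feature
    # response at every spatial location, letting the network
    # suppress contributions from masked/damaged regions.
    feature = conv2d(x, k_feature)
    gate = 1.0 / (1.0 + np.exp(-conv2d(x, k_gate)))  # sigmoid
    return feature * gate

# Illustrative usage on a toy single-channel "image".
x = np.ones((8, 8))
k_feature = np.full((3, 3), 0.1)   # placeholder feature kernel
k_gate = np.full((3, 3), -1.0)     # placeholder gate kernel
y = gated_conv(x, k_feature, k_gate)
```

In a trained network the gate kernel learns to output values near 0 over hole regions and near 1 over valid pixels, which is why gated convolution handles free-form masks better than a plain convolution.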

Keywords

deep learning / face inpainting / hierarchical gated convolutional network (HGCN) / edge confrontation / generative adversarial network (GAN)

Cite this article

Fengwen ZHAI, Zhao ZHOU, Fanglin SUN, Jing JIN. Face inpainting based on edge confrontation combined with hierarchical gated convolution. Journal of Measurement Science and Instrumentation, 2024, 15(1): 33-42. DOI: 10.62756/jmsi.1674-8042.2024004


