Fast event-inpainting based on lightweight generative adversarial nets

Sheng Liu , Haohao Cheng , Shengyue Huang , Kun Jin , Huanran Ye

Optoelectronics Letters ›› 2021, Vol. 17 ›› Issue (8): 507-512. DOI: 10.1007/s11801-021-0201-8

Abstract

Event-based cameras generate sparse event streams and capture high-speed motion information; however, as their temporal resolution increases, their spatial resolution drops sharply. Although generative adversarial networks (GANs) have achieved remarkable results in conventional image restoration, applying them directly to event inpainting obscures the fast-response characteristic of the event camera and leaves the sparsity of the event stream underexploited. To tackle these challenges, an event-inpainting network is proposed. The number and structure of the network layers are redesigned to match the sparsity of events, and the dimensionality of the convolutions is increased to retain more spatiotemporal information. To ensure the temporal consistency of the inpainted images, an event-sequence discriminator is added. Tests were performed on the DHP19 and MVSEC datasets. Compared with a state-of-the-art conventional image-inpainting method, the proposed method reduces the number of parameters by 93.5% and increases inference speed sixfold without noticeably degrading the quality of the restored images. In addition, a human pose estimation experiment shows that the model can fill in human motion information in high-frame-rate scenes.
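To make the two architectural ideas in the abstract concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code): a small encoder-decoder generator built from 3D convolutions, so the time axis of a voxelized event stream is preserved, paired with a spectrally normalized discriminator that scores whole event sequences rather than single frames. All layer counts, channel widths, and the two-channel polarity voxel-grid input are illustrative assumptions.

# Hypothetical sketch of a lightweight event-inpainting GAN. Illustrates the
# two ideas named in the abstract: 3D convolutions that keep the temporal
# axis of the event stream, and a discriminator over whole event sequences
# for temporal consistency. Sizes and layer counts are assumptions.
import torch
import torch.nn as nn

class EventGenerator(nn.Module):
    """Encoder-decoder over voxelized events shaped (N, C, T, H, W)."""
    def __init__(self, in_ch=2, base=16):
        super().__init__()
        # Downsample only spatially (stride 1 on the time axis),
        # so temporal resolution is never thrown away.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, base, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.InstanceNorm3d(base), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base, base * 2, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.InstanceNorm3d(base * 2), nn.LeakyReLU(0.2, inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(base * 2, base, kernel_size=(3, 4, 4),
                               stride=(1, 2, 2), padding=1),
            nn.InstanceNorm3d(base), nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose3d(base, in_ch, kernel_size=(3, 4, 4),
                               stride=(1, 2, 2), padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class SequenceDiscriminator(nn.Module):
    """Scores a whole event sequence, encouraging temporal consistency."""
    def __init__(self, in_ch=2, base=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.utils.spectral_norm(nn.Conv3d(in_ch, base, 4, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            nn.utils.spectral_norm(nn.Conv3d(base, base * 2, 4, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base * 2, 1, kernel_size=3, padding=1),  # patch-style logits
        )

    def forward(self, x):
        return self.net(x)

# Smoke test on a toy voxel grid: 2 polarity channels, 8 time bins, 64x64.
if __name__ == "__main__":
    events = torch.randn(1, 2, 8, 64, 64)
    gen, disc = EventGenerator(), SequenceDiscriminator()
    fake = gen(events)
    print(fake.shape, disc(fake).shape)  # (1, 2, 8, 64, 64), patch logits

In a full training loop this pair would be driven by the usual adversarial loss plus a reconstruction term over masked event voxels; the smoke test only verifies that shapes round-trip through the generator.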

Cite this article

Sheng Liu, Haohao Cheng, Shengyue Huang, Kun Jin, Huanran Ye. Fast event-inpainting based on lightweight generative adversarial nets. Optoelectronics Letters, 2021, 17(8): 507-512. DOI: 10.1007/s11801-021-0201-8


