Rendering acceleration method based on JND and sample gradient

Ripei Zhang, Chunyi Chen

Optoelectronics Letters, 2025, 21(3): 177-182. DOI: 10.1007/s11801-025-3288-5


Abstract

Currently, iterative rendering methods mainly allocate a fixed number of samples to pixels that have not yet been fully rendered, based on a computed completion rate. This strategy ignores how pixel values changed during earlier rendering passes, which may lead to unnecessary additional iterations. To solve this problem, we propose a sampling allocation method based on just noticeable difference (JND) and sample gradient information. Firstly, we identify the fast-converging regions of the scene, such as environment-map regions and light-source regions, by computing the differences between four sets of pre-rendered images. Afterwards, we use a long short-term memory (LSTM) network to predict the JND information of the high-quality rendering result, denoted JND_f. Then, during the iterative rendering process, we use JND_f and the sample gradients to calculate the number of additional samples required for the next iteration. Finally, we use a fixed JND threshold and the grayscale change rate as the termination conditions for rendering. Experimental verification shows that our method's rendering results have significant advantages in peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM).
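To make the allocation and termination steps above concrete, the following is a minimal Python/NumPy sketch of logic of that general shape. The function names (allocate_samples, should_stop), the parameters (base_spp, max_extra, jnd_threshold, rate_threshold), and the exact formulas are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def allocate_samples(prev_img, curr_img, jnd_f, base_spp=4, max_extra=32):
        # Sample gradient: per-pixel change between two consecutive iterations.
        grad = np.abs(curr_img - prev_img)
        # Relate the change to the predicted JND of the converged result (JND_f):
        # pixels whose change is still well above their JND receive more samples.
        ratio = grad / np.maximum(jnd_f, 1e-6)
        extra = np.clip(np.round(base_spp * ratio), 0, max_extra).astype(np.int32)
        return extra

    def should_stop(prev_img, curr_img, jnd_f, jnd_threshold=0.02, rate_threshold=0.01):
        # Grayscale change rate between iterations.
        change_rate = np.mean(np.abs(curr_img - prev_img)) / max(np.mean(prev_img), 1e-6)
        # Fraction of pixels whose change already falls below the fixed JND threshold.
        below_jnd = np.mean(np.abs(curr_img - prev_img) <= np.minimum(jnd_f, jnd_threshold))
        return change_rate < rate_threshold and below_jnd > 0.99

In such a loop, allocate_samples would decide how many extra samples each pixel receives in the next rendering pass, and should_stop would end the iteration once per-pixel changes drop below perceptual visibility.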

Cite this article

Ripei Zhang, Chunyi Chen. Rendering acceleration method based on JND and sample gradient. Optoelectronics Letters, 2025, 21(3): 177-182. DOI: 10.1007/s11801-025-3288-5



RIGHTS & PERMISSIONS

Tianjin University of Technology
