Perception-entropy-driven temporal reusing for real-time ray tracing

Zhongye Shen , Chunyi Chen , Weixun Yao , Haiyang Yu , Jun Peng

Optoelectronics Letters ›› 2025, Vol. 21 ›› Issue (6) : 378-384. DOI: 10.1007/s11801-025-4129-2
Abstract

Although ray tracing produces high-fidelity, realistic images, it is computationally burdensome at real-time rendering rates. Perception-driven rendering techniques generate images whose residual noise and distortion are generally imperceptible to the human visual system, thereby reducing rendering cost. In this paper, we introduce a perception-entropy-driven temporal reusing method to accelerate real-time ray tracing. We first build a just noticeable difference (JND) model to represent the uncertainty of ray samples and image-space masking effects. Then, we expand the shading gradient through gradient max-pooling and gradient filtering to enlarge the visual receptive field. Finally, we dynamically optimize reusable time segments to improve the accuracy of temporal reusing. Compared with Monte Carlo ray tracing, our algorithm enhances frames per second (fps) by 1.93× to 2.96× at 8 to 16 samples per pixel, significantly accelerating the Monte Carlo ray tracing process while maintaining visual quality.
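The gradient-expansion step described above can be illustrated with a minimal sketch: max-pooling spreads each pixel's shading-gradient magnitude over a neighborhood, and a subsequent filter smooths the result so that reuse decisions vary gradually across the image. The window sizes, the choice of a box filter, and the function name below are assumptions for illustration only; the paper's exact kernels and filtering scheme are not given here.

```python
import numpy as np

def expand_shading_gradient(grad, pool_size=3, filter_size=3):
    """Sketch of enlarging the effective receptive field of a per-pixel
    shading gradient: dilation-style max-pooling followed by box filtering.
    (Hypothetical helper; kernel sizes are illustrative assumptions.)"""
    h, w = grad.shape

    # Max-pooling: each pixel takes the maximum gradient in its window,
    # spreading strong edges into their neighborhood.
    pad = pool_size // 2
    padded = np.pad(grad, pad, mode="edge")
    pooled = np.empty_like(grad)
    for y in range(h):
        for x in range(w):
            pooled[y, x] = padded[y:y + pool_size, x:x + pool_size].max()

    # Box filtering: smooth the pooled gradient so the expanded field
    # falls off gradually rather than with a hard cutoff.
    fpad = filter_size // 2
    fpadded = np.pad(pooled, fpad, mode="edge")
    filtered = np.empty_like(grad)
    for y in range(h):
        for x in range(w):
            filtered[y, x] = fpadded[y:y + filter_size, x:x + filter_size].mean()
    return filtered
```

For example, a single high-gradient pixel in an otherwise flat 5×5 region is spread to full strength over its 3×3 neighborhood by the pooling pass and then feathered outward by the filter, which is the qualitative behavior the expansion step needs.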

Cite this article

Zhongye Shen, Chunyi Chen, Weixun Yao, Haiyang Yu, Jun Peng. Perception-entropy-driven temporal reusing for real-time ray tracing. Optoelectronics Letters, 2025, 21(6): 378-384. DOI: 10.1007/s11801-025-4129-2



RIGHTS & PERMISSIONS

Tianjin University of Technology
