A survey on 3D editing based on NeRF and 3DGS

Chen-Yang ZHU , Xin-Yao LIU , Kai XU , Ren-Jiao YI

Front. Comput. Sci., 2026, Vol. 20, Issue (4): 2004701. DOI: 10.1007/s11704-025-41176-9

Excellent Young Computer Scientists Forum
REVIEW ARTICLE

Abstract

In recent years, 3D editing has become a significant research topic, primarily due to its ability to manipulate 3D assets in ways that meet the growing demand for personalized customization. The advent of radiance field-based methods, exemplified by pioneering frameworks such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), represents a pivotal innovation in scene representation and novel view synthesis, greatly enhancing both the effectiveness and the efficiency of 3D editing. This survey provides a comprehensive overview of current advances in 3D editing based on NeRF and 3DGS, systematically categorizing existing methods by editing task while analyzing open challenges and promising research directions. We aim to offer a valuable resource for researchers in the field, encouraging ideas that may drive further progress in 3D editing.
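As a minimal illustration of the radiance-field formulation that underlies NeRF (a generic sketch of the standard discrete volume-rendering rule, not code from any method surveyed here), a ray's color is obtained by alpha-compositing the density and color sampled along it:

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Discrete volume rendering (alpha compositing) along one ray.

    densities: (N,) non-negative sigma at each sample point
    colors:    (N, 3) RGB emitted at each sample point
    deltas:    (N,) distance between adjacent samples
    Returns the accumulated RGB value for the ray.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance up to sample i: T_i = prod_{j<i} (1 - alpha_j)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Each sample contributes weight T_i * alpha_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# A nearly opaque red sample occludes the green sample behind it.
rgb = composite_ray(
    densities=np.array([50.0, 50.0]),
    colors=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
    deltas=np.array([0.1, 0.1]),
)
```

3DGS rasterizes the same compositing rule over projected Gaussians instead of ray samples, which is one reason many editing operations transfer between the two representations.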


Keywords

3D editing / NeRF / 3DGS




RIGHTS & PERMISSIONS

© The Author(s) 2025. This article is published with open access at link.springer.com and journal.hep.com.cn.
