"neural radiance fields" Papers

189 papers found • Page 2 of 4

NeuralSVG: An Implicit Representation for Text-to-Vector Generation

Sagi Polaczek, Yuval Alaluf, Elad Richardson et al.

ICCV 2025 • arXiv:2501.03992 • 9 citations

NeuroRenderedFake: A Challenging Benchmark to Detect Fake Images Generated by Advanced Neural Rendering Methods

Chengdong Dong, B. V. K. Vijaya Kumar, Zhenyu Zhou et al.

NeurIPS 2025

Optimize the Unseen - Fast NeRF Cleanup with Free Space Prior

Leo Segre, Shai Avidan

NeurIPS 2025 • arXiv:2412.12772 • 1 citation

PBR-NeRF: Inverse Rendering with Physics-Based Neural Fields

Sean Wu, Shamik Basu, Tim Broedermann et al.

CVPR 2025 • arXiv:2412.09680 • 7 citations

Perspective-aware 3D Gaussian Inpainting with Multi-view Consistency

Yuxin Cheng, Binxiao Huang, Taiqiang Wu et al.

ICCV 2025 • arXiv:2510.10993 • 1 citation

Pippo: High-Resolution Multi-View Humans from a Single Image

Yash Kant, Ethan Weber, Jin Kyu Kim et al.

CVPR 2025 (highlight) • arXiv:2502.07785 • 14 citations

Pragmatist: Multiview Conditional Diffusion Models for High-Fidelity 3D Reconstruction from Unposed Sparse Views

Songchun Zhang, Chunhui Zhao

AAAI 2025 • arXiv:2412.08412

Radiance Fields in XR: A Survey on How Radiance Fields are Envisioned and Addressed for XR Research

Ke Li, Mana Masuda, Susanne Schmidt et al.

ISMAR 2025 • arXiv:2508.04326 • 2 citations

Reflective Gaussian Splatting

Yuxuan Yao, Zixuan Zeng, Chun Gu et al.

ICLR 2025 • arXiv:2412.19282 • 21 citations

Retri3D: 3D Neural Graphics Representation Retrieval

Yushi Guan, Daniel Kwan, Jean Dandurand et al.

ICLR 2025

RGB-Only Supervised Camera Parameter Optimization in Dynamic Scenes

Fang Li, Hao Zhang, Narendra Ahuja

NeurIPS 2025 (spotlight) • arXiv:2509.15123

RNG: Relightable Neural Gaussians

Jiahui Fan, Fujun Luan, Jian Yang et al.

CVPR 2025 • arXiv:2409.19702 • 9 citations

ROGR: Relightable 3D Objects using Generative Relighting

Jiapeng Tang, Matthew Levine, Dor Verbin et al.

NeurIPS 2025 (spotlight) • arXiv:2510.03163 • 2 citations

Sparse2DGS: Geometry-Prioritized Gaussian Splatting for Surface Reconstruction from Sparse Views

Jiang Wu, Rui Li, Yu Zhu et al.

CVPR 2025 • arXiv:2504.20378 • 8 citations

Spatial Annealing for Efficient Few-shot Neural Rendering

Yuru Xiao, Deming Zhai, Wenbo Zhao et al.

AAAI 2025 • arXiv:2406.07828 • 3 citations

Spatially-aware Weights Tokenization for NeRF-Language Models

Andrea Amaduzzi, Pierluigi Zama Ramirez, Giuseppe Lisanti et al.

NeurIPS 2025

Spik-NeRF: Spiking Neural Networks for Neural Radiance Fields

Gang Wan, Qinlong Lan, Zihan Li et al.

NeurIPS 2025

SVG-IR: Spatially-Varying Gaussian Splatting for Inverse Rendering

Hanxiao Sun, Yupeng Gao, Jin Xie et al.

CVPR 2025 • arXiv:2504.06815 • 4 citations

ThermalGaussian: Thermal 3D Gaussian Splatting

Rongfeng Lu, Hangyu Chen, Zunjie Zhu et al.

ICLR 2025 • arXiv:2409.07200 • 10 citations

Towards Realistic Example-based Modeling via 3D Gaussian Stitching

Xinyu Gao, Ziyi Yang, Bingchen Gong et al.

CVPR 2025 • arXiv:2408.15708 • 6 citations

Uncertainty modeling for fine-tuned implicit functions

Anna Susmelj, Mael Macuglia, Natasa Tagasovska et al.

ICLR 2025 • arXiv:2406.12082 • 2 citations

Universal Few-shot Spatial Control for Diffusion Models

Kiet Nguyen, Chanhyuk Lee, Donggyun Kim et al.

NeurIPS 2025 • arXiv:2509.07530

UrbanCAD: Towards Highly Controllable and Photorealistic 3D Vehicles for Urban Scene Simulation

Yichong Lu, Yichi Cai, Shangzhan Zhang et al.

CVPR 2025 • arXiv:2411.19292 • 3 citations

USP-Gaussian: Unifying Spike-based Image Reconstruction, Pose Correction and Gaussian Splatting

Kang Chen, Jiyuan Zhang, Zecheng Hao et al.

CVPR 2025 (highlight) • arXiv:2411.10504 • 4 citations

VideoScene: Distilling Video Diffusion Model to Generate 3D Scenes in One Step

Hanyang Wang, Fangfu Liu, Jiawei Chi et al.

CVPR 2025 (highlight) • arXiv:2504.01956 • 11 citations

ViPOcc: Leveraging Visual Priors from Vision Foundation Models for Single-View 3D Occupancy Prediction

Yi Feng, Yu Han, Xijing Zhang et al.

AAAI 2025 • arXiv:2412.11210 • 7 citations

Vivid4D: Improving 4D Reconstruction from Monocular Video by Video Inpainting

Jiaxin Huang, Sheng Miao, Bangbang Yang et al.

ICCV 2025 • arXiv:2504.11092 • 3 citations

ZIM: Zero-Shot Image Matting for Anything

Beomyoung Kim, Chanyong Shin, Joonhyun Jeong et al.

ICCV 2025 (highlight) • arXiv:2411.00626 • 8 citations

3D Geometry-Aware Deformable Gaussian Splatting for Dynamic View Synthesis

Zhicheng Lu, Xiang Guo, Le Hui et al.

CVPR 2024 • arXiv:2404.06270 • 102 citations

AE-NeRF: Audio Enhanced Neural Radiance Field for Few Shot Talking Head Synthesis

Dongze Li, Kang Zhao, Wei Wang et al.

AAAI 2024 • arXiv:2312.10921 • 23 citations

Aerial Lifting: Neural Urban Semantic and Building Instance Lifting from Aerial Imagery

Yuqi Zhang, Guanying Chen, Jiaxing Chen et al.

CVPR 2024 • arXiv:2403.11812 • 5 citations

AltNeRF: Learning Robust Neural Radiance Field via Alternating Depth-Pose Optimization

Kun Wang, Zhiqiang Yan, Huang Tian et al.

AAAI 2024 • arXiv:2308.10001 • 5 citations

A Probability-guided Sampler for Neural Implicit Surface Rendering

Gonçalo José Dias Pais, Valter André Piedade, Moitreya Chatterjee et al.

ECCV 2024 • arXiv:2506.08619 • 1 citation

BerfScene: Bev-conditioned Equivariant Radiance Fields for Infinite 3D Scene Generation

Qihang Zhang, Yinghao Xu, Yujun Shen et al.

CVPR 2024 • arXiv:2312.02136 • 5 citations

BLiRF: Bandlimited Radiance Fields for Dynamic Scene Modeling

Sameera Ramasinghe, Violetta Shevchenko, Gil Avraham et al.

AAAI 2024 • arXiv:2302.13543 • 8 citations

Boost Your NeRF: A Model-Agnostic Mixture of Experts Framework for High Quality and Efficient Rendering

Francesco Di Sario, Riccardo Renzulli, Marco Grangetto et al.

ECCV 2024 • arXiv:2407.10389 • 5 citations

CARFF: Conditional Auto-encoded Radiance Field for 3D Scene Forecasting

Jiezhi Yang, Khushi P Desai, Charles Packer et al.

ECCV 2024 • arXiv:2401.18075 • 2 citations

CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning

Qingsong Yan, Qiang Wang, Kaiyong Zhao et al.

AAAI 2024 • arXiv:2312.08760 • 12 citations

CG-SLAM: Efficient Dense RGB-D SLAM in a Consistent Uncertainty-aware 3D Gaussian Field

Jiarui Hu, Xianhao Chen, Boyin Feng et al.

ECCV 2024 • arXiv:2403.16095 • 80 citations

City-on-Web: Real-time Neural Rendering of Large-scale Scenes on the Web

Kaiwen Song, Xiaoyi Zeng, Chenqu Ren et al.

ECCV 2024 • arXiv:2312.16457 • 17 citations

CoherentGS: Sparse Novel View Synthesis with Coherent 3D Gaussians

Avinash Paliwal, Wei Ye, Jinhui Xiong et al.

ECCV 2024 • arXiv:2403.19495 • 62 citations

ColNeRF: Collaboration for Generalizable Sparse Input Neural Radiance Field

Zhangkai Ni, Peiqi Yang, Wenhan Yang et al.

AAAI 2024 • arXiv:2312.09095 • 15 citations

Colorizing Monochromatic Radiance Fields

Yean Cheng, Renjie Wan, Shuchen Weng et al.

AAAI 2024 • arXiv:2402.12184 • 8 citations

DATENeRF: Depth-Aware Text-based Editing of NeRFs

Sara Rojas Martinez, Julien Philip, Kai Zhang et al.

ECCV 2024 • arXiv:2404.04526 • 5 citations

Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions

Weng Fei Low, Gim Hee Lee

ECCV 2024 • arXiv:2409.17988 • 11 citations

Deblurring 3D Gaussian Splatting

Byeonghyeon Lee, Howoong Lee, Xiangyu Sun et al.

ECCV 2024 • arXiv:2401.00834 • 79 citations

DecentNeRFs: Decentralized Neural Radiance Fields from Crowdsourced Images

Zaid Tasneem, Akshat Dave, Abhishek Singh et al.

ECCV 2024 • arXiv:2403.13199 • 4 citations

Depth-guided NeRF Training via Earth Mover’s Distance

Anita Rau, Josiah Aklilu, Floyd C Holsinger et al.

ECCV 2024 • arXiv:2403.13206 • 2 citations

Disentangled 3D Scene Generation with Layout Learning

Dave Epstein, Ben Poole, Ben Mildenhall et al.

ICML 2024 • arXiv:2402.16936 • 31 citations

Distractor-Free Novel View Synthesis via Exploiting Memorization Effect in Optimization

Yukun Wang, Kunhong Li, Minglin Chen et al.

ECCV 2024 • 1 citation