"generative modeling" Papers

97 papers found • Page 1 of 2

A Black-Box Debiasing Framework for Conditional Sampling

Han Cui, Jingbo Liu

NeurIPS 2025 · arXiv:2510.11071

Adaptive Non-Uniform Timestep Sampling for Accelerating Diffusion Model Training

Myunsoo Kim, Donghyeon Ki, Seong-Woong Shim et al.

CVPR 2025 · arXiv:2411.09998
5 citations

Aether: Geometric-Aware Unified World Modeling

Haoyi Zhu, Yifan Wang, Jianjun Zhou et al.

ICCV 2025 · arXiv:2503.18945
50 citations

Ambient Diffusion Omni: Training Good Models with Bad Data

Giannis Daras, Adrian Rodriguez-Munoz, Adam Klivans et al.

NeurIPS 2025 (spotlight) · arXiv:2506.10038
13 citations

A solvable model of learning generative diffusion: theory and insights

Hugo Cui, Cengiz Pehlevan, Yue Lu

NeurIPS 2025 · arXiv:2501.03937
5 citations

Assessing the quality of denoising diffusion models in Wasserstein distance: noisy score and optimal bounds

Vahan Arsenyan, Elen Vardanyan, Arnak Dalalyan

NeurIPS 2025 · arXiv:2506.09681

CDFlow: Building Invertible Layers with Circulant and Diagonal Matrices

Xuchen Feng, Siyu Liao

NeurIPS 2025 · arXiv:2510.25323

Composition and Alignment of Diffusion Models using Constrained Learning

Shervin Khalafi, Ignacio Hounie, Dongsheng Ding et al.

NeurIPS 2025 · arXiv:2508.19104
2 citations

Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering

Klaus-Rudolf Kladny, Bernhard Schölkopf, Michael Muehlebach

ICLR 2025 · arXiv:2410.01660
5 citations

Constrained Generative Modeling with Manually Bridged Diffusion Models

Saeid Naderiparizi, Xiaoxuan Liang, Berend Zwartsenberg et al.

AAAI 2025 · arXiv:2502.20371
6 citations

Contextual Thompson Sampling via Generation of Missing Data

Kelly W Zhang, Tianhui Cai, Hongseok Namkoong et al.

NeurIPS 2025 · arXiv:2502.07064
2 citations

Continuous Diffusion for Mixed-Type Tabular Data

Markus Mueller, Kathrin Gruber, Dennis Fok

ICLR 2025 · arXiv:2312.10431
10 citations

Cross-fluctuation phase transitions reveal sampling dynamics in diffusion models

Sai Niranjan Ramachandran, Manish Krishan Lal, Suvrit Sra

NeurIPS 2025 · arXiv:2511.00124
1 citation

Deep MMD Gradient Flow without adversarial training

Alexandre Galashov, Valentin De Bortoli, Arthur Gretton

ICLR 2025 · arXiv:2405.06780
10 citations

Denoising with a Joint-Embedding Predictive Architecture

Chen Dengsheng, Jie Hu, Xiaoming Wei et al.

ICLR 2025 · arXiv:2410.03755
5 citations

Diffusion Bridge AutoEncoders for Unsupervised Representation Learning

Yeongmin Kim, Kwanghyeon Lee, Minsang Park et al.

ICLR 2025 · arXiv:2405.17111
6 citations

Diffusion Models and Gaussian Flow Matching: Two Sides of the Same Coin

Ruiqi Gao, Emiel Hoogeboom, Jonathan Heek et al.

ICLR 2025

DistillDrive: End-to-End Multi-Mode Autonomous Driving Distillation by Isomorphic Hetero-Source Planning Model

Rui Yu, Xianghang Zhang, Runkai Zhao et al.

ICCV 2025 · arXiv:2508.05402
4 citations

Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax

Ivan Butakov, Alexander Semenenko, Alexander Tolmachev et al.

ICLR 2025 · arXiv:2410.06993
3 citations

Energy Matching: Unifying Flow Matching and Energy-Based Models for Generative Modeling

Michal Balcerak, Tamaz Amiranashvili, Antonio Terpin et al.

NeurIPS 2025 · arXiv:2504.10612
12 citations

Energy-Weighted Flow Matching for Offline Reinforcement Learning

Shiyuan Zhang, Weitong Zhang, Quanquan Gu

ICLR 2025 · arXiv:2503.04975
29 citations

Exponential Convergence Guarantees for Iterative Markovian Fitting

Marta Gentiloni Silveri, Giovanni Conforti, Alain Durmus

NeurIPS 2025 · arXiv:2510.20871

Fast Solvers for Discrete Diffusion Models: Theory and Applications of High-Order Algorithms

Yinuo Ren, Haoxuan Chen, Yuchen Zhu et al.

NeurIPS 2025 · arXiv:2502.00234
32 citations

FlowDAS: A Stochastic Interpolant-based Framework for Data Assimilation

Siyi Chen, Yixuan Jia, Qing Qu et al.

NeurIPS 2025 · arXiv:2501.16642
5 citations

Flow matching achieves almost minimax optimal convergence

Kenji Fukumizu, Taiji Suzuki, Noboru Isobe et al.

ICLR 2025 · arXiv:2405.20879
13 citations

Flow to the Mode: Mode-Seeking Diffusion Autoencoders for State-of-the-Art Image Tokenization

Kyle Sargent, Kyle Hsu, Justin Johnson et al.

ICCV 2025 · arXiv:2503.11056
27 citations

FocalCodec: Low-Bitrate Speech Coding via Focal Modulation Networks

Luca Della Libera, Francesco Paissan, Cem Subakan et al.

NeurIPS 2025 · arXiv:2502.04465
13 citations

Follow the Energy, Find the Path: Riemannian Metrics from Energy-Based Models

Louis Bethune, David Vigouroux, Yilun Du et al.

NeurIPS 2025 · arXiv:2505.18230
2 citations

Generating Physical Dynamics under Priors

Zihan Zhou, Xiaoxue Wang, Tianshu Yu

ICLR 2025 · arXiv:2409.00730
5 citations

Generator Matching: Generative modeling with arbitrary Markov processes

Peter Holderrieth, Marton Havasi, Jason Yim et al.

ICLR 2025 · arXiv:2410.20587
46 citations

Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped Noise

Ryan Burgert, Yuancheng Xu, Wenqi Xian et al.

CVPR 2025 · arXiv:2501.08331
62 citations

High-Order Flow Matching: Unified Framework and Sharp Statistical Rates

Maojiang Su, Jerry Yao-Chieh Hu, Yi-Chen Lee et al.

NeurIPS 2025

Improving Neural Optimal Transport via Displacement Interpolation

Jaemoo Choi, Yongxin Chen, Jaewoong Choi

ICLR 2025 · arXiv:2410.03783
3 citations

Improving Rectified Flow with Boundary Conditions

Xixi Hu, Runlong Liao, Bo Liu et al.

ICCV 2025 · arXiv:2506.15864
2 citations

Informed Correctors for Discrete Diffusion Models

Yixiu Zhao, Jiaxin Shi, Feng Chen et al.

NeurIPS 2025 · arXiv:2407.21243
37 citations

Integrating Protein Dynamics into Structure-Based Drug Design via Full-Atom Stochastic Flows

Xiangxin Zhou, Yi Xiao, Haowei Lin et al.

ICLR 2025 · arXiv:2503.03989
2 citations

ItDPDM: Information-Theoretic Discrete Poisson Diffusion Model

Sagnik Bhattacharya, Abhiram Gorle, Ahsan Bilal et al.

NeurIPS 2025 · arXiv:2505.05082
1 citation

LaGeM: A Large Geometry Model for 3D Representation Learning and Diffusion

Biao Zhang, Peter Wonka

ICLR 2025 · arXiv:2410.01295
11 citations

Latent Zoning Network: A Unified Principle for Generative Modeling, Representation Learning, and Classification

Zinan Lin, Enshu Liu, Xuefei Ning et al.

NeurIPS 2025 · arXiv:2509.15591

MET3R: Measuring Multi-View Consistency in Generated Images

Mohammad Asim, Christopher Wewer, Thomas Wimmer et al.

CVPR 2025 · arXiv:2501.06336
44 citations

MOFFlow: Flow Matching for Structure Prediction of Metal-Organic Frameworks

Nayoung Kim, Seongsu Kim, Minsu Kim et al.

ICLR 2025 · arXiv:2410.17270
5 citations

Moment- and Power-Spectrum-Based Gaussianity Regularization for Text-to-Image Models

Jisung Hwang, Jaihoon Kim, Minhyuk Sung

NeurIPS 2025 · arXiv:2509.07027
1 citation

Multi-Modal and Multi-Attribute Generation of Single Cells with CFGen

Alessandro Palma, Till Richter, Hanyi Zhang et al.

ICLR 2025 · arXiv:2407.11734
9 citations

On the Feature Learning in Diffusion Models

Andi Han, Wei Huang, Yuan Cao et al.

ICLR 2025 · arXiv:2412.01021
14 citations

Physics-Informed Diffusion Models

Jan-Hendrik Bastek, WaiChing Sun, Dennis Kochmann

ICLR 2025 · arXiv:2403.14404
57 citations

Principled Long-Tailed Generative Modeling via Diffusion Models

Pranoy Das, Kexin Fu, Abolfazl Hashemi et al.

NeurIPS 2025 (oral)

Progressive Compression with Universally Quantized Diffusion Models

Yibo Yang, Justus Will, Stephan Mandt

ICLR 2025 · arXiv:2412.10935
5 citations

Proper Hölder-Kullback Dirichlet Diffusion: A Framework for High Dimensional Generative Modeling

Wanpeng Zhang, Yuhao Fang, Xihang Qiu et al.

NeurIPS 2025

REGEN: Learning Compact Video Embedding with (Re-)Generative Decoder

Yitian Zhang, Long Mai, Aniruddha Mahapatra et al.

ICCV 2025 · arXiv:2503.08665
1 citation

Riemannian Flow Matching for Brain Connectivity Matrices via Pullback Geometry

Antoine Collas, Ce Ju, Nicolas Salvy et al.

NeurIPS 2025 · arXiv:2505.18193
4 citations