"dataset distillation" Papers

29 papers found

Adaptive Dataset Quantization

Muquan Li, Dongyang Zhang, Qiang Dong et al.

AAAI 2025 · arXiv:2412.16895
3 citations

Beyond Modality Collapse: Representation Blending for Multimodal Dataset Distillation

Xin Zhang, Ziruo Zhang, Jiawei Du et al.

NeurIPS 2025 · arXiv:2505.14705
3 citations

Beyond Random: Automatic Inner-loop Optimization in Dataset Distillation

Muquan Li, Hang Gou, Dongyang Zhang et al.

NeurIPS 2025 · arXiv:2510.04838
1 citation

Boost Self-Supervised Dataset Distillation via Parameterization, Predefined Augmentation, and Approximation

Sheng-Feng Yu, Jia-Jiun Yao, Wei-Chen Chiu

ICLR 2025 · arXiv:2507.21455
1 citation

Dataset Distillation for Pre-Trained Self-Supervised Vision Models

George Cazenavette, Antonio Torralba, Vincent Sitzmann

NeurIPS 2025 · arXiv:2511.16674
1 citation

Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-training of Deep Networks

Siddharth Joshi, Jiayi Ni, Baharan Mirzasoleiman

ICLR 2025 · arXiv:2410.02116
4 citations

Dataset Distillation via Vision-Language Category Prototype

Yawen Zou, Guang Li, Duo Su et al.

ICCV 2025 (highlight) · arXiv:2506.23580
3 citations

DELT: A Simple Diversity-driven EarlyLate Training for Dataset Distillation

Zhiqiang Shen, Ammar Sherif, Zeyuan Yin et al.

CVPR 2025 · arXiv:2411.19946
11 citations

Distilling Dataset into Neural Field

Donghyeok Shin, HeeSun Bae, Gyuwon Sim et al.

ICLR 2025 · arXiv:2503.04835
4 citations

Does Training with Synthetic Data Truly Protect Privacy?

Yunpeng Zhao, Jie Zhang

ICLR 2025 · arXiv:2502.12976
8 citations

Efficient Multimodal Dataset Distillation via Generative Models

Zhenghao Zhao, Haoxuan Wang, Junyi Wu et al.

NeurIPS 2025 · arXiv:2509.15472
2 citations

Enhancing Dataset Distillation via Non-Critical Region Refinement

Minh-Tuan Tran, Trung Le, Xuan-May Le et al.

CVPR 2025 · arXiv:2503.18267
4 citations

Flowing Datasets with Wasserstein over Wasserstein Gradient Flows

Clément Bonet, Christophe Vauthier, Anna Korba

ICML 2025 (oral) · arXiv:2506.07534
7 citations

GIFT: Unlocking Full Potential of Labels in Distilled Dataset at Near-zero Cost

Xinyi Shang, Peng Sun, Tao Lin

ICLR 2025 · arXiv:2405.14736
9 citations

Group Distributionally Robust Dataset Distillation with Risk Minimization

Saeed Vahidian, Mingyu Wang, Jianyang Gu et al.

ICLR 2025 · arXiv:2402.04676
9 citations

Heavy Labels Out! Dataset Distillation with Label Space Lightening

Ruonan Yu, Songhua Liu, Zigeng Chen et al.

ICCV 2025 · arXiv:2408.08201
3 citations

Hierarchical Features Matter: A Deep Exploration of Progressive Parameterization Method for Dataset Distillation

Xinhao Zhong, Hao Fang, Bin Chen et al.

CVPR 2025 · arXiv:2406.05704
3 citations

Hyperbolic Dataset Distillation

Wenyuan Li, Guang Li, Keisuke Maeda et al.

NeurIPS 2025 · arXiv:2505.24623
7 citations

Influence-Guided Diffusion for Dataset Distillation

Mingyang Chen, Jiawei Du, Bo Huang et al.

ICLR 2025
19 citations

Towards Adversarially Robust Dataset Distillation by Curvature Regularization

Eric Xue, Yijiang Li, Haoyang Liu et al.

AAAI 2025 · arXiv:2403.10045
18 citations

Towards Stable and Storage-efficient Dataset Distillation: Matching Convexified Trajectory

Wenliang Zhong, Haoyu Tang, Qinghai Zheng et al.

CVPR 2025 · arXiv:2406.19827
8 citations

Data-to-Model Distillation: Data-Efficient Learning Framework

Ahmad Sajedi, Samir Khaki, Lucy Z. Liu et al.

ECCV 2024 · arXiv:2411.12841
4 citations

Distill Gold from Massive Ores: Bi-level Data Pruning towards Efficient Dataset Distillation

Yue Xu, Yong-Lu Li, Kaitong Cui et al.

ECCV 2024 · arXiv:2305.18381
8 citations

Large Scale Dataset Distillation with Domain Shift

Noel Loo, Alaa Maalouf, Ramin Hasani et al.

ICML 2024

Low-Rank Similarity Mining for Multimodal Dataset Distillation

Yue Xu, Zhilin Lin, Yusong Qiu et al.

ICML 2024 · arXiv:2406.03793
11 citations

SelMatch: Effectively Scaling Up Dataset Distillation via Selection-Based Initialization and Partial Updates by Trajectory Matching

Yongmin Lee, Hye Won Chung

ICML 2024 · arXiv:2406.18561
18 citations

Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching

Ruonan Yu, Songhua Liu, Jingwen Ye et al.

ECCV 2024 · arXiv:2410.07579
13 citations

Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents

Yuqi Jia, Saeed Vahidian, Jingwei Sun et al.

ECCV 2024 · arXiv:2312.01537
18 citations

What is Dataset Distillation Learning?

William Yang, Ye Zhu, Zhiwei Deng et al.

ICML 2024 · arXiv:2406.04284
13 citations