"knowledge distillation" Papers

210 papers found • Page 1 of 5

Accessing Vision Foundation Models via ImageNet-1K

Yitian Zhang, Xu Ma, Yue Bai et al.

ICLR 2025 · arXiv:2407.10366 · 8 citations

Active Data Curation Effectively Distills Large-Scale Multimodal Models

Vishaal Udandarao, Nikhil Parthasarathy, Muhammad Ferjad Naeem et al.

CVPR 2025 · arXiv:2411.18674 · 15 citations

ADAPT: Attentive Self-Distillation and Dual-Decoder Prediction Fusion for Continual Panoptic Segmentation

Ze Yang, Shichao Dong, Ruibo Li et al.

ICLR 2025

Advancing Multiple Instance Learning with Continual Learning for Whole Slide Imaging

Xianrui Li, Yufei Cui, Jun Li et al.

CVPR 2025 (highlight) · arXiv:2505.10649

Advantage-Guided Distillation for Preference Alignment in Small Language Models

Shiping Gao, Fanqi Wan, Jiajian Guo et al.

ICLR 2025 · arXiv:2502.17927 · 4 citations

Adversarial Reconstruction Feedback for Robust Fine-grained Generalization

Shijie Wang, Jian Shi, Haojie Li

ICCV 2025 · arXiv:2507.21742

A Simple yet Effective $\Delta\Delta G$ Predictor is An Unsupervised Antibody Optimizer and Explainer

Lirong Wu, Yunfan Liu, Haitao Lin et al.

ICLR 2025

ATLAS: Autoformalizing Theorems through Lifting, Augmentation, and Synthesis of Data

Xiaoyang Liu, Kangjie Bao, Jiashuo Zhang et al.

NeurIPS 2025 · arXiv:2502.05567 · 13 citations

AugKD: Ingenious Augmentations Empower Knowledge Distillation for Image Super-Resolution

Yun Zhang, Wei Li, Simiao Li et al.

ICLR 2025 · 3 citations

Better Estimation of the Kullback–Leibler Divergence Between Language Models

Afra Amini, Tim Vieira, Ryan Cotterell

NeurIPS 2025 · arXiv:2504.10637 · 4 citations

BiM-VFI: Bidirectional Motion Field-Guided Frame Interpolation for Video with Non-uniform Motions

Wonyong Seo, Jihyong Oh, Munchurl Kim

CVPR 2025 · arXiv:2412.11365 · 4 citations

CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning

Jiangpeng He, Zhihao Duan, Fengqing Zhu

CVPR 2025 · arXiv:2505.24816 · 8 citations

Closed-Loop Transfer for Weakly-supervised Affordance Grounding

Jiajin Tang, Zhengxuan Wei, Ge Zheng et al.

ICCV 2025 · arXiv:2510.17384 · 2 citations

CoMBO: Conflict Mitigation via Branched Optimization for Class Incremental Segmentation

Kai Fang, Anqi Zhang, Guangyu Gao et al.

CVPR 2025 · arXiv:2504.04156 · 5 citations

Continuous Concepts Removal in Text-to-image Diffusion Models

Tingxu Han, Weisong Sun, Yanrong Hu et al.

NeurIPS 2025 · arXiv:2412.00580 · 3 citations

Cross-Lingual Text-Rich Visual Comprehension: An Information Theory Perspective

Xinmiao Yu, Xiaocheng Feng, Yun Li et al.

AAAI 2025 · arXiv:2412.17787

CustomKD: Customizing Large Vision Foundation for Edge Model Improvement via Knowledge Distillation

Jungsoo Lee, Debasmit Das, Munawar Hayat et al.

CVPR 2025 · arXiv:2503.18244 · 4 citations

Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-training of Deep Networks

Siddharth Joshi, Jiayi Ni, Baharan Mirzasoleiman

ICLR 2025 · arXiv:2410.02116 · 4 citations

DCA: Dividing and Conquering Amnesia in Incremental Object Detection

Aoting Zhang, Dongbao Yang, Chang Liu et al.

AAAI 2025 · arXiv:2503.15295 · 2 citations

Dense2MoE: Restructuring Diffusion Transformer to MoE for Efficient Text-to-Image Generation

Youwei Zheng, Yuxi Ren, Xin Xia et al.

ICCV 2025 · arXiv:2510.09094 · 5 citations

Distillation Robustifies Unlearning

Bruce W. Lee, Addie Foote, Alex Infanger et al.

NeurIPS 2025 (spotlight) · arXiv:2506.06278 · 6 citations

DistillDrive: End-to-End Multi-Mode Autonomous Driving Distillation by Isomorphic Hetero-Source Planning Model

Rui Yu, Xianghang Zhang, Runkai Zhao et al.

ICCV 2025 · arXiv:2508.05402 · 4 citations

Distilled Prompt Learning for Incomplete Multimodal Survival Prediction

Yingxue Xu, Fengtao Zhou, Chenyu Zhao et al.

CVPR 2025 · arXiv:2503.01653 · 6 citations

DistillHGNN: A Knowledge Distillation Approach for High-Speed Hypergraph Neural Networks

Saman Forouzandeh, Parham Moradi Dowlatabadi, Mahdi Jalili

ICLR 2025 · 1 citation

Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation

Yanglin Huang, Kai Hu, Yuan Zhang et al.

AAAI 2025 · arXiv:2504.07691 · 1 citation

Distilling Monocular Foundation Model for Fine-grained Depth Completion

Yingping Liang, Yutao Hu, Wenqi Shao et al.

CVPR 2025 · arXiv:2503.16970 · 9 citations

Distilling Multi-modal Large Language Models for Autonomous Driving

Deepti Hegde, Rajeev Yasarla, Hong Cai et al.

CVPR 2025 · arXiv:2501.09757 · 29 citations

Distilling Spatially-Heterogeneous Distortion Perception for Blind Image Quality Assessment

Xudong Li, Wenjie Nie, Yan Zhang et al.

CVPR 2025 · 3 citations

DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture

Qianlong Xiang, Miao Zhang, Yuzhang Shang et al.

CVPR 2025 · arXiv:2409.03550 · 19 citations

DKDR: Dynamic Knowledge Distillation for Reliability in Federated Learning

Yueyang Yuan, Wenke Huang, Guancheng Wan et al.

NeurIPS 2025

EA-KD: Entropy-based Adaptive Knowledge Distillation

Chi-Ping Su, Ching-Hsun Tseng, Bin Pu et al.

ICCV 2025 · arXiv:2311.13621 · 3 citations

EBBS: An Ensemble with Bi-Level Beam Search for Zero-Shot Machine Translation

Yuqiao Wen, Behzad Shayegh, Chenyang Huang et al.

AAAI 2025 · arXiv:2403.00144 · 8 citations

EdgeTAM: On-Device Track Anything Model

Chong Zhou, Chenchen Zhu, Yunyang Xiong et al.

CVPR 2025 · arXiv:2501.07256 · 9 citations

EditAR: Unified Conditional Generation with Autoregressive Models

Jiteng Mu, Nuno Vasconcelos, Xiaolong Wang

CVPR 2025 · arXiv:2501.04699 · 24 citations

Efficient ANN-Guided Distillation: Aligning Rate-based Features of Spiking Neural Networks through Hybrid Block-wise Replacement

Shu Yang, Chengting Yu, Lei Liu et al.

CVPR 2025 · arXiv:2503.16572 · 5 citations

Enhanced Expert Merging for Mixture-of-Experts in Graph Foundation Models

Lei Liu, Xingyu Xia, Qianqian Xie et al.

NeurIPS 2025

Every SAM Drop Counts: Embracing Semantic Priors for Multi-Modality Image Fusion and Beyond

Guanyao Wu, Haoyu Liu, Hongming Fu et al.

CVPR 2025 · arXiv:2503.01210 · 26 citations

Evidential Knowledge Distillation

Liangyu Xiang, Junyu Gao, Changsheng Xu

ICCV 2025 · arXiv:2507.18366 · 1 citation

Exploring Vacant Classes in Label-Skewed Federated Learning

Kuangpu Guo, Yuhe Ding, Jian Liang et al.

AAAI 2025 · arXiv:2401.02329 · 12 citations

Few-Shot Knowledge Distillation of LLMs With Counterfactual Explanations

Faisal Hamman, Pasan Dissanayake, Yanjun Fu et al.

NeurIPS 2025 · arXiv:2510.21631 · 1 citation

Fin3R: Fine-tuning Feed-forward 3D Reconstruction Models via Monocular Knowledge Distillation

Weining Ren, Hongjun Wang, Xiao Tan et al.

NeurIPS 2025 · arXiv:2511.22429

Frequency-Aligned Knowledge Distillation for Lightweight Spatiotemporal Forecasting

Yuqi Li, Chuanguang Yang, Hansheng Zeng et al.

ICCV 2025 · arXiv:2507.02939 · 38 citations

From Models to Microtheories: Distilling a Model's Topical Knowledge for Grounded Question-Answering

Nathaniel Weir, Bhavana Dalvi Mishra, Orion Weller et al.

ICLR 2025 · arXiv:2412.17701 · 3 citations

General Compression Framework for Efficient Transformer Object Tracking

Lingyi Hong, Jinglun Li, Xinyu Zhou et al.

ICCV 2025 · arXiv:2409.17564 · 3 citations

Graph-Based Cross-Domain Knowledge Distillation for Cross-Dataset Text-to-Image Person Retrieval

Bingjun Luo, Jinpeng Wang, Zewen Wang et al.

AAAI 2025 · arXiv:2501.15052 · 5 citations

Ground-V: Teaching VLMs to Ground Complex Instructions in Pixels

Yongshuo Zong, Qin Zhang, Dongsheng An et al.

CVPR 2025 · arXiv:2505.13788 · 3 citations

HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models

Seanie Lee, Haebin Seong, Dong Bok Lee et al.

ICLR 2025 · arXiv:2410.01524 · 15 citations

High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws

Muhammed Ildiz, Halil Gozeten, Ege Taga et al.

ICLR 2025 · arXiv:2410.18837 · 13 citations

High-dimension Prototype is a Better Incremental Object Detection Learner

Yanjie Wang, Liqun Chen, Tianming Zhao et al.

ICLR 2025

High Temporal Consistency through Semantic Similarity Propagation in Semi-Supervised Video Semantic Segmentation for Autonomous Flight

Cédric Vincent, Taehyoung Kim, Henri Meeß

CVPR 2025 · arXiv:2503.15676 · 3 citations