"knowledge distillation" Papers
210 papers found • Page 4 of 5
Efficient Multitask Dense Predictor via Binarization
Yuzhang Shang, Dan Xu, Gaowen Liu et al.
Embodied CoT Distillation From LLM To Off-the-shelf Agents
Wonje Choi, Woo Kyung Kim, Minjong Yoo et al.
Enhanced Sparsification via Stimulative Training
Shengji Tang, Weihao Lin, Hancheng Ye et al.
Enhancing Class-Imbalanced Learning with Pre-Trained Guidance through Class-Conditional Knowledge Distillation
Lan Li, Xin-Chun Li, Han-Jia Ye et al.
EPSD: Early Pruning with Self-Distillation for Efficient Model Compression
Dong Chen, Ning Liu, Yichen Zhu et al.
Expediting Contrastive Language-Image Pretraining via Self-Distilled Encoders
Bumsoo Kim, Jinhyung Kim, Yeonsik Jo et al.
Federated Learning with Extremely Noisy Clients via Negative Distillation
Yang Lu, Lin Chen, Yonggang Zhang et al.
Fine-Grained Knowledge Selection and Restoration for Non-exemplar Class Incremental Learning
Jiang-Tian Zhai, Xialei Liu, Lu Yu et al.
From Coarse to Fine: Enable Comprehensive Graph Self-supervised Learning with Multi-granular Semantic Ensemble
Qianlong Wen, Mingxuan Ju, Zhongyu Ouyang et al.
Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching
Shitong Shao, Zeyuan Yin, Muxin Zhou et al.
Generative Model-Based Feature Knowledge Distillation for Action Recognition
Guiqin Wang, Peng Zhao, Yanjiang Shi et al.
Good Teachers Explain: Explanation-Enhanced Knowledge Distillation
Amin Parchami, Moritz Böhle, Sukrut Rao et al.
GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs
Gege Gao, Weiyang Liu, Anpei Chen et al.
Harmonizing Knowledge Transfer in Neural Network with Unified Distillation
Yaomin Huang, Faming Fang, Zaoming Yan et al.
Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive Learning
Jiangmeng Li, Yifan Jin, Hang Gao et al.
Human Motion Forecasting in Dynamic Domain Shifts: A Homeostatic Continual Test-time Adaptation Framework
Qiongjie Cui, Huaijiang Sun, Bin Li et al.
Improving Plasticity in Online Continual Learning via Collaborative Learning
Maorong Wang, Nicolas Michel, Ling Xiao et al.
Is Retain Set All You Need in Machine Unlearning? Restoring Performance of Unlearned Models with Out-Of-Distribution Images
Jacopo Bonato, Marco Cotogni, Luigi Sabetta
Keypoint-based Progressive Chain-of-Thought Distillation for LLMs
Kaituo Feng, Changsheng Li, Xiaolu Zhang et al.
Knowledge Distillation with Auxiliary Variable
Bo Peng, Zhen Fang, Guangquan Zhang et al.
Large Language Models are Efficient Learners of Noise-Robust Speech Recognition
Yuchen Hu, Chen Chen, Chao-Han Huck Yang et al.
LASS3D: Language-Assisted Semi-Supervised 3D Semantic Segmentation with Progressive Unreliable Data Exploitation
Jianan Li, Qiulei Dong
Learning Modality-agnostic Representation for Semantic Segmentation from Any Modalities
Xu Zheng, Yuanhuiyi Lyu, Lin Wang
LEROjD: Lidar Extended Radar-Only Object Detection
Patrick Palmer, Martin Krüger, Stefan Schütte et al.
Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation
Chengming Hu, Haolun Wu, Xuan Li et al.
Let All Be Whitened: Multi-Teacher Distillation for Efficient Visual Retrieval
Zhe Ma, Jianfeng Dong, Shouling Ji et al.
LiDAR-based All-weather 3D Object Detection via Prompting and Distilling 4D Radar
Yujeong Chae, Hyeonseong Kim, Changgyoon Oh et al.
Make a Strong Teacher with Label Assistance: A Novel Knowledge Distillation Approach for Semantic Segmentation
Shoumeng Qiu, Jie Chen, Xinrun Li et al.
Markov Knowledge Distillation: Make Nasty Teachers Trained by Self-undermining Knowledge Distillation Fully Distillable
En-Hui Yang, Linfeng Ye
MH-pFLID: Model Heterogeneous personalized Federated Learning via Injection and Distillation for Medical Data Analysis
Luyuan Xie, Manqing Lin, Tianyu Luan et al.
Mitigating Background Shift in Class-Incremental Semantic Segmentation
Gilhan Park, WonJun Moon, SuBeen Lee et al.
MobileDiffusion: Instant Text-to-Image Generation on Mobile Devices
Yang Zhao, Zhisheng Xiao, Yanwu Xu et al.
Multi-scale Cross Distillation for Object Detection in Aerial Images
Kun Wang, Zi Wang, Zhang Li et al.
Online Speculative Decoding
Xiaoxuan Liu, Lanxiang Hu, Peter Bailis et al.
On the Road to Portability: Compressing End-to-End Motion Planner for Autonomous Driving
Kaituo Feng, Changsheng Li, Dongchun Ren et al.
Open Vocabulary 3D Scene Understanding via Geometry Guided Self-Distillation
Pengfei Wang, Yuxi Wang, Shuai Li et al.
Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors
Chun-Yin Huang, Kartik Srinivas, Xin Zhang et al.
PartDistill: 3D Shape Part Segmentation by Vision-Language Model Distillation
Ardian Umam, Cheng-Kun Yang, Min-Hung Chen et al.
PEA-Diffusion: Parameter-Efficient Adapter with Knowledge Distillation in non-English Text-to-Image Generation
Jian Ma, Chen Chen, Qingsong Xie et al.
PracticalDG: Perturbation Distillation on Vision-Language Models for Hybrid Domain Generalization
Zining Chen, Weiqiu Wang, Zhicheng Zhao et al.
Progressive Pretext Task Learning for Human Trajectory Prediction
Xiaotong Lin, Tianming Liang, Jian-Huang Lai et al.
Recurrent Early Exits for Federated Learning with Heterogeneous Clients
Royson Lee, Javier Fernandez-Marques, Xu Hu et al.
Rethinking Momentum Knowledge Distillation in Online Continual Learning
Nicolas Michel, Maorong Wang, Ling Xiao et al.
Revisit the Essence of Distilling Knowledge through Calibration
Wen-Shu Fan, Su Lu, Xin-Chun Li et al.
SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection
Junsu Kim, Hoseong Cho, Jihyeon Kim et al.
Select and Distill: Selective Dual-Teacher Knowledge Transfer for Continual Learning on Vision-Language Models
Yu-Chu Yu, Chi-Pin Huang, Jr-Jen Chen et al.
Self-Adapting Large Visual-Language Models to Edge Devices across Visual Modalities
Kaiwen Cai, Zhekai Duan, Gaowen Liu et al.
Self-Cooperation Knowledge Distillation for Novel Class Discovery
Yuzheng Wang, Zhaoyu Chen, Dingkang Yang et al.
SimDistill: Simulated Multi-Modal Distillation for BEV 3D Object Detection
Haimei Zhao, Qiming Zhang, Shanshan Zhao et al.
Simple Image-Level Classification Improves Open-Vocabulary Object Detection
Ruohuan Fang, Guansong Pang, Xiao Bai