Oral "knowledge distillation" Papers
4 papers found
KINDLE: Knowledge-Guided Distillation for Prior-Free Gene Regulatory Network Inference
Rui Peng, Yuchen Lu, Qichen Sun et al.
NeurIPS 2025 (oral) · arXiv:2505.09664
SLMRec: Distilling Large Language Models into Small for Sequential Recommendation
Wujiang Xu, Qitian Wu, Zujie Liang et al.
ICLR 2025 (oral) · arXiv:2405.17890
18 citations
Synergy Between the Strong and the Weak: Spiking Neural Networks are Inherently Self-Distillers
Yongqi Ding, Lin Zuo, Mengmeng Jing et al.
NeurIPS 2025 (oral) · arXiv:2510.07924
TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models
Makoto Shing, Kou Misaki, Han Bao et al.
ICLR 2025 (oral) · arXiv:2501.16937
13 citations