"teacher-student models" Papers
8 papers found
DistillHGNN: A Knowledge Distillation Approach for High-Speed Hypergraph Neural Networks
Saman Forouzandeh, Parham Moradi Dowlatabadi, Mahdi Jalili
ICLR 2025
1 citation
Single-Teacher View Augmentation: Boosting Knowledge Distillation via Angular Diversity
Seonghoon Yu, Dongjun Nam, Dina Katabi et al.
NeurIPS 2025 · arXiv:2510.22480
The Dynamic Duo of Collaborative Masking and Target for Advanced Masked Autoencoder Learning
Shentong Mo
AAAI 2025 · arXiv:2412.17566
1 citation
Adversarially Robust Distillation by Reducing the Student-Teacher Variance Gap
Junhao Dong, Piotr Koniusz, Junxi Chen et al.
ECCV 2024
10 citations
Dual-Perspective Knowledge Enrichment for Semi-supervised 3D Object Detection
Yucheng Han, Na Zhao, Weiling Chen et al.
AAAI 2024 · arXiv:2401.05011
6 citations
Markov Knowledge Distillation: Make Nasty Teachers trained by Self-undermining Knowledge Distillation Fully Distillable
En-Hui Yang, Linfeng Ye
ECCV 2024
8 citations
Revisit the Essence of Distilling Knowledge through Calibration
Wen-Shu Fan, Su Lu, Xin-Chun Li et al.
ICML 2024
Semi-supervised Segmentation of Histopathology Images with Noise-Aware Topological Consistency
Meilong Xu, Xiaoling Hu, Saumya Gupta et al.
ECCV 2024 · arXiv:2311.16447
13 citations