Paper "knowledge distillation" Papers

36 papers found

Cross-Lingual Text-Rich Visual Comprehension: An Information Theory Perspective

Xinmiao Yu, Xiaocheng Feng, Yun Li et al.

AAAI 2025 · arXiv:2412.17787

DCA: Dividing and Conquering Amnesia in Incremental Object Detection

Aoting Zhang, Dongbao Yang, Chang Liu et al.

AAAI 2025 · arXiv:2503.15295 · 2 citations

Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation

Yanglin Huang, Kai Hu, Yuan Zhang et al.

AAAI 2025 · arXiv:2504.07691 · 1 citation

EBBS: An Ensemble with Bi-Level Beam Search for Zero-Shot Machine Translation

Yuqiao Wen, Behzad Shayegh, Chenyang Huang et al.

AAAI 2025 · arXiv:2403.00144 · 8 citations

Exploring Vacant Classes in Label-Skewed Federated Learning

Kuangpu Guo, Yuhe Ding, Jian Liang et al.

AAAI 2025 · arXiv:2401.02329 · 12 citations

Graph-Based Cross-Domain Knowledge Distillation for Cross-Dataset Text-to-Image Person Retrieval

Bingjun Luo, Jinpeng Wang, Zewen Wang et al.

AAAI 2025 · arXiv:2501.15052 · 5 citations

Lightweight Contrastive Distilled Hashing for Online Cross-modal Retrieval

Jiaxing Li, Lin Jiang, Zeqi Ma et al.

AAAI 2025 · arXiv:2502.19751 · 2 citations

Neural Collapse Inspired Knowledge Distillation

Shuoxi Zhang, Zijian Song, Kun He

AAAI 2025 · arXiv:2412.11788 · 1 citation

Pruning Large Language Models with Semi-Structural Adaptive Sparse Training

Weiyu Huang, Yuezhou Hu, Guohao Jian et al.

AAAI 2025 · arXiv:2407.20584 · 21 citations

Reinforced Multi-teacher Knowledge Distillation for Efficient General Image Forgery Detection and Localization

Zeqin Yu, Jiangqun Ni, Jian Zhang et al.

AAAI 2025 · arXiv:2504.05224 · 4 citations

Rethinking Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising

Junyi Li, Zhilu Zhang, Wangmeng Zuo

AAAI 2025 · arXiv:2404.07846 · 19 citations

Self-Attentive Spatio-Temporal Calibration for Precise Intermediate Layer Matching in ANN-to-SNN Distillation

Di Hong, Yueming Wang

AAAI 2025 · arXiv:2501.08049 · 1 citation

Spatial-Temporal Knowledge Distillation for Takeaway Recommendation

Shuyuan Zhao, Wei Chen, Boyan Shi et al.

AAAI 2025 · arXiv:2412.16502 · 1 citation

TinySAM: Pushing the Envelope for Efficient Segment Anything Model

Han Shu, Wenshuo Li, Yehui Tang et al.

AAAI 2025 · arXiv:2312.13789 · 41 citations

AltDiffusion: A Multilingual Text-to-Image Diffusion Model

Fulong Ye, Guang Liu, Xinya Wu et al.

AAAI 2024 · arXiv:2308.09991 · 47 citations

Boosting Residual Networks with Group Knowledge

Shengji Tang, Peng Ye, Baopu Li et al.

AAAI 2024 · arXiv:2308.13772 · 6 citations

Building Variable-Sized Models via Learngene Pool

Boyu Shi, Shiyu Xia, Xu Yang et al.

AAAI 2024 · arXiv:2312.05743 · 5 citations

COMBHelper: A Neural Approach to Reduce Search Space for Graph Combinatorial Problems

Hao Tian, Sourav Medya, Wei Ye

AAAI 2024 · arXiv:2312.09086 · 5 citations

Cooperative Knowledge Distillation: A Learner Agnostic Approach

Michael Livanos, Ian Davidson, Stephen Wong

AAAI 2024 · arXiv:2402.05942 · 1 citation

CSL: Class-Agnostic Structure-Constrained Learning for Segmentation including the Unseen

Hao Zhang, Fang Li, Lu Qi et al.

AAAI 2024 · arXiv:2312.05538 · 16 citations

Distilling Autoregressive Models to Obtain High-Performance Non-autoregressive Solvers for Vehicle Routing Problems with Faster Inference Speed

Yubin Xiao, Di Wang, Boyang Li et al.

AAAI 2024 · arXiv:2312.12469 · 32 citations

DistilVPR: Cross-Modal Knowledge Distillation for Visual Place Recognition

Sijie Wang, Rui She, Qiyu Kang et al.

AAAI 2024 · arXiv:2312.10616 · 11 citations

Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning

Yan Fan, Yu Wang, Pengfei Zhu et al.

AAAI 2024 · arXiv:2312.16409 · 11 citations

EPSD: Early Pruning with Self-Distillation for Efficient Model Compression

Dong Chen, Ning Liu, Yichen Zhu et al.

AAAI 2024 · arXiv:2402.00084 · 9 citations

Expediting Contrastive Language-Image Pretraining via Self-Distilled Encoders

Bumsoo Kim, Jinhyung Kim, Yeonsik Jo et al.

AAAI 2024 · arXiv:2312.12659 · 5 citations

Federated Learning with Extremely Noisy Clients via Negative Distillation

Yang Lu, Lin Chen, Yonggang Zhang et al.

AAAI 2024 · arXiv:2312.12703 · 21 citations

Fine-Grained Knowledge Selection and Restoration for Non-exemplar Class Incremental Learning

Jiang-Tian Zhai, Xialei Liu, Lu Yu et al.

AAAI 2024 · arXiv:2312.12722 · 13 citations

Generative Model-Based Feature Knowledge Distillation for Action Recognition

Guiqin Wang, Peng Zhao, Yanjiang Shi et al.

AAAI 2024 · arXiv:2312.08644 · 7 citations

Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive Learning

Jiangmeng Li, Yifan Jin, Hang Gao et al.

AAAI 2024 · arXiv:2312.14222 · 9 citations

Let All Be Whitened: Multi-Teacher Distillation for Efficient Visual Retrieval

Zhe Ma, Jianfeng Dong, Shouling Ji et al.

AAAI 2024 · arXiv:2312.09716 · 12 citations

SimDistill: Simulated Multi-Modal Distillation for BEV 3D Object Detection

Haimei Zhao, Qiming Zhang, Shanshan Zhao et al.

AAAI 2024 · arXiv:2303.16818 · 25 citations

Simple Image-Level Classification Improves Open-Vocabulary Object Detection

Ruohuan Fang, Guansong Pang, Xiao Bai

AAAI 2024 · arXiv:2312.10439 · 23 citations

SpikingBERT: Distilling BERT to Train Spiking Language Models Using Implicit Differentiation

Malyaban Bal, Abhronil Sengupta

AAAI 2024 · arXiv:2308.10873 · 73 citations

Summarizing Stream Data for Memory-Constrained Online Continual Learning

Jianyang Gu, Kai Wang, Wei Jiang et al.

AAAI 2024 · arXiv:2305.16645 · 23 citations

Sunshine to Rainstorm: Cross-Weather Knowledge Distillation for Robust 3D Object Detection

Xun Huang, Hai Wu, Xin Li et al.

AAAI 2024 · arXiv:2402.18493 · 15 citations

Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge Distillation

Hyunjune Shin, Dong-Wan Choi

AAAI 2024 · arXiv:2402.12406 · 7 citations