"knowledge distillation" Papers
210 papers found • Page 2 of 5
HPSERec: A Hierarchical Partitioning and Stepwise Enhancement Framework for Long-tailed Sequential Recommendation
Xiaolong Xu, Xudong Zhao, Haolong Xiang et al.
Improving Language Model Distillation through Hidden State Matching
Sayantan Dasgupta, Trevor Cohn
Indirect Gradient Matching for Adversarial Robust Distillation
Hongsin Lee, Seungju Cho, Changick Kim
Interaction-Centric Knowledge Infusion and Transfer for Open Vocabulary Scene Graph Generation
Lin Li, Chuhan Zhang, Dong Zhang et al.
It Helps to Take a Second Opinion: Teaching Smaller LLMs To Deliberate Mutually via Selective Rationale Optimisation
Sohan Patnaik, Milan Aggarwal, Sumit Bhatia et al.
Joint Diffusion Models in Continual Learning
Paweł Skierś, Kamil Deja
KINDLE: Knowledge-Guided Distillation for Prior-Free Gene Regulatory Network Inference
Rui Peng, Yuchen Lu, Qichen Sun et al.
Knowledge Distillation of Uncertainty using Deep Latent Factor Model
Sehyun Park, Jongjin Lee, Yunseop Shin et al.
Knowledge Distillation with Multi-granularity Mixture of Priors for Image Super-Resolution
Simiao Li, Yun Zhang, Wei Li et al.
Learning Diagrams: A Graphical Language for Compositional Training Regimes
Mason Lary, Richard Samuelson, Alexander Wilentz et al.
Learning Occlusion-Robust Vision Transformers for Real-Time UAV Tracking
You Wu, Xucheng Wang, Xiangyang Yang et al.
Learning Task-Agnostic Representations through Multi-Teacher Distillation
Philippe Formont, Maxime Darrin, Banafsheh Karimian et al.
Lightweight Contrastive Distilled Hashing for Online Cross-modal Retrieval
Jiaxing Li, Lin Jiang, Zeqi Ma et al.
LLaMaFlex: Many-in-one LLMs via Generalized Pruning and Weight Sharing
Ruisi Cai, Saurav Muralidharan, Hongxu Yin et al.
LLaVA-KD: A Framework of Distilling Multimodal Large Language Models
Yuxuan Cai, Jiangning Zhang, Haoyang He et al.
LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation
Fangxun Shu, Yue Liao, Lei Zhang et al.
Local Dense Logit Relations for Enhanced Knowledge Distillation
Liuchi Xu, Kang Liu, Jinshuai Liu et al.
Medium-Difficulty Samples Constitute Smoothed Decision Boundary for Knowledge Distillation on Pruned Datasets
Yudong Chen, Xuwei Xu, Frank de Hoog et al.
Multi-modal Knowledge Distillation-based Human Trajectory Forecasting
Jaewoo Jeong, Seohee Lee, Daehee Park et al.
Multi-order Orchestrated Curriculum Distillation for Model-Heterogeneous Federated Graph Learning
Guancheng Wan, Xu Cheng, Run Liu et al.
MURKA: Multi-Reward Reinforcement Learning with Knowledge Alignment for Optimization Tasks
Wantong Xie, Yi-Xiang Hu, Jieyang Xu et al.
Neural Collapse Inspired Knowledge Distillation
Shuoxi Zhang, Zijian Song, Kun He
Neural Tangent Knowledge Distillation for Optical Convolutional Networks
Jinlin Xiang, Minho Choi, Yubo Zhang et al.
On LLM Knowledge Distillation - A Comparison between Forward KL and Reverse KL
Yihan Cao, Yanbin Kang
On the creation of narrow AI: hierarchy and nonlocality of neural network skills
Eric Michaud, Asher Parker-Sartori, Max Tegmark
PLD: A Choice-Theoretic List-Wise Knowledge Distillation
Ejafa Bassam, Dawei Zhu, Kaigui Bian
Point-SAM: Promptable 3D Segmentation Model for Point Clouds
Yuchen Zhou, Jiayuan Gu, Tung Chiang et al.
Preference Distillation via Value based Reinforcement Learning
Minchan Kwon, Junwon Ko, Kangil Kim et al.
Preference-driven Knowledge Distillation for Few-shot Node Classification
Xing Wei, Chunchun Chen, Rui Fan et al.
Prevalence of Negative Transfer in Continual Reinforcement Learning: Analyses and a Simple Baseline
Hongjoon Ahn, Jinu Hyeon, Youngmin Oh et al.
Pruning Large Language Models with Semi-Structural Adaptive Sparse Training
Weiyu Huang, Yuezhou Hu, Guohao Jian et al.
Random Conditioning with Distillation for Data-Efficient Diffusion Model Compression
Dohyun Kim, Sehwan Park, GeonHee Han et al.
RCTDistill: Cross-Modal Knowledge Distillation Framework for Radar-Camera 3D Object Detection with Temporal Fusion
Geonho Bang, Minjae Seong, Jisong Kim et al.
Reinforced Multi-teacher Knowledge Distillation for Efficient General Image Forgery Detection and Localization
Zeqin Yu, Jiangqun Ni, Jian Zhang et al.
Rethinking Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising
Junyi Li, Zhilu Zhang, Wangmeng Zuo
RUAGO: Effective and Practical Retain-Free Unlearning via Adversarial Attack and OOD Generator
SangYong Lee, Sangjun Chung, Simon Woo
Scale-aware Recognition in Satellite Images under Resource Constraints
Shreelekha Revankar, Cheng Perng Phoo, Utkarsh Kumar Mall et al.
SDGOCC: Semantic and Depth-Guided Bird's-Eye View Transformation for 3D Multimodal Occupancy Prediction
Zaipeng Duan, Xuzhong Hu, Pei An et al.
Self-Attentive Spatio-Temporal Calibration for Precise Intermediate Layer Matching in ANN-to-SNN Distillation
Di Hong, Yueming Wang
Self-Updatable Large Language Models by Integrating Context into Model Parameters
Yu Wang, Xinshuang Liu, Xiusi Chen et al.
SelKD: Selective Knowledge Distillation via Optimal Transport Perspective
Liangliang Shi, Zhengyan Shi, Junchi Yan
Single-Teacher View Augmentation: Boosting Knowledge Distillation via Angular Diversity
Seonghoon Yu, Dongjun Nam, Dina Katabi et al.
SLMRec: Distilling Large Language Models into Small for Sequential Recommendation
Wujiang Xu, Qitian Wu, Zujie Liang et al.
Spatial-Temporal Knowledge Distillation for Takeaway Recommendation
Shuyuan Zhao, Wei Chen, Boyan Shi et al.
Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting
Zilong (Ryan) Wang, Zifeng Wang, Long Le et al.
Spik-NeRF: Spiking Neural Networks for Neural Radiance Fields
Gang Wan, Qinlong Lan, Zihan Li et al.
SSR: Enhancing Depth Perception in Vision-Language Models via Rationale-Guided Spatial Reasoning
Yang Liu, Ming Ma, Xiaomin Yu et al.
SSTAG: Structure-Aware Self-Supervised Learning Method for Text-Attributed Graphs
Ruyue Liu, Rong Yin, Xiangzhen Bo et al.
Swiss Army Knife: Synergizing Biases in Knowledge from Vision Foundation Models for Multi-Task Learning
Yuxiang Lu, Shengcao Cao, Yu-Xiong Wang
Synergy Between the Strong and the Weak: Spiking Neural Networks are Inherently Self-Distillers
Yongqi Ding, Lin Zuo, Mengmeng Jing et al.