"activation sparsity" Papers
8 papers found
Activity Pruning for Efficient Spiking Neural Networks
Tong Bu, Xinyu Shi, Zhaofei Yu
NEURIPS 2025
BlockFFN: Towards End-Side Acceleration-Friendly Mixture-of-Experts with Chunk-Level Activation Sparsity
Chenyang Song, Weilin Zhao, Xu Han et al.
COLM 2025 · arXiv:2507.08771
1 citation
DuoGPT: Training-free Dual Sparsity through Activation-aware Pruning in LLMs
Ruokai Yin, Yuhang Li, Donghyun Lee et al.
NEURIPS 2025 · arXiv:2506.20194
2 citations
From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers
Bharat Runwal, Tejaswini Pedapati, Pin-Yu Chen
AAAI 2025 · arXiv:2402.01911
8 citations
Spark Transformer: Reactivating Sparsity in Transformer FFN and Attention
Chong You, Kan Wu, Zhipeng Jia et al.
NEURIPS 2025
2 citations
SURGEON: Memory-Adaptive Fully Test-Time Adaptation via Dynamic Activation Sparsity
Ke Ma, Jiaqi Tang, Bin Guo et al.
CVPR 2025 (highlight) · arXiv:2503.20354
4 citations
Training-Free Activation Sparsity in Large Language Models
James Liu, Pragaash Ponnusamy, Tianle Cai et al.
ICLR 2025 · arXiv:2408.14690
39 citations
Exploring the Benefit of Activation Sparsity in Pre-training
Zhengyan Zhang, Chaojun Xiao, Qiujieli Qin et al.
ICML 2024 · arXiv:2410.03440
6 citations