Zhenheng Tang
8 papers · 160 total citations

Papers (8)
- Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models — ICML 2024 (arXiv) — 54 citations
- STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs — ICLR 2025 (arXiv) — 32 citations
- FedImpro: Measuring and Improving Client Update in Federated Learning — ICLR 2024 (arXiv) — 23 citations
- ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference — NeurIPS 2025 (arXiv) — 16 citations
- The Lottery LLM Hypothesis: Rethinking What Abilities Should LLM Compression Preserve? — ICLR 2025 (arXiv) — 11 citations
- ParZC: Parametric Zero-Cost Proxies for Efficient NAS — AAAI 2025 (arXiv) — 10 citations
- Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression — ICML 2025 (arXiv) — 10 citations
- Hot-pluggable Federated Learning: Bridging General and Personalized FL via Dynamic Selection — ICLR 2025 — 4 citations