Shiwei Liu
Affiliations: UT Austin
17 papers · 493 total citations

Papers (17)
Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity
  ICML 2024 · arXiv · 152 citations

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
  NeurIPS 2021 · arXiv · 135 citations

The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter
  NeurIPS 2023 · arXiv · 44 citations

Dynamic Sparse Network for Time Series Classification: Learning What to “See”
  NeurIPS 2022 · arXiv · 36 citations

Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective
  AAAI 2025 · arXiv · 30 citations

Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN
  ICLR 2025 · arXiv · 26 citations

Dynamic Sparsity Is Channel-Level Sparsity Learner
  NeurIPS 2023 · arXiv · 26 citations

Predicting mutational effects on protein-protein binding via a side-chain diffusion probabilistic model
  NeurIPS 2023 · arXiv · 23 citations

From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories, and Applications
  ICML 2025 · arXiv · 20 citations

Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More
  ICML 2025 · arXiv · 1 citation

Data Augmented Flatness-aware Gradient Projection for Continual Learning
  ICCV 2023 · 0 citations

Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs “Difficult” Downstream Tasks in LLMs
  ICML 2024 · 0 citations

Advancing Dynamic Sparse Training by Exploring Optimization Opportunities
  ICML 2024 · 0 citations

Don’t just prune by magnitude! Your mask topology is a secret weapon
  NeurIPS 2023 · 0 citations

Sparse Cocktail: Every Sparse Pattern Every Sparse Ratio All At Once
  ICML 2024 · 0 citations

CaM: Cache Merging for Memory-efficient LLMs Inference
  ICML 2024 · 0 citations

Towards Data-Agnostic Pruning At Initialization: What Makes a Good Sparse Mask?
  NeurIPS 2023 · 0 citations