Zhao Song
25 papers · 1,169 total citations

Papers (25)
Evaluating Gradient Inversion Attacks and Defenses in Federated Learning. NEURIPS 2021, arXiv. 357 citations.
Scatterbrain: Unifying Sparse and Low-rank Attention. NEURIPS 2021, arXiv. 154 citations.
Fast Attention Requires Bounded Entries. NEURIPS 2023, arXiv. 104 citations.
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality. NEURIPS 2020, arXiv. 56 citations.
Does Preprocessing Help Training Over-parameterized Neural Networks? NEURIPS 2021, arXiv. 51 citations.
On Computational Limits of Modern Hopfield Models: A Fine-Grained Complexity Analysis. ICML 2024, arXiv. 46 citations.
Generalized Leverage Score Sampling for Neural Networks. NEURIPS 2020, arXiv. 44 citations.
How to Protect Copyright Data in Optimization of Large Language Models? AAAI 2024, arXiv. 40 citations.
LazyDiT: Lazy Learning for the Acceleration of Diffusion Transformers. AAAI 2025, arXiv. 38 citations.
InfoPrompt: Information-Theoretic Soft Prompt Tuning for Natural Language Understanding. NEURIPS 2023, arXiv. 35 citations.
Low Rank Matrix Completion via Robust Alternating Minimization in Nearly Linear Time. ICLR 2024, arXiv. 34 citations.
Algorithm and Hardness for Dynamic Attention Maintenance in Large Language Models. ICML 2024, arXiv. 34 citations.
Breaking the Linear Iteration Cost Barrier for Some Well-known Conditional Gradient Methods Using MaxIP Data-structures. NEURIPS 2021, arXiv. 30 citations.
Bypass Exponential Time Preprocessing: Fast Neural Network Training via Weight-Data Correlation Preprocessing. NEURIPS 2023, arXiv. 29 citations.
Numerical Pruning for Efficient Autoregressive Models. AAAI 2025, arXiv. 23 citations.
Dynamic Tensor Product Regression. NEURIPS 2022, arXiv. 22 citations.
Unraveling the Smoothness Properties of Diffusion Models: A Gaussian Mixture Perspective. ICCV 2025, arXiv. 21 citations.
Fundamental Limits of Prompt Tuning Transformers: Universality, Capacity and Efficiency. ICLR 2025, arXiv. 18 citations.
Dissecting Submission Limit in Desk-Rejections: A Mathematical Analysis of Fairness in AI Conference Policies. ICML 2025, arXiv. 12 citations.
Fast Distance Oracles for Any Symmetric Norm. NEURIPS 2022, arXiv. 10 citations.
Fundamental Limits of Visual Autoregressive Transformers: Universal Approximation Abilities. ICML 2025. 5 citations.
Exact Representation of Sparse Networks with Symmetric Nonnegative Embeddings. NEURIPS 2023, arXiv. 5 citations.
Binary Hypothesis Testing for Softmax Models and Leverage Score Models. ICML 2025, arXiv. 1 citation.
H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. NEURIPS 2023. 0 citations.
Differential Privacy for Euclidean Jordan Algebra with Applications to Private Symmetric Cone Programming. NEURIPS 2025, arXiv. 0 citations.