Sanjeev Arora
22 papers · 1,827 total citations

Papers (22)
LESS: Selecting Influential Data for Targeted Instruction Tuning (ICML 2024, arXiv) · 400 citations
Evaluating Gradient Inversion Attacks and Defenses in Federated Learning (NeurIPS 2021, arXiv) · 357 citations
Fine-Tuning Language Models with Just Forward Passes (NeurIPS 2023, arXiv) · 329 citations
On the Validity of Modeling SGD with Stochastic Differential Equations (SDEs) (NeurIPS 2021, arXiv) · 98 citations
Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction (NeurIPS 2022, arXiv) · 89 citations
On the SDEs and Scaling Rules for Adaptive Gradient Algorithms (NeurIPS 2022, arXiv) · 84 citations
Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias (NeurIPS 2021, arXiv) · 84 citations
Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving (COLM 2025, arXiv) · 82 citations
Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate (NeurIPS 2020, arXiv) · 78 citations
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality (NeurIPS 2020, arXiv) · 56 citations
Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization (ICLR 2025, arXiv) · 51 citations
Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent (NeurIPS 2022, arXiv) · 33 citations
Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning (ICLR 2025, arXiv) · 19 citations
Language Models as Science Tutors (ICML 2024, arXiv) · 15 citations
Trainable Transformer in Transformer (ICML 2024, arXiv) · 14 citations
Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? (ICML 2025, arXiv) · 9 citations
New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound (NeurIPS 2022, arXiv) · 8 citations
On the Power of Context-Enhanced Learning in LLMs (ICML 2025, arXiv) · 6 citations
Ineq-Comp: Benchmarking Human-Intuitive Compositional Reasoning in Automated Theorem Proving of Inequalities (NeurIPS 2025, arXiv) · 6 citations
A Quadratic Synchronization Rule for Distributed Deep Learning (ICLR 2024, arXiv) · 4 citations
AdaptMI: Adaptive Skill-based In-context Math Instructions for Small Language Models (COLM 2025, arXiv) · 3 citations
Provable unlearning in topic modeling and downstream tasks (ICLR 2025, arXiv) · 2 citations