Randall Balestriero
20 papers · 492 total citations

Papers (20)
- Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods (NEURIPS 2022, arXiv): 158 citations
- The Effects of Regularization and Data Augmentation are Class Dependent (NEURIPS 2022, arXiv): 112 citations
- Deep Networks Always Grok and Here is Why (ICML 2024, arXiv): 47 citations
- projUNN: efficient method for training deep networks with unitary matrices (NEURIPS 2022, arXiv): 37 citations
- Polarity Sampling: Quality and Diversity Control of Pre-Trained Generative Networks via Singular Values (CVPR 2022, arXiv): 32 citations
- SplineCam: Exact Visualization and Characterization of Deep Network Geometry and Decision Boundaries (CVPR 2023, arXiv): 28 citations
- Learning from Reward-Free Offline Data: A Case for Planning with Latent Dynamics Models (NEURIPS 2025, arXiv): 20 citations
- Understanding the detrimental class-level effects of data augmentation (NEURIPS 2023, arXiv): 18 citations
- Active Self-Supervised Learning: A Few Low-Cost Relationships Are All You Need (ICCV 2023, arXiv): 14 citations
- $\mathbb{X}$-Sample Contrastive Loss: Improving Contrastive Learning with Sample Similarity Graphs (ICLR 2025, arXiv): 13 citations
- Characterizing Large Language Model Geometry Helps Solve Toxicity Detection and Generation (ICML 2024, arXiv): 6 citations
- Beyond [cls]: Exploring the True Potential of Masked Image Modeling Representations (ICCV 2025, arXiv): 4 citations
- Curvature Tuning: Provable Training-free Model Steering From a Single Parameter (NEURIPS 2025, arXiv): 1 citation
- FastDINOv2: Frequency Based Curriculum Learning Improves Robustness and Training Speed (NEURIPS 2025, arXiv): 1 citation
- Ditch the Denoiser: Emergence of Noise Robustness in Self-Supervised Learning from Data Curriculum (NEURIPS 2025, arXiv): 1 citation
- A Data-Augmentation Is Worth A Thousand Samples: Analytical Moments And Sampling-Free Training (NEURIPS 2022): 0 citations
- Analytical Probability Distributions and Exact Expectation-Maximization for Deep Generative Networks (NEURIPS 2020): 0 citations
- From Linearity to Non-Linearity: How Masked Autoencoders Capture Spatial Correlations (ICCV 2025, arXiv): 0 citations
- How Learning by Reconstruction Produces Uninformative Features For Perception (ICML 2024): 0 citations
- An Information Theory Perspective on Variance-Invariance-Covariance Regularization (NEURIPS 2023): 0 citations