Cengiz Pehlevan
27 papers · 734 total citations

Papers (27)
Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks. NeurIPS 2022 (arXiv). 116 citations.
Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining. COLM 2025 (arXiv). 87 citations.
A Dynamical Model of Neural Scaling Laws. ICML 2024 (arXiv). 77 citations.
Scaling Laws for Precision. ICLR 2025 (arXiv). 68 citations.
Grokking as the transition from lazy to rich training dynamics. ICLR 2024 (arXiv). 68 citations.
Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit. ICLR 2024 (arXiv). 49 citations.
Feature-Learning Networks Are Consistent Across Widths At Realistic Scales. NeurIPS 2023 (arXiv). 42 citations.
Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks. NeurIPS 2023 (arXiv). 40 citations.
Asymptotics of representation learning in finite Bayesian neural networks. NeurIPS 2021 (arXiv). 38 citations.
Attention Approximates Sparse Distributed Memory. NeurIPS 2021 (arXiv). 37 citations.
Long Sequence Hopfield Memory. NeurIPS 2023 (arXiv). 25 citations.
Out-of-Distribution Generalization in Kernel Regression. NeurIPS 2021 (arXiv). 21 citations.
Learning Curves for Deep Structured Gaussian Feature Models. NeurIPS 2023 (arXiv). 14 citations.
The Optimization Landscape of SGD Across the Feature Learning Strength. ICLR 2025 (arXiv). 12 citations.
Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer. ICML 2025 (arXiv). 9 citations.
Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources. NeurIPS 2022 (arXiv). 8 citations.
Minimax Dynamics of Optimally Balanced Spiking Networks of Excitatory and Inhibitory Neurons. NeurIPS 2020 (arXiv). 7 citations.
A solvable model of learning generative diffusion: theory and insights. NeurIPS 2025 (arXiv). 5 citations.
Correlative Information Maximization: A Biologically Plausible Approach to Supervised Deep Neural Networks without Weight Symmetry. NeurIPS 2023 (arXiv). 4 citations.
A Model of Place Field Reorganization During Reward Maximization. ICML 2025. 3 citations.
No Free Lunch from Random Feature Ensembles: Scaling Laws and Near-Optimality Conditions. ICML 2025 (arXiv). 2 citations.
Learning Curves for Noisy Heterogeneous Feature-Subsampled Ridge Ensembles. NeurIPS 2023 (arXiv). 1 citation.
Do Mice Grok? Glimpses of Hidden Progress in Sensory Cortex. ICLR 2025. 1 citation.
Exact marginal prior distributions of finite Bayesian neural networks. NeurIPS 2021 (arXiv). 0 citations.
Natural gradient enables fast sampling in spiking neural networks. NeurIPS 2022. 0 citations.
Neural Circuits for Fast Poisson Compressed Sensing in the Olfactory Bulb. NeurIPS 2023. 0 citations.
Loss Dynamics of Temporal Difference Reinforcement Learning. NeurIPS 2023 (arXiv). 0 citations.