Christopher Ré
27 papers · 7,817 total citations

Papers (27)
1. FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness · NeurIPS 2022 · arXiv · 3,551 citations
2. Combining Recurrent, Convolutional, and Continuous-time Models with Linear State Space Layers · NeurIPS 2021 · arXiv · 977 citations
3. HiPPO: Recurrent Memory with Optimal Polynomial Projections · NeurIPS 2020 · arXiv · 838 citations
4. On the Parameterization and Initialization of Diagonal State Space Models · NeurIPS 2022 · arXiv · 492 citations
5. HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution · NeurIPS 2023 · arXiv · 432 citations
6. LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models · NeurIPS 2023 · arXiv · 319 citations
7. No Subclass Left Behind: Fine-Grained Robustness in Coarse-Grained Classification Problems · NeurIPS 2020 · arXiv · 281 citations
8. Scatterbrain: Unifying Sparse and Low-rank Attention · NeurIPS 2021 · arXiv · 154 citations
9. Decentralized Training of Foundation Models in Heterogeneous Environments · NeurIPS 2022 · arXiv · 126 citations
10. From Trees to Continuous Embeddings and Back: Hyperbolic Hierarchical Clustering · NeurIPS 2020 · arXiv · 110 citations
11. Skill-it! A data-driven skills framework for understanding and training language models · NeurIPS 2023 · arXiv · 97 citations
12. Contrastive Adapters for Foundation Model Group Robustness · NeurIPS 2022 · arXiv · 85 citations
13. Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture · NeurIPS 2023 · arXiv · 67 citations
14. Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data · NeurIPS 2022 · arXiv · 67 citations
15. S4ND: Modeling Images and Videos as Multidimensional Signals with State Spaces · NeurIPS 2022 · arXiv · 55 citations
16. An Architecture Search Framework for Inference-Time Techniques · ICML 2025 · arXiv · 43 citations
17. Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions · NeurIPS 2023 · arXiv · 29 citations
18. Rethinking Neural Operations for Diverse Tasks · NeurIPS 2021 · arXiv · 25 citations
19. Transform Once: Efficient Operator Learning in Frequency Domain · NeurIPS 2022 · arXiv · 24 citations
20. TART: A plug-and-play Transformer module for task-agnostic reasoning · NeurIPS 2023 · arXiv · 16 citations
21. Cost-efficient Collaboration between On-device and Cloud Language Models · ICML 2025 · arXiv · 13 citations
22. HAPI: A Large-scale Longitudinal Dataset of Commercial ML API Predictions · NeurIPS 2022 · arXiv · 9 citations
23. Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification · NeurIPS 2023 · arXiv · 4 citations
24. ThunderKittens: Simple, Fast, and Adorable Kernels · ICLR 2025 · 3 citations
25. H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models · NeurIPS 2023 · 0 citations
26. Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees · NeurIPS 2022 · 0 citations
27. A case for reframing automated medical image classification as segmentation · NeurIPS 2023 · 0 citations