Thomas Hofmann
Affiliation: ETH Zurich
26 papers · 711 total citations

Papers (26)
Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers · NEURIPS 2023 · arXiv · 72 citations
Convolutional Generation of Textured 3D Meshes · NEURIPS 2020 · arXiv · 68 citations
Learning Generative Models of Textured 3D Meshes From Real-World Images · ICCV 2021 · arXiv · 57 citations
Achieving a Better Stability-Plasticity Trade-Off via Auxiliary Networks in Continual Learning · CVPR 2023 · arXiv · 57 citations
Scaling MLPs: A Tale of Inductive Bias · NEURIPS 2023 · arXiv · 54 citations
Simplifying Transformer Blocks · ICLR 2024 · arXiv · 49 citations
Analytic Insights into Structure and Rank of Neural Network Hessian Maps · NEURIPS 2021 · arXiv · 47 citations
The Shaped Transformer: Attention Models in the Infinite Depth-and-Width Limit · NEURIPS 2023 · arXiv · 47 citations
A Language Model's Guide Through Latent Space · ICML 2024 · arXiv · 44 citations
Precise characterization of the prior predictive distribution of deep ReLU networks · NEURIPS 2021 · arXiv · 36 citations
Controlling Style and Semantics in Weakly-Supervised Image Generation · ECCV 2020 · arXiv · 35 citations
Transformer Fusion with Optimal Transport · ICLR 2024 · arXiv · 31 citations
Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect · NEURIPS 2021 · arXiv · 30 citations
OpenFilter: A Framework to Democratize Research Access to Social Media AR Filters · NEURIPS 2022 · arXiv · 14 citations
Recurrent Distance Filtering for Graph Representation Learning · ICML 2024 · arXiv · 13 citations
Adversarial Training is a Form of Data-dependent Operator Norm Regularization · NEURIPS 2020 · arXiv · 13 citations
LoRACLR: Contrastive Adaptation for Customization of Diffusion Models · CVPR 2025 · arXiv · 12 citations
The Non-Linear Representation Dilemma: Is Causal Abstraction Enough for Mechanistic Interpretability? · NEURIPS 2025 · arXiv · 10 citations
On the Expressiveness and Length Generalization of Selective State Space Models on Regular Languages · AAAI 2025 · 7 citations
Batch Normalization Provably Avoids Rank Collapse for Randomly Initialised Deep Networks · NEURIPS 2020 · arXiv · 4 citations
Mastering Spatial Graph Prediction of Road Networks · ICCV 2023 · arXiv · 3 citations
The Importance of Being Lazy: Scaling Limits of Continual Learning · ICML 2025 · arXiv · 2 citations
Navigating Scaling Laws: Compute Optimality in Adaptive Model Training · ICML 2024 · arXiv · 2 citations
The Directionality of Optimization Trajectories in Neural Networks · ICLR 2025 · 2 citations
Scalable Non-Equivariant 3D Molecule Generation via Rotational Alignment · ICML 2025 · arXiv · 1 citation
UIP2P: Unsupervised Instruction-based Image Editing via Edit Reversibility Constraint · ICCV 2025 · arXiv · 1 citation