Poster "sample complexity" Papers

37 papers found

Breaking Neural Network Scaling Laws with Modularity

Akhilan Boopathy, Sunshine Jiang, William Yue et al.

ICLR 2025 · arXiv:2409.05780
6 citations

Complete-Tree Space Favors Data-Efficient Link Prediction

Chi Gao, Lukai Li, Yancheng Zhou et al.

ICML 2025

Deployment Efficient Reward-Free Exploration with Linear Function Approximation

Zihan Zhang, Yuxin Chen, Jason Lee et al.

NeurIPS 2025

Exact Recovery of Sparse Binary Vectors from Generalized Linear Measurements

Arya Mazumdar, Neha Sangwan

ICML 2025 · arXiv:2502.16008
2 citations

Finite-Sample Analysis of Policy Evaluation for Robust Average Reward Reinforcement Learning

Yang Xu, Washim Mondal, Vaneet Aggarwal

NeurIPS 2025 · arXiv:2502.16816
8 citations

Finite-Time Analysis of Stochastic Nonconvex Nonsmooth Optimization on the Riemannian Manifolds

Emre Sahinoglu, Youbang Sun, Shahin Shahrampour

NeurIPS 2025 · arXiv:2510.21468

Formal Models of Active Learning from Contrastive Examples

Farnam Mansouri, Hans Simon, Adish Singla et al.

NeurIPS 2025 · arXiv:2506.15893
1 citation

Generalization Bounds for Canonicalization: A Comparative Study with Group Averaging

Behrooz Tahmasebi, Stefanie Jegelka

ICLR 2025
3 citations

Learning Hierarchical Polynomials of Multiple Nonlinear Features

Hengyu Fu, Zihao Wang, Eshaan Nichani et al.

ICLR 2025 · arXiv:2411.17201
4 citations

Nearly-Linear Time Private Hypothesis Selection with the Optimal Approximation Factor

Maryam Aliakbarpour, Zhan Shi, Ria Stevens et al.

NeurIPS 2025 · arXiv:2506.01162

Non-Convex Tensor Recovery from Tube-Wise Sensing

Tongle Wu, Ying Sun

NeurIPS 2025

On the Convergence of Single-Timescale Actor-Critic

Navdeep Kumar, Priyank Agrawal, Giorgia Ramponi et al.

NeurIPS 2025 · arXiv:2410.08868
2 citations

On the Sample Complexity of Differentially Private Policy Optimization

Yi He, Xingyu Zhou

NeurIPS 2025 · arXiv:2510.21060

Revisiting Agnostic Boosting

Arthur da Cunha, Mikael Møller Høgsgaard, Andrea Paudice et al.

NeurIPS 2025 · arXiv:2503.09384
1 citation

Simple and Optimal Sublinear Algorithms for Mean Estimation

Beatrice Bertolotti, Matteo Russo, Chris Schwiegelshohn et al.

NeurIPS 2025 · arXiv:2406.05254

Stabilizing LTI Systems under Partial Observability: Sample Complexity and Fundamental Limits

Ziyi Zhang, Yorie Nakahira, Guannan Qu

NeurIPS 2025
1 citation

Streaming Federated Learning with Markovian Data

Khiem Huynh, Malcolm Egan, Giovanni Neglia et al.

NeurIPS 2025 · arXiv:2503.18807

Technical Debt in In-Context Learning: Diminishing Efficiency in Long Context

Taejong Joo, Diego Klabjan

NeurIPS 2025 · arXiv:2502.04580

Tight Bounds for Answering Adaptively Chosen Concentrated Queries

Emma Rapoport, Edith Cohen, Uri Stemmer

NeurIPS 2025 · arXiv:2507.13700

Accelerated Policy Gradient for s-rectangular Robust MDPs with Large State Spaces

Ziyi Chen, Heng Huang

ICML 2024

An Online Optimization Perspective on First-Order and Zero-Order Decentralized Nonsmooth Nonconvex Stochastic Optimization

Emre Sahinoglu, Shahin Shahrampour

ICML 2024 · arXiv:2406.01484
9 citations

A Primal-Dual Algorithm for Offline Constrained Reinforcement Learning with Linear MDPs

Kihyuk Hong, Ambuj Tewari

ICML 2024 · arXiv:2402.04493
4 citations

Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays

Qingyuan Wu, Simon Zhan, Yixuan Wang et al.

ICML 2024 · arXiv:2402.03141
4 citations

Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits

Jiabin Lin, Shana Moothedath, Namrata Vaswani

ICML 2024 · arXiv:2410.02068
8 citations

Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning

Tianchen Zhou, Hairi, Haibo Yang et al.

ICML 2024 · arXiv:2405.03082
3 citations

From Self-Attention to Markov Models: Unveiling the Dynamics of Generative Transformers

Muhammed Emrullah Ildiz, Yixiao Huang, Yingcong Li et al.

ICML 2024 · arXiv:2402.13512
36 citations

Hierarchical Integral Probability Metrics: A distance on random probability measures with low sample complexity

Marta Catalano, Hugo Lavenant

ICML 2024 · arXiv:2402.00423
5 citations

Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games

Songtao Feng, Ming Yin, Yu-Xiang Wang et al.

ICML 2024 · arXiv:2308.08858
2 citations

Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective

Lei Zhao, Mengdi Wang, Yu Bai

ICML 2024 · arXiv:2312.00054
3 citations

Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL

Jiawei Huang, Niao He, Andreas Krause

ICML 2024 · arXiv:2402.05724
8 citations

Multi-group Learning for Hierarchical Groups

Samuel Deng, Daniel Hsu

ICML 2024 · arXiv:2402.00258
5 citations

Private Gradient Descent for Linear Regression: Tighter Error Bounds and Instance-Specific Uncertainty Estimation

Gavin Brown, Krishnamurthy Dvijotham, Georgina Evans et al.

ICML 2024 · arXiv:2402.13531
11 citations

Reward-Free Kernel-Based Reinforcement Learning

Sattar Vakili, Farhang Nabiei, Da-shan Shiu et al.

ICML 2024

Sliding Down the Stairs: How Correlated Latent Variables Accelerate Learning with Neural Networks

Lorenzo Bardone, Sebastian Goldt

ICML 2024 · arXiv:2404.08602
11 citations

Switching the Loss Reduces the Cost in Batch Reinforcement Learning

Alex Ayoub, Kaiwen Wang, Vincent Liu et al.

ICML 2024

Symmetric Single Index Learning

Aaron Zweig, Joan Bruna

ICLR 2024 · arXiv:2310.02117
4 citations

Two Heads are Actually Better than One: Towards Better Adversarial Robustness via Transduction and Rejection

Nils Palumbo, Yang Guo, Xi Wu et al.

ICML 2024 · arXiv:2305.17528