"sample complexity" Papers

52 papers found • Page 1 of 2

Agnostic Active Learning Is Always Better Than Passive Learning

Steve Hanneke

NEURIPS 2025 • oral

Breaking Neural Network Scaling Laws with Modularity

Akhilan Boopathy, Sunshine Jiang, William Yue et al.

ICLR 2025 • arXiv:2409.05780
6 citations

Breaking the Curse of Multiagency in Robust Multi-Agent Reinforcement Learning

Laixi Shi, Jingchu Gai, Eric Mazumdar et al.

ICML 2025 • oral • arXiv:2409.20067
4 citations

Complete-Tree Space Favors Data-Efficient Link Prediction

Chi Gao, Lukai Li, Yancheng Zhou et al.

ICML 2025

Deployment Efficient Reward-Free Exploration with Linear Function Approximation

Zihan Zhang, Yuxin Chen, Jason Lee et al.

NEURIPS 2025

Exact Recovery of Sparse Binary Vectors from Generalized Linear Measurements

Arya Mazumdar, Neha Sangwan

ICML 2025 • arXiv:2502.16008
2 citations

Finite-Sample Analysis of Policy Evaluation for Robust Average Reward Reinforcement Learning

Yang Xu, Washim Mondal, Vaneet Aggarwal

NEURIPS 2025 • arXiv:2502.16816
8 citations

Finite-Time Analysis of Stochastic Nonconvex Nonsmooth Optimization on the Riemannian Manifolds

Emre Sahinoglu, Youbang Sun, Shahin Shahrampour

NEURIPS 2025 • arXiv:2510.21468

Formal Models of Active Learning from Contrastive Examples

Farnam Mansouri, Hans Simon, Adish Singla et al.

NEURIPS 2025 • arXiv:2506.15893
1 citation

Generalization Analysis for Deep Contrastive Representation Learning

Nong Minh Hieu, Antoine Ledent, Yunwen Lei et al.

AAAI 2025 • arXiv:2412.12014
4 citations

Generalization Bounds for Canonicalization: A Comparative Study with Group Averaging

Behrooz Tahmasebi, Stefanie Jegelka

ICLR 2025
3 citations

Geometry Meets Incentives: Sample-Efficient Incentivized Exploration with Linear Contexts

Ben Schiffer, Mark Sellke

NEURIPS 2025 • spotlight • arXiv:2506.01685

Learning Hierarchical Polynomials of Multiple Nonlinear Features

Hengyu Fu, Zihao Wang, Eshaan Nichani et al.

ICLR 2025 • arXiv:2411.17201
4 citations

Nearly-Linear Time Private Hypothesis Selection with the Optimal Approximation Factor

Maryam Aliakbarpour, Zhan Shi, Ria Stevens et al.

NEURIPS 2025 • arXiv:2506.01162

Non-Convex Tensor Recovery from Tube-Wise Sensing

Tongle Wu, Ying Sun

NEURIPS 2025

On the Convergence of Single-Timescale Actor-Critic

Navdeep Kumar, Priyank Agrawal, Giorgia Ramponi et al.

NEURIPS 2025 • arXiv:2410.08868
2 citations

On the Sample Complexity of Differentially Private Policy Optimization

Yi He, Xingyu Zhou

NEURIPS 2025 • arXiv:2510.21060

Position: Algebra Unveils Deep Learning - An Invitation to Neuroalgebraic Geometry

Giovanni Luca Marchetti, Vahid Shahverdi, Stefano Mereta et al.

ICML 2025 • spotlight
9 citations

Product Distribution Learning with Imperfect Advice

Arnab Bhattacharyya, XianJun, Davin Choo, Philips George John et al.

NEURIPS 2025 • spotlight • arXiv:2511.10366

Replicable Distribution Testing

Ilias Diakonikolas, Jingyi Gao, Daniel Kane et al.

NEURIPS 2025 • spotlight • arXiv:2507.02814

Revisiting Agnostic Boosting

Arthur da Cunha, Mikael Møller Høgsgaard, Andrea Paudice et al.

NEURIPS 2025 • arXiv:2503.09384
1 citation

Sample-Adaptivity Tradeoff in On-Demand Sampling

Nika Haghtalab, Omar Montasser, Mingda Qiao

NEURIPS 2025 • spotlight • arXiv:2511.15507

Simple and Optimal Sublinear Algorithms for Mean Estimation

Beatrice Bertolotti, Matteo Russo, Chris Schwiegelshohn et al.

NEURIPS 2025 • arXiv:2406.05254

Stabilizing LTI Systems under Partial Observability: Sample Complexity and Fundamental Limits

Ziyi Zhang, Yorie Nakahira, Guannan Qu

NEURIPS 2025
1 citation

Streaming Federated Learning with Markovian Data

Khiem Huynh, Malcolm Egan, Giovanni Neglia et al.

NEURIPS 2025 • arXiv:2503.18807

Technical Debt in In-Context Learning: Diminishing Efficiency in Long Context

Taejong Joo, Diego Klabjan

NEURIPS 2025 • arXiv:2502.04580

Tight Bounds for Answering Adaptively Chosen Concentrated Queries

Emma Rapoport, Edith Cohen, Uri Stemmer

NEURIPS 2025 • arXiv:2507.13700

Accelerated Policy Gradient for s-rectangular Robust MDPs with Large State Spaces

Ziyi Chen, Heng Huang

ICML 2024

An Improved Finite-time Analysis of Temporal Difference Learning with Deep Neural Networks

Zhifa Ke, Zaiwen Wen, Junyu Zhang

ICML 2024 • oral • arXiv:2405.04017
1 citation

An Online Optimization Perspective on First-Order and Zero-Order Decentralized Nonsmooth Nonconvex Stochastic Optimization

Emre Sahinoglu, Shahin Shahrampour

ICML 2024 • arXiv:2406.01484
9 citations

A Primal-Dual Algorithm for Offline Constrained Reinforcement Learning with Linear MDPs

Kihyuk Hong, Ambuj Tewari

ICML 2024 • arXiv:2402.04493
4 citations

A Theory of Fault-Tolerant Learning

Changlong Wu, Yifan Wang, Ananth Grama

ICML 2024 • spotlight

Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays

Qingyuan Wu, Simon Zhan, Yixuan Wang et al.

ICML 2024 • arXiv:2402.03141
4 citations

Eliciting Kemeny Rankings

Anne-Marie George, Christos Dimitrakakis

AAAI 2024 • arXiv:2312.11663
1 citation

Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits

Jiabin Lin, Shana Moothedath, Namrata Vaswani

ICML 2024 • arXiv:2410.02068
8 citations

Faster Adaptive Decentralized Learning Algorithms

Feihu Huang, Jianyu Zhao

ICML 2024 • spotlight • arXiv:2408.09775
3 citations

Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning

Tianchen Zhou, Hairi, Haibo Yang et al.

ICML 2024 • arXiv:2405.03082
3 citations

From Self-Attention to Markov Models: Unveiling the Dynamics of Generative Transformers

Muhammed Emrullah Ildiz, Yixiao Huang, Yingcong Li et al.

ICML 2024 • arXiv:2402.13512
36 citations

Hierarchical Integral Probability Metrics: A distance on random probability measures with low sample complexity

Marta Catalano, Hugo Lavenant

ICML 2024 • arXiv:2402.00423
5 citations

How Uniform Random Weights Induce Non-uniform Bias: Typical Interpolating Neural Networks Generalize with Narrow Teachers

Gon Buzaglo, Itamar Harel, Mor Shpigel Nacson et al.

ICML 2024 • spotlight • arXiv:2402.06323
10 citations

Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games

Songtao Feng, Ming Yin, Yu-Xiang Wang et al.

ICML 2024 • arXiv:2308.08858
2 citations

Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective

Lei Zhao, Mengdi Wang, Yu Bai

ICML 2024 • arXiv:2312.00054
3 citations

Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL

Jiawei Huang, Niao He, Andreas Krause

ICML 2024 • arXiv:2402.05724
8 citations

Multi-group Learning for Hierarchical Groups

Samuel Deng, Daniel Hsu

ICML 2024 • arXiv:2402.00258
5 citations

Private Gradient Descent for Linear Regression: Tighter Error Bounds and Instance-Specific Uncertainty Estimation

Gavin Brown, Krishnamurthy Dvijotham, Georgina Evans et al.

ICML 2024 • arXiv:2402.13531
11 citations

Replicable Learning of Large-Margin Halfspaces

Alkis Kalavasis, Amin Karbasi, Kasper Green Larsen et al.

ICML 2024 • spotlight • arXiv:2402.13857
12 citations

Reward-Free Kernel-Based Reinforcement Learning

Sattar Vakili, Farhang Nabiei, Da-shan Shiu et al.

ICML 2024

Sample Efficient Reinforcement Learning with Partial Dynamics Knowledge

Meshal Alharbi, Mardavij Roozbehani, Munther Dahleh

AAAI 2024 • arXiv:2312.12558
4 citations

Sliding Down the Stairs: How Correlated Latent Variables Accelerate Learning with Neural Networks

Lorenzo Bardone, Sebastian Goldt

ICML 2024 • arXiv:2404.08602
11 citations

Switching the Loss Reduces the Cost in Batch Reinforcement Learning

Alex Ayoub, Kaiwen Wang, Vincent Liu et al.

ICML 2024