"sample complexity" Papers
52 papers found • Page 1 of 2
Agnostic Active Learning Is Always Better Than Passive Learning
Steve Hanneke
Breaking Neural Network Scaling Laws with Modularity
Akhilan Boopathy, Sunshine Jiang, William Yue et al.
Breaking the Curse of Multiagency in Robust Multi-Agent Reinforcement Learning
Laixi Shi, Jingchu Gai, Eric Mazumdar et al.
Complete-Tree Space Favors Data-Efficient Link Prediction
Chi Gao, Lukai Li, Yancheng Zhou et al.
Deployment Efficient Reward-Free Exploration with Linear Function Approximation
Zihan Zhang, Yuxin Chen, Jason Lee et al.
Exact Recovery of Sparse Binary Vectors from Generalized Linear Measurements
Arya Mazumdar, Neha Sangwan
Finite-Sample Analysis of Policy Evaluation for Robust Average Reward Reinforcement Learning
Yang Xu, Washim Mondal, Vaneet Aggarwal
Finite-Time Analysis of Stochastic Nonconvex Nonsmooth Optimization on the Riemannian Manifolds
Emre Sahinoglu, Youbang Sun, Shahin Shahrampour
Formal Models of Active Learning from Contrastive Examples
Farnam Mansouri, Hans Simon, Adish Singla et al.
Generalization Analysis for Deep Contrastive Representation Learning
Nong Minh Hieu, Antoine Ledent, Yunwen Lei et al.
Generalization Bounds for Canonicalization: A Comparative Study with Group Averaging
Behrooz Tahmasebi, Stefanie Jegelka
Geometry Meets Incentives: Sample-Efficient Incentivized Exploration with Linear Contexts
Ben Schiffer, Mark Sellke
Learning Hierarchical Polynomials of Multiple Nonlinear Features
Hengyu Fu, Zihao Wang, Eshaan Nichani et al.
Nearly-Linear Time Private Hypothesis Selection with the Optimal Approximation Factor
Maryam Aliakbarpour, Zhan Shi, Ria Stevens et al.
Non-Convex Tensor Recovery from Tube-Wise Sensing
Tongle Wu, Ying Sun
On the Convergence of Single-Timescale Actor-Critic
Navdeep Kumar, Priyank Agrawal, Giorgia Ramponi et al.
On the Sample Complexity of Differentially Private Policy Optimization
Yi He, Xingyu Zhou
Position: Algebra Unveils Deep Learning - An Invitation to Neuroalgebraic Geometry
Giovanni Luca Marchetti, Vahid Shahverdi, Stefano Mereta et al.
Product Distribution Learning with Imperfect Advice
Arnab Bhattacharyya, XianJun, Davin Choo, Philips George John et al.
Replicable Distribution Testing
Ilias Diakonikolas, Jingyi Gao, Daniel Kane et al.
Revisiting Agnostic Boosting
Arthur da Cunha, Mikael Møller Høgsgaard, Andrea Paudice et al.
Sample-Adaptivity Tradeoff in On-Demand Sampling
Nika Haghtalab, Omar Montasser, Mingda Qiao
Simple and Optimal Sublinear Algorithms for Mean Estimation
Beatrice Bertolotti, Matteo Russo, Chris Schwiegelshohn et al.
Stabilizing LTI Systems under Partial Observability: Sample Complexity and Fundamental Limits
Ziyi Zhang, Yorie Nakahira, Guannan Qu
Streaming Federated Learning with Markovian Data
Khiem Huynh, Malcolm Egan, Giovanni Neglia et al.
Technical Debt in In-Context Learning: Diminishing Efficiency in Long Context
Taejong Joo, Diego Klabjan
Tight Bounds for Answering Adaptively Chosen Concentrated Queries
Emma Rapoport, Edith Cohen, Uri Stemmer
Accelerated Policy Gradient for s-rectangular Robust MDPs with Large State Spaces
Ziyi Chen, Heng Huang
An Improved Finite-time Analysis of Temporal Difference Learning with Deep Neural Networks
Zhifa Ke, Zaiwen Wen, Junyu Zhang
An Online Optimization Perspective on First-Order and Zero-Order Decentralized Nonsmooth Nonconvex Stochastic Optimization
Emre Sahinoglu, Shahin Shahrampour
A Primal-Dual Algorithm for Offline Constrained Reinforcement Learning with Linear MDPs
Kihyuk Hong, Ambuj Tewari
A Theory of Fault-Tolerant Learning
Changlong Wu, Yifan Wang, Ananth Grama
Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays
Qingyuan Wu, Simon Zhan, Yixuan Wang et al.
Eliciting Kemeny Rankings
Anne-Marie George, Christos Dimitrakakis
Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits
Jiabin Lin, Shana Moothedath, Namrata Vaswani
Faster Adaptive Decentralized Learning Algorithms
Feihu Huang, Jianyu Zhao
Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning
Tianchen Zhou, Hairi, Haibo Yang et al.
From Self-Attention to Markov Models: Unveiling the Dynamics of Generative Transformers
Muhammed Emrullah Ildiz, Yixiao Huang, Yingcong Li et al.
Hierarchical Integral Probability Metrics: A distance on random probability measures with low sample complexity
Marta Catalano, Hugo Lavenant
How Uniform Random Weights Induce Non-uniform Bias: Typical Interpolating Neural Networks Generalize with Narrow Teachers
Gon Buzaglo, Itamar Harel, Mor Shpigel Nacson et al.
Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games
Songtao Feng, Ming Yin, Yu-Xiang Wang et al.
Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective
Lei Zhao, Mengdi Wang, Yu Bai
Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL
Jiawei Huang, Niao He, Andreas Krause
Multi-group Learning for Hierarchical Groups
Samuel Deng, Daniel Hsu
Private Gradient Descent for Linear Regression: Tighter Error Bounds and Instance-Specific Uncertainty Estimation
Gavin Brown, Krishnamurthy Dvijotham, Georgina Evans et al.
Replicable Learning of Large-Margin Halfspaces
Alkis Kalavasis, Amin Karbasi, Kasper Green Larsen et al.
Reward-Free Kernel-Based Reinforcement Learning
Sattar Vakili, Farhang Nabiei, Da-shan Shiu et al.
Sample Efficient Reinforcement Learning with Partial Dynamics Knowledge
Meshal Alharbi, Mardavij Roozbehani, Munther Dahleh
Sliding Down the Stairs: How Correlated Latent Variables Accelerate Learning with Neural Networks
Lorenzo Bardone, Sebastian Goldt
Switching the Loss Reduces the Cost in Batch Reinforcement Learning
Alex Ayoub, Kaiwen Wang, Vincent Liu et al.