Thompson Sampling for Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit

6 citations · #1138 of 2289 papers in AAAI 2024 · 2 top authors · 4 data points

Abstract

We study the real-valued combinatorial pure exploration of the multi-armed bandit (R-CPE-MAB) problem. In R-CPE-MAB, a player is given $d$ stochastic arms, and the reward of each arm $s\in\{1, \ldots, d\}$ follows an unknown distribution with mean $\mu_s$. In each time step, the player pulls a single arm and observes its reward. The player's goal is to identify the optimal action $\boldsymbol{\pi}^{*} = \argmax_{\boldsymbol{\pi} \in \mathcal{A}} \boldsymbol{\mu}^{\top}\boldsymbol{\pi}$ from a finite real-valued action set $\mathcal{A}\subset \mathbb{R}^{d}$ with as few arm pulls as possible. Previous methods for R-CPE-MAB assume that the size of the action set $\mathcal{A}$ is polynomial in $d$. We introduce an algorithm named the Generalized Thompson Sampling Explore (GenTS-Explore) algorithm, which is the first algorithm that works even when the size of the action set is exponentially large in $d$. We also introduce a novel problem-dependent sample complexity lower bound for the R-CPE-MAB problem, and show that the GenTS-Explore algorithm achieves the optimal sample complexity up to a problem-dependent constant factor.
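To make the setting concrete, below is a minimal Python sketch of a Thompson-sampling-style pure-exploration loop for this problem. It is not the paper's GenTS-Explore algorithm: the names (`ts_explore_sketch`, `pull`, `agree_rounds`) are hypothetical, rewards are assumed Gaussian with known noise scale, the action set is enumerated explicitly (whereas handling an action set exponentially large in $d$ would require an argmax oracle instead), and the arm-selection and stopping heuristics are simplifications for illustration.

```python
import numpy as np

def ts_explore_sketch(pull, action_set, sigma=1.0, budget=10_000, agree_rounds=50):
    """Illustrative Thompson-sampling pure-exploration loop (a simplified
    sketch under Gaussian-reward assumptions, NOT the paper's GenTS-Explore).

    pull(s)    -- samples one reward from arm s in {0, ..., d-1}
    action_set -- array of shape (|A|, d); rows are real-valued actions pi
    """
    d = action_set.shape[1]
    counts = np.zeros(d)
    sums = np.zeros(d)

    # Initialize: pull each arm once so all empirical means are defined.
    for s in range(d):
        sums[s] += pull(s)
        counts[s] += 1

    streak = 0
    for _ in range(budget):
        mu_hat = sums / counts
        # Empirical best action: argmax_pi mu_hat^T pi over the action set.
        best_emp = int(np.argmax(action_set @ mu_hat))

        # Posterior sample of the means (Gaussian posterior, flat prior).
        mu_tilde = np.random.normal(mu_hat, sigma / np.sqrt(counts))
        best_ts = int(np.argmax(action_set @ mu_tilde))

        if best_ts == best_emp:
            # Sampled model agrees with the empirical model; grow confidence
            # and stop once the agreement streak is long enough (heuristic).
            streak += 1
            if streak >= agree_rounds:
                return action_set[best_emp]
            s = int(np.argmin(counts))  # refine the least-explored arm
        else:
            streak = 0
            # Pull the arm where the two candidate actions differ the most,
            # scaled by current uncertainty (a heuristic for this sketch).
            gap = np.abs(action_set[best_ts] - action_set[best_emp])
            s = int(np.argmax(gap / np.sqrt(counts)))

        sums[s] += pull(s)
        counts[s] += 1

    return action_set[int(np.argmax(action_set @ (sums / counts)))]
```

The `action_set @ mu_tilde` maximization is the step that becomes intractable to enumerate when $|\mathcal{A}|$ is exponential in $d$; the paper's contribution is an algorithm whose sampling-based exploration works in that regime, with a matching problem-dependent lower bound.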

Citation History

Jan 28, 2026: 0
Feb 13, 2026: 6