Online Nonsubmodular Optimization with Delayed Feedback in the Bandit Setting

1 citation · Ranked #1733 of 3,028 papers in AAAI 2025 · 3 top authors · 5 data points

Abstract

We investigate online nonsubmodular optimization with delayed feedback in the bandit setting, where the loss function is $\alpha$-weakly DR-submodular and $\beta$-weakly DR-supermodular. Previous work has established an $(\alpha,\beta)$-regret bound of $\mathcal{O}(nd^{1/3}T^{2/3})$, where $n$ is the dimensionality and $d$ is the maximum delay. However, this bound depends on the maximum delay and is thus sensitive to irregular delays. Moreover, it couples the effects of delays and bandit feedback, since it is the product of a delay term and the $\mathcal{O}(nT^{2/3})$ regret bound for the bandit setting without delayed feedback. In this paper, we develop two algorithms that address these limitations. First, we propose a novel method, DBGD-NF, which employs the one-point gradient estimator and, in each round, uses all the estimated gradients that have become available to update the decision. It achieves a better $\mathcal{O}(n\bar{d}^{1/3}T^{2/3})$ regret bound, which depends on the average delay $\bar{d} = \frac{1}{T}\sum_{t=1}^T d_t \leq d$. Second, we extend DBGD-NF with a blocking update mechanism that decouples the joint effect of delays and bandit feedback, yielding an $\mathcal{O}(n(T^{2/3} + \sqrt{dT}))$ regret bound. When $d = \mathcal{O}(T^{1/3})$, this matches the $\mathcal{O}(nT^{2/3})$ bound in the bandit setting without delayed feedback. Compared to our first $\mathcal{O}(n\bar{d}^{1/3}T^{2/3})$ bound, it is more advantageous when the maximum delay satisfies $d = o(\bar{d}^{2/3}T^{1/3})$. Finally, we conduct experiments on structured sparse learning to demonstrate the superiority of our methods.
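To make the two ingredients described above concrete, the following is a minimal sketch, assuming only what the abstract states: a one-point gradient estimator built from the single observed loss value, and an update that applies every estimated gradient whose feedback arrives in the current round. All other details (the step size `eta`, perturbation radius `delta`, the Euclidean-ball feasible set, and the function names `loss_fns` and `delays`) are hypothetical placeholders, not the authors' actual DBGD-NF specification.

```python
import numpy as np

def delayed_bandit_gd_sketch(loss_fns, delays, T, n, eta=0.01, delta=0.1, radius=1.0):
    """Illustrative sketch (not the authors' implementation) of a delayed
    bandit gradient-descent loop in the spirit of DBGD-NF:
      - one-point gradient estimate: g_t = (n / delta) * f_t(x_t + delta * u_t) * u_t
      - each round, apply ALL gradient estimates whose feedback has just arrived.
    """
    x = np.zeros(n)                       # current decision
    pending = {}                          # arrival round -> list of gradient estimates
    for t in range(T):
        u = np.random.randn(n)
        u /= np.linalg.norm(u)            # random direction on the unit sphere
        query = x + delta * u             # perturbed query point
        value = loss_fns[t](query)        # bandit feedback: only the loss value
        g = (n / delta) * value * u       # one-point gradient estimate
        arrival = t + delays[t]           # feedback for round t arrives after delay d_t
        pending.setdefault(arrival, []).append(g)
        # use every estimated gradient that becomes available in this round
        for g_s in pending.pop(t, []):
            x = x - eta * g_s
        # projection onto a Euclidean ball (placeholder for the feasible set)
        norm = np.linalg.norm(x)
        if norm > radius:
            x = x * (radius / norm)
    return x
```

The blocked variant mentioned in the abstract would, under the same assumptions, group rounds into blocks and update the decision only at block boundaries, which is what decouples the delay term from the bandit term in the $\mathcal{O}(n(T^{2/3} + \sqrt{dT}))$ bound.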

Citation History: 1 citation recorded at each data point from Jan 27, 2026 through Feb 13, 2026.