Online Learning with Cumulative Oversampling: Application to Budgeted Influence Maximization
| Main Authors | |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 24.04.2020 |
| DOI | 10.48550/arxiv.2004.11963 |
| Summary | We propose a cumulative oversampling (CO) method for online learning. Our key idea is to sample parameter estimations from the updated belief space once in each round (similar to Thompson Sampling), and utilize the cumulative samples up to the current round to construct optimistic parameter estimations that asymptotically concentrate around the true parameters as tighter upper confidence bounds compared to the ones constructed with standard UCB methods. We apply CO to a novel budgeted variant of the Influence Maximization (IM) semi-bandits with linear generalization of edge weights, whose offline problem is NP-hard. Combining CO with the oracle we design for the offline problem, our online learning algorithm simultaneously tackles budget allocation, parameter learning, and reward maximization. We show that for IM semi-bandits, our CO-based algorithm achieves a scaled regret comparable to that of the UCB-based algorithms in theory, and performs on par with Thompson Sampling in numerical experiments. |
|---|---|
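The record above is abstract-only, so the algorithmic details are not available here. The following is a minimal, hypothetical sketch of the cumulative oversampling idea as the summary describes it: draw one posterior sample per round (as in Thompson Sampling), retain every sample drawn so far, and act on an optimistic estimate built from that cumulative set, taken here as a coordinate-wise maximum. The Bernoulli-arm setting, the Beta posteriors, and the max-based construction are illustrative assumptions; the paper's actual method targets budgeted IM semi-bandits with linear generalization of edge weights and may construct its optimistic estimates differently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting (not the paper's budgeted IM semi-bandit):
# K independent Bernoulli arms with unknown means and Beta posteriors.
K = 5
true_means = rng.uniform(0.1, 0.9, size=K)
alpha = np.ones(K)   # Beta posterior parameters (1 + observed successes)
beta = np.ones(K)    # Beta posterior parameters (1 + observed failures)

cumulative_samples = []  # one posterior sample per round, retained across rounds

for t in range(2000):
    # Thompson-style step: draw a single sample from the current posterior.
    theta_t = rng.beta(alpha, beta)
    cumulative_samples.append(theta_t)

    # Cumulative oversampling (one plausible reading of the abstract):
    # build an optimistic estimate from all samples drawn so far, here the
    # coordinate-wise maximum over the cumulative sample set.
    optimistic_theta = np.max(np.asarray(cumulative_samples), axis=0)

    # Act greedily with respect to the optimistic estimate.
    arm = int(np.argmax(optimistic_theta))

    # Observe a Bernoulli reward and update the played arm's posterior.
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

print("true best arm:", int(np.argmax(true_means)))
print("posterior mean estimates:", np.round(alpha / (alpha + beta), 3))
```

In this reading, the optimistic estimate plays the role that upper confidence bounds play in UCB-style methods while being driven entirely by posterior samples; whether this exact construction matches the paper's should be checked against the full text.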