High-Dimensional Prediction for Sequential Decision Making
A Google TechTalk, presented by Georgy Noarov, 2023-11-16
Google Algorithms Seminar - ABSTRACT: In predictions-to-decisions pipelines, statistical forecasts are useful insofar as they are a trustworthy guide to downstream rational decision making.
Toward this end, we study the problem of making online predictions of an adversarially chosen high-dimensional state that are "unbiased" subject to an arbitrary collection of conditioning events, with the goal of tailoring these events to downstream decision makers who will use the predictions to inform their actions. We give efficient general algorithms for solving this problem, along with a number of applications that stem from choosing an appropriate set of conditioning events, including:
(1) tractable algorithms with strong no-regret guarantees over large action spaces, (2) a high-dimensional best-in-class (omniprediction) result, (3) fairness guarantees of various flavors, and (4) a novel framework for uncertainty quantification in multiclass settings.
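The central guarantee here, unbiasedness conditional on a collection of events, can be made concrete with a small sketch (an illustrative check written by us, not the talk's algorithm): a sequence of high-dimensional predictions is unbiased with respect to an event if the average prediction error, taken over the rounds where that event fired, is close to zero.

```python
import numpy as np

def conditional_bias(preds, outcomes, event_mask):
    """Average prediction error over the rounds where a conditioning event fired.

    preds, outcomes: (T, d) arrays of predicted / realized state vectors.
    event_mask: boolean (T,) array marking the rounds belonging to the event.
    Unbiasedness asks that this vector be near zero for every event in the
    collection; the conditioning events are tailored to downstream decision makers.
    """
    if not event_mask.any():
        return np.zeros(preds.shape[1])
    diff = preds[event_mask] - outcomes[event_mask]
    return diff.mean(axis=0)

# Toy check: predictions matching the outcomes exactly have zero conditional bias.
rng = np.random.default_rng(0)
T, d = 8, 3
y = rng.random((T, d))
mask = np.arange(T) % 2 == 0  # a conditioning event firing on even rounds
print(conditional_bias(y, y, mask))  # all zeros
```

A uniformly shifted predictor (`y + 0.1`) would instead show a conditional bias of 0.1 per coordinate, which such a check exposes.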
For example, we can efficiently produce predictions targeted at any polynomial number of decision makers, offering each of them optimal swap regret if they simply best respond to our predictions. Generalizing this, in the online combinatorial optimization setting we obtain the first efficient algorithms that guarantee, for up to polynomially many decision makers, no regret on any polynomial number of subsequences that can depend on their actions as well as on any external context. As an illustration, for the online routing problem this easily implies --- for the first time --- efficiently obtainable no-swap-regret guarantees over all exponentially many paths that make up an agent's action space (and this can be obtained for multiple agents at once); by contrast, prior no-swap-regret algorithms such as Blum-Mansour would be intractable in this setting, as they need to enumerate the exponentially large action space. Moreover, our results imply novel regret guarantees for extensive-form games.
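To fix ideas, swap regret (the benchmark the predictions above let decision makers meet) compares the realized loss against the best fixed "swap" function pi that reroutes every play of action a to pi(a). A direct computation over a small explicit action space looks like the sketch below (our own illustration; the point of the talk is precisely that this benchmark becomes tractable even when the action space, e.g. all paths in a routing network, is exponentially large and cannot be enumerated like this):

```python
import numpy as np

def swap_regret(actions, losses):
    """Swap regret of a played sequence under per-round loss vectors.

    actions: length-T integer array of chosen action indices.
    losses: (T, K) array; losses[t, a] is action a's loss in round t.
    The best swap function is found action-by-action: for each action a,
    reroute all rounds where a was played to the single best replacement.
    """
    T, K = losses.shape
    realized = losses[np.arange(T), actions].sum()
    best_swapped = 0.0
    for a in range(K):
        rounds = actions == a
        if rounds.any():
            # Best fixed replacement for every round in which a was played.
            best_swapped += losses[rounds].sum(axis=0).min()
    return realized - best_swapped

# Toy example with K = 2 actions over T = 3 rounds.
losses = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
actions = np.array([0, 0, 1])  # always picks the worse action here
print(swap_regret(actions, losses))  # 3.0
```

Note that this enumeration is linear in K; for combinatorial action spaces K is exponential in the problem size, which is why algorithms in the Blum-Mansour style become intractable there.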
Turning to uncertainty quantification in ML, we show how our methods let us estimate (in the online adversarial setting) multiclass/multilabel probability vectors in a transparent and trustworthy fashion: in particular, downstream prediction-set algorithms (i.e., models that predict a set of labels at once rather than a single one) will be incentivized to simply use our estimated probabilities as if they were the true conditional class probabilities, and their predictions will be guaranteed to satisfy multigroup fairness and other "conditional coverage" guarantees. This gives a powerful new alternative to well-known set-valued prediction paradigms such as conformal and top-K prediction. Moreover, our predictions can be guaranteed to be "best-in-class" --- i.e., to beat any polynomial collection of other (e.g., NN-based) multiclass vector predictors, simultaneously as measured by all Lipschitz Bregman loss functions (including L2 loss, cross-entropy loss, etc.). This can be interpreted as a high-dimensional omniprediction result.
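The "best response" of a downstream prediction-set algorithm to trustworthy class-probability estimates is easy to picture: treat the estimates as if they were the true conditional probabilities and output the smallest label set whose estimated mass reaches the desired coverage. The sketch below is our own schematic of that behavior (function name and threshold rule are illustrative assumptions, not the talk's construction):

```python
import numpy as np

def prediction_set(prob_vector, target_coverage=0.9):
    """Smallest label set whose estimated probability mass reaches the target.

    If prob_vector were the true conditional class probabilities, this set
    would cover the realized label with probability >= target_coverage ---
    which is exactly why a decision maker who trusts the estimates will
    simply best respond to them.
    """
    order = np.argsort(prob_vector)[::-1]   # labels, most likely first
    mass = np.cumsum(prob_vector[order])    # running probability mass
    k = int(np.searchsorted(mass, target_coverage)) + 1
    return set(order[:k].tolist())

# With estimated probabilities (0.5, 0.3, 0.15, 0.05), 90% coverage
# requires the three most likely labels.
print(prediction_set(np.array([0.5, 0.3, 0.15, 0.05]), 0.9))  # {0, 1, 2}
```

Conformal and top-K prediction reach similar set-valued outputs by different routes; the guarantee discussed in the talk is that such a best response additionally inherits multigroup-fairness-style conditional coverage from the unbiasedness of the probability estimates.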
ABOUT THE SPEAKER: Georgy Noarov is a CS PhD student at the University of Pennsylvania, advised by Michael Kearns and Aaron Roth. He is broadly interested in theoretical CS and ML, with particular focus on fair/robust/trustworthy ML, online learning, algorithmic game theory, and uncertainty quantification. He obtained his B.A. in Mathematics from Princeton University, where his advisors were Mark Braverman and Matt Weinberg. He has received several academic awards, and his research has been supported by an Amazon Gift for Research in Trustworthy AI.