Dreamer v2: Mastering Atari with Discrete World Models (Machine Learning Research Paper Explained)

#dreamer #deeprl #reinforcementlearning Model-based reinforcement learning has been lagging behind model-free RL on Atari, especially among single-GPU algorithms. This collaboration between Google AI, DeepMind, and the University of Toronto (UofT) pushes world models to the next level. The main contribution is a learned latent state consisting of a deterministic part and a stochastic part, where the stochastic part is a set of 32 categorical variables, each with 32 possible classes. The world model is free to decide how it uses these variables to represent the input, but it is tasked with predicting future observations and rewards. This procedure gives rise to an informative latent representation, and in a second step, reinforcement learning (A2C actor-critic) can be done purely, and very efficiently, on the basis of the world model's latent states. No observations needed! The paper combines this with straight-through estimators, KL balancing, and many other tricks to achieve state-of-the-art results.
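To make the latent structure concrete, here is a minimal NumPy sketch of sampling the 32-variable, 32-class categorical latent described above. The variable names and shapes are illustrative assumptions, not the paper's code; the straight-through estimator is only indicated in a comment, since in practice an autodiff framework (e.g. TensorFlow or PyTorch) handles the gradient pass-through.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_VARS = 32      # number of categorical latent variables
NUM_CLASSES = 32   # possible classes per variable

def sample_latent(logits: np.ndarray) -> np.ndarray:
    """Draw one one-hot sample per categorical variable.

    logits: shape (NUM_VARS, NUM_CLASSES), unnormalized scores.
    Returns a (NUM_VARS, NUM_CLASSES) array of one-hot rows.
    """
    # Softmax over the class dimension.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    # One categorical draw per variable.
    idx = np.array([rng.choice(NUM_CLASSES, p=p) for p in probs])
    one_hot = np.eye(NUM_CLASSES)[idx]
    # Straight-through estimator (conceptual): in an autodiff framework one
    # would return one_hot + probs - stop_gradient(probs), so the forward
    # pass stays discrete while gradients flow through the probabilities.
    return one_hot

logits = rng.normal(size=(NUM_VARS, NUM_CLASSES))
z = sample_latent(logits)
print(z.shape)              # (32, 32)
print(z.reshape(-1).sum())  # 32.0 — exactly one active class per variable
```

Flattening `z` gives a sparse 1024-dimensional vector with exactly 32 active entries, which is the stochastic part of the state the downstream policy consumes.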