
Two Minute Papers: DeepMind’s AlphaStar: A Grandmaster Level StarCraft 2 AI


DeepMind's blog post: AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning

Open access paper in Nature: Grandmaster level in StarCraft II using multi-agent reinforcement learning

I think this work has important implications for the planning component of autonomous driving. It is a remarkable proof of concept of imitation learning and reinforcement learning. A version of AlphaStar trained using imitation learning alone ranked above 84% of human players. When reinforcement learning was added on top, AlphaStar ranked above 99.8% of human players. But an agent trained with reinforcement learning alone was outplayed by more than 99.5% of human players. This shows how essential it was for DeepMind to bootstrap reinforcement learning with imitation learning.
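To make that bootstrapping idea concrete, here's a minimal toy sketch (my own illustration, not DeepMind's actual setup): a softmax policy over a few actions is first fit to expert demonstrations by behavior cloning, then fine-tuned with REINFORCE against a reward signal. The action space, demo data, and reward function are all made up for the example.

```python
import math
import random

random.seed(0)

N_ACTIONS = 3
BEST = 2  # hypothetical optimal action; the "expert" demos mostly choose it

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Stage 1: imitation learning (behavior cloning).
# Gradient ascent on the log-likelihood of the expert's actions.
def behavior_clone(demos, lr=0.5, epochs=50):
    logits = [0.0] * N_ACTIONS
    for _ in range(epochs):
        for a in demos:
            p = softmax(logits)
            for i in range(N_ACTIONS):
                grad = (1.0 if i == a else 0.0) - p[i]
                logits[i] += lr * grad / len(demos)
    return logits

# Stage 2: reinforcement learning (REINFORCE).
# Fine-tune the cloned policy using reward feedback instead of demos.
def reinforce(logits, reward_fn, lr=0.2, steps=500):
    logits = list(logits)
    for _ in range(steps):
        p = softmax(logits)
        a = random.choices(range(N_ACTIONS), weights=p)[0]
        r = reward_fn(a)
        for i in range(N_ACTIONS):
            grad = ((1.0 if i == a else 0.0) - p[i]) * r
            logits[i] += lr * grad
    return logits

# Expert demonstrations: mostly the best action, with some noise.
demos = [BEST] * 18 + [0, 1]
cloned = behavior_clone(demos)
tuned = reinforce(cloned, lambda a: 1.0 if a == BEST else 0.0)
```

The point of the two stages: cloning alone gets the policy into a sensible region (here, strongly preferring the expert's action), and reinforcement learning then sharpens it against the actual reward. Starting RL from random logits instead of `cloned` works on this toy, but in a game as deep as StarCraft the random-init agent rarely stumbles onto rewarding behavior at all, which is the 99.5% result above.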

Unlike an autonomous vehicle, AlphaStar effectively has perfect perception, since it reads information about units and buildings directly from the game state. But it shows that if you abstract away the perception problem, an extremely high degree of competence can be achieved on a complex task with a long time horizon, one that involves both high-level strategic concepts and moment-to-moment tactical manoeuvres.

I feel optimistic about Tesla's ability to apply imitation learning because it has a large enough fleet of human-driven cars to achieve an AlphaStar-like scale of training data. The same is true for large-scale real-world reinforcement learning. But in order for Tesla to solve planning, it has to solve computer vision first. Lately, I feel like computer vision is the most daunting part of the autonomous driving problem. There isn't a proof of concept for computer vision that inspires as much confidence in me as AlphaStar does for planning.