Fresh from conquering the world of Go with its AlphaGo and AlphaGo Zero artificial intelligences, Google's DeepMind division has scored another win over humanity - this time in real-time strategy (RTS) title StarCraft II.
A long-standing staple of competitive gaming, Blizzard's StarCraft and its sequel carried the company's Warcraft fantasy RTS franchise to the stars with a sci-fi theme that took the world by storm. Even nine years after its release, StarCraft II still draws crowds to its tournaments - but a recent bout was different, pitting human players against an AI developed by Google's DeepMind arm.
Dubbed AlphaStar, DeepMind's latest game-playing AI saw an early outing in December last year, when it beat StarCraft professionals Dario 'TLO' Wünsch and Grzegorz 'MaNa' Komincz 5-0 apiece - though a final live exhibition match saw AlphaStar bested by Komincz, leaving the overall score at 10-1 in the AI's favour.
Although the matches themselves took place in December, DeepMind has only now released full replays and an analysis of how the AlphaStar AI beat its opponents. In a detailed write-up published this week, DeepMind explains that AlphaStar is based on a deep neural network (DNN) fed raw input data directly from the game interface, combined with a novel multi-agent learning algorithm. The network was initially trained by supervised learning on human game replays released by Blizzard - enough on its own, the company claims, to beat the game's in-built AI on 'Elite' difficulty in around 95 percent of matches.
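To make that first stage concrete, the sketch below shows roughly what supervised 'imitation' training on replay data looks like in practice: a small policy network learns to predict the action a human player took from the game state at that moment. It is a minimal, illustrative Python/PyTorch example - the network, dataset and dimensions are placeholder stand-ins, not DeepMind's actual architecture.

```python
# Minimal sketch of the supervised "imitation" stage: a policy network is
# trained to predict the human player's recorded action from raw game
# observations extracted from replays. All names and sizes are illustrative
# placeholders, not DeepMind's actual code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

OBS_DIM, NUM_ACTIONS = 512, 1000           # placeholder sizes

# Toy stand-in for (observation, human action) pairs taken from replay files.
observations = torch.randn(10_000, OBS_DIM)
actions = torch.randint(0, NUM_ACTIONS, (10_000,))
replays = DataLoader(TensorDataset(observations, actions), batch_size=256, shuffle=True)

policy = nn.Sequential(                    # stand-in for the real deep network
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, NUM_ACTIONS),
)
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                     # behavioural cloning loop
    for obs, act in replays:
        logits = policy(obs)
        loss = loss_fn(logits, act)        # match the human's recorded action
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
```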
The agents produced by that initial supervised training were then used to seed a multi-agent reinforcement learning process, pitting the AI against itself in an internal league - a form, DeepMind explains, of population-based reinforcement learning. The process was accelerated using Google's in-house Tensor Processing Units (TPUs), custom chips built for deep-learning workloads: 'The AlphaStar league was run for 14 days, using 16 TPUs for each agent,' DeepMind explains. 'During training, each agent experienced up to 200 years of real-time StarCraft play.'
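The league idea behind that reinforcement learning stage can be sketched in a few lines of Python: agents repeatedly play matches against opponents drawn from the whole population, including frozen copies of earlier agents, and new competitors are periodically branched from strong ones. The toy below stubs out the actual games with a simple win-probability function and should be read as an illustration of population-based self-play, not AlphaStar's real training code.

```python
# Toy sketch of a self-play "league": agents play opponents sampled from the
# whole population, the learner's skill is nudged by the result, and new
# competitors are periodically branched from strong agents. The Agent class
# and play_match() stub are illustrative stand-ins only.
import random

class Agent:
    def __init__(self, name, skill=0.0):
        self.name = name
        self.skill = skill                 # proxy for the network's parameters

def play_match(learner, opponent):
    """Stub for a full StarCraft II game: higher skill wins more often."""
    p_win = 1.0 / (1.0 + 10 ** ((opponent.skill - learner.skill) / 400.0))
    return random.random() < p_win         # True if the learner wins

league = [Agent("agent_0")]
for step in range(1000):
    learner = random.choice(league)
    opponent = random.choice(league)       # includes earlier, frozen agents
    won = play_match(learner, opponent)
    learner.skill += 5.0 if won else -1.0  # stand-in for a policy update
    if step % 100 == 99:                   # periodically branch a new competitor
        parent = max(league, key=lambda a: a.skill)
        league.append(Agent(f"agent_{len(league)}", skill=parent.skill))

best = max(league, key=lambda a: a.skill)
print(f"league size: {len(league)}, strongest agent: {best.name}")
```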
'I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn't have expected,' says Komincz of his opponent. 'I've realised how much my gameplay relies on forcing mistakes and being able to exploit human reactions, so this has put the game in a whole new light for me. We're all excited to see what comes next.'
DeepMind's full write-up, along with download links for the replays, is available on the company's official website. A more detailed paper on the project is to be published in a peer-reviewed journal in the near future, the company has confirmed.