Grandmaster level in StarCraft II using multi-agent reinforcement learning

Nature volume 575, pages 350–354 (2019)

Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions1,2,3, the strongest agents have simplified important aspects of the game, utilized superhuman capabilities, or employed hand-crafted sub-systems4. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general-purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks5,6. We evaluated our agent, AlphaStar, in the full game of StarCraft II through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players.

References

- A Bayesian model for plan recognition in RTS games applied to StarCraft.
- Building a player strategy model by analyzing replays of real-time strategy games. In Artificial Intelligence and Interactive Digital Entertainment Conf.
- Improving Monte Carlo tree search policies in StarCraft via probabilistic models learned from replay data.
- Churchill, D. SparCraft: open source StarCraft combat simulation. In Computers and Games 280–291 (Springer, 2002).
- Case-based reasoning for build order in real-time strategy games. In Artificial Intelligence and Interactive Digital Entertainment 106–111 (2009).
- Episodic exploration for deep deterministic policies: an application to StarCraft micromanagement tasks. In Autonomous Agents and MultiAgent Systems 2186–2188 (2019).
- Buro, M. Real-time strategy games: a new AI research challenge.
- Human-level performance in 3D multiplayer games with population-based reinforcement learning.
- Curiosity-driven exploration by self-supervised prediction. In Computer Vision and Pattern Recognition Workshops 16–17 (IEEE, 2017).
- Human-level control through deep reinforcement learning.
- Mastering the game of Go with deep neural networks and tree search.
- Elo, A. E. The Rating of Chessplayers, Past and Present (Arco, 1978).
- Campbell, M., Hoane, A. J. & Hsu, F.-h. Deep Blue. Artificial Intelligence (2002).
- In-datacenter performance analysis of a tensor processing unit.
- Fictitious self-play in extensive-form games.
- Iterative solution of games by fictitious play.
- Open-ended learning in symmetric zero-sum games.
- A general reinforcement learning algorithm that masters chess, shogi and Go through self-play.
- Learning to predict by the method of temporal differences.
- Sample efficient actor-critic with experience replay.
- IMPALA: scalable distributed deep-RL with importance weighted actor-learner architectures.
- Asynchronous methods for deep reinforcement learning.
- Discrete sequential prediction of continuous actions for deep RL.
- Mikolov, T., Karafiat, M., Burget, L. & Cernocky, J. Recurrent neural network based language model.
- StarCraft II: a new challenge for reinforcement learning.
- Sutton, R. S. & Barto, A. G. Reinforcement Learning: An Introduction (MIT Press, 1998).
- Churchill, D. & Lin, Z. An analysis of model-based heuristic search techniques for StarCraft combat scenarios. In Artificial Intelligence and Interactive Digital Entertainment Conf.
- Student StarCraft AI Tournament and Ladder.
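The "diverse league of continually adapting strategies and counter-strategies" builds on the fictitious-play line of work cited here (Brown's iterative solution of games, and fictitious self-play in extensive-form games). As a minimal illustration of that underlying idea only — not the paper's actual league-training algorithm — the sketch below runs classical fictitious play on rock-paper-scissors: each player repeatedly best-responds to the opponent's empirical mixture of past actions, and the time-averaged strategies approach the game's Nash equilibrium.

```python
# Classical fictitious play on rock-paper-scissors: a toy illustration of the
# self-play ideas cited above, NOT the league algorithm from the paper.

# Row player's payoff for (rock, paper, scissors); the game is zero-sum,
# so the column player receives the negation of each entry.
PAYOFF = [
    [0, -1,  1],   # rock
    [1,  0, -1],   # paper
    [-1, 1,  0],   # scissors
]

def fictitious_play(iters=20000):
    """Each player best-responds to the opponent's empirical play so far."""
    row_counts = [1, 0, 0]  # arbitrary initial pure strategies
    col_counts = [1, 0, 0]
    for _ in range(iters):
        # Row maximizes expected payoff against column's empirical mixture.
        row_vals = [sum(PAYOFF[i][j] * col_counts[j] for j in range(3))
                    for i in range(3)]
        # Column minimizes row's payoff against row's empirical mixture.
        col_vals = [sum(row_counts[i] * PAYOFF[i][j] for i in range(3))
                    for j in range(3)]
        row_counts[row_vals.index(max(row_vals))] += 1
        col_counts[col_vals.index(min(col_vals))] += 1
    row_total, col_total = sum(row_counts), sum(col_counts)
    return ([c / row_total for c in row_counts],
            [c / col_total for c in col_counts])

row_avg, col_avg = fictitious_play()
# The time-averaged strategies approach the uniform Nash equilibrium
# (1/3, 1/3, 1/3), even though each round's play is a pure best response.
```

The league training in the paper enriches this basic loop in many ways (deep networks as strategies, reinforcement learning instead of exact best responses, and exploiter agents that target specific league members), but the averaging-over-opponents principle is the same.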