This code accompanies the paper 🔗 IMPROVING MONTE CARLO TREE SEARCH BY COMBINING RAVE AND QUALITY-BASED REWARDS ALGORITHMS.
Monte Carlo Tree Search (MCTS) is a method for finding optimal decisions in a given domain by taking random samples in the decision space and building a search tree according to the results. It has already had a profound impact on Artificial Intelligence (AI) approaches for domains that can be represented as trees of sequential decisions, particularly games and planning problems. In this project I used different simulation strategies to improve the agent's policy for exploring the environment.
from 🔗 A Survey of Monte Carlo Tree Search Methods
Before you go through the details, I recommend getting familiar with the framework by reading these Medium articles:
- A simple no math included introduction to reinforcement learning
- A simple introduction to Monte Carlo Tree Search
- Details of MCTS implementation on game of HEX
If you are familiar with the overall concept of MCTS and the UCT algorithm, you probably know that in practice it suffers from sparse rewards: it takes a long time to warm up the tree with the plain UCT algorithm. To address this, we first implemented the RAVE algorithm, which helps warm up the tree faster, and then implemented several simulation strategies such as Last Good Reply, PoolRAVE, Decisive Move, and UCB1-Tuned.
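To make the RAVE idea concrete, here is a minimal sketch (not the exact code from this repository) of how the selection step can blend a child's UCT value with its AMAF (all-moves-as-first) statistics; the exploration constant and the β schedule below are assumptions:

```python
import math

RAVE_EQUIV = 1000   # assumed equivalence parameter (k) for the beta schedule
EXPLORATION = 0.5   # assumed UCT exploration constant

def rave_value(q, n, q_rave, n_rave, parent_n):
    """Score one child move by blending its UCT value with its AMAF (RAVE) value."""
    if n == 0:
        return float("inf")                     # always try unvisited moves first
    uct = q / n + EXPLORATION * math.sqrt(2 * math.log(parent_n) / n)
    amaf = q_rave / n_rave if n_rave > 0 else 0.0
    # Gelly & Silver's schedule: beta decays toward 0 as the parent is visited more,
    # so the AMAF estimate dominates early and the UCT estimate dominates later.
    beta = math.sqrt(RAVE_EQUIV / (3 * parent_n + RAVE_EQUIV))
    return (1 - beta) * uct + beta * amaf

def select_move(children, parent_n):
    """children: dict mapping move -> (q, n, q_rave, n_rave) statistics."""
    return max(children, key=lambda m: rave_value(*children[m], parent_n))
```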
We then applied the quality-based rewards described in Quality-based Rewards for Monte-Carlo Tree Search Simulations, which show that rewards can be discounted using the length of the simulation and the maximum number of actions each player is allowed to take in the environment (in some games, play ends after a limited number of moves because no further moves are possible).
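The sketch below illustrates that idea with a simple linear discount (the paper itself uses a more elaborate bonus); the function name and the `min_reward` parameter are illustrative assumptions, not code from this repository:

```python
def discounted_reward(winner, player, sim_length, max_moves, min_reward=0.5):
    """Scale the terminal reward by how quickly the simulation ended.

    winner, player : ids of the simulated winner and of the player being updated
    sim_length     : number of moves played in this simulation
    max_moves      : maximum number of moves the environment allows
    min_reward     : reward kept even for the longest possible game (assumption)
    """
    # Linear discount: 1.0 for an immediate win, min_reward for a full-length game.
    quality = 1.0 - (1.0 - min_reward) * (sim_length / max_moves)
    return quality if winner == player else -quality
```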
After that we used HRAVE and GRAVE from the paper Comparison of rapid action value estimation variants for general game playing (2018) by Chiara F. Sironi and Mark H. M. Winands, which shows that global information about the game can be used to guide the simulations. We also tested leaf threading on UCT.
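Here is a minimal sketch of the GRAVE selection idea, assuming each node keeps its own AMAF table and a visit count; the `ref` threshold and the `Node` fields are assumptions, not the structures used in this repository. HRAVE can be seen as the special case where the root's (global) table is always used:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    N: int = 0                                       # visit count
    amaf: dict = field(default_factory=dict)         # move -> (q_rave, n_rave)
    parent: Optional["Node"] = None

def grave_amaf_table(node: Node, ref: int = 50) -> dict:
    """Return the AMAF table of the closest ancestor (or the node itself)
    whose visit count is at least `ref`.  GRAVE uses that table in place of
    the current node's own, less reliable, AMAF statistics during selection."""
    current = node
    while current.parent is not None and current.N < ref:
        current = current.parent
    return current.amaf
```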
All of the above algorithms are addressed below.
- MopyHex: Authored by Kenny Young here
- Implementing Generalized Rapid Action Value Estimation
- Implementing the HRAVE and GRAVE algorithms from Comparison of rapid action value estimation variants for general game playing (2018) by Chiara F. Sironi and Mark H. M. Winands
- Implementing quality-based rewards from Quality-based Rewards for Monte-Carlo Tree Search Simulations
- Implementing leaf threading on basic, no-frills UCT
A further optimized version of this project, accelerated with Cython, is available here.
- Masoud Masoumi Moghadam (Me 😎)
- Prof: Mohammad Pourmahmood Aghababa profile
- Prof: Jamshid Bagherzadeh profile
- OS: Windows and Ubuntu
- tkinter
- NumPy
You can 🏃 run the program using this command:
python main.py
You can also run tests comparing two MCTS-based algorithms against each other using playtest.py.
This one is highly recommended:
- Upper Confidence Bounds applied to Trees (UCT)
- UCB1-Tuned (see the sketch after this list)
- Rapid Action Value Estimation (RAVE)
- Decisive Move
- Quality Based Rewards
- Pool RAVE
- Last Good Reply
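For reference, a minimal sketch of the UCB1-Tuned selection rule, which replaces UCB1's fixed exploration constant with a per-child variance estimate; the variable names are illustrative assumptions, not taken from this repository:

```python
import math

def ucb1_tuned(reward_sum, reward_sq_sum, n_child, n_parent):
    """Score one child from its reward sum, squared-reward sum and visit counts."""
    if n_child == 0:
        return float("inf")                       # try unvisited children first
    mean = reward_sum / n_child
    variance = reward_sq_sum / n_child - mean ** 2
    # Variance-aware exploration term, capped at 1/4 (the max variance of a Bernoulli reward).
    v = variance + math.sqrt(2 * math.log(n_parent) / n_child)
    return mean + math.sqrt((math.log(n_parent) / n_child) * min(0.25, v))
```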
- [1] A Survey of Monte Carlo Tree Search Methods, Cameron B. Browne et al., 2012. Link to paper
- [2] Generalized Rapid Action Value Estimation, Tristan Cazenave, 2017. Link to paper
- [3] Comparison of rapid action value estimation variants for general game playing, Chiara F. Sironi and Mark H. M. Winands, 2018. Link to paper
- [4] Quality-based Rewards for Monte-Carlo Tree Search Simulations, 2014. Link to paper
- [5] The Last-Good-Reply Policy for Monte-Carlo Go, 2009. Link to paper
- [6] On the Huge Benefit of Decisive Moves in Monte-Carlo Tree Search Algorithms, Fabien Teytaud and Olivier Teytaud, 2010