
Sayuri

Let's ROCK!

Sayuri is a GTP-compliant Go engine built on Deep Convolutional Neural Networks and Monte Carlo Tree Search. It learns to play Go from scratch using an AlphaZero-style algorithm, without any handcrafted human strategies. Inspired heavily by Leela Zero and KataGo, Sayuri initially borrowed its board data structures, search algorithms, and network format from Leela Zero. In later versions, the engine follows KataGo's research and now supports variable rulesets, komi settings, and board sizes.

For development insights and reports, see:

Quick Start via Terminal

To run the engine, you first need a weights file. The released weights can be downloaded from this page. Then launch the engine in GTP mode via the terminal/PowerShell, using 1 thread and 400 visits per move with the optimistic policy. Please type

$ ./sayuri -w <weights file> -t 1 -p 400 --use-optimistic-policy

After executing the command, you'll see diagnostic output. If this output includes Network Version, the engine is running correctly in GTP mode. However, since GTP mode isn't designed for direct human interaction, you should use a graphical user interface (GUI) instead. Please refer to the Graphical Interface section for more details.
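A GTP exchange is plain text: the client sends one command per line, and the engine replies with a line beginning with `=` (success) or `?` (failure), optionally echoing a numeric command id, followed by a blank line. As a rough illustration of this framing (the helper below is not part of Sayuri), a reply could be parsed like this:

```python
def parse_gtp_response(raw):
    """Parse a raw GTP (version 2) response string.

    A GTP reply has the form '=[id] result' on success or
    '?[id] error_message' on failure, terminated by a blank line.
    Returns (ok, id, text); id is None when the engine echoes no id.
    """
    body = raw.strip()
    if not body or body[0] not in "=?":
        raise ValueError("not a GTP response: %r" % raw)
    ok = body[0] == "="
    rest = body[1:]
    # An optional numeric id immediately follows the status character.
    ident = None
    head, _, tail = rest.partition(" ")
    if head.isdigit():
        ident = int(head)
        rest = tail
    return ok, ident, rest.strip()
```

For example, any GTP 2 engine answers the `protocol_version` command with `= 2`, which this helper would report as `(True, None, "2")`.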

For a list of additional command-line arguments, use the --help option. Please type

$ ./sayuri --help

The default engine uses a Chinese-like rule and therefore tends to keep playing in order to capture dead stones, even when the ownership of an area is already clear. This can lead to unwanted capturing moves. To prevent them, you have two options: keep the Chinese-like rule and add the --friendly-pass option, or switch to a Japanese-like rule by using the --scoring-rule territory option.
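The practical difference between the two rule families is how the game is scored. Under area (Chinese-style) scoring, a player's live stones on the board count toward the score, so extra capturing moves inside settled areas cost nothing; under territory (Japanese-style) scoring, only surrounded empty points and prisoners count, so playing inside one's own territory loses points. A toy sketch of the two counts (illustrative only, not Sayuri's actual scoring code):

```python
def score_area(black_stones, white_stones,
               black_territory, white_territory, komi):
    """Area (Chinese-style) scoring: live stones on the board plus
    surrounded empty points. Dead stones must be off the board before
    counting, which is why an engine may keep playing to capture them.
    Returns the margin from Black's perspective."""
    black = black_stones + black_territory
    white = white_stones + white_territory + komi
    return black - white

def score_territory(black_territory, white_territory,
                    black_captures, white_captures, komi):
    """Territory (Japanese-style) scoring: surrounded empty points plus
    prisoners; stones on the board are not counted directly."""
    black = black_territory + black_captures
    white = white_territory + white_captures + komi
    return black - white
```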

You can also use the pure Python engine with a checkpoint model. The released checkpoint models can be found on this page. Although the Python engine is significantly weaker than the C++ engine, it makes running the raw model much easier. For more details, see here.

$ python3 train/torch/pysayuri.py -c model.pt --use-swa

Execute Engine via Graphical Interface

Sayuri is not a complete application on her own; you need a graphical interface to play against her. She supports any GTP (version 2) interface application. Sabaki and GoGui are recommended because Sayuri supports some of their specific analysis commands.

  • Sabaki analysis mode


  • GoGui analysis commands


Build From Source

Please see this section. For those on the Windows platform, an executable file can be downloaded directly from the release page.

Reinforcement Learning

Sayuri is a high-efficiency self-play learning system for the game of Go. The accompanying figure illustrates the estimated computational cost of the v0.7 engine (purple line) in comparison to KataGo and LeelaZero. Notably, Sayuri achieves approximately a 250x reduction in computational requirements compared to ELF OpenGo. This was demonstrated by a full training run completed in just three months on a single RTX 4080 GPU. This efficiency significantly surpasses the 50x computational reduction claimed by KataGo g104.

This section describes how to run the self-play loop.

(Figure: sayuri-vs-kata, estimated compute cost comparison)
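Conceptually, an AlphaZero-style self-play loop alternates between two phases: the current network plays games against itself to generate training data, and the network is then updated on that data. A minimal skeleton of this idea (the function names below are placeholders, not Sayuri's actual scripts):

```python
def self_play_loop(weights, generations, games_per_gen, play_game, train):
    """Skeleton of an AlphaZero-style training loop.

    play_game(weights) -> one self-play game record;
    train(weights, games) -> updated weights.
    """
    for _ in range(generations):
        # Phase 1: generate fresh games with the current network.
        games = [play_game(weights) for _ in range(games_per_gen)]
        # Phase 2: improve the network on the games it just produced.
        weights = train(weights, games)
    return weights
```

In a real run, game generation dominates the compute budget, which is why reducing the number of visits needed per move (as the cost figure above suggests) translates directly into cheaper training.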

Other Resources

License

The code is released under the GPLv3, except for threadpool.h, cppattributes.h, Eigen and Fast Float, which have specific licenses mentioned in those files.

Contact

[email protected] (Hung-Tse Lin)