Your hardware requirements depend on the swarm type and model size you choose.
- Low-end hardware: Qwen 0.5B / 1.5B + GSM8K
- High-end hardware: Qwen 7B / 32B / 72B + DAPO-Math 17K
Choose according to the details below:
For users with mid-level hardware:
- ARM64 or x86_64 CPU with 8 vCPUs
- 32 GB RAM minimum
- 200 GB storage
- Best for default (0.5B) training
⚠️ Note: Running extra applications during training may cause a crash.
Supported GPUs:
- RTX 3090
- RTX 4090
- A100
- H100
For users with powerful hardware:
- NVIDIA A100 (80GB)
- NVIDIA H100 (80GB)
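If you are not sure which tier your machine falls into, a quick check with standard Linux tools can help (a sketch for Linux; the exact commands differ on macOS):

```shell
# Count vCPUs, total RAM, and free disk space on a Linux box.
nproc                                # number of vCPUs
grep MemTotal /proc/meminfo          # total RAM in kB
df -h /                              # storage available on the root filesystem
# If an NVIDIA GPU is present, show its model and VRAM; otherwise say so.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,memory.total --format=csv
else
    echo "No NVIDIA GPU detected"
fi
```

Compare the results against the RAM, vCPU, and storage figures above before picking a model size.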
Connect to your server:
ssh username@ip

A complete step-by-step guide to run RL-SWARM on Linux (VPS/WSL) or Mac.
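Optionally, you can save the connection details in ~/.ssh/config so the server gets a short alias. The alias gensyn-vps below is hypothetical; fill in your own IP and username:

```
Host gensyn-vps
    HostName <your-server-ip>
    User <your-username>
    Port 22
```

After that, `ssh gensyn-vps` connects without retyping the address.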
Run the following commands step-by-step on your terminal.
- Install sudo
apt update && apt install -y sudo
- Install other dependencies
sudo apt update && sudo apt install -y python3 python3-venv python3-pip curl wget screen git lsof nano unzip iproute2 build-essential gcc g++
- Create a screen session
screen -S gensyn
- Clone the official rl-swarm repo
git clone https://github.com/gensyn-ai/rl-swarm.git && cd rl-swarm
- Run the swarm
python3 -m venv .venv
. .venv/bin/activate
./run_rl_swarm.sh
- After some time (on a Linux system) you will see a login prompt; follow the next step.
- Now it will prompt you to log in. Follow: 1️⃣ How to Login or access http://localhost:3000/ in VPS? 📶
Now it will prompt:
Would you like to push models you train in the RL swarm to the Hugging Face Hub? [y/N]
Enter N.
Now it will prompt:
>> Enter the name of the model you want to use in huggingface repo/name format, or press [Enter] to use the default model.
Press Enter to use the default model.
❌ Entering a model manually can cause the run to be terminated, so it is better to use the default. ❌
If you still want to select a model, choose one of these:
- Gensyn/Qwen2.5-0.5B-Instruct
- Qwen/Qwen3-0.6B
- nvidia/AceInstruct-1.5B
- dnotitia/Smoothie-Qwen3-1.7B
- Gensyn/Qwen2.5-1.5B-Instruct
Models (CodeZero):
- Qwen/Qwen2.5-Coder-0.5B-Instruct → Solver role (recommended: choose this)
- Qwen/Qwen2.5-Coder-1.5B-Instruct → Evaluator (frozen)
Currently, we are running CodeZero on the Gensyn Testnet.
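If you do enter a model manually, the prompt expects Hugging Face `repo/name` format. A small illustrative sanity check before pasting a custom id (the regex below is my own approximation, not an official validator):

```shell
# Check that a model id looks like "repo/name" before pasting it into the prompt.
model="Gensyn/Qwen2.5-0.5B-Instruct"
if echo "$model" | grep -Eq '^[A-Za-z0-9._-]+/[A-Za-z0-9._-]+$'; then
    echo "format OK"
else
    echo "not repo/name format"
fi
```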
It's done ✅
Press Ctrl+A then D to detach from the screen session.
- Attach with previous screen:
screen -r gensyn
1️⃣ How to Login or access http://localhost:3000/ in VPS? 📶
- Open a new terminal and log in to your VPS.
- Allow incoming connections on the VPS
sudo apt install ufw -y
sudo ufw allow 22
sudo ufw allow 3000/tcp
- Enable ufw
sudo ufw enable
- Install cloudflared on the VPS
wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
sudo dpkg -i cloudflared-linux-amd64.deb
- Check version
cloudflared --version
- Make sure your node is running on port 3000 in the previous screen session.
- Run the tunnel command
cloudflared tunnel --url http://localhost:3000
- To restart the node, run these commands (you must be in the rl-swarm directory) ✅
cd ~/rl-swarm
pkill -f python
rm -rf /tmp/hivemind-*
rm -rf /tmp/p2pd-*
source .venv/bin/activate
bash run_rl_swarm.sh

📢 Join X for more updates: https://x.com/Hamad__Alpha
💬 If you have any issues, open an issue on this repo or DM me on X: https://x.com/Hamad__Alpha
Thank you! Best of luck 🚀
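The restart steps above can be bundled into a small helper script. The name restart_swarm.sh is my own, not part of the official repo, and it assumes the clone lives at ~/rl-swarm:

```shell
# Write a helper that repeats the restart steps, then make it executable.
cat > restart_swarm.sh <<'EOF'
#!/usr/bin/env bash
set -e
cd ~/rl-swarm
pkill -f python || true            # ignore "no process found"
rm -rf /tmp/hivemind-* /tmp/p2pd-*
source .venv/bin/activate
bash run_rl_swarm.sh
EOF
chmod +x restart_swarm.sh
```

Run it with `./restart_swarm.sh` from any directory; it changes into ~/rl-swarm itself.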


