The Validator is responsible for generating challenges for Miners to solve. It receives solutions from Miners, evaluates them, and rewards Miners based on the correctness and quality of their solutions.
Protocol: LogicSynapse
- Validator Prepares:
  - `raw_logic_question`: The math problem generated from MathGenerator.
  - `logic_question`: The challenge generated by the Validator. It's rewritten by an LLM from `raw_logic_question` with personalization noise.
- Miner Receives:
  - `logic_question`: The challenge to solve.
- Miner Submits:
  - `logic_reasoning`: Step-by-step reasoning to solve the challenge.
  - `logic_answer`: The final answer to the challenge as a short sentence.
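For reference, the four protocol fields can be sketched as a plain Python dataclass (the real LogicSynapse is a Bittensor synapse class; this sketch only mirrors the field list above):

```python
from dataclasses import dataclass

@dataclass
class LogicSynapseSketch:
    # Set by the Validator before the query
    raw_logic_question: str = ""   # math problem from MathGenerator
    logic_question: str = ""       # LLM-rewritten challenge sent to the Miner
    # Filled in by the Miner
    logic_reasoning: str = ""      # step-by-step reasoning
    logic_answer: str = ""         # final answer as a short sentence

# The Validator sends only the rewritten question; the Miner fills in the rest.
syn = LogicSynapseSketch(logic_question="If x + 3 = 7, what is x?")
syn.logic_reasoning = "Subtract 3 from both sides."
syn.logic_answer = "x is 4."
```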
Reward Structure:
- `correctness` (bool): Validator asks an LLM to check if `logic_answer` matches the ground truth.
- `similarity` (float): Validator computes cosine similarity between `logic_reasoning` and the Validator's reasoning.
- `time_penalty` (float): Penalty for late response, calculated as `process_time / timeout * MAX_PENALTY`.
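A minimal sketch of how these three signals could combine into a score (the 0.8/0.2 weights and the `MAX_PENALTY` value here are illustrative assumptions, not the Validator's actual constants; only the `time_penalty` formula comes from the description above):

```python
MAX_PENALTY = 0.5  # assumed cap; check the validator config for the real value

def time_penalty(process_time: float, timeout: float,
                 max_penalty: float = MAX_PENALTY) -> float:
    """Penalty for late responses: process_time / timeout * MAX_PENALTY."""
    return min(process_time / timeout, 1.0) * max_penalty

def reward(correctness: bool, similarity: float,
           process_time: float, timeout: float) -> float:
    """Illustrative combination: correctness dominates, similarity refines,
    and the time penalty is subtracted (weights are hypothetical)."""
    base = 0.8 * float(correctness) + 0.2 * similarity
    return max(base - time_penalty(process_time, timeout), 0.0)
```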
There are two ways to run the Validator:
The first, recommended, method uses Together.AI to run the Validator, as it simplifies setup and reduces local resource requirements.
- Account on Together.AI: Sign up on the Together.AI website.
- API Key: Obtain from the Together.AI dashboard.
- Python 3.10
- PM2 Process Manager (optional): For running and managing the Validator process.
- Clone the Repository

  ```bash
  git clone https://github.com/LogicNet-Subnet/LogicNet logicnet
  cd logicnet
  ```
- Install the Requirements

  ```bash
  python -m venv main
  . main/bin/activate
  bash install.sh
  ```

  Or manually install the requirements:

  ```bash
  pip install -e .
  pip uninstall uvloop -y
  pip install git+https://github.com/lukew3/mathgenerator.git
  ```
- Register and Obtain API Key
  - Visit Together.AI and sign up.
  - Obtain your API key from the dashboard.
- Set Up the `.env` File

  ```bash
  echo "TOGETHER_API_KEY=your_together_ai_api_key" > .env
  ```
- Select a Model

  Choose a suitable chat or language model from Together.AI:

  | Model Name | Model ID | Pricing (per 1M tokens) |
  | --- | --- | --- |
  | Qwen 2 Instruct (72B) | `Qwen/Qwen2-Instruct-72B` | $0.90 |
  | LLaMA-2 Chat (13B) | `meta-llama/Llama-2-13b-chat-hf` | $0.22 |
  | MythoMax-L2 (13B) | `Gryphe/MythoMax-L2-13B` | $0.30 |
  | Mistral (7B) Instruct v0.3 | `mistralai/Mistral-7B-Instruct-v0.3` | $0.20 |
  | LLaMA-2 Chat (7B) | `meta-llama/Llama-2-7b-chat-hf` | $0.20 |
  | Mistral (7B) Instruct | `mistralai/Mistral-7B-Instruct` | $0.20 |
  | Qwen 1.5 Chat (72B) | `Qwen/Qwen-1.5-Chat-72B` | $0.90 |
  | Mistral (7B) Instruct v0.2 | `mistralai/Mistral-7B-Instruct-v0.2` | $0.20 |

  More models are available here: Together.AI Models

  Note: Choose models labeled as `chat` or `language`. Avoid image models.
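As a rough cost sanity check, the per-1M-token prices in the table translate directly into an estimate (a sketch; actual billing may count prompt and completion tokens separately):

```python
# Per-1M-token prices (USD) from the table above, for a few of the models
PRICE_PER_M_TOKENS = {
    "Qwen/Qwen2-Instruct-72B": 0.90,
    "meta-llama/Llama-2-13b-chat-hf": 0.22,
    "mistralai/Mistral-7B-Instruct-v0.3": 0.20,
}

def estimated_cost(model_id: str, tokens: int) -> float:
    """Estimated USD cost for `tokens` tokens at the listed per-1M rate."""
    return PRICE_PER_M_TOKENS[model_id] * tokens / 1_000_000
```

For example, 500,000 tokens on `Qwen/Qwen2-Instruct-72B` comes to about $0.45.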
- Install PM2 for Process Management

  ```bash
  sudo apt update && sudo apt install jq npm -y
  sudo npm install pm2 -g
  pm2 update
  ```
- Run the Validator
  - Activate the virtual environment:

    ```bash
    . main/bin/activate
    ```

  - Source the `.env` file:

    ```bash
    source .env
    ```

  - Start the Validator:

    ```bash
    pm2 start python --name "sn35-validator" -- neurons/validator/validator.py \
      --netuid 35 \
      --wallet.name "your-wallet-name" \
      --wallet.hotkey "your-hotkey-name" \
      --subtensor.network finney \
      --llm_client.base_url https://api.together.xyz/v1 \
      --llm_client.model "model_id_from_list" \
      --llm_client.key $TOGETHER_API_KEY \
      --logging.debug
    ```

    Replace `"model_id_from_list"` with the Model ID you selected (e.g., `Qwen/Qwen2-Instruct-72B`).
- (Optional) Enable Public Access

  Add the following flag to enable a validator proxy with your public port:

  ```bash
  --axon.port "your-public-open-port"
  ```
Notes:
- Ensure your `TOGETHER_API_KEY` is correctly set and sourced:
  - Check the `.env` file: `cat .env`
  - Verify the API key is loaded: `echo $TOGETHER_API_KEY`
- The `--llm_client.base_url` should be `https://api.together.xyz/v1`.
- Match `--llm_client.model` with the Model ID from Together.AI.
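A small check mirroring these notes, which fails fast when the key is missing or still the placeholder (a hypothetical helper, useful if you wrap the Validator in your own launch script):

```python
import os

def require_together_key() -> str:
    """Fail fast if TOGETHER_API_KEY is unset or still the .env placeholder."""
    key = os.environ.get("TOGETHER_API_KEY", "")
    if not key or key == "your_together_ai_api_key":
        raise RuntimeError("TOGETHER_API_KEY is not set; run `source .env` first")
    return key
```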
- API Documentation: Together.AI Docs
- Support: If you encounter issues, check the validator logs or contact the LogicNet support team.
The second method involves self-hosting a vLLM server to run the Validator locally. It requires more resources but provides more control over the environment.
- GPU: 1x GPU with 24GB VRAM (e.g., RTX 4090, A100, A6000)
- Storage: 100GB
- Python: 3.10
- Set Up vLLM Environment

  ```bash
  python -m venv vllm
  . vllm/bin/activate
  pip install vllm
  ```
- Install PM2 for Process Management

  ```bash
  sudo apt update && sudo apt install jq npm -y
  sudo npm install pm2 -g
  pm2 update
  ```
- Select a Model

  The list of supported vLLM models can be found here: vLLM Models
- Start the vLLM Server

  ```bash
  . vllm/bin/activate
  pm2 start "vllm serve Qwen/Qwen2.5-Math-7B-Instruct --port 8000 --host 0.0.0.0" --name "sn35-vllm"
  ```

  Adjust the model, port, and host as needed.
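Once running, the vLLM server exposes the OpenAI-compatible API that the Validator's `--llm_client.base_url` points at. A stdlib sketch that builds a chat-completion request against it (the actual send is left commented out since it needs the server running; the prompt is just an example):

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-compatible chat completion request (the API vLLM serves)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        url=f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8000/v1",
                         "Qwen/Qwen2.5-Math-7B-Instruct",
                         "What is 2 + 2?")
# request.urlopen(req)  # would send it once the server is up
```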
- Run the Validator with Self-Hosted LLM
  - Activate the virtual environment:

    ```bash
    . main/bin/activate
    ```

  - Start the Validator:

    ```bash
    pm2 start python --name "sn35-validator" -- neurons/validator/validator.py \
      --netuid 35 \
      --wallet.name "your-wallet-name" \
      --wallet.hotkey "your-hotkey-name" \
      --subtensor.network finney \
      --llm_client.base_url http://localhost:8000/v1 \
      --llm_client.model Qwen/Qwen2.5-Math-7B-Instruct \
      --logging.debug
    ```
- (Optional) Enable Public Access

  Add the following flag to enable a validator proxy with your public port:

  ```bash
  --axon.port "your-public-open-port"
  ```
- Logs: Use PM2 to check logs if you encounter issues:

  ```bash
  pm2 logs sn35-validator
  ```
- Common Issues:
  - API Key Not Found: Ensure `.env` is sourced and `TOGETHER_API_KEY` is set.
  - Model ID Incorrect: Verify that `--llm_client.model` matches the Together.AI Model ID.
  - Connection Errors: Check internet connectivity and Together.AI service status.
- Contact Support: Reach out to the LogicNet support team for assistance.
Happy Validating!