This section covers how to use Verifiers environments for RL training with our Hosted Training platform, our open-source prime-rl trainer, or other supported libraries.
- Hosted Training
- Training with `prime-rl`
- Prompt Optimization with `prime gepa run`
- RL Rules of Thumb
- Other Trainers
Hosted Training, available within our Lab platform, enables you to automatically train models via prime-rl without needing to manage your own infrastructure. Hosted Training supports LoRA for RL training, and can be used with any environment built with Verifiers.
Use the `prime lab setup` script to download example configuration files for Hosted Training into your workspace:

```bash
prime lab setup
```

This will download example TOML configs for Hosted Training into `configs/rl/`, example eval configs into `configs/eval/`, along with `configs/endpoints.toml` and GEPA starter configs in `configs/gepa/`:
```
configs/
├── endpoints.toml
├── eval/
│   ├── minimal.toml
│   └── multi-env.toml
├── rl/
│   ├── alphabet-sort.toml
│   ├── gsm8k.toml
│   ├── math-python.toml
│   ├── reverse-text.toml
│   ├── wiki-search.toml
│   └── wordle.toml
└── gepa/
    ├── base.toml
    └── wordle.toml
```
Example configuration file for the `primeintellect/alphabet-sort` environment with `Qwen/Qwen3-30B-A3B-Instruct-2507`:
```toml
model = "Qwen/Qwen3-30B-A3B-Instruct-2507"
max_steps = 500
batch_size = 256
rollouts_per_example = 8

[sampling]
max_tokens = 512

[[env]]
id = "primeintellect/alphabet-sort"
args = { min_turns = 3, max_turns = 5, power_per_turn = false }

[wandb]
project = "alphabet-sort"
name = "qwen3-30b-i-alphabet-sort"
```

We currently support the following models for Hosted Training:
- `Qwen/Qwen3-4B-Instruct-2507`
- `Qwen/Qwen3-4B-Thinking-2507`
- `Qwen/Qwen3-30B-A3B-Instruct-2507`
- `Qwen/Qwen3-30B-A3B-Thinking-2507`
- `Qwen/Qwen3-235B-A22B-Instruct-2507`
- `Qwen/Qwen3-235B-A22B-Thinking-2507`
- `PrimeIntellect/INTELLECT-3`
Hosted Training is currently in Private Beta. For access, please fill out this form.
Our `prime-rl` trainer is a production-ready async RL training framework that supports large-scale multi-node training, agentic rollouts with Verifiers environments, Mixture-of-Experts (MoE) models, and LoRA adapters, as well as other training modes such as SFT and online distillation. We recommend `prime-rl` for training with Verifiers environments on self-managed GPU infrastructure. The default configuration distills best practices from our research team and the broader community into a stable, easy-to-use recipe, including advanced features such as online difficulty filtering, continuous batching, in-flight weight updates, importance sampling and logprob clipping for stability, and more.
To set up your workspace for training with `prime-rl`, run:

```bash
prime lab setup --prime-rl
```

This will clone and install the `prime-rl` trainer and its dependencies, and set up a default TOML config for training with the included wiki-search Environment on 8 GPUs.
Then, you can start training with:

```bash
uv run prime-rl configs/prime-rl/wiki-search.toml
```

This will launch a tmux session with separate panes for the trainer, orchestrator, and inference server. For further configuration options, see the prime-rl documentation.
`prime gepa run` is the CLI entrypoint for automatic prompt optimization using GEPA (Genetic-Pareto prompt optimization). It iteratively refines your environment's system prompt using a teacher LLM to reflect on evaluation results, without requiring gradient-based training. Current support is for system prompt optimization only.
Basic usage mirrors `prime eval run`:

```bash
prime gepa run wiki-search --model google/gemini-3-flash-preview
```

This will optimize the system prompt for the wiki-search environment using the specified model for both evaluation rollouts and reflection. Results are saved to `environments/wiki-search/outputs/gepa/`.
Key options:

- `--model`/`-m`: Model for evaluation rollouts
- `--reflection-model`/`-M`: Teacher model for prompt reflection (defaults to `--model`)
- `--max-calls`/`-B`: Evaluation budget (default: 500)
- `--num-train`/`-n`: Training examples (default: 100)
- `--num-val`/`-N`: Validation examples (default: 50)
- `--minibatch-size`: Number of examples evaluated together per reflection step (default: 3)
- `--perfect-score`: Maximum score for a rollout in your environment (if applicable); minibatches achieving this score are skipped during reflection (useful if your environment has a known max score)
- `--state-columns`: Additional state columns to copy into the reflection dataset. By default, `query`, `completion`, `expected_answer`, `reward`, and `error` are included. Use this to add environment-specific state fields (e.g., `--state-columns tool_calls reasoning_trace`)
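As an illustrative sketch combining the flags above (the budget, teacher model, and sizes here are arbitrary example values, not recommendations):

```bash
prime gepa run wiki-search \
  --model google/gemini-3-flash-preview \
  --reflection-model PrimeIntellect/INTELLECT-3 \
  --max-calls 1000 \
  --num-train 100 \
  --num-val 50 \
  --minibatch-size 3
```

A stronger teacher for `--reflection-model` can improve prompt rewrites while a cheaper model handles the evaluation rollouts.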
After optimization, you'll find:

- `best_prompt.txt` - The optimized system prompt
- `pareto_frontier.jsonl` - Best prompts per validation example
- `metadata.json` - Run configuration and summary
Use `prime eval run` to verify performance before and after optimization.
RL training can be sensitive to implementation details and hyperparameters. Some simple practical guidance:
- Evaluate baseline performance: If your model gets 0% reward after 10+ attempts, the task is too hard
- Check task difficulty: If baseline is already 80%+, consider harder examples
- Ensure reward diversity: You want varied scores within each generation group
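Following the `prime eval run` pattern shown earlier, a baseline check might look like this (the environment id `my-env` is a placeholder):

```bash
# Evaluate the untrained model on your environment to gauge task difficulty
# before launching RL.
prime eval run my-env --model Qwen/Qwen3-4B-Instruct-2507
```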
For more aggressive training (higher risk of collapse):

- Increase learning rate (1e-5 to 1e-4 for LoRA, 1e-6 to 1e-5 for full finetuning)
- Decrease `rollouts_per_example` and `batch_size` for faster generation

For more stable training (slower progress):

- Increase `rollouts_per_example` (16-32)
- Increase `batch_size` (512-1024)
- Use larger models (14B+)
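As a sketch, the stable-training settings above map onto the Hosted Training config keys shown earlier (values illustrative, not tuned recommendations):

```toml
# Illustrative fragment reusing keys from the example Hosted Training config.
batch_size = 512           # larger batches smooth out noisy reward signals
rollouts_per_example = 16  # more rollouts per example, lower-variance advantages
```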
The best way to improve training is to ensure appropriate task difficulty for your model. When using Hosted Training or prime-rl, you can enable online difficulty filtering to ensure that rollout groups used for training always contain a diversity of rewards.
Non-Increasing Chat Templates: The Qwen3 and DeepSeek-R1 model series both remove `<think>` sections from messages when processing inputs, which violates the increasing context requirement for multi-turn training. We provide versions of many of these models with modified chat templates here.
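The increasing context requirement can be sketched in plain Python (illustrative token lists, not the Verifiers API): each turn's tokenized context must extend the previous turn's, which fails when the template rewrites history.

```python
def is_prefix(prev_tokens: list[str], next_tokens: list[str]) -> bool:
    """Multi-turn training requires each turn's tokens to extend the last turn's."""
    return next_tokens[: len(prev_tokens)] == prev_tokens

# A template that keeps history verbatim preserves the prefix property.
turn1 = ["<user>", "hi", "<assistant>", "<think>", "plan", "</think>", "ok"]
turn2 = turn1 + ["<user>", "next", "<assistant>", "done"]

# A template that strips <think> from earlier assistant messages on
# re-render rewrites the shared prefix and breaks the property.
turn2_stripped = ["<user>", "hi", "<assistant>", "ok",
                  "<user>", "next", "<assistant>", "done"]

print(is_prefix(turn1, turn2))           # True: context grew monotonically
print(is_prefix(turn1, turn2_stripped))  # False: earlier tokens were rewritten
```

The modified chat templates keep earlier `<think>` sections verbatim so each turn's tokens remain a prefix of the next.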
OOM during generation:

- Reduce `rollouts_per_example` or `micro_batch_size`
- Use LoRA instead of full finetuning
- Check vLLM server has sufficient memory

Training instability:

- Decrease learning rate
- Increase `rollouts_per_example`
- Increase `batch_size`
Slow training:
- Increase learning rate
- Leverage continuous rewards
- Use online difficulty filtering
- Calibrate difficulty appropriately via smarter models or easier tasks
verifiers is intended to be largely trainer-agnostic, and is straightforward to integrate with any trainer that can expose an OpenAI-compatible inference client for rollouts.
The legacy vf.RLTrainer still exists for educational and experimental purposes via the optional verifiers-rl package and the legacy RL CLI entrypoint, but it is not actively maintained. It is a compact single-node async RL trainer with a narrower feature set than production trainers. Its core implementation (trainer.py and orchestrator.py under packages/verifiers-rl/verifiers_rl/rl/trainer/) remains intentionally lightweight for algorithm experimentation. For production training and current guidance, use prime-rl.
Tinker supports Verifiers environments via the tinker-cookbook recipes.
SkyRL supports Verifiers environments via its skyrl-train integration.
rLLM supports Verifiers environments with both verl (local GPU) and Tinker (remote GPU) backends.