Commit 6e660aa

Merge pull request #67 from wandb/horangi4-dev-vllm-test

2 parents af3dfa0 + 81764aa, commit 6e660aa

File tree

55 files changed: +2358, −163 lines


CLAUDE.md

Lines changed: 184 additions & 0 deletions
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Repository Overview

This is the HAE-RAE Evaluation Toolkit - a comprehensive framework for evaluating Korean Large Language Models (LLMs). The codebase uses a modular, registry-based architecture where datasets, models, and evaluators are pluggable components registered globally for dynamic loading.

## Key Architecture

The system follows a pipeline architecture: Config → Dataset Loading → Model Inference → Evaluation → Post-Processing → W&B/Weave Logging

Main components:
- **Registry Pattern**: All components (datasets, models, evaluators) use decorators like `@register_dataset()` for automatic registration
- **Config-Driven**: YAML files control the entire pipeline behavior
- **Singleton W&B**: Single W&B run shared across multiple dataset evaluations via `WandbConfigSingleton`
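The registry pattern above can be sketched as a decorator that records each class in a global dict keyed by name. This is a minimal illustration, not the toolkit's actual implementation - only `@register_dataset()` itself appears in the repository; the internals shown here are assumptions.

```python
# Minimal sketch of a decorator-based registry; the real llm_eval
# internals may differ (only the register_dataset name comes from the repo).
from typing import Callable, Dict, Type

DATASET_REGISTRY: Dict[str, Type] = {}

def register_dataset(name: str) -> Callable[[Type], Type]:
    """Record a dataset class under `name` so configs can load it dynamically."""
    def decorator(cls: Type) -> Type:
        DATASET_REGISTRY[name] = cls
        return cls  # class is returned unchanged; registration is a side effect
    return decorator

@register_dataset("kmmlu")
class KMMLUDataset:
    pass

# A config entry like `dataset_name: kmmlu` can then be resolved at runtime:
dataset_cls = DATASET_REGISTRY["kmmlu"]
```

The same mechanism would back `@register_model()` and `@register_evaluator()`, each with its own registry dict.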
## Essential Commands

### Setup and Installation
```bash
# Install uv if not already installed
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies
uv sync

# Install with optional vLLM support
uv sync --extra vllm

# Install dev/test tools (--extra takes one name per flag)
uv sync --extra dev --extra test
```
34+
35+
### Running Evaluations
36+
```bash
37+
# Single model evaluation
38+
uv run python run_eval.py --config gpt-4o-2024-11-20
39+
40+
# Single dataset evaluation
41+
uv run python run_eval.py --dataset kmmlu
42+
43+
# Multiple models by provider
44+
python experiment.py --provider openai
45+
python experiment.py --provider anthropic
46+
47+
# Custom dataset list
48+
uv run python run_eval.py --config claude-sonnet-4-5-20250929 --dataset mt_bench,kmmlu,squad_kor_v1
49+
```
50+
### Testing
```bash
# Run all tests
pytest llm_eval/test/ --cache-clear

# Specific test suites
pytest llm_eval/test/test_datasets.py
pytest llm_eval/test/test_evaluations.py
pytest llm_eval/test/test_scaling.py

# Run a single test
pytest "llm_eval/test/test_datasets.py::test_dataset_loading[kmmlu]"
```
64+
65+
### Code Quality
66+
```bash
67+
# Run pre-commit hooks
68+
pre-commit run --all-files
69+
70+
# Auto-format code (autopep8, line length 80)
71+
autopep8 --in-place --max-line-length=80 <file>
72+
73+
# Sort imports
74+
isort <file>
75+
```
76+
## Core File Structure

Key files and their purposes:
- `run_eval.py` - Main CLI entry point for evaluations
- `experiment.py` - Batch evaluation runner for multiple models
- `configs/base_config.yaml` - Master configuration with dataset settings and model overrides
- `configs/*.yaml` - Individual model configurations (54 models)
- `llm_eval/runner.py` - `PipelineRunner` class that orchestrates the evaluation pipeline
- `llm_eval/evaluator.py` - High-level Evaluator CLI interface
- `llm_eval/wandb_singleton.py` - Manages the shared W&B run across datasets
- `llm_eval/datasets/` - Dataset loaders (37 datasets), each registered with `@register_dataset()`
- `llm_eval/models/` - Model backends (13 implementations), each registered with `@register_model()`
- `llm_eval/evaluation/` - Evaluators (18 scorers), each registered with `@register_evaluator()`
## Configuration System
92+
93+
The system uses a two-level configuration hierarchy:
94+
95+
1. **base_config.yaml**: Contains global settings, dataset configurations, and evaluation methods
96+
2. **Model configs** (e.g., `gpt-4o-2024-11-20.yaml`): Model-specific settings that can override base config
97+
98+
Dataset configuration in base_config.yaml:
99+
```yaml
100+
dataset_name:
101+
split: train/test/validation
102+
subset: [list_of_subsets] # Optional
103+
params:
104+
num_samples: N # Number of samples to evaluate
105+
limit: M # Number of batches (optional)
106+
evaluation:
107+
method: evaluator_name
108+
params: {...} # Evaluator-specific parameters
109+
model_params: {...} # Optional model parameter overrides
110+
```
111+
112+
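The override semantics of the two-level hierarchy can be sketched as a recursive dict merge - a simplified assumption for illustration; the toolkit's actual merge logic may differ:

```python
# Hedged sketch of base-config vs. model-config merging, NOT the
# toolkit's real implementation.
def deep_merge(base: dict, override: dict) -> dict:
    """Return base with override applied; nested dicts merge key-by-key."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"model": {"params": {"temperature": 0.0, "batch_size": 8}}}
model_cfg = {"model": {"params": {"temperature": 0.1}}}
effective = deep_merge(base, model_cfg)
# temperature comes from the model config; batch_size is inherited from base
```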
## Adding New Components

### New Dataset
1. Create class in `llm_eval/datasets/` extending `BaseDataset`
2. Implement `load()` method returning `List[Dict[str, Any]]` with keys: `instruction`, `reference_answer`, etc.
3. Add `@register_dataset("name")` decorator
4. Add configuration to `base_config.yaml`
5. Add test in `test_datasets.py`
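Steps 1-3 above might look like the following. The class name, sample content, and the `BaseDataset` stand-in are hypothetical; the real base class in `llm_eval/datasets/` likely exposes more hooks.

```python
# Hypothetical new dataset following steps 1-3; a stand-in BaseDataset is
# defined here so the sketch is self-contained.
from typing import Any, Dict, List

class BaseDataset:  # stand-in for llm_eval.datasets.BaseDataset
    def load(self) -> List[Dict[str, Any]]:
        raise NotImplementedError

# In the repo this would carry @register_dataset("my_korean_qa") (step 3),
# imported from llm_eval; omitted here to keep the sketch runnable.
class MyKoreanQADataset(BaseDataset):
    def load(self) -> List[Dict[str, Any]]:
        # Step 2: return samples with the expected keys
        return [
            {"instruction": "대한민국의 수도는?", "reference_answer": "서울"},
        ]
```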
### New Evaluator
1. Create class in `llm_eval/evaluation/` extending `BaseEvaluator`
2. Implement `score(predictions, references, **kwargs)` method
3. Add `@register_evaluator("name")` decorator
4. Reference in dataset config's `evaluation.method`
5. Add test in `test_evaluations.py`

### New Model Backend
1. Create class in `llm_eval/models/` extending `BaseModel`
2. Implement `generate_batch(prompts, **kwargs)` method
3. Add `@register_model("name")` decorator
4. Create model config YAML in `configs/`
5. Set required API keys in `.env`
## Environment Variables

Required API keys in `.env`:
- `OPENAI_API_KEY` - For OpenAI models
- `ANTHROPIC_API_KEY` - For Claude models
- `WANDB_API_KEY` - For W&B logging
- `GOOGLE_API_KEY` - For Gemini models

See `.env.example` for the complete list of 50+ optional provider keys.
## W&B Integration

The system uses a singleton pattern for W&B runs:
- One run encompasses all dataset evaluations for a model
- Results are logged to tables and artifacts
- Leaderboard tables are automatically generated
- Configuration: `wandb.params` in base_config.yaml sets entity/project
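The singleton idea - every dataset evaluation seeing the same W&B run - can be sketched as below. The internals are assumptions for illustration; only the class name `WandbConfigSingleton` comes from the repository.

```python
# Minimal singleton sketch: repeated construction yields one shared object.
# The real class in llm_eval/wandb_singleton.py may be structured differently.
class WandbConfigSingleton:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.run = None  # set once, e.g. from wandb.init(...)
        return cls._instance

a = WandbConfigSingleton()
b = WandbConfigSingleton()
# a and b are the same object, so all datasets log to one run
```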
## Common Debugging

```python
# Debug a single evaluation
from llm_eval.runner import PipelineRunner, PipelineConfig

config = PipelineConfig(
    dataset_name='kmmlu',
    subset=['Chemistry'],
    model_backend_name='openai',
    model_backend_params={'model_name': 'gpt-4o'},
)
runner = PipelineRunner(config)
result = runner.run()
print(result.metrics)
```
169+
170+
## Performance Considerations
171+
172+
- Dataset loading uses caching when possible
173+
- API calls respect `inference_interval` to avoid rate limits
174+
- Batch sizes are configurable per model/dataset
175+
- vLLM backend supports auto server management for local models
176+
- Test mode (`testmode: true`) reduces sample counts for quick testing
177+
178+
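One plausible reading of the `inference_interval` behavior is a fixed sleep between consecutive API calls. The helper below is purely illustrative - its name and shape are assumptions, not the toolkit's code:

```python
# Hedged sketch of an inference_interval-style rate limit; the toolkit's
# actual implementation may batch or throttle differently.
import time

def rate_limited_calls(prompts, call, inference_interval: float):
    """Invoke `call` per prompt, sleeping `inference_interval` seconds between calls."""
    results = []
    for i, prompt in enumerate(prompts):
        if i > 0:
            time.sleep(inference_interval)  # pause before every call after the first
        results.append(call(prompt))
    return results

# usage (hypothetical): rate_limited_calls(batch, model.generate, inference_interval=1.0)
```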
## Current Development Status

- Active branch: horangi4-dev
- Python 3.10+ required
- Package manager: uv (a fast Python package manager)
- CI/CD: GitHub Actions running tests on Python 3.10 and 3.12
- Pre-commit hooks enforce code quality (flake8, autopep8, isort, mypy)

configs/Qwen3-0.6B.yaml

Lines changed: 32 additions & 0 deletions
```yaml
wandb:
  params:
    run_name: "Qwen3-0.6B"

model:
  name: litellm
  params:
    model_name: Qwen/Qwen3-0.6B
    provider: hosted_vllm
    api_base: http://localhost:8010/v1
    batch_size: 8
    max_tokens: 16384
    temperature: 0.1
  vllm_params:
    batch_size: 16
    dtype: "auto"
    download_dir: "/workspace/huggingface/hub"
    max_model_len: 16384
    num_gpus: 1
    port: 8010
    pretrained_model_name_or_path: "Qwen/Qwen3-0.6B"
    tensor_parallel_size: 1
    trust_remote_code: true
  release_date: "2025-03-06"
  model_size: 751632384
  size_category: "Small (<10B)"

bfcl:
  model_params:
    max_tokens: 16384
    temperature: 0.1
    model_name: Qwen3-0.6B-FC
```
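The `vllm_params` block maps onto a `vllm serve` invocation. The helper below is an illustrative sketch of that translation; flag names assume the standard vLLM CLI, and the function itself is hypothetical, not part of the toolkit:

```python
# Illustrative: build a `vllm serve` command line from a vllm_params dict.
# Flag names assume the standard vLLM CLI; the toolkit's auto server
# management may construct this differently.
def vllm_serve_command(p: dict) -> str:
    parts = [
        "vllm", "serve", p["pretrained_model_name_or_path"],
        "--port", str(p["port"]),
        "--dtype", p["dtype"],
        "--max-model-len", str(p["max_model_len"]),
        "--tensor-parallel-size", str(p["tensor_parallel_size"]),
        "--download-dir", p["download_dir"],
    ]
    if p.get("trust_remote_code"):
        parts.append("--trust-remote-code")  # boolean flag, no value
    return " ".join(parts)
```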

configs/Qwen3-1.7B.yaml

Lines changed: 32 additions & 0 deletions
```yaml
wandb:
  params:
    run_name: "Qwen3-1.7B"

model:
  name: litellm
  params:
    model_name: Qwen/Qwen3-1.7B
    provider: hosted_vllm
    api_base: http://localhost:8010/v1
    batch_size: 8
    max_tokens: 16384
    temperature: 0.1
  vllm_params:
    batch_size: 16
    dtype: "auto"
    download_dir: "/workspace/huggingface/hub"
    max_model_len: 16384
    num_gpus: 1
    port: 8010
    pretrained_model_name_or_path: "Qwen/Qwen3-1.7B"
    tensor_parallel_size: 1
    trust_remote_code: true
  release_date: "2025-03-06"
  model_size: 2031739904
  size_category: "Small (<10B)"

bfcl:
  model_params:
    max_tokens: 16384
    temperature: 0.1
    model_name: Qwen3-1.7B-FC
```

configs/Qwen3-14B.yaml

Lines changed: 32 additions & 0 deletions
```yaml
wandb:
  params:
    run_name: "Qwen3-14B"

model:
  name: litellm
  params:
    model_name: Qwen/Qwen3-14B
    provider: hosted_vllm
    api_base: http://localhost:8010/v1
    batch_size: 8
    max_tokens: 16384
    temperature: 0.1
  vllm_params:
    batch_size: 16
    dtype: "auto"
    download_dir: "/workspace/huggingface/hub"
    max_model_len: 16384
    num_gpus: 1
    port: 8010
    pretrained_model_name_or_path: "Qwen/Qwen3-14B"
    tensor_parallel_size: 1
    trust_remote_code: true
  release_date: "2025-03-06"
  model_size: 14768307200
  size_category: "Medium (10–30B)"

bfcl:
  model_params:
    max_tokens: 16384
    temperature: 0.1
    model_name: Qwen3-14B-FC
```
configs/Qwen3-4B-Instruct-2507.yaml

Lines changed: 32 additions & 0 deletions
```yaml
wandb:
  params:
    run_name: "Qwen3-4B-Instruct-2507"

model:
  name: litellm
  params:
    model_name: Qwen/Qwen3-4B-Instruct-2507
    provider: hosted_vllm
    api_base: http://localhost:8010/v1
    batch_size: 8
    max_tokens: 16384
    temperature: 0.1
  vllm_params:
    batch_size: 16
    dtype: "auto"
    download_dir: "/workspace/huggingface/hub"
    max_model_len: 16384
    num_gpus: 1
    port: 8010
    pretrained_model_name_or_path: "Qwen/Qwen3-4B-Instruct-2507"
    tensor_parallel_size: 1
    trust_remote_code: true
  release_date: "2025-03-06"
  model_size: 4022468096
  size_category: "Small (<10B)"

bfcl:
  model_params:
    max_tokens: 16384
    temperature: 0.1
    model_name: Qwen3-4B-Instruct-2507-FC
```

configs/Qwen3-4B.yaml

Lines changed: 32 additions & 0 deletions
```yaml
wandb:
  params:
    run_name: "Qwen3-4B"

model:
  name: litellm
  params:
    model_name: Qwen/Qwen3-4B
    provider: hosted_vllm
    api_base: http://localhost:8010/v1
    batch_size: 8
    max_tokens: 16384
    temperature: 0.1
  vllm_params:
    batch_size: 16
    dtype: "auto"
    download_dir: "/workspace/huggingface/hub"
    max_model_len: 16384
    num_gpus: 1
    port: 8010
    pretrained_model_name_or_path: "Qwen/Qwen3-4B"
    tensor_parallel_size: 1
    trust_remote_code: true
  release_date: "2025-03-06"
  model_size: 4022468096
  size_category: "Small (<10B)"

bfcl:
  model_params:
    max_tokens: 16384
    temperature: 0.1
    model_name: Qwen3-4B-FC
```

configs/Qwen3-8B.yaml

Lines changed: 32 additions & 0 deletions
```yaml
wandb:
  params:
    run_name: "Qwen3-8B"

model:
  name: litellm
  params:
    model_name: Qwen/Qwen3-8B
    provider: hosted_vllm
    api_base: http://localhost:8010/v1
    batch_size: 8
    max_tokens: 16384
    temperature: 0.1
  vllm_params:
    batch_size: 16
    dtype: "auto"
    download_dir: "/workspace/huggingface/hub"
    max_model_len: 16384
    num_gpus: 1
    port: 8010
    pretrained_model_name_or_path: "Qwen/Qwen3-8B"
    tensor_parallel_size: 1
    trust_remote_code: true
  release_date: "2025-03-06"
  model_size: 8190735360
  size_category: "Small (<10B)"

bfcl:
  model_params:
    max_tokens: 16384
    temperature: 0.1
    model_name: Qwen3-8B-FC
```
