- [2025/12] Oumi v0.6.0 released with Python 3.13 support, the `oumi analyze` CLI command, TRL 0.26+ support, and more
- [2025/12] WeMakeDevs AI Agents Assemble Hackathon: Oumi webinar on Finetuning for Text-to-SQL
- [2025/12] Oumi co-sponsors WeMakeDevs AI Agents Assemble Hackathon with over 2000 project submissions
- [2025/11] Oumi v0.5.0 released with advanced data synthesis, hyperparameter tuning automation, support for OpenEnv, and more
- [2025/11] Example notebook to perform RLVF fine-tuning with OpenEnv, an open source library from the Meta PyTorch team for creating, deploying, and distributing agentic RL environments
- [2025/10] Oumi v0.4.1 and v0.4.2 released with support for Qwen3-VL and Transformers v4.56, data synthesis documentation and examples, and many bug fixes
- [2025/09] Oumi v0.4.0 released with DeepSpeed support, a Hugging Face Hub cache management tool, and KTO/Vision DPO trainer support
- [2025/08] Training and inference support for OpenAI's `gpt-oss-20b` and `gpt-oss-120b`: recipes here
- [2025/08] Aug 14 Webinar - OpenAI's gpt-oss: Separating the Substance from the Hype
- [2025/08] Oumi v0.3.0 released with model quantization (AWQ), an improved LLM-as-a-Judge API, and Adaptive Inference
- [2025/07] Recipe for Qwen3 235B
- [2025/07] July 24 webinar: "Training a State-of-the-art Agent LLM with Oumi + Lambda"
- [2025/06] Oumi v0.2.0 released with support for GRPO fine-tuning, a plethora of new models, and much more
- [2025/06] Announcement of the Data Curation for Vision Language Models (DCVLR) competition at NeurIPS 2025
- [2025/06] Recipes for training, inference, and eval with the newly released Falcon-H1 and Falcon-E models
- [2025/05] Support and recipes for InternVL3 1B
- [2025/04] Added support for training and inference with Llama 4 models: Scout (17B activated, 109B total) and Maverick (17B activated, 400B total) variants, including full fine-tuning, LoRA, and QLoRA configurations
- [2025/04] Recipes for Qwen3 model family
- [2025/04] Introducing HallOumi: a State-of-the-Art Claim-Verification Model (technical overview)
- [2025/04] Oumi now supports two new Vision-Language models: Phi4 and Qwen 2.5
Oumi is a fully open-source platform that streamlines the entire lifecycle of foundation models - from data preparation and training to evaluation and deployment. Whether you're developing on a laptop, launching large scale experiments on a cluster, or deploying models in production, Oumi provides the tools and workflows you need.
With Oumi, you can:
- 🚀 Train and fine-tune models from 10M to 405B parameters using state-of-the-art techniques (SFT, LoRA, QLoRA, GRPO, and more)
- 🤖 Work with both text and multimodal models (Llama, DeepSeek, Qwen, Phi, and others)
- 🔄 Synthesize and curate training data with LLM judges
- ⚡️ Deploy models efficiently with popular inference engines (vLLM, SGLang)
- 📊 Evaluate models comprehensively across standard benchmarks
- 🌎 Run anywhere - from laptops to clusters to clouds (AWS, Azure, GCP, Lambda, and more)
- 🔌 Integrate with both open models and commercial APIs (OpenAI, Anthropic, Vertex AI, Together, Parasail, ...)
All with one consistent API, production-grade reliability, and the flexibility you need for research.
Learn more at oumi.ai, or jump right in with the quickstart guide.
Choose the installation method that works best for you:
Using uv (Recommended)
# Basic installation
uv pip install oumi
# With GPU support
uv pip install 'oumi[gpu]'
# Latest development version
uv pip install git+https://github.com/oumi-ai/oumi.git

Don't have `uv`? Install it or use `pip` instead.
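Once installed, you can sanity-check the CLI the same way the Docker example below does:

```bash
# Confirm the CLI is installed and list the available subcommands
oumi --help
```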
Using Docker
# Pull the latest image
docker pull ghcr.io/oumi-ai/oumi:latest
# Run oumi commands
docker run --gpus all -it ghcr.io/oumi-ai/oumi:latest oumi --help
# Train with a mounted config
docker run --gpus all -v $(pwd):/workspace -it ghcr.io/oumi-ai/oumi:latest \
oumi train --config /workspace/my_config.yaml

Quick Install Script (Experimental)
Try Oumi without setting up a Python environment. This installs Oumi in an isolated environment:
curl -LsSf https://oumi.ai/install.sh | bash

For more advanced installation options, see the installation guide.
You can quickly use the `oumi` command to train, evaluate, and run inference with models using one of the existing recipes:
# Training
oumi train -c configs/recipes/smollm/sft/135m/quickstart_train.yaml
# Evaluation
oumi evaluate -c configs/recipes/smollm/evaluation/135m/quickstart_eval.yaml
# Inference
oumi infer -c configs/recipes/smollm/inference/135m_infer.yaml --interactive

For more advanced options, see the training, evaluation, inference, and llm-as-a-judge guides.
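Recipe fields can also be overridden directly on the command line using dotted parameter paths, just as the launch examples below override `--resources.cloud`. A minimal sketch, assuming the standard recipe schema (the `training.max_steps` and `training.output_dir` field paths are illustrative):

```bash
# Run a shortened training job by overriding recipe fields inline
# (field paths assume the standard Oumi config schema)
oumi train -c configs/recipes/smollm/sft/135m/quickstart_train.yaml \
  --training.max_steps 10 \
  --training.output_dir output/smollm-135m-sft
```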
You can run jobs remotely on cloud platforms (AWS, Azure, GCP, Lambda, etc.) using the oumi launch command:
# GCP
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_gcp_job.yaml
# AWS
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_gcp_job.yaml --resources.cloud aws
# Azure
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_gcp_job.yaml --resources.cloud azure
# Lambda
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_gcp_job.yaml --resources.cloud lambda

Note: Oumi is in beta and under active development. The core features are stable, but some advanced features might change as the platform improves.
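Once a job is up, the launcher can also report on and tear down the clusters it created. A minimal sketch, assuming `status` and `down` subcommands alongside `up` (run `oumi launch --help` to confirm the exact names and flags):

```bash
# List the status of launched jobs across clouds
# (subcommand and flag names below are assumptions; verify with `oumi launch --help`)
oumi launch status

# Tear down a cluster when the job is done to stop incurring costs
oumi launch down --cluster my-cluster
```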
If you need a comprehensive platform for training, evaluating, or deploying models, Oumi is a great choice.
Here are some of the key features that make Oumi stand out:
- 🔧 Zero Boilerplate: Get started in minutes with ready-to-use recipes for popular models and workflows. No need to write training loops or data pipelines.
- 🏢 Enterprise-Grade: Built and validated by teams training models at scale.
- 🎯 Research Ready: Perfect for ML research with easily reproducible experiments, and flexible interfaces for customizing each component.
- 🌐 Broad Model Support: Works with most popular model architectures - from tiny models to the largest ones, text-only to multimodal.
- 🚀 SOTA Performance: Native support for distributed training techniques (FSDP, DeepSpeed, DDP) and optimized inference engines (vLLM, SGLang).
- 🤝 Community First: 100% open source with an active community. No vendor lock-in, no strings attached.
Explore the growing collection of ready-to-use configurations for state-of-the-art models and training workflows:
Note: These configurations are not an exhaustive list of what's supported; they are simply examples to get you started. You can find a more exhaustive list of supported models and datasets (supervised fine-tuning, pre-training, preference tuning, and vision-language fine-tuning) in the Oumi documentation.
| Model | Example Configurations |
|---|---|
| Qwen3-Next 80B A3B | LoRA • Inference • Inference (Instruct) • Evaluation |
| Qwen3 30B A3B | LoRA • Inference • Evaluation |
| Qwen3 32B | LoRA • Inference • Evaluation |
| Qwen3 14B | LoRA • Inference • Evaluation |
| Qwen3 8B | FFT • Inference • Evaluation |
| Qwen3 4B | FFT • Inference • Evaluation |
| Qwen3 1.7B | FFT • Inference • Evaluation |
| Qwen3 0.6B | FFT • Inference • Evaluation |
| QwQ 32B | FFT • LoRA • QLoRA • Inference • Evaluation |
| Qwen2.5-VL 3B | SFT • LoRA • Inference (vLLM) • Inference |
| Qwen2-VL 2B | SFT • LoRA • Inference (vLLM) • Inference (SGLang) • Inference • Evaluation |
| Model | Example Configurations |
|---|---|
| DeepSeek R1 671B | Inference (Together AI) |
| Distilled Llama 8B | FFT • LoRA • QLoRA • Inference • Evaluation |
| Distilled Llama 70B | FFT • LoRA • QLoRA • Inference • Evaluation |
| Distilled Qwen 1.5B | FFT • LoRA • Inference • Evaluation |
| Distilled Qwen 32B | LoRA • Inference • Evaluation |
| Model | Example Configurations |
|---|---|
| Llama 4 Scout Instruct 17B | FFT • LoRA • QLoRA • Inference (vLLM) • Inference • Inference (Together.ai) |
| Llama 4 Scout 17B | FFT |
| Llama 3.1 8B | FFT • LoRA • QLoRA • Pre-training • Inference (vLLM) • Inference • Evaluation |
| Llama 3.1 70B | FFT • LoRA • QLoRA • Inference • Evaluation |
| Llama 3.1 405B | FFT • LoRA • QLoRA |
| Llama 3.2 1B | FFT • LoRA • QLoRA • Inference (vLLM) • Inference (SGLang) • Inference • Evaluation |
| Llama 3.2 3B | FFT • LoRA • QLoRA • Inference (vLLM) • Inference (SGLang) • Inference • Evaluation |
| Llama 3.3 70B | FFT • LoRA • QLoRA • Inference (vLLM) • Inference • Evaluation |
| Llama 3.2 Vision 11B | SFT • Inference (vLLM) • Inference (SGLang) • Evaluation |
| Model | Example Configurations |
|---|---|
| Falcon-H1 | FFT • Inference • Evaluation |
| Falcon-E (BitNet) | FFT • DPO • Evaluation |
| Model | Example Configurations |
|---|---|
| Gemma 3 4B Instruct | FFT • Inference • Evaluation |
| Gemma 3 12B Instruct | LoRA • Inference • Evaluation |
| Gemma 3 27B Instruct | LoRA • Inference • Evaluation |
| Model | Example Configurations |
|---|---|
| OLMo 3 7B Instruct | FFT • Inference • Evaluation |
| OLMo 3 32B Instruct | LoRA • Inference • Evaluation |
| Model | Example Configurations |
|---|---|
| Llama 3.2 Vision 11B | SFT • LoRA • Inference (vLLM) • Inference (SGLang) • Evaluation |
| LLaVA 7B | SFT • Inference (vLLM) • Inference |
| Phi3 Vision 4.2B | SFT • LoRA • Inference (vLLM) |
| Phi4 Vision 5.6B | SFT • LoRA • Inference (vLLM) • Inference |
| Qwen2-VL 2B | SFT • LoRA • Inference (vLLM) • Inference (SGLang) • Inference • Evaluation |
| Qwen3-VL 2B | Inference |
| Qwen3-VL 4B | Inference |
| Qwen3-VL 8B | Inference |
| Qwen2.5-VL 3B | SFT • LoRA • Inference (vLLM) • Inference |
| SmolVLM-Instruct 2B | SFT • LoRA |
This section lists all the language models that can be used with Oumi. Thanks to the integration with the 🤗 Transformers library, you can easily use any of these models for training, evaluation, or inference.
Models prefixed with a checkmark (✅) have been thoroughly tested and validated by the Oumi community, with ready-to-use recipes available in the configs/recipes directory.
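Because the model is just another config field, trying any of these models is typically a one-line change to an existing recipe. A minimal sketch, assuming the standard Oumi config schema where the model is selected via `model.model_name` (the Qwen checkpoint is only an illustrative choice):

```bash
# Fine-tune a different Hugging Face Hub model by overriding the recipe's model field
# (the model.model_name path is assumed from the standard Oumi config schema)
oumi train -c configs/recipes/smollm/sft/135m/quickstart_train.yaml \
  --model.model_name "Qwen/Qwen2.5-0.5B-Instruct"
```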
📋 Click to see more supported models
| Model | Size | Paper | HF Hub | License | Open¹ |
|---|---|---|---|---|---|
| ✅ SmolLM-Instruct | 135M/360M/1.7B | Blog | Hub | Apache 2.0 | ✅ |
| ✅ DeepSeek R1 Family | 1.5B/8B/32B/70B/671B | Blog | Hub | MIT | ❌ |
| ✅ Llama 3.1 Instruct | 8B/70B/405B | Paper | Hub | License | ❌ |
| ✅ Llama 3.2 Instruct | 1B/3B | Paper | Hub | License | ❌ |
| ✅ Llama 3.3 Instruct | 70B | Paper | Hub | License | ❌ |
| ✅ Phi-3.5-Instruct | 4B/14B | Paper | Hub | License | ❌ |
| ✅ Qwen3 | 0.6B-32B | Paper | Hub | License | ❌ |
| Qwen2.5-Instruct | 0.5B-70B | Paper | Hub | License | ❌ |
| OLMo 2 Instruct | 7B | Paper | Hub | Apache 2.0 | ✅ |
| ✅ OLMo 3 Instruct | 7B/32B | Paper | Hub | Apache 2.0 | ✅ |
| MPT-Instruct | 7B | Blog | Hub | Apache 2.0 | ✅ |
| Command R | 35B/104B | Blog | Hub | License | ❌ |
| Granite-3.1-Instruct | 2B/8B | Paper | Hub | Apache 2.0 | ❌ |
| Gemma 2 Instruct | 2B/9B | Blog | Hub | License | ❌ |
| ✅ Gemma 3 Instruct | 4B/12B/27B | Blog | Hub | License | ❌ |
| DBRX-Instruct | 130B MoE | Blog | Hub | Apache 2.0 | ❌ |
| Falcon-Instruct | 7B/40B | Paper | Hub | Apache 2.0 | ❌ |
| ✅ Llama 4 Scout Instruct | 17B (Activated) 109B (Total) | Paper | Hub | License | ❌ |
| ✅ Llama 4 Maverick Instruct | 17B (Activated) 400B (Total) | Paper | Hub | License | ❌ |
| Model | Size | Paper | HF Hub | License | Open |
|---|---|---|---|---|---|
| ✅ Llama 3.2 Vision | 11B | Paper | Hub | License | ❌ |
| ✅ LLaVA-1.5 | 7B | Paper | Hub | License | ❌ |
| ✅ Phi-3 Vision | 4.2B | Paper | Hub | License | ❌ |
| ✅ BLIP-2 | 3.6B | Paper | Hub | MIT | ❌ |
| ✅ Qwen2-VL | 2B | Blog | Hub | License | ❌ |
| ✅ Qwen3-VL | 2B/4B/8B | Blog | Hub | License | ❌ |
| ✅ SmolVLM-Instruct | 2B | Blog | Hub | Apache 2.0 | ✅ |
| Model | Size | Paper | HF Hub | License | Open |
|---|---|---|---|---|---|
| ✅ SmolLM2 | 135M/360M/1.7B | Blog | Hub | Apache 2.0 | ✅ |
| ✅ Llama 3.2 | 1B/3B | Paper | Hub | License | ❌ |
| ✅ Llama 3.1 | 8B/70B/405B | Paper | Hub | License | ❌ |
| ✅ GPT-2 | 124M-1.5B | Paper | Hub | MIT | ✅ |
| DeepSeek V2 | 7B/13B | Blog | Hub | License | ❌ |
| Gemma2 | 2B/9B | Blog | Hub | License | ❌ |
| GPT-J | 6B | Blog | Hub | Apache 2.0 | ✅ |
| GPT-NeoX | 20B | Paper | Hub | Apache 2.0 | ✅ |
| Mistral | 7B | Paper | Hub | Apache 2.0 | ❌ |
| Mixtral | 8x7B/8x22B | Blog | Hub | Apache 2.0 | ❌ |
| MPT | 7B | Blog | Hub | Apache 2.0 | ✅ |
| OLMo | 1B/7B | Paper | Hub | Apache 2.0 | ✅ |
| ✅ Llama 4 Scout | 17B (Activated) 109B (Total) | Paper | Hub | License | ❌ |
| Model | Size | Paper | HF Hub | License | Open |
|---|---|---|---|---|---|
| ✅ gpt-oss | 20B/120B | Paper | Hub | Apache 2.0 | ❌ |
| ✅ Qwen3 | 0.6B-32B | Paper | Hub | License | ❌ |
| ✅ Qwen3-Next | 80B-A3B | Blog | Hub | License | ❌ |
| Qwen QwQ | 32B | Blog | Hub | License | ❌ |
| Model | Size | Paper | HF Hub | License | Open |
|---|---|---|---|---|---|
| ✅ Qwen2.5 Coder | 0.5B-32B | Blog | Hub | License | ❌ |
| DeepSeek Coder | 1.3B-33B | Paper | Hub | License | ❌ |
| StarCoder 2 | 3B/7B/15B | Paper | Hub | License | ✅ |
| Model | Size | Paper | HF Hub | License | Open |
|---|---|---|---|---|---|
| DeepSeek Math | 7B | Paper | Hub | License | ❌ |
To learn more about all the platform's capabilities, see the Oumi documentation.
Oumi is a community-first effort. Whether you are a developer, a researcher, or a non-technical user, all contributions are very welcome!
- To contribute to the `oumi` repository, please check `CONTRIBUTING.md` for guidance on how to send your first Pull Request.
- Make sure to join our Discord community to get help, share your experiences, and contribute to the project!
- If you are interested in joining one of the community's open-science efforts, check out our open collaboration page.
Oumi makes use of several libraries and tools from the open-source community. We would like to acknowledge and deeply thank the contributors of these projects! ✨ 🌟 💫
If you find Oumi useful in your research, please consider citing it:
@software{oumi2025,
author = {Oumi Community},
title = {Oumi: an Open, End-to-end Platform for Building Large Foundation Models},
month = {January},
year = {2025},
url = {https://github.com/oumi-ai/oumi}
}

This project is licensed under the Apache License 2.0. See the LICENSE file for details.
Footnotes
1. Open models are defined as models with fully open weights, training code, and data, and a permissive license. See Open Source Definitions for more information.
