Conversation

@Sameerlite Sameerlite commented Oct 13, 2025

Title

Add OpenAI Videos API endpoint support with Sora integration

Relevant issues

Fixes LIT-1258

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🆕 New Feature

Changes

Overview

This PR adds comprehensive support for OpenAI's Videos API endpoints, enabling video generation using Sora and Sora-2 models through LiteLLM's unified interface.

Key Features Added

1. Video Endpoints Implementation

  • New module: litellm/proxy/openai_videos_endpoints/
    • video_endpoints.py: FastAPI endpoints for all video operations
    • __init__.py: Module initialization
  • Endpoints implemented:
    • POST /v1/videos - Create video
    • GET /v1/videos/{video_id} - Retrieve video details
    • GET /v1/videos/{video_id}/content - Download video content
    • DELETE /v1/videos/{video_id} - Delete video
    • GET /v1/videos - List videos
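As a rough illustration of the request body that `POST /v1/videos` accepts, the sketch below assembles the JSON payload from the parameters listed in this PR. The helper function is hypothetical; only the field names (`prompt`, `model`, `seconds`, `size`) come from the PR description:

```python
def build_create_video_payload(prompt, model="sora-2", seconds="4", size="720x1280"):
    """Assemble the JSON body for POST /v1/videos.

    Field names follow this PR; defaults here are illustrative only.
    Note that `seconds` is passed as a string, matching the usage example below.
    """
    return {"prompt": prompt, "model": model, "seconds": seconds, "size": size}

payload = build_create_video_payload("A cat playing with yarn")
# payload -> {"prompt": "A cat playing with yarn", "model": "sora-2",
#             "seconds": "4", "size": "720x1280"}
```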

2. Core Video Logic

  • New module: litellm/videos/
    • main.py: Core video generation and management functions
    • __init__.py: Module exports
  • Functions implemented:
    • create_video() / acreate_video() - Generate videos
    • video_retrieve() / avideo_retrieve() - Get video details
    • video_content() / avideo_content() - Download video content
    • video_delete() / avideo_delete() - Delete videos
    • video_list() / avideo_list() - List videos

3. OpenAI Integration

  • Enhanced: litellm/llms/openai/openai.py
    • Added OpenAIVideosAPI class for direct HTTP requests
    • Support for multipart/form-data requests with input_reference parameter
    • Proper error handling and response parsing
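The `(filename, bytes, content_type)` tuple used for `input_reference` matches the convention `requests`/`httpx` accept for multipart file uploads. A minimal sketch of how such a request might be split into plain form fields and file parts (the helper name is illustrative, not LiteLLM's internal code):

```python
def build_video_multipart(prompt, model, input_reference=None):
    """Split video-creation arguments into multipart form fields and files.

    `input_reference`, when given, is a (filename, file_bytes, content_type)
    tuple, the same shape requests/httpx accept in their `files` argument.
    """
    data = {"prompt": prompt, "model": model}
    files = {}
    if input_reference is not None:
        files["input_reference"] = input_reference
    return data, files

data, files = build_video_multipart(
    "Continue this scene",
    "sora-2-pro",
    input_reference=("start_frame.jpg", b"\xff\xd8fake-jpeg-bytes", "image/jpeg"),
)
```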

4. Cost Calculation Integration

  • Enhanced: litellm/cost_calculator.py
    • Added video generation cost calculation logic
    • Support for input_cost_per_video_per_second pricing model
  • Enhanced: litellm/litellm_core_utils/llm_cost_calc/utils.py
    • Added cost_per_video_per_second metric support
  • Updated: Model cost maps with Sora pricing:
    • sora-2: $0.10 per second
    • sora-2-pro: $0.30 per second
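Given those per-second rates, the cost math reduces to rate times duration. A minimal sketch (the dictionary and function are illustrative; the prices come from the model cost map above):

```python
# Per-second prices from this PR's model cost map (USD).
VIDEO_COST_PER_SECOND = {"sora-2": 0.10, "sora-2-pro": 0.30}

def video_generation_cost(model, seconds):
    """Compute video generation cost as price-per-second * duration."""
    return VIDEO_COST_PER_SECOND[model] * seconds

cost = video_generation_cost("sora-2", 4)  # 4-second clip at $0.10/s -> $0.40
```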

Technical Details

API Compatibility

  • Full OpenAI Videos API compatibility
  • Support for all Sora model parameters (prompt, model, seconds, size)
  • Optional input_reference parameter for image-guided video generation
  • Proper multipart/form-data handling for file uploads

Cost Management

  • Automatic cost calculation based on video duration and model
  • Integration with LiteLLM's existing cost tracking system
  • Support for budget limiting and usage monitoring

Usage Example

Note that `acreate_video` is the async variant and must be awaited:

```python
import asyncio
import litellm

async def main():
    # Create a video
    response = await litellm.acreate_video(
        prompt="A cat playing with yarn",
        model="sora-2",
        seconds="4",
        size="720x1280",
    )

    # With an input reference image (image_bytes holds the raw frame data)
    response = await litellm.acreate_video(
        prompt="Continue this scene",
        model="sora-2-pro",
        input_reference=("start_frame.jpg", image_bytes, "image/jpeg"),
    )

asyncio.run(main())
```

This implementation provides a complete, production-ready solution for OpenAI video generation through LiteLLM's unified interface.


vercel bot commented Oct 13, 2025

The latest updates on your projects.

litellm: Deployment Error (updated Oct 13, 2025 7:52am UTC)


@ishaan-jaff ishaan-jaff left a comment


  • PR 1: add this at the SDK level
  • Please make sure to follow the same structure as openai/image_generations
