
/v1/embedding support #1221


Closed

Conversation

@dittops (Contributor) commented Jun 23, 2025

Core Implementation:

  • Add EmbeddingRequest, EmbeddingResponse, EmbeddingData, and EmbeddingUsage protocol models (sketched after this list)
  • Implement create_embeddings abstract method in InferenceEngine base class
  • Add VLLM engine implementation for embeddings with proper error handling
  • Create /v1/embeddings FastAPI endpoint following existing patterns

Gateway Integration:

  • Update request validation in gateway util.go to handle embeddings requests
  • Support multiple input formats (string, string array, token arrays)
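
For orientation, here is a minimal sketch of what protocol models along these lines could look like. Field names follow the OpenAI embeddings spec; the exact definitions in this PR may differ:

    from typing import List, Literal, Optional, Union

    from pydantic import BaseModel


    class EmbeddingRequest(BaseModel):
        model: str
        # Accepts a single string, a list of strings, or token arrays
        # (flat or nested), mirroring the OpenAI embeddings API.
        input: Union[str, List[str], List[int], List[List[int]]]
        encoding_format: Literal["float", "base64"] = "float"
        user: Optional[str] = None


    class EmbeddingData(BaseModel):
        object: Literal["embedding"] = "embedding"
        index: int
        embedding: List[float]


    class EmbeddingUsage(BaseModel):
        prompt_tokens: int
        total_tokens: int


    class EmbeddingResponse(BaseModel):
        object: Literal["list"] = "list"
        model: str
        data: List[EmbeddingData]
        usage: EmbeddingUsage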


@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @dittops, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces comprehensive support for a new /v1/embeddings API endpoint, allowing users to generate vector embeddings. It encompasses the definition of new data models, the extension of the core inference engine, a specific implementation for the VLLM backend, and necessary gateway integration, all complemented by thorough documentation and testing.

Highlights

  • New API Endpoint: Introduced a new /v1/embeddings FastAPI endpoint to enable the generation of vector embeddings from text inputs, adhering to the OpenAI embeddings API specification (a sketch follows this list).
  • Core Protocol Models: Added new Pydantic models (EmbeddingRequest, EmbeddingResponse, EmbeddingData, EmbeddingUsage) to define the data structures for handling embedding requests and responses.
  • Inference Engine Integration: Extended the InferenceEngine base class with an abstract create_embeddings method and provided a concrete implementation for the VLLM engine, including robust error handling for external API calls.
  • Gateway & Input Flexibility: Updated the gateway to validate and route /v1/embeddings requests, supporting diverse input formats such as single strings, arrays of strings, and arrays of token IDs (both flat and nested).
  • Documentation & Testing: Included a comprehensive EMBEDDINGS_GUIDE.md for API usage and configuration, alongside extensive integration and unit tests to ensure the correctness and reliability of the new functionality and protocol models.
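
For illustration only, a minimal sketch of how such an endpoint could be wired up with FastAPI. StubEngine and its toy output are hypothetical stand-ins for the PR's VLLM engine, and the protocol models are the ones sketched earlier; the actual route handler in this PR may differ:

    from fastapi import FastAPI
    from fastapi.responses import JSONResponse

    app = FastAPI()


    class StubEngine:
        """Hypothetical stand-in for the PR's InferenceEngine implementations."""

        async def create_embeddings(self, request: EmbeddingRequest) -> EmbeddingResponse:
            # Return zero vectors of a toy dimension, one per input item.
            texts = [request.input] if isinstance(request.input, str) else request.input
            data = [EmbeddingData(index=i, embedding=[0.0, 0.0, 0.0])
                    for i in range(len(texts))]
            return EmbeddingResponse(
                model=request.model,
                data=data,
                usage=EmbeddingUsage(prompt_tokens=0, total_tokens=0),
            )


    engine = StubEngine()


    @app.post("/v1/embeddings")
    async def create_embeddings_endpoint(request: EmbeddingRequest) -> JSONResponse:
        # FastAPI validates the body against the Pydantic request model,
        # then the handler delegates to the engine.
        response = await engine.create_embeddings(request)
        return JSONResponse(content=response.model_dump())

Request validation comes for free from the Pydantic model, so malformed inputs are rejected before they reach the engine.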


@gemini-code-assist (bot) left a comment


Code Review

This pull request adds support for a /v1/embeddings endpoint. The changes are well-structured. I've provided feedback to improve error handling, ensure the implementation matches the documentation, and maintain consistency in the test suite.

Comment on lines 162 to 169

    if response.status_code != HTTPStatus.OK:
        return self._create_error_response(
            f"Failed to create embeddings: {response.text}",
            err_type="ServerError",
            status_code=HTTPStatus(value=response.status_code),
        )

    return EmbeddingResponse(**response.json())


Severity: high

If the upstream service returns a 200 OK status but with an invalid JSON body, the call to response.json() or the EmbeddingResponse instantiation will raise an exception. Wrap the response parsing and model instantiation in a try...except block to handle these cases gracefully.

Suggested change (replacing the block quoted above):

    if response.status_code != HTTPStatus.OK:
        return self._create_error_response(
            f"Failed to create embeddings: {response.text}",
            err_type="ServerError",
            status_code=HTTPStatus(value=response.status_code),
        )

    try:
        return EmbeddingResponse(**response.json())
    except Exception as e:
        logger.error(f"Failed to parse embedding response: {e}")
        return self._create_error_response(
            "Invalid response from inference engine",
            err_type="ServerError",
            status_code=HTTPStatus.INTERNAL_SERVER_ERROR,
        )

Comment on the EmbeddingCache example:

            return self.cache[cache_key]

        # Get embedding from API
        response = await self.fetch_embedding(text, model)

Severity: medium

The EmbeddingCache example calls self.fetch_embedding, but it's not defined. This will cause a runtime error.

Define fetch_embedding to make the API call, similar to other examples:

Suggested change (defining the missing method):

    async def fetch_embedding(self, text: str, model: str):
        async with httpx.AsyncClient() as client:
            response = await client.post(
                "http://localhost:8080/v1/embeddings",
                json={
                    "input": text,
                    "model": model
                }
            )
            response.raise_for_status()  # Or other error handling
            return response.json()
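
Putting the fragments together, a complete version of the guide's caching example might read as follows. The class name, method names, and localhost URL come from the snippets above; the rest is an assumed reconstruction, not the guide's exact code:

    import httpx


    class EmbeddingCache:
        def __init__(self, base_url: str = "http://localhost:8080"):
            self.base_url = base_url
            self.cache: dict = {}

        async def fetch_embedding(self, text: str, model: str) -> dict:
            # A one-shot client keeps the example self-contained; a
            # long-lived client would be preferable in real use.
            async with httpx.AsyncClient() as client:
                response = await client.post(
                    f"{self.base_url}/v1/embeddings",
                    json={"input": text, "model": model},
                )
                response.raise_for_status()  # Or other error handling
                return response.json()

        async def get_embedding(self, text: str, model: str) -> dict:
            cache_key = (model, text)
            if cache_key in self.cache:
                return self.cache[cache_key]
            # Get embedding from API
            response = await self.fetch_embedding(text, model)
            self.cache[cache_key] = response
            return response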

Comment on lines 87 to 106

    case []interface{}:
        // Handle array inputs
        if len(v) > 0 {
            if str, ok := v[0].(string); ok {
                message = str
            }
        }
    }


Severity: medium

The logic for extracting a message only handles string[], and only considers the first element. It doesn't handle number[] and number[][] token arrays.

Extend this to handle numeric/token inputs to improve logging.
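
For context, these are the four input shapes the endpoint accepts, and the server side can normalize them into a batch before tokenization. A minimal Python sketch; the helper name is illustrative, not from the PR:

    from typing import List, Union

    EmbeddingInput = Union[str, List[str], List[int], List[List[int]]]


    def normalize_input(inp: EmbeddingInput) -> Union[List[str], List[List[int]]]:
        # Single string -> batch of one string.
        if isinstance(inp, str):
            return [inp]
        # Flat token array -> batch containing one token sequence.
        if isinstance(inp, list) and inp and isinstance(inp[0], int):
            return [inp]
        # Already a list of strings or a nested token array.
        return inp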

            response = await self.client.post(
                embeddings_url, json=request.model_dump(), headers=self.headers
            )
        except Exception as e:


Severity: medium

Catching a broad Exception can hide bugs. Catch httpx.RequestError for network-related issues.

Suggested change (replacing the except line above):

        except httpx.RequestError as e:
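
In context, the narrowed handler would read roughly like this. The _create_error_response helper and its signature are assumed from the earlier snippet:

    try:
        response = await self.client.post(
            embeddings_url, json=request.model_dump(), headers=self.headers
        )
    except httpx.RequestError as e:
        # Only network-level failures (connect errors, timeouts) are
        # caught; programming errors keep propagating.
        return self._create_error_response(
            f"Failed to create embeddings: {e}",
            err_type="ServerError",
            status_code=HTTPStatus.INTERNAL_SERVER_ERROR,
        )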

@dittops force-pushed the feature/4-embeddings-endpoint branch from 952e2b8 to ecdfaf9 on June 23, 2025 at 06:49

dittops added 5 commits on June 23, 2025 at 07:21:

  - Add try-except for JSON parsing errors in vllm.py
  - Use specific httpx.RequestError instead of generic Exception
  - Improve token array handling in util.go for numeric inputs

Signed-off-by: dittops <[email protected]>

@dittops force-pushed the feature/4-embeddings-endpoint branch from 58692c4 to a4ff6a6 on June 23, 2025 at 07:23

@dittops closed this on Jun 23, 2025