Add Cerebras [WIP] #476


Open · wants to merge 5 commits into main

Conversation


@scosman (Collaborator) commented Aug 2, 2025

Qwen-Coder attempt at adding a provider. Zero human edits (will review after).

3.1M input tokens, 11k output tokens. Used Cline.

Prompt:

I want to add Cerebras as a provider to Kiln.

 - Add it in the ML model list as a provider (this will cause type errors where we need to write code). Run the type checker (if you can't see the errors automatically, run `uv run pyright .` in the terminal) to find issues.
 - It should be modeled like Ollama: a custom OpenAI-compatible-endpoint-based router. See adapter_registry.py. Use its OpenAI-compatible endpoint `https://api.cerebras.ai/v1`.
 - Run generate_schema.sh to update our frontend APIs to add the new provider. This will cause type errors you need to resolve on the front end.
 - Add necessary UI in connect_provider and other places.
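For context (not part of the PR): "OpenAI-compatible endpoint" means Cerebras accepts standard OpenAI-style chat-completion requests. A minimal sketch of building such a request with only the standard library; the model name and API key below are placeholders, not values from this PR:

```python
import json
import urllib.request

CEREBRAS_BASE_URL = "https://api.cerebras.ai/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for the Cerebras endpoint."""
    body = json.dumps(
        {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }
    ).encode("utf-8")
    return urllib.request.Request(
        f"{CEREBRAS_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("sk-placeholder", "llama3.1-8b", "Hello")
print(req.full_url)  # https://api.cerebras.ai/v1/chat/completions
```

Sending the request is omitted; the point is only that the wire format is the same one Kiln already speaks for Ollama and other OpenAI-compatible providers.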

What does this PR do?

Related Issues

Contributor License Agreement

I, @, confirm that I have read and agree to the Contributors License Agreement.

Checklists

  • Tests have been run locally and passed
  • New tests have been added to any work in /lib


coderabbitai bot commented Aug 2, 2025

Important

Review skipped: ignore keyword(s) found in the title.

⛔ Ignored keywords (2)
  • WIP
  • Draft

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.




github-actions bot commented Aug 2, 2025

📊 Coverage Report

Overall Coverage: 91%

Diff: origin/main...HEAD

  • app/desktop/studio_server/provider_api.py (12.5%): Missing lines 300-301,358,796-797,801,803-804,808-809,816-817,821-822
  • libs/core/kiln_ai/adapters/adapter_registry.py (0.0%): Missing lines 189-190
  • libs/core/kiln_ai/adapters/model_adapters/litellm_adapter.py (50.0%): Missing lines 334
  • libs/core/kiln_ai/adapters/provider_tools.py (0.0%): Missing lines 387-388
  • libs/core/kiln_ai/datamodel/datamodel_enums.py (100%)

Summary

  • Total: 23 lines
  • Missing: 19 lines
  • Coverage: 17%

Line-by-line diff coverage

app/desktop/studio_server/provider_api.py

Lines 296-305

  296                     parse_api_field(key_data, "Project Location"),
  297                 )
  298             case ModelProviderName.together_ai:
  299                 return await connect_together(parse_api_key(key_data))
! 300             case ModelProviderName.cerebras:
! 301                 return await connect_cerebras(parse_api_key(key_data))
  302             case (
  303                 ModelProviderName.kiln_custom_registry
  304                 | ModelProviderName.kiln_fine_tune
  305                 | ModelProviderName.openai_compatible

Lines 354-362

  354                     Config.shared().vertex_location = None
  355                 case ModelProviderName.together_ai:
  356                     Config.shared().together_api_key = None
  357                 case ModelProviderName.cerebras:
! 358                     Config.shared().cerebras_api_key = None
  359                 case (
  360                     ModelProviderName.kiln_custom_registry
  361                     | ModelProviderName.kiln_fine_tune
  362                     | ModelProviderName.openai_compatible

Lines 792-813

  792         )
  793 
  794 
  795 async def connect_cerebras(key: str):
! 796     try:
! 797         headers = {
  798             "Authorization": f"Bearer {key}",
  799             "Content-Type": "application/json",
  800         }
! 801         response = requests.get("https://api.cerebras.ai/v1/models", headers=headers)
  802 
! 803         if response.status_code == 401:
! 804             return JSONResponse(
  805                 status_code=401,
  806                 content={"message": "Failed to connect to Cerebras. Invalid API key."},
  807             )
! 808         elif response.status_code != 200:
! 809             return JSONResponse(
  810                 status_code=400,
  811                 content={
  812                     "message": f"Failed to connect to Cerebras. Error: [{response.status_code}]"
  813                 },

Lines 812-826

  812                     "message": f"Failed to connect to Cerebras. Error: [{response.status_code}]"
  813                 },
  814             )
  815         else:
! 816             Config.shared().cerebras_api_key = key
! 817             return JSONResponse(
  818                 status_code=200,
  819                 content={"message": "Connected to Cerebras"},
  820             )
! 821     except Exception as e:
! 822         return JSONResponse(
  823             status_code=400,
  824             content={"message": f"Failed to connect to Cerebras. Error: {str(e)}"},
  825         )
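The handler above reduces to a small status-code mapping on the result of `GET /v1/models`. A standalone sketch of that mapping (the helper name is ours, for illustration; the messages mirror the diff):

```python
def cerebras_connect_result(status_code: int) -> tuple[int, str]:
    """Map an HTTP status from GET /v1/models to the (status, message) the handler returns."""
    if status_code == 401:
        # Invalid key: pass the 401 through with a specific message.
        return 401, "Failed to connect to Cerebras. Invalid API key."
    if status_code != 200:
        # Any other failure collapses to a 400 with the upstream status embedded.
        return 400, f"Failed to connect to Cerebras. Error: [{status_code}]"
    # Success: the real handler also persists the key to Config before responding.
    return 200, "Connected to Cerebras"
```

One review note for the follow-up pass: the diff uses the synchronous `requests.get` inside an `async def`, which blocks the event loop; an async HTTP client (or a thread offload) would be the idiomatic fix.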

libs/core/kiln_ai/adapters/adapter_registry.py

Lines 185-194

  185                         "api_key": Config.shared().huggingface_api_key,
  186                     },
  187                 ),
  188             )
! 189         case ModelProviderName.cerebras:
! 190             return LiteLlmAdapter(
  191                 kiln_task=kiln_task,
  192                 base_adapter_config=base_adapter_config,
  193                 config=LiteLlmConfig(
  194                     run_config_properties=run_config_properties,

libs/core/kiln_ai/adapters/model_adapters/litellm_adapter.py

Lines 330-338

  330                 litellm_provider_name = "vertex_ai"
  331             case ModelProviderName.together_ai:
  332                 litellm_provider_name = "together_ai"
  333             case ModelProviderName.cerebras:
! 334                 litellm_provider_name = "cerebras"
  335             case ModelProviderName.openai_compatible:
  336                 is_custom = True
  337             case ModelProviderName.kiln_custom_registry:
  338                 is_custom = True
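For readers unfamiliar with litellm: it dispatches by a provider-prefixed model id, so mapping Kiln's provider enum to the string `"cerebras"` is what lets litellm route the call natively. A trivial illustration (the helper name is ours, not the PR's):

```python
def litellm_model_id(litellm_provider_name: str, model_name: str) -> str:
    """litellm routes requests by a 'provider/model' id string."""
    return f"{litellm_provider_name}/{model_name}"

# With the mapping above, a Cerebras-hosted model becomes e.g.:
print(litellm_model_id("cerebras", "llama3.1-8b"))  # cerebras/llama3.1-8b
```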

libs/core/kiln_ai/adapters/provider_tools.py

Lines 383-392

  383             case ModelProviderName.vertex:
  384                 return "Google Vertex AI"
  385             case ModelProviderName.together_ai:
  386                 return "Together AI"
! 387             case ModelProviderName.cerebras:
! 388                 return "Cerebras"
  389             case _:
  390                 # triggers pyright warning if I miss a case
  391                 raise_exhaustive_enum_error(enum_id)

