Add DeepSeek Provider Support #1038

Closed
Plastikov wants to merge 8 commits from the feature/deepseek-provider branch

Conversation

Plastikov

Implements the DeepSeek API as a new provider for avante.nvim, enabling dynamic model switching between chat and code modes.

Features

  • Dynamic model switching based on context (see the sketch after this list)
  • Support for both chat and code completion modes
  • Automatic context detection
  • Error handling and response parsing
  • Test coverage for core functionality
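
As a rough illustration of the switching behavior, here is a minimal sketch; the function names, patterns, and heuristics are assumptions for illustration, not necessarily what lua/avante/providers/deepseek.lua does:

-- Hedged sketch: choose a DeepSeek model based on whether the prompt
-- looks like code. Patterns and names are illustrative assumptions.
local CODE_PATTERNS = {
  "```",              -- fenced code block
  "function%s",       -- Lua/JS-style function definition
  "local%s+%w+%s*=",  -- Lua assignment
  "def%s+%w+%(",      -- Python function definition
}

local function looks_like_code(content)
  for _, pattern in ipairs(CODE_PATTERNS) do
    if content:find(pattern) then return true end
  end
  return false
end

-- Pick the coder model for code-heavy prompts, the chat model otherwise.
local function select_model(content)
  return looks_like_code(content) and "deepseek-coder" or "deepseek-chat"
end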

Changes

  • Added new DeepSeek provider module
  • Added test suite for provider functionality
  • Updated gitignore patterns for development files
  • Added test documentation

Testing

Tests can be run using:
nvim --headless -c "PlenaryBustedDirectory tests/deepseek_spec.lua"
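
For context, a plenary/busted spec of this shape might look like the following (a minimal sketch; the module path and the asserted default are assumptions, and the real tests/deepseek_spec.lua will differ):

-- Minimal plenary.busted-style spec; runs under the command above.
describe("deepseek provider", function()
  local provider

  before_each(function()
    -- Assumed module path for the new provider.
    provider = require("avante.providers.deepseek")
  end)

  it("loads with correct defaults", function()
    assert.is_not_nil(provider)
    -- Assumed default model name, for illustration only.
    assert.equals("deepseek-chat", provider.model)
  end)
end)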

Current test results:

  • ✅ Provider loads with correct defaults
  • ✅ Model switching works correctly
  • ✅ Streaming responses parse correctly
  • ❌ Error handling test needs fixing
  • ✅ Content context switching works
  • ✅ Mixed conversation handling works
  • ✅ Edge case handling works
  • ✅ Context maintenance works

Next Steps

  • Fix failing error handling test
  • Add more comprehensive error scenarios
  • Document configuration options

- Add DeepSeek provider implementation with OpenAI-compatible API
- Implement dynamic switching between chat and coder models based on content
- Add comprehensive test suite with mock responses
- Add error handling for DeepSeek-specific API errors
- Add code pattern detection for model switching
- Update config with DeepSeek provider settings

The implementation includes:
- Content-aware model selection
- Streaming response handling (see the sketch after the file list below)
- Error handling with detailed messages
- Test coverage for edge cases
- Debug logging for troubleshooting

Files changed:
- lua/avante/config.lua
- lua/avante/providers/deepseek.lua
- tests/deepseek_spec.lua
- Add cursor rules files (.cursorrules and *.cursorrules)
- Add test documentation (tests/README.md)
- Add test error logs (tests/errorlog.txt and tests/*.log)
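
As context for the streaming item above, here is a minimal sketch of parsing an OpenAI-style SSE stream, the wire format DeepSeek's chat completions endpoint emits; the function and callback names are illustrative, not the PR's actual code:

-- Parse one server-sent-events line from an OpenAI-compatible stream.
-- on_delta receives each content fragment; on_done fires at end of stream.
local function parse_stream_line(line, on_delta, on_done)
  local data = line:match("^data:%s*(.+)$")
  if not data then return end -- ignore comments and blank lines
  if data == "[DONE]" then
    on_done()
    return
  end
  local ok, chunk = pcall(vim.json.decode, data)
  if not ok then return end -- skip malformed chunks
  local delta = chunk.choices and chunk.choices[1] and chunk.choices[1].delta
  if delta and delta.content then
    on_delta(delta.content)
  end
end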
@yetone yetone self-assigned this Jan 5, 2025
@yetone
Owner

yetone commented Jan 5, 2025

I have to say this is a perfect PR.

@boot2linux

nice!

@Plastikov Plastikov closed this Jan 5, 2025
@Plastikov Plastikov reopened this Jan 5, 2025
@nfwyst

nfwyst commented Jan 6, 2025

cool

@yetone
Owner

yetone commented Jan 6, 2025

You can use this command to solve the Lua style lint problem: stylua ./lua ./plugin

https://github.com/JohnnyMorganz/StyLua
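
StyLua also supports a check-only mode (stylua --check ./lua ./plugin), which can reproduce the CI lint failure locally without rewriting any files.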

- Fix formatting in tests/deepseek_spec.lua to comply with stylua rules
- Adjust indentation and spacing
- Fix line length issues
- Ensure consistent code style

This commit addresses the CI lint check failures in the PR.
@aliresool621

What about Mistral.ai? They have amazing models too.

@yetone
Owner

yetone commented Jan 7, 2025

I suddenly realized that the DeepSeek API is OpenAI API compatible, and I've been using this configuration all along. It seems there's no need to implement a separate provider for it.

[configuration screenshot]

@yetone yetone force-pushed the feature/deepseek-provider branch from b9104b8 to de1535b on January 7, 2025 at 06:39
@herschel-ma
Contributor

herschel-ma commented Jan 7, 2025

> I suddenly realized that the DeepSeek API is OpenAI API compatible, and I've been using this configuration all along. It seems there's no need to implement a separate provider for it.
>
> [configuration screenshot]

Yeah, I've been using avante.nvim with the deepseek-v3 model for 2 days.
There is a configuration guide named awesome-deepseek-integration that mentions
how to configure the DeepSeek model opts with the openai provider:

opts = {
  provider = "openai",
  auto_suggestions_provider = "openai", -- Since auto-suggestions are a high-frequency operation and therefore expensive, it is recommended to specify an inexpensive provider or even a free provider: copilot
  openai = {
    endpoint = "https://api.deepseek.com/v1",
    model = "deepseek-chat",
    timeout = 30000, -- Timeout in milliseconds
    temperature = 0,
    max_tokens = 4096,
    ["local"] = false,
  },
}
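
(With this configuration the API key is supplied the same way as for the stock openai provider; assuming avante's defaults, that means setting the OPENAI_API_KEY environment variable to your DeepSeek key.)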

@Plastikov
Author

> I suddenly realized that the DeepSeek API is OpenAI API compatible, and I've been using this configuration all along. It seems there's no need to implement a separate provider for it.
>
> [configuration screenshot]

Oh, that is good then. I will be closing this pull request since there will be no need for the provider module. Thanks for your input, man.

@Plastikov Plastikov closed this Jan 7, 2025
@yetone
Owner

yetone commented Jan 7, 2025

@Plastikov Thank you for your PR. The test code in your PR is very valuable to me, and I will introduce tests for Avante next.
