feat: add new model provider PPIO #6133

Open: wants to merge 2 commits into base `main`
4 changes: 4 additions & 0 deletions .env.example
@@ -127,6 +127,10 @@ OPENAI_API_KEY=sk-xxxxxxxxx

# TENCENT_CLOUD_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

#### PPIO ####

# PPIO_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

########################################
############ Market Service ############
########################################
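For a local (non-Docker) setup, the commented lines above can be copied into `.env` and uncommented. Equivalently, as a sketch using the placeholder key from the example (not a real credential), the variables can be exported in the shell before starting the app:

```shell
# Sketch only: the key below is the placeholder from .env.example, not a real
# credential. PPIO_MODEL_LIST is optional and uses the +/- list syntax
# documented under self-hosting environment variables.
export PPIO_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export PPIO_MODEL_LIST="-all,+deepseek/deepseek-v3/community"
echo "PPIO environment configured"
```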
2 changes: 2 additions & 0 deletions Dockerfile
@@ -199,6 +199,8 @@ ENV \
OPENROUTER_API_KEY="" OPENROUTER_MODEL_LIST="" \
# Perplexity
PERPLEXITY_API_KEY="" PERPLEXITY_MODEL_LIST="" PERPLEXITY_PROXY_URL="" \
# PPIO
PPIO_API_KEY="" PPIO_MODEL_LIST="" \
# Qwen
QWEN_API_KEY="" QWEN_MODEL_LIST="" QWEN_PROXY_URL="" \
# SenseNova
2 changes: 2 additions & 0 deletions Dockerfile.database
@@ -236,6 +236,8 @@ ENV \
OPENROUTER_API_KEY="" OPENROUTER_MODEL_LIST="" \
# Perplexity
PERPLEXITY_API_KEY="" PERPLEXITY_MODEL_LIST="" PERPLEXITY_PROXY_URL="" \
# PPIO
PPIO_API_KEY="" PPIO_MODEL_LIST="" \
# Qwen
QWEN_API_KEY="" QWEN_MODEL_LIST="" QWEN_PROXY_URL="" \
# SenseNova
1 change: 1 addition & 0 deletions README.ja-JP.md
@@ -137,6 +137,7 @@ LobeChat の継続的な開発において、AI 会話サービスを提供す

<details><summary><kbd>See more providers (+26)</kbd></summary>

- **[PPIO](https://lobechat.com/discover/provider/ppio)**: PPIO は、DeepSeek、Llama、Qwen などの安定した費用対効果の高いオープンソース LLM API をサポートしています。[詳細はこちら](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)
- **[Novita](https://lobechat.com/discover/provider/novita)**: Novita AI は、さまざまな大規模言語モデルと AI 画像生成の API サービスを提供するプラットフォームであり、柔軟で信頼性が高く、コスト効率に優れています。Llama3、Mistral などの最新のオープンソースモデルをサポートし、生成的 AI アプリケーションの開発に向けた包括的でユーザーフレンドリーかつ自動スケーリングの API ソリューションを提供し、AI スタートアップの急成長を支援します。
- **[Together AI](https://lobechat.com/discover/provider/togetherai)**: Together AI は、革新的な AI モデルを通じて先進的な性能を実現することに取り組んでおり、迅速なスケーリングサポートや直感的な展開プロセスを含む広範なカスタマイズ能力を提供し、企業のさまざまなニーズに応えています。
- **[Fireworks AI](https://lobechat.com/discover/provider/fireworksai)**: Fireworks AI は、先進的な言語モデルサービスのリーダーであり、機能呼び出しと多モーダル処理に特化しています。最新のモデル Firefunction V2 は Llama-3 に基づいており、関数呼び出し、対話、指示の遵守に最適化されています。視覚言語モデル FireLLaVA-13B は、画像とテキストの混合入力をサポートしています。他の注目すべきモデルには、Llama シリーズや Mixtral シリーズがあり、高効率の多言語指示遵守と生成サポートを提供しています。
5 changes: 3 additions & 2 deletions README.md
@@ -156,6 +156,7 @@ We have implemented support for the following model service providers:

<details><summary><kbd>See more providers (+26)</kbd></summary>

- **[PPIO](https://lobechat.com/discover/provider/ppio)**: PPIO supports stable and cost-efficient open-source LLM APIs, such as DeepSeek, Llama, and Qwen. [Learn more](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)
- **[Novita](https://lobechat.com/discover/provider/novita)**: Novita AI is a platform providing a variety of large language models and AI image generation API services, flexible, reliable, and cost-effective. It supports the latest open-source models like Llama3 and Mistral, offering a comprehensive, user-friendly, and auto-scaling API solution for generative AI application development, suitable for the rapid growth of AI startups.
- **[Together AI](https://lobechat.com/discover/provider/togetherai)**: Together AI is dedicated to achieving leading performance through innovative AI models, offering extensive customization capabilities, including rapid scaling support and intuitive deployment processes to meet various enterprise needs.
- **[Fireworks AI](https://lobechat.com/discover/provider/fireworksai)**: Fireworks AI is a leading provider of advanced language model services, focusing on functional calling and multimodal processing. Its latest model, Firefunction V2, is based on Llama-3, optimized for function calling, conversation, and instruction following. The visual language model FireLLaVA-13B supports mixed input of images and text. Other notable models include the Llama series and Mixtral series, providing efficient multilingual instruction following and generation support.
@@ -629,7 +630,7 @@ If you would like to learn more details, please feel free to look at our [📘 D

## 🤝 Contributing

Contributions of all types are more than welcome; if you are interested in contributing code, feel free to check out our GitHub [Issues][github-issues-link] and [Projects][github-project-link] to get stuck in to show us what youre made of.
Contributions of all types are more than welcome; if you are interested in contributing code, feel free to check out our GitHub [Issues][github-issues-link] and [Projects][github-project-link] to get stuck in to show us what you're made of.

> \[!TIP]
>
@@ -844,7 +845,7 @@ This project is [Apache 2.0](./LICENSE) licensed.
[profile-link]: https://github.com/lobehub
[share-linkedin-link]: https://linkedin.com/feed
[share-linkedin-shield]: https://img.shields.io/badge/-share%20on%20linkedin-black?labelColor=black&logo=linkedin&logoColor=white&style=flat-square
[share-mastodon-link]: https://mastodon.social/share?text=Check%20this%20GitHub%20repository%20out%20%F0%9F%A4%AF%20LobeChat%20-%20An%20open-source,%20extensible%20(Function%20Calling),%20high-performance%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20ChatGPT/LLM%20web%20application.%20https://github.com/lobehub/lobe-chat%20#chatbot%20#chatGPT%20#openAI
[share-mastodon-link]: https://mastodon.social/share?text=Check%20this%20GitHub%20repository%20out%20%F0%9F%A4%AF%20LobeChat%20-%20An%20open-source,%20extensible%20%28Function%20Calling%29,%20high-performance%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20ChatGPT%2FLLM%20web%20application.%20https://github.com/lobehub/lobe-chat%20#chatbot%20#chatGPT%20#openAI
[share-mastodon-shield]: https://img.shields.io/badge/-share%20on%20mastodon-black?labelColor=black&logo=mastodon&logoColor=white&style=flat-square
[share-reddit-link]: https://www.reddit.com/submit?title=Check%20this%20GitHub%20repository%20out%20%F0%9F%A4%AF%20LobeChat%20-%20An%20open-source%2C%20extensible%20%28Function%20Calling%29%2C%20high-performance%20chatbot%20framework.%20It%20supports%20one-click%20free%20deployment%20of%20your%20private%20ChatGPT%2FLLM%20web%20application.%20%23chatbot%20%23chatGPT%20%23openAI&url=https%3A%2F%2Fgithub.com%2Flobehub%2Flobe-chat
[share-reddit-shield]: https://img.shields.io/badge/-share%20on%20reddit-black?labelColor=black&logo=reddit&logoColor=white&style=flat-square
2 changes: 1 addition & 1 deletion README.zh-CN.md
@@ -155,7 +155,7 @@ LobeChat 支持文件上传与知识库功能,你可以上传文件、图片
- **[GitHub](https://lobechat.com/discover/provider/github)**: 通过 GitHub 模型,开发人员可以成为 AI 工程师,并使用行业领先的 AI 模型进行构建。

<details><summary><kbd>See more providers (+26)</kbd></summary>

- **[PPIO](https://lobechat.com/discover/provider/ppio)**: PPIO 派欧云提供稳定、高性价比的开源模型 API 服务,支持 DeepSeek 全系列、Llama、Qwen 等行业领先大模型。[了解更多](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)
- **[Novita](https://lobechat.com/discover/provider/novita)**: Novita AI 是一个提供多种大语言模型与 AI 图像生成的 API 服务的平台,灵活、可靠且具有成本效益。它支持 Llama3、Mistral 等最新的开源模型,并为生成式 AI 应用开发提供了全面、用户友好且自动扩展的 API 解决方案,适合 AI 初创公司的快速发展。
- **[Together AI](https://lobechat.com/discover/provider/togetherai)**: Together AI 致力于通过创新的 AI 模型实现领先的性能,提供广泛的自定义能力,包括快速扩展支持和直观的部署流程,满足企业的各种需求。
- **[Fireworks AI](https://lobechat.com/discover/provider/fireworksai)**: Fireworks AI 是一家领先的高级语言模型服务商,专注于功能调用和多模态处理。其最新模型 Firefunction V2 基于 Llama-3,优化用于函数调用、对话及指令跟随。视觉语言模型 FireLLaVA-13B 支持图像和文本混合输入。其他知名模型包括 Llama 系列和 Mixtral 系列,提供高效的多语言指令跟随与生成支持。
1 change: 1 addition & 0 deletions README.zh-TW.md
@@ -161,6 +161,7 @@ LobeChat 支持文件上傳與知識庫功能,你可以上傳文件、圖片

<details><summary><kbd>瀏覽更多供應商 (+26)</kbd></summary>

- **[PPIO](https://lobechat.com/discover/provider/ppio)**: PPIO 派歐雲提供穩定、高性價比的開源模型 API 服務,支持 DeepSeek 全系列、Llama、Qwen 等行業領先大模型。[瞭解更多](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)
- **[Novita](https://lobechat.com/discover/provider/novita)**: Novita AI 是一個提供多種大語言模型與 AI 圖像生成的 API 服務的平台,靈活、可靠且具有成本效益。它支持 Llama3、Mistral 等最新的開源模型,並為生成式 AI 應用開發提供了全面、用戶友好且自動擴展的 API 解決方案,適合 AI 初創公司的快速發展。
- **[Together AI](https://lobechat.com/discover/provider/togetherai)**: Together AI 致力於通過創新的 AI 模型實現領先的性能,提供廣泛的自定義能力,包括快速擴展支持和直觀的部署流程,滿足企業的各種需求。
- **[Fireworks AI](https://lobechat.com/discover/provider/fireworksai)**: Fireworks AI 是一家領先的高級語言模型服務商,專注於功能調用和多模態處理。其最新模型 Firefunction V2 基於 Llama-3,優化用於函數調用、對話及指令跟隨。視覺語言模型 FireLLaVA-13B 支持圖像和文本混合輸入。其他知名模型包括 Llama 系列和 Mixtral 系列,提供高效的多語言指令跟隨與生成支持。
16 changes: 16 additions & 0 deletions docs/self-hosting/environment-variables/model-provider.mdx
@@ -199,6 +199,22 @@ If you need to use Azure OpenAI to provide model services, you can refer to the
- Default: `-`
- Example: `-all,+01-ai/yi-34b-chat,+huggingfaceh4/zephyr-7b-beta`

## PPIO

### `PPIO_API_KEY`

- Type: Required
- Description: This is your PPIO API key.
- Default: -
- Example: `sk_xxxxxxxxxx`

### `PPIO_MODEL_LIST`

- Type: Optional
- Description: Controls the model list. Use `+` to add a model, `-` to hide a model, and `model_name=display_name` to customize a model's display name, separated by commas. See [model-list][model-list] for the definition syntax.
- Default: `-`
- Example: `-all,+deepseek/deepseek-v3/community,+deepseek/deepseek-r1-distill-llama-70b`
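To make the `+` / `-` / `=` rules above concrete, here is a small illustrative sketch. The `parseModelList` helper is hypothetical, not LobeChat's actual parser (which also handles the `<extension>` config suffix mentioned in the zh-CN docs):

```typescript
// Hypothetical sketch of the model-list syntax described above.
// Not LobeChat's real parser: it only illustrates the `+` / `-` / `=` rules.
interface ModelEntry {
  id: string;
  displayName?: string;
}

function parseModelList(spec: string, defaults: string[]): ModelEntry[] {
  let models: ModelEntry[] = defaults.map((id) => ({ id }));
  for (const token of spec.split(',').map((s) => s.trim()).filter(Boolean)) {
    if (token === '-all') {
      models = []; // hide every default model
    } else if (token.startsWith('-')) {
      const id = token.slice(1);
      models = models.filter((m) => m.id !== id); // hide one model
    } else {
      const body = token.startsWith('+') ? token.slice(1) : token;
      const [id, displayName] = body.split('='); // optional custom display name
      models.push(displayName ? { displayName, id } : { id });
    }
  }
  return models;
}

// Hide all defaults, then enable the two PPIO model ids from the example above.
const enabled = parseModelList(
  '-all,+deepseek/deepseek-v3/community,+deepseek/deepseek-r1-distill-llama-70b',
  ['gpt-4o-mini'],
);
console.log(enabled.map((m) => m.id).join(', '));
```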

## Github

### `GITHUB_TOKEN`
16 changes: 16 additions & 0 deletions docs/self-hosting/environment-variables/model-provider.zh-CN.mdx
@@ -197,6 +197,22 @@ LobeChat 在部署时提供了丰富的模型服务商相关的环境变量,
- 默认值:`-`
- 示例:`-all,+01-ai/yi-34b-chat,+huggingfaceh4/zephyr-7b-beta`

## PPIO

### `PPIO_API_KEY`

- 类型:必选
- 描述:这是你在 PPIO 网站申请的 API 密钥
- 默认值:-
- 示例:`sk_xxxxxxxxxxxx`

### `PPIO_MODEL_LIST`

- 类型:可选
- 描述:用来控制模型列表,使用 `+` 增加一个模型,使用 `-` 来隐藏一个模型,使用 `模型名=展示名<扩展配置>` 来自定义模型的展示名,用英文逗号隔开。模型定义语法规则见 [模型列表][model-list]
- 默认值:`-`
- 示例:`-all,+deepseek/deepseek-v3/community,+deepseek/deepseek-r1-distill-llama-70b`

## Github

### `GITHUB_TOKEN`
57 changes: 57 additions & 0 deletions docs/usage/providers/ppio.mdx
@@ -0,0 +1,57 @@
---
title: Using PPIO API Key in LobeChat
description: >-
Learn how to integrate PPIO's language model APIs into LobeChat. Follow
  the steps to register, create a PPIO API key, configure settings, and
  chat with various AI models.
tags:
- PPIO
- DeepSeek
- Llama
- Qwen
- uncensored
- API key
- Web UI
---

# Using PPIO in LobeChat

<Image alt={'Using PPIO in LobeChat'} cover src={''} />

[PPIO](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link) supports stable and cost-efficient open-source LLM APIs, such as DeepSeek, Llama, and Qwen.

This document will guide you on how to integrate PPIO in LobeChat:

<Steps>
### Step 1: Register and Log in to PPIO

- Visit [PPIO](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link) and create an account
- Upon registration, PPIO will provide a ¥5 credit (about 5M tokens).

<Image alt={'Register PPIO'} height={457} inStep src={'https://github.com/user-attachments/assets/7cb3019b-78c1-48e0-a64c-a6a4836affd9'} />

### Step 2: Obtain the API Key

- Visit PPIO's [key management page](https://ppinfra.com/settings/key-management), create and copy an API Key.

<Image alt={'Obtain PPIO API key'} inStep src={'https://github.com/user-attachments/assets/5abcf21d-5a6c-4fc8-8de6-bc47d4d2fa98'} />

### Step 3: Configure PPIO in LobeChat

- Visit the `Settings` interface in LobeChat
- Find the setting for `PPIO` under `Language Model`

<Image alt={'Enter PPIO API key in LobeChat'} inStep src={'https://github.com/user-attachments/assets/000d6a5b-f8d4-4fd5-84cd-31556c5c1efd'} />

- Open PPIO and enter the obtained API key
- Choose a PPIO model for your assistant to start the conversation

<Image alt={'Select and use PPIO model'} inStep src={'https://github.com/user-attachments/assets/207888f1-df21-4063-8e66-97b0d9cfa02e'} />

<Callout type={'warning'}>
  During usage, you may need to pay the API service provider; please refer to PPIO's [pricing
policy](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link).
</Callout>
</Steps>

You can now engage in conversations using the models provided by PPIO in LobeChat.
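Under the hood, PPIO's endpoint is OpenAI-compatible, which is why the integration above only needs an API key. As a hedged illustration of what a raw request looks like, the helper below builds one; the base URL and model id are assumptions, so verify them against PPIO's own API documentation before use:

```typescript
// Sketch of a raw request against PPIO's OpenAI-compatible chat endpoint.
// ASSUMPTION: the base URL and model id below are illustrative placeholders;
// confirm the real values in PPIO's API documentation.
const PPIO_BASE_URL = 'https://api.ppinfra.com/v3/openai'; // assumed endpoint

interface ChatRequest {
  body: string;
  headers: Record<string, string>;
  url: string;
}

function buildChatRequest(apiKey: string, model: string, prompt: string): ChatRequest {
  return {
    body: JSON.stringify({
      messages: [{ content: prompt, role: 'user' }],
      model,
    }),
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    url: `${PPIO_BASE_URL}/chat/completions`,
  };
}

// The request could then be sent with
// fetch(req.url, { body: req.body, headers: req.headers, method: 'POST' }).
const req = buildChatRequest('sk-demo', 'deepseek/deepseek-v3/community', 'Hello');
console.log(req.url);
```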
55 changes: 55 additions & 0 deletions docs/usage/providers/ppio.zh-CN.mdx
@@ -0,0 +1,55 @@
---
title: 在 LobeChat 中使用 PPIO 派欧云 API Key
description: >-
学习如何将 PPIO 派欧云的 LLM API 集成到 LobeChat 中。跟随以下步骤注册 PPIO 账号、创建 API
Key、并在 LobeChat 中进行设置。
tags:
- PPIO
- PPInfra
- DeepSeek
- Qwen
- Llama3
- API key
- Web UI
---

# 在 LobeChat 中使用 PPIO 派欧云

<Image alt={'在 LobeChat 中使用 PPIO'} cover src={''} />

[PPIO 派欧云](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)提供稳定、高性价比的开源模型 API 服务,支持 DeepSeek 全系列、Llama、Qwen 等行业领先大模型。

本文档将指导你如何在 LobeChat 中使用 PPIO:

<Steps>
### 步骤一:注册 PPIO 派欧云账号并登录

- 访问 [PPIO 派欧云](https://ppinfra.com?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link) 并注册账号
- 注册后,PPIO 会赠送 5 元(约 500 万 tokens)的使用额度

<Image alt={'注册 PPIO'} height={457} inStep src={'https://github.com/user-attachments/assets/7cb3019b-78c1-48e0-a64c-a6a4836affd9'} />

### 步骤二:创建 API 密钥

- 访问 PPIO 派欧云的[密钥管理页面](https://ppinfra.com/settings/key-management),创建并复制一个 API 密钥。

<Image alt={'创建 PPIO API 密钥'} inStep src={'https://github.com/user-attachments/assets/5abcf21d-5a6c-4fc8-8de6-bc47d4d2fa98'} />

### 步骤三:在 LobeChat 中配置 PPIO 派欧云

- 访问 LobeChat 的 `设置` 界面
- 在 `语言模型` 下找到 `PPIO` 的设置项
- 打开 PPIO 并填入获得的 API 密钥

<Image alt={'在 LobeChat 中输入 PPIO API 密钥'} inStep src={'https://github.com/user-attachments/assets/4eaadac7-595c-41ad-a6e0-64c3105577d7'} />

- 为你的助手选择一个 PPIO 模型即可开始对话

<Image alt={'选择并使用 PPIO 模型'} inStep src={'https://github.com/user-attachments/assets/8cf66e00-04fe-4bad-9e3d-35afc7d9aa58'} />

<Callout type={'warning'}>
在使用过程中你可能需要向 API 服务提供商付费,PPIO 的 API 费用参考[这里](https://ppinfra.com/llm-api?utm_source=github_lobe-chat&utm_medium=github_readme&utm_campaign=link)。
</Callout>
</Steps>

至此你已经可以在 LobeChat 中使用 PPIO 提供的模型进行对话了。
3 changes: 3 additions & 0 deletions locales/en-US/providers.json
@@ -118,5 +118,8 @@
},
"zhipu": {
"description": "Zhipu AI offers an open platform for multimodal and language models, supporting a wide range of AI application scenarios, including text processing, image understanding, and programming assistance."
},
"ppio": {
"description": "PPIO supports stable and cost-efficient open-source LLM APIs, such as DeepSeek, Llama, and Qwen."
}
}
3 changes: 3 additions & 0 deletions locales/zh-CN/providers.json
@@ -118,5 +118,8 @@
},
"doubao": {
"description": "字节跳动推出的自研大模型。通过字节跳动内部50+业务场景实践验证,每日万亿级tokens大使用量持续打磨,提供多种模态能力,以优质模型效果为企业打造丰富的业务体验。"
},
"ppio": {
"description": "PPIO 派欧云提供稳定、高性价比的开源模型 API 服务,支持 DeepSeek 全系列、Llama、Qwen 等行业领先大模型。"
}
}
2 changes: 1 addition & 1 deletion package.json
@@ -126,7 +126,7 @@
"@lobehub/charts": "^1.12.0",
"@lobehub/chat-plugin-sdk": "^1.32.4",
"@lobehub/chat-plugins-gateway": "^1.9.0",
"@lobehub/icons": "^1.69.0",
"@lobehub/icons": "^1.71.0",
"@lobehub/tts": "^1.28.0",
"@lobehub/ui": "^1.164.10",
"@neondatabase/serverless": "^0.10.4",
@@ -19,6 +19,7 @@ import {
NovitaProviderCard,
OpenRouterProviderCard,
PerplexityProviderCard,
PPIOProviderCard,
QwenProviderCard,
SenseNovaProviderCard,
SiliconCloudProviderCard,
@@ -90,6 +91,7 @@ export const useProviderList = (): ProviderItem[] => {
SiliconCloudProviderCard,
HigressProviderCard,
GiteeAIProviderCard,
PPIOProviderCard,
],
[
AzureProvider,
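The `PPIOProviderCard` registered above is defined elsewhere in this PR. As a rough, hypothetical sketch of the kind of metadata such a card carries (field names here are illustrative, not the repo's exact `ModelProviderCard` type):

```typescript
// Illustrative sketch only: a provider card similar in spirit to the
// PPIOProviderCard imported above. The real type in the repo has more fields,
// and the checkModel value here is an assumption.
interface ProviderCardSketch {
  checkModel: string; // model used for the connectivity check
  description: string;
  id: string;
  name: string;
  settings: { showModelFetcher: boolean };
  url: string;
}

const ppioCardSketch: ProviderCardSketch = {
  checkModel: 'deepseek/deepseek-r1-distill-llama-70b',
  description:
    'PPIO supports stable and cost-efficient open-source LLM APIs, such as DeepSeek, Llama, and Qwen.',
  id: 'ppio',
  name: 'PPIO',
  settings: { showModelFetcher: true },
  url: 'https://ppinfra.com',
};

console.log(`${ppioCardSketch.name} (${ppioCardSketch.id})`);
```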
3 changes: 3 additions & 0 deletions src/config/aiModels/index.ts
@@ -27,6 +27,7 @@ import { default as ollama } from './ollama';
import { default as openai } from './openai';
import { default as openrouter } from './openrouter';
import { default as perplexity } from './perplexity';
import { default as ppio } from './ppio';
import { default as qwen } from './qwen';
import { default as sensenova } from './sensenova';
import { default as siliconcloud } from './siliconcloud';
@@ -88,6 +89,7 @@ export const LOBE_DEFAULT_MODEL_LIST = buildDefaultModelList({
openai,
openrouter,
perplexity,
ppio,
qwen,
sensenova,
siliconcloud,
@@ -130,6 +132,7 @@ export { default as ollama } from './ollama';
export { default as openai } from './openai';
export { default as openrouter } from './openrouter';
export { default as perplexity } from './perplexity';
export { default as ppio } from './ppio';
export { default as qwen } from './qwen';
export { default as sensenova } from './sensenova';
export { default as siliconcloud } from './siliconcloud';
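The new `./ppio` module imported above supplies PPIO's default model cards. A minimal, hypothetical sketch of what such a file might contain (the context-window numbers are illustrative assumptions; the real module uses the repo's `AIChatModelCard` type and a default export):

```typescript
// Illustrative sketch of a src/config/aiModels/ppio.ts-style module.
// Field names mirror the pattern sibling provider modules use; the
// contextWindowTokens values are assumptions, not PPIO's published limits.
interface AIChatModelCardSketch {
  contextWindowTokens: number;
  displayName: string;
  enabled?: boolean;
  id: string;
  type: 'chat';
}

const ppioChatModels: AIChatModelCardSketch[] = [
  {
    contextWindowTokens: 64_000, // assumed
    displayName: 'DeepSeek V3 (community)',
    enabled: true,
    id: 'deepseek/deepseek-v3/community',
    type: 'chat',
  },
  {
    contextWindowTokens: 32_000, // assumed
    displayName: 'DeepSeek R1 Distill Llama 70B',
    id: 'deepseek/deepseek-r1-distill-llama-70b',
    type: 'chat',
  },
];

// In the real module this array would be the default export.
console.log(ppioChatModels.map((m) => m.id).join(', '));
```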