Merged
4 changes: 3 additions & 1 deletion src/provider.ts
@@ -21,7 +21,9 @@ const DEFAULT_MAX_OUTPUT_TOKENS = 16000;
 // Token estimates for gpt-oss are correct as we use the appropriate tokenizer.
 // For Qwen we must first create the tokenizer from the model, as it does not use tiktoken.
 // As a workaround, we also use the gpt-oss tokenizer for now and reduce the max context length here.
-const DEFAULT_CONTEXT_LENGTH = 120000;
+//
+// Further reduced to avoid running into rate limits for free users.
+const DEFAULT_CONTEXT_LENGTH = 96000;
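
The diff's comment describes estimating tokens with the gpt-oss tokenizer as a stand-in for a real Qwen tokenizer, with a reduced context cap as a safety margin. A minimal sketch of how such a cap could be applied is below; the ~4-characters-per-token heuristic and the names `estimateTokens` and `trimToContext` are assumptions for illustration, not the extension's actual code.

```typescript
// Hypothetical sketch, not the extension's real implementation.
const DEFAULT_CONTEXT_LENGTH = 96000;

// Rough heuristic: ~4 characters per token. A common fallback when the
// exact tokenizer for a model (e.g. Qwen) is not available.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep the most recent messages whose estimated total fits the context
// window, dropping the oldest ones first.
function trimToContext(
  messages: string[],
  maxTokens: number = DEFAULT_CONTEXT_LENGTH,
): string[] {
  const kept: string[] = [];
  let total = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i]);
    if (total + cost > maxTokens) break;
    total += cost;
    kept.unshift(messages[i]);
  }
  return kept;
}
```

Because the estimate undercounts for some models, capping at 96000 rather than the model's true window leaves headroom against both tokenizer error and provider rate limits.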

/**
* VS Code Chat provider backed by the Privatemode OpenAI API.