
Commit 46e4aff

Merge pull request #73 from Portkey-AI/thinking-update
chore/thinking-update
2 parents 5a8edaa + 4e10b08 commit 46e4aff

File tree: 1 file changed (+21 −0)

openapi.yaml

Lines changed: 21 additions & 0 deletions
@@ -17732,6 +17732,27 @@ components:
           default: false
         stream_options:
           $ref: "#/components/schemas/ChatCompletionStreamOptions"
+        thinking:
+          type: object
+          nullable: true
+          description: |
+            View the thinking/reasoning tokens as part of your response. Thinking models produce a long internal chain of thought before generating a response. Supported only for specific Claude models on Anthropic, Google Vertex AI, and AWS Bedrock. Requires setting `strict_openai_compliance = false` in your API call.
+          properties:
+            type:
+              type: string
+              enum: ["enabled", "disabled"]
+              description: Enables or disables the thinking mode capability.
+              default: "disabled"
+            budget_tokens:
+              type: integer
+              description: |
+                The maximum number of tokens to allocate for the thinking process.
+                A higher token budget allows for more thorough reasoning but may increase overall response time.
+              minimum: 1
+              example: 2030
+          required:
+            - type
+          example: { "type": "enabled", "budget_tokens": 2030 }
         temperature:
           type: number
           minimum: 0
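
For context, a chat-completions request body matching the new schema could look like the sketch below. The model name, max_tokens value, and message content are illustrative placeholders; the `thinking` object mirrors the example given in the diff, and the `strict_openai_compliance = false` requirement mentioned in the description is assumed to be configured separately in your API call rather than inside this body.

    {
      "model": "claude-3-7-sonnet-latest",
      "max_tokens": 4096,
      "messages": [
        { "role": "user", "content": "Walk me through your reasoning step by step." }
      ],
      "thinking": { "type": "enabled", "budget_tokens": 2030 }
    }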

0 commit comments
