From ec5ba7f3b7f66c59b4fa632a27f913eccfb0778b Mon Sep 17 00:00:00 2001
From: Helge Sverre
Date: Wed, 30 Oct 2024 02:19:15 +0100
Subject: [PATCH] readme

---
 README.md | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index f8fcc3b..8f9e612 100644
--- a/README.md
+++ b/README.md
@@ -447,7 +447,6 @@ TextractUsingS3Upload::cleanupFileUsing(function (string $filePath) {
 }
 ```

-
 **Note**

 Textract is not available in all regions:
@@ -478,15 +477,21 @@ Available Models:

 | Model Identifier               | Model                    | Note |
 |--------------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `Engine::GPT_4_OMNI_MINI`      | 'gpt-4o-mini'            | Small, cost-efficient multimodal model with JSON mode support |
+| `Engine::GPT_4_OMNI`           | 'gpt-4o'                 | Multimodal GPT-4-class model with JSON mode and vision support |
+| `Engine::GPT_4_TURBO`          | 'gpt-4-turbo'            | GPT-4 Turbo with vision support; faster and cheaper than the original GPT-4 |
 | `Engine::GPT_4_1106_PREVIEW`   | 'gpt-4-1106-preview'     | GPT-4 Turbo, featuring improved instruction following, JSON mode, reproducible outputs, parallel function calling. Maximum 4,096 output tokens. Preview model, not yet for production traffic. |
+| `Engine::GPT_4_VISION`         | 'gpt-4-vision-preview'   | Multimodal model capable of processing both text and images |
 | `Engine::GPT_3_TURBO_1106`     | 'gpt-3.5-turbo-1106'     | Updated GPT-3.5 Turbo, with improvements similar to GPT-4 Turbo. Returns up to 4,096 output tokens. |
-| `Engine::GPT_4`                | 'gpt-4'                  | Large multimodal model, capable of solving complex problems with greater accuracy. Suited for both chat and traditional completions tasks. |
-| `Engine::GPT4_32K`             | 'gpt-4-32k'              | Extended version of GPT-4 with a larger context window of 32,768 tokens. |
-| `Engine::GPT_3_TURBO_INSTRUCT` | 'gpt-3.5-turbo-instruct' | Similar to `text-davinci-003`, optimized for legacy Completions endpoint, not for Chat Completions. |
-| `Engine::GPT_3_TURBO_16K`      | 'gpt-3.5-turbo-16k'      | Extended version of GPT-3.5 Turbo, supporting a larger context window of 16,385 tokens. |
-| `Engine::GPT_3_TURBO`          | 'gpt-3.5-turbo'          | Optimized for chat using the Chat Completions API, suitable for traditional completion tasks. |
-| `Engine::TEXT_DAVINCI_003`     | 'text-davinci-003'       | Legacy model, better quality and consistency for language tasks. To be deprecated on Jan 4, 2024. |
-| `Engine::TEXT_DAVINCI_002`     | 'text-davinci-002'       | Similar to `text-davinci-003` but trained with supervised fine-tuning. To be deprecated on Jan 4, 2024. |
+| `Engine::GPT_O1_MINI`          | 'o1-mini'                | Smaller, faster reasoning model suited to coding, math, and science tasks |
+| `Engine::GPT_O1_PREVIEW`       | 'o1-preview'             | Preview reasoning model that works through problems step by step before responding |
+| `Engine::GPT_4`                | 'gpt-4'                  | Large multimodal model, capable of solving complex problems with greater accuracy |
+| `Engine::GPT4_32K`             | 'gpt-4-32k'              | Extended version of GPT-4 with a larger context window of 32,768 tokens |
+| `Engine::GPT_3_TURBO_INSTRUCT` | 'gpt-3.5-turbo-instruct' | Similar to text-davinci-003, optimized for legacy Completions endpoint |
+| `Engine::GPT_3_TURBO_16K`      | 'gpt-3.5-turbo-16k'      | Extended version of GPT-3.5 Turbo, supporting a larger context window of 16,385 tokens |
+| `Engine::GPT_3_TURBO`          | 'gpt-3.5-turbo'          | Optimized for chat using the Chat Completions API |
+| `Engine::TEXT_DAVINCI_003`     | 'text-davinci-003'       | **DEPRECATED** - Legacy model, no longer operational |
+| `Engine::TEXT_DAVINCI_002`     | 'text-davinci-002'       | **DEPRECATED** - Legacy model, no longer operational |

 **`$maxTokens` (int)**
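
A minimal usage sketch for reviewers of where the `Engine::*` identifiers added in this patch plug in. The `Extractor::fields()` call, its `model:` and `maxTokens:` named arguments, and the imported namespaces are assumptions inferred from the surrounding README, not confirmed by this diff; treat it as an illustration rather than the package's documented API.

```php
<?php

// Hedged sketch only: the facade class, the fields() method, and the
// model / maxTokens named arguments are assumed, not confirmed by this patch.
use HelgeSverre\Extractor\Engine;            // assumed namespace for the Engine constants
use HelgeSverre\Extractor\Facades\Extractor; // assumed facade name

$text = file_get_contents('receipt.txt');

$data = Extractor::fields(
    $text,
    fields: ['vendor', 'total', 'date'],  // fields to pull out of the raw text
    model: Engine::GPT_4_OMNI_MINI,       // any identifier from the table above
    maxTokens: 500,                       // the $maxTokens option documented below the table
);
```

Any other identifier from the table, such as `Engine::GPT_O1_MINI`, can be swapped in for `Engine::GPT_4_OMNI_MINI`.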