From 1af1a09ac377e6fb8abce4ee847d18c114f91207 Mon Sep 17 00:00:00 2001
From: rcanfield
Date: Tue, 27 Feb 2024 10:01:25 -0500
Subject: [PATCH] Update release notes

---
 README.md | 113 +----------------------------------------------------
 1 file changed, 1 insertion(+), 112 deletions(-)

diff --git a/README.md b/README.md
index 1d361a0..90f2c59 100644
--- a/README.md
+++ b/README.md
@@ -138,118 +138,7 @@ The models above will require enough RAM to run them correctly, you should have
 ## Release Notes
 
-### 0.2.7
-
-Correct spelling mistake.
-
-### 0.2.6
-
-Added the ability to disable automatic code completion.
-
-### 0.2.5
-
-Re-enable Refactor on explicit invoke only.
-
-### 0.2.4
-
-Fix an error message when Ollama was selected as an AI Provider but was not available.
-
-### 0.2.3
-
-**Document generation** - Wingman can now generate documents! Use the "Code Actions" menu to access it.
-
-**Refactor** - Wingman can attempt to refactor a method or class inline; simply highlight it and use the "Code Actions" menu.
-
-[More information here](https://code.visualstudio.com/docs/typescript/typescript-refactoring)
-
-**Full Changelog**: https://github.com/RussellCanfield/wingman-ai/compare/v0.2.1...v0.2.2
-
-### 0.2.1
-
-**Wingman Config** - small release to add the rest of the config settings to the Config Panel.
-
-### 0.2.0
-
-**Wingman Config** - a new configuration view is available in the editor, allowing you to change settings for better speed/performance without going into VSCode's settings. This is currently limited to Ollama for this release.
-
-**Streaming Code Completion** - as part of the new configuration screen, you can now set code completion to "streaming", which allows completions to appear without waiting for the full response to load.
-
-**Chat auto focus** - the chat input for Wingman will now auto focus on load.
-
-### 0.1.9
-
-Chat now has a text area for easier multi-line support! Fixed an issue affecting chat context.
-
-### 0.1.7
-
-**OpenAI support is here!** If you have OpenAI credits or pay for the subscription, you can now use it in Wingman. Simply select your "AI Provider" in the VSCode settings for this extension (Wingman), then add your API key in the OpenAI settings - identical to how HuggingFace works.
-
-With Copilot still using GPT-3.5, you now have a faster and more powerful model at your fingertips!
-
-We currently only support GPT-4, but recommend GPT-4 Turbo **(e.g. "gpt-4-0125-preview")**.
-
-### 0.1.6
-
-Remove extraneous stop token from HuggingFace code completion.
-
-### 0.1.5
-
-The HuggingFace chat provider now supports Mixtral! This is a high-performing model that rivals GPT. If you are using our HuggingFace provider, try it out by setting **"chatModel"** to **"mistralai/Mixtral-8x7B-Instruct-v0.1"**. This is now the default for HuggingFace.
-
-### 0.1.4
-
-Reworked generic LLM settings into a common "Interaction Settings" section.
-Included the ability to customize the context window, improving performance at the cost of quality.
-For code completion this can sacrifice contextual awareness of the LLM for speed. Experiment with what feels good for your machine and code base. Below are the defaults; the previous defaults for the context windows were **4096**.
-
-<br/><br/>
-
-<br/><br/>
-
-We've decided to take a conservative approach with code completion for now; here are the default values:
-
-```json
-{
-  "codeContextWindow": 256,
-  "codeMaxTokens": -1,
-  "chatContextWindow": 4096,
-  "chatMaxTokens": 4096
-}
-```
-
-### 0.1.3
-
-Added logging output for the extension for troubleshooting purposes. Improved error handling and user feedback for invalid configurations.
-
-<br/><br/>
-
-<br/><br/>
-
-### 0.1.2
-
-- Add two new settings:
-  - **codeMaxTokens** - the maximum number of tokens to generate in a single request (default: 1024)
-  - **chatMaxTokens** - the maximum number of tokens to generate in a single request (default: 1024)
-
-These settings help you tune how long the AI takes to generate a response: the lower the number, the faster the response; the higher, the longer it takes.
-However, if you set these too low you'll get very short responses, and the extension may not provide the functionality you are looking for.
-
-### 0.1.1
-
-- Fix a bug with the current line terminating too early in code completion.
-- Default to a stronger model for code completion (now deepseek-coder:6.7b-base-q8_0).
-
-### 0.1.0
-
-Initial release of Wingman-AI! This includes:
-
-- Ollama Phind-Codellama chat model support.
-- Hugging Face support.
-- Expanded context for chat and code completion.
-
-### 0.0.5
-
-Initial pre-release of Wingman-AI!
+To see the latest release notes, [check out our releases page](https://github.com/RussellCanfield/wingman-ai/releases).
 
 ---
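For readers of the notes deleted above, here is a rough sketch of how the interaction settings they describe might be combined in a user's VSCode `settings.json`. The `chatModel` name comes from the 0.1.5 note and the values mirror the defaults quoted in the 0.1.4 note; the `wingman.` key prefix and exact key spellings are assumptions for illustration, not confirmed by this patch:

```json
{
  // Hypothetical keys: the "wingman." prefix is assumed, not confirmed
  // by this patch; check the extension's contributed settings for the
  // real names. Values mirror the defaults quoted in the 0.1.4 note.
  "wingman.chatModel": "mistralai/Mixtral-8x7B-Instruct-v0.1",
  "wingman.codeContextWindow": 256,
  "wingman.codeMaxTokens": -1,
  "wingman.chatContextWindow": 4096,
  "wingman.chatMaxTokens": 4096
}
```

Per the 0.1.4 note, a smaller `codeContextWindow` trades the model's contextual awareness for faster completions, so values like these are a starting point to tune per machine and code base.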