Description
I was curious about trying Seqera AI, but when I open the chat and enter `@seqera Hi` I receive the following error:
```
Request Failed: 400 {"error":{"message":"Model is not supported for this request.","code":"model_not_supported","param":"model","type":"invalid_request_error"}}
```
This is in the context of a GitHub Copilot Enterprise subscription, where Preview and Beta features are disabled. The configuration page for the account also states:
"GitHub Copilot Extensions are not available for use in your account. You can contact an administrator in ORG to request access."
I am assuming that the error is caused by this limitation.
From a cursory code inspection, @Seqera appears to customize the chat prompt, while the request and response still go through the normal LLM provider. Can you confirm that it works this way?
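To make the question concrete, here is a minimal sketch of how I understand such a chat participant to work, based on the public VS Code chat API. The participant id and the injected prompt text are hypothetical, not taken from the actual extension's source:

```typescript
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
  // Hypothetical sketch: a chat participant that prepends domain-specific
  // instructions and forwards the request to whichever language model the
  // Copilot chat selected (request.model). Names are illustrative only.
  const participant = vscode.chat.createChatParticipant(
    "example.seqera", // hypothetical participant id
    async (request, _chatContext, stream, token) => {
      const messages = [
        // Custom instructions injected ahead of the user's prompt.
        vscode.LanguageModelChatMessage.User(
          "You are an assistant specialized in Nextflow and the Seqera Platform."
        ),
        vscode.LanguageModelChatMessage.User(request.prompt),
      ];
      // The actual completion still comes from the normal LLM provider.
      const response = await request.model.sendRequest(messages, {}, token);
      for await (const fragment of response.text) {
        stream.markdown(fragment);
      }
    }
  );
  context.subscriptions.push(participant);
}
```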
If that is how it works, this limitation feels unintuitive, since I could presumably use the same prompts directly in VS Code anyway.
Is there something that could be fixed in this extension to make it work out of the box?
If not, perhaps the extension could offer an option to generate the corresponding VS Code custom instructions (see https://code.visualstudio.com/docs/copilot/copilot-customization)?
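For illustration, the generated instructions could use the `.github/copilot-instructions.md` mechanism described in the linked docs; the content below is a hypothetical example of what such a file might contain, not the extension's actual prompt:

```markdown
<!-- .github/copilot-instructions.md (hypothetical example content) -->
You are assisting with Nextflow pipelines and the Seqera Platform.

- Prefer Nextflow DSL2 syntax and nf-core conventions.
- When suggesting a process, include a container directive.
```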