Description
The request built by chatgpt-shell-google--make-gemini-payload includes a system instruction, resulting in a JSON payload something like this:
{
  "system_instruction": {
    "parts": {
      "text": "You use markdown liberally to structure responses. Always show code snippets in markdown blocks with language labels."
    }
  },
  "contents": [
    {
      "role": "user",
      "parts": [
        {
          "text": "prompt from user input here"
        }
      ]
    }
  ],
  "generation_config": {
    "temperature": 1,
    "topP": 1
  }
}
When you POST this to gemini-2.0-flash, you get a successful response.
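For reference, here is a minimal sketch of such a POST using Emacs's built-in url.el. This is not chatgpt-shell's actual code; the function name, the GEMINI_API_KEY environment variable, and the use of the non-streaming generateContent endpoint are assumptions for illustration.

  ;; Minimal sketch, not chatgpt-shell's code: POST a payload alist to a
  ;; model's generateContent endpoint. `json-encode' turns the alist into
  ;; JSON like the payload shown above.
  (require 'url)
  (require 'json)

  (defun my/gemini-post (model payload)
    "POST PAYLOAD (an alist) to MODEL's generateContent endpoint.
  Return the response buffer."
    (let ((url-request-method "POST")
          (url-request-extra-headers '(("Content-Type" . "application/json")))
          (url-request-data
           (encode-coding-string (json-encode payload) 'utf-8)))
      (url-retrieve-synchronously
       (format "https://generativelanguage.googleapis.com/v1beta/models/%s:generateContent?key=%s"
               model (getenv "GEMINI_API_KEY")))))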
Gemma does not support this system_instruction field. When you POST the same payload to https://generativelanguage.googleapis.com/v1beta/models/gemma-3-27b-it:streamGenerateContent, the response is:
{
  "error": {
    "code": 400,
    "message": "Developer instruction is not enabled for models/gemma-3-27b-it",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.DebugInfo",
        "detail": "[ORIGINAL ERROR] generic::invalid_argument: Developer instruction is not enabled for models/gemma-3-27b-it [google.rpc.error_details_ext] { message: \"Developer instruction is not enabled for models/gemma-3-27b-it\" }"
      }
    ]
  }
}
If the system instruction text is instead sent as an additional part of the user content, like this:
{
  "contents": [
    {
      "role": "user",
      "parts": [
        {
          "text": "You use markdown liberally to structure responses. Always show code snippets in markdown blocks with language labels."
        },
        {
          "text": "prompt from user input here"
        }
      ]
    }
  ],
  "generation_config": {
    "temperature": 1,
    "topP": 1
  }
}
... Gemma responds successfully.
This isn't a bug in the elisp; if there is a bug, it is a usability bug: some of the Google models support system_instruction and some do not. One could imagine special-casing the Gemma models so as not to send the system_instruction field, as in the sketch below.
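A hypothetical sketch of that special case, assuming Gemma variants can be recognized by a "gemma" prefix in the model name; my/make-gemini-payload is an illustrative name, not chatgpt-shell's actual function:

  ;; Hypothetical sketch: fold the system instruction into the user parts
  ;; for Gemma models, and send system_instruction for everything else.
  ;; Vectors encode as JSON arrays under `json-encode'.
  (defun my/make-gemini-payload (model system-text user-text)
    "Build a Gemini-style payload alist for MODEL."
    (let* ((gemma-p (string-prefix-p "gemma" model))
           (user-parts (if gemma-p
                           `[(("text" . ,system-text))
                             (("text" . ,user-text))]
                         `[(("text" . ,user-text))]))
           (payload `(("contents" . [(("role" . "user")
                                      ("parts" . ,user-parts))])
                      ("generation_config" . (("temperature" . 1)
                                              ("topP" . 1))))))
      (if gemma-p
          payload
        (cons `("system_instruction"
                . (("parts" . (("text" . ,system-text)))))
              payload))))

With "gemma-3-27b-it" this produces the second payload above; with "gemini-2.0-flash" it produces the original system_instruction form.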