Inappropriate prompt for Google / Gemma #333


Open
DinoChiesa opened this issue Mar 24, 2025 · 0 comments

DinoChiesa commented Mar 24, 2025

The prompt sent via `chatgpt-shell-google--make-gemini-payload` includes a system instruction, resulting in a JSON payload like this:

```json
{
  "system_instruction": {
    "parts": {
      "text": "You use markdown liberally to structure responses. Always show code snippets in markdown blocks with language labels."
    }
  },
  "contents": [
    {
      "role": "user",
      "parts": [
        {
          "text": "prompt from user input here"
        }
      ]
    }
  ],
  "generation_config": {
    "temperature": 1,
    "topP": 1
  }
}
```

When you POST this to gemini-2.0-flash, you get a successful response.

Gemma does not support this system_instruction field. When you POST this to https://generativelanguage.googleapis.com/v1beta/models/gemma-3-27b-it:streamGenerateContent, the response is:

```json
{
  "error": {
    "code": 400,
    "message": "Developer instruction is not enabled for models/gemma-3-27b-it",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.DebugInfo",
        "detail": "[ORIGINAL ERROR] generic::invalid_argument: Developer instruction is not enabled for models/gemma-3-27b-it [google.rpc.error_details_ext] { message: \"Developer instruction is not enabled for models/gemma-3-27b-it\" }"
      }
    ]
  }
}
```

If the system instruction text is instead folded into the user message as an additional part, like this:

```json
{
  "contents": [
    {
      "role": "user",
      "parts": [
        {
          "text": "You use markdown liberally to structure responses. Always show code snippets in markdown blocks with language labels."
        },
        {
          "text": "prompt from user input here"
        }
      ]
    }
  ],
  "generation_config": {
    "temperature": 1,
    "topP": 1
  }
}
```

... Gemma responds successfully.

This isn't a bug in the elisp; if there is a bug, it is a usability bug: some of the Google models support system_instruction and some do not. One could imagine special-casing the Gemma models so that the system_instruction field is not sent to them.
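That special-casing could look roughly like the following sketch (written in Python rather than elisp for illustration; the helper name and the model-name check are assumptions, and the payload is treated as a plain dict mirroring the JSON above):

```python
def adapt_payload_for_model(model, payload):
    """Hypothetical helper: for models that reject developer
    instructions (e.g. Gemma), merge a top-level system_instruction
    into the parts of the first user turn; otherwise pass through."""
    if "gemma" not in model or "system_instruction" not in payload:
        return payload  # e.g. gemini-2.0-flash accepts it as-is
    adapted = dict(payload)
    system = adapted.pop("system_instruction")
    parts = system["parts"]
    if isinstance(parts, dict):  # the payload above uses a bare object
        parts = [parts]
    # Shallow-copy the turns so the original payload is untouched.
    contents = [dict(turn) for turn in adapted["contents"]]
    first_user = next(t for t in contents if t.get("role") == "user")
    first_user["parts"] = list(parts) + list(first_user["parts"])
    adapted["contents"] = contents
    return adapted
```

Applied to the first payload above with a Gemma model name, this produces the second (accepted) shape; for a Gemini model name, the payload is returned unchanged.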
