Description
Hi. I am attempting to use the Gemini 3 Pro Preview model via the `vertexai.generative_models.GenerativeModel` client, and I need to configure its reasoning capabilities through the `thinking_config` parameter (specifically by setting `thinking_level`).
However, when passing this configuration, I get `ValueError: Unknown field for ThinkingConfig: thinking_level`. Regardless of how the config is structured, it fails because `thinking_level` is not a recognized field of `ThinkingConfig` within `GenerationConfig`.
Upon inspecting the source code for `google.cloud.aiplatform_v1.types.content`, it appears the proto definition of `ThinkingConfig` is missing the `thinking_level` field in the current library version.
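For illustration, here is a minimal pure-Python sketch of the strict field check that proto-plus performs when building a message from a dict (this is not the actual proto-plus code, and the field list is my assumption of what `ThinkingConfig` exposes in 1.128.0):

```python
# Illustrative sketch only, NOT the real proto-plus implementation. Any key
# that is not a declared proto field raises the same ValueError shape that
# appears in the stack trace.
THINKING_CONFIG_FIELDS = {"include_thoughts", "thinking_budget"}  # assumed fields in 1.128.0

def build_thinking_config(**kwargs):
    for key in kwargs:
        if key not in THINKING_CONFIG_FIELDS:
            raise ValueError(f"Unknown field for ThinkingConfig: {key}")
    return dict(kwargs)

build_thinking_config(thinking_budget=1024)    # accepted
# build_thinking_config(thinking_level="low")  # raises ValueError
```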
Reference locations in source:
- `google/cloud/aiplatform_v1/types/content.py` - link
- `google/cloud/aiplatform_v1beta1/types/content.py` - link
While I understand the migration path is toward the `google.genai` client, the `vertexai.generative_models.GenerativeModel` client remains the production standard for our existing applications. Migrating to the new library is a significant undertaking that will take time, so support for new model capabilities in the current SDK is crucial during this transition.
Environment details
- OS type and version: Ubuntu 22.04 LTS
- Python version: 3.10.17
- pip version: 25.3
- google-cloud-aiplatform version: 1.128.0
Steps to reproduce
- Initialize vertexai with a project and location.
- Instantiate `GenerativeModel("gemini-3-pro-preview")`.
- Set up `generation_config` with a `thinking_config` dict param that has a `thinking_level` field.
- Call `model.generate_content`.
Code Example
import vertexai
from vertexai.generative_models import GenerativeModel
# 1. Initialize Vertex AI
vertexai.init(project="your-project-id", location="us-central1")
# 2. Load the model
model = GenerativeModel("gemini-3-pro-preview")
# 3. Initialize generation_config with thinking parameters
generation_config = {
"temperature": 0.05,
"top_p": 1,
"top_k": 32,
"thinking_config": {"thinking_level": 'low'}
}
# 4. Attempt to generate content
response = model.generate_content(
"Provide a list of 3 famous physicists and their key contributions",
generation_config=generation_config
)
print(response.text)

Stack trace
ValueError Traceback (most recent call last)
File /.venv/lib/python3.10/site-packages/proto/marshal/rules/message.py:36, in MessageRule.to_proto(self, value)
34 try:
35 # Try the fast path first.
---> 36 return self._descriptor(**value)
37 except (TypeError, ValueError, AttributeError) as ex:
38 # If we have a TypeError, ValueError or AttributeError,
39 # try the slow path in case the error
(...)
44 # - a missing key issue due to nested struct. See: https://github.com/googleapis/proto-plus-python/issues/424.
45 # - a missing key issue due to nested duration. See: https://github.com/googleapis/google-cloud-python/issues/13350.
ValueError: Protocol message ThinkingConfig has no "thinking_level" field.
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[2], line 20
11 generation_config = {
12 #"max_output_tokens": 1000,
13 "temperature": 0.05,
(...)
16 "thinking_config": {"thinking_level": 'low'}
17 }
19 # 3. Attempt to generate content
---> 20 response = model.generate_content(
21 "Provide a list of 3 famous physicists and their key contributions",
22 generation_config=generation_config
23 )
25 print(response.text)
File /.venv/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:710, in _GenerativeModel.generate_content(self, contents, generation_config, safety_settings, tools, tool_config, labels, stream)
701 return self._generate_content_streaming(
702 contents=contents,
703 generation_config=generation_config,
(...)
707 labels=labels,
708 )
709 else:
--> 710 return self._generate_content(
711 contents=contents,
712 generation_config=generation_config,
713 safety_settings=safety_settings,
714 tools=tools,
715 tool_config=tool_config,
716 labels=labels,
717 )
File /.venv/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:825, in _GenerativeModel._generate_content(self, contents, generation_config, safety_settings, tools, tool_config, labels)
796 def _generate_content(
797 self,
798 contents: ContentsType,
(...)
804 labels: Optional[Dict[str, str]] = None,
805 ) -> "GenerationResponse":
806 """Generates content.
807
808 Args:
(...)
823 A single GenerationResponse object
824 """
--> 825 request = self._prepare_request(
826 contents=contents,
827 generation_config=generation_config,
828 safety_settings=safety_settings,
829 tools=tools,
830 tool_config=tool_config,
831 labels=labels,
832 )
833 gapic_response = self._prediction_client.generate_content(request=request)
834 return self._parse_response(gapic_response)
File /.venv/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:3495, in GenerativeModel._prepare_request(self, contents, model, generation_config, safety_settings, tools, tool_config, system_instruction, labels)
3482 def _prepare_request(
3483 self,
3484 contents: ContentsType,
(...)
3492 labels: Optional[Dict[str, str]] = None,
3493 ) -> types_v1.GenerateContentRequest:
3494 """Prepares a GAPIC GenerateContentRequest."""
-> 3495 request_v1beta1 = super()._prepare_request(
3496 contents=contents,
3497 model=model,
3498 generation_config=generation_config,
3499 safety_settings=safety_settings,
3500 tools=tools,
3501 tool_config=tool_config,
3502 system_instruction=system_instruction,
3503 labels=labels,
3504 )
3505 serialized_message_v1beta1 = type(request_v1beta1).serialize(request_v1beta1)
3506 try:
File /.venv/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:588, in _GenerativeModel._prepare_request(self, contents, model, generation_config, safety_settings, tools, tool_config, system_instruction, labels)
586 gapic_generation_config = generation_config._raw_generation_config
587 elif isinstance(generation_config, Dict):
--> 588 gapic_generation_config = gapic_content_types.GenerationConfig(
589 **generation_config
590 )
592 gapic_safety_settings = None
593 if safety_settings:
File /.venv/lib/python3.10/site-packages/proto/message.py:728, in Message.__init__(self, mapping, ignore_unknown_fields, **kwargs)
722 continue
724 raise ValueError(
725 "Unknown field for {}: {}".format(self.__class__.__name__, key)
726 )
--> 728 pb_value = marshal.to_proto(pb_type, value)
730 if pb_value is not None:
731 params[key] = pb_value
File /.venv/lib/python3.10/site-packages/proto/marshal/marshal.py:235, in BaseMarshal.to_proto(self, proto_type, value, strict)
232 recursive_type = type(proto_type().value)
233 return {k: self.to_proto(recursive_type, v) for k, v in value.items()}
--> 235 pb_value = self.get_rule(proto_type=proto_type).to_proto(value)
237 # Sanity check: If we are in strict mode, did we get the value we want?
238 if strict and not isinstance(pb_value, proto_type):
File /.venv/lib/python3.10/site-packages/proto/marshal/rules/message.py:46, in MessageRule.to_proto(self, value)
36 return self._descriptor(**value)
37 except (TypeError, ValueError, AttributeError) as ex:
38 # If we have a TypeError, ValueError or AttributeError,
39 # try the slow path in case the error
(...)
44 # - a missing key issue due to nested struct. See: https://github.com/googleapis/proto-plus-python/issues/424.
45 # - a missing key issue due to nested duration. See: https://github.com/googleapis/google-cloud-python/issues/13350.
---> 46 return self._wrapper(value)._pb
47 return value
File /.venv/lib/python3.10/site-packages/proto/message.py:724, in Message.__init__(self, mapping, ignore_unknown_fields, **kwargs)
721 if ignore_unknown_fields:
722 continue
--> 724 raise ValueError(
725 "Unknown field for {}: {}".format(self.__class__.__name__, key)
726 )
728 pb_value = marshal.to_proto(pb_type, value)
730 if pb_value is not None:
ValueError: Unknown field for ThinkingConfig: thinking_level
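In the meantime, a possible workaround (a sketch under assumptions, not an official mapping) is to approximate `thinking_level` with the `thinking_budget` field that the current `ThinkingConfig` does accept, and to strip any keys the installed proto does not recognize before building the config. The helper names and the level-to-budget values below are hypothetical:

```python
# Hypothetical fallback helpers; the level-to-token budgets are illustrative
# assumptions, not official values for gemini-3-pro-preview.
LEVEL_TO_BUDGET = {"low": 1024, "medium": 8192, "high": 24576}

SUPPORTED_THINKING_FIELDS = {"include_thoughts", "thinking_budget"}  # assumed for 1.128.0

def adapt_generation_config(generation_config):
    """Return a copy of the config dict that older SDK versions can accept."""
    config = dict(generation_config)
    thinking = config.get("thinking_config")
    if isinstance(thinking, dict):
        thinking = dict(thinking)
        # Translate thinking_level into a thinking_budget the proto accepts.
        level = thinking.pop("thinking_level", None)
        if level is not None and "thinking_budget" not in thinking:
            thinking["thinking_budget"] = LEVEL_TO_BUDGET[level]
        # Drop any remaining keys the installed ThinkingConfig does not define.
        thinking = {k: v for k, v in thinking.items() if k in SUPPORTED_THINKING_FIELDS}
        if thinking:
            config["thinking_config"] = thinking
        else:
            config.pop("thinking_config")
    return config

generation_config = adapt_generation_config({
    "temperature": 0.05,
    "thinking_config": {"thinking_level": "low"},
})
print(generation_config)  # {'temperature': 0.05, 'thinking_config': {'thinking_budget': 1024}}
```

The sanitized dict can then be passed to `model.generate_content` as before; whether `thinking_budget` actually controls reasoning depth on this model is something I have not verified.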