---
title: "FunctionGemma"
model_id: "functiongemma"
short_description: "Google's specialized function calling model built on Gemma 3 270M, optimized for tool use"
family: "Google Gemma3"
icon: "💎"
is_new: true
order: 0.5
type: "Text"
memory_requirements: "1GB RAM"
precision: "FP8"
model_size: "0.5GB"
hf_checkpoint: "ggml-org/functiongemma-270m-it-GGUF"
minimum_jetson: "Orin Nano"
supported_inference_engines:
  - engine: "llama.cpp"
    type: "Container"
    run_command_orin: "sudo docker run -it --rm --runtime=nvidia --network host ghcr.io/nvidia-ai-iot/llama_cpp:latest-jetson-orin llama-server --jinja -fa on -hf ggml-org/functiongemma-270m-it-GGUF --alias functiongemma"
    run_command_thor: "sudo docker run -it --rm --runtime=nvidia --network host ghcr.io/nvidia-ai-iot/llama_cpp:latest-jetson-thor llama-server --jinja -fa on -hf ggml-org/functiongemma-270m-it-GGUF --alias functiongemma"
---

FunctionGemma is a lightweight, open model from Google, built as a foundation for creating your own specialized function calling models. It is based on the Gemma 3 270M model and the same research and technology used to create the Gemini models, and has been trained specifically for function calling. The model shares the Gemma 3 architecture but uses a different chat format optimized for tool use.

**Note:** FunctionGemma is not intended for use as a direct dialogue model. It is designed to be highly performant after further fine-tuning, as is typical of models this size. The model is well suited for text-only function calling scenarios.

This model is a strong fit for applications like a home assistant: voice commands are first transcribed with speech-to-text (STT), and the model then calls the appropriate tool. For example, commands like "close the lights," "open the garage," "set the thermostat to 72 degrees," or "turn on the coffee maker" can be processed efficiently. The model can also call tools in parallel, making it efficient for handling multiple commands or complex multi-step actions.
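For a home-assistant use case, the tool definitions passed in the request's `tools` field might look like the following sketch. The function names and parameters (`set_lights`, `set_thermostat`) are invented for illustration, not part of FunctionGemma itself:

```python
# Hypothetical smart-home tools in the OpenAI function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "set_lights",
            "description": "Turn the lights in a room on or off",
            "parameters": {
                "type": "object",
                "properties": {
                    "room": {"type": "string", "description": "Room name, e.g. `living room`"},
                    "on": {"type": "boolean", "description": "True to turn on, false to turn off"},
                },
                "required": ["room", "on"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "set_thermostat",
            "description": "Set the thermostat to a target temperature in degrees Fahrenheit",
            "parameters": {
                "type": "object",
                "properties": {
                    "temperature": {"type": "integer", "description": "Target temperature"},
                },
                "required": ["temperature"],
            },
        },
    },
]

# The list above goes into the "tools" field of the chat-completions payload.
payload = {"model": "functiongemma", "tools": tools}
```

The curl examples below show the same schema inline; defining tools once in code keeps them reusable across requests.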

## Supported Platforms

- ✅ Jetson Orin (Orin Nano, Orin NX, AGX Orin)
- ✅ Jetson Thor

You can use FunctionGemma with your favorite orchestration framework or any library/software that supports OpenAI-compatible API backends.

## Getting Started

### Quick Hello World Example

Here's a simple CLI example to get you started with function calling:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
  "model": "functiongemma",
  "messages": [
    {"role": "system", "content": "You are a chatbot that uses tools/functions. Do not overthink things."},
    {"role": "user", "content": "What is the weather in Istanbul?"}
  ],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and country/state, e.g. `San Francisco, CA`, or `Paris, France`"
          }
        },
        "required": ["location"]
      }
    }
  }]
}'
```
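The server replies in the OpenAI chat-completions format, with the model's tool invocation under `choices[0].message.tool_calls`. A minimal sketch of extracting it on the client side (the response body below is a hand-written example for illustration, not captured server output):

```python
import json

# Illustrative response body in the OpenAI chat-completions format.
raw = '''{
  "choices": [{
    "message": {
      "role": "assistant",
      "content": null,
      "tool_calls": [{
        "type": "function",
        "function": {
          "name": "get_current_weather",
          "arguments": "{\\"location\\": \\"Istanbul, Turkey\\"}"
        }
      }]
    }
  }]
}'''

response = json.loads(raw)
call = response["choices"][0]["message"]["tool_calls"][0]["function"]
name = call["name"]
args = json.loads(call["arguments"])  # "arguments" is a JSON-encoded string
print(name, args["location"])  # get_current_weather Istanbul, Turkey
```

Your application then runs the named function with those arguments and can feed the result back as a `tool` role message for a follow-up turn.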

### Parallel Tool Calling

To enable parallel tool calling, add `"parallel_tool_calls": true` to your request payload:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
  "model": "functiongemma",
  "parallel_tool_calls": true,
  "messages": [
    {"role": "user", "content": "Turn on the living room lights and set the temperature to 70"}
  ],
  "tools": [...]
}'
```
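With parallel tool calling enabled, the `tool_calls` array can contain several entries. A sketch of dispatching them in order, assuming hypothetical handlers (`turn_on_lights`, `set_temperature` are invented names for illustration):

```python
import json

# Hypothetical device handlers; a real system would talk to a smart-home hub.
def turn_on_lights(room: str) -> str:
    return f"{room} lights on"

def set_temperature(degrees: int) -> str:
    return f"temperature set to {degrees}"

HANDLERS = {"turn_on_lights": turn_on_lights, "set_temperature": set_temperature}

# Illustrative tool_calls array, shaped like choices[0].message.tool_calls.
tool_calls = [
    {"function": {"name": "turn_on_lights", "arguments": '{"room": "living room"}'}},
    {"function": {"name": "set_temperature", "arguments": '{"degrees": 70}'}},
]

results = [
    HANDLERS[c["function"]["name"]](**json.loads(c["function"]["arguments"]))
    for c in tool_calls
]
print(results)  # ['living room lights on', 'temperature set to 70']
```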

## Key Features

- 🎯 **Specialized for Function Calling**: Purpose-built for tool use and API calling
- ⚡ **Lightweight**: Only 270M parameters, runs efficiently on edge devices
- 🔄 **Parallel Execution**: Call multiple tools simultaneously

## Inputs and Outputs

**Input:**
- Text string with system and user messages
- Tool/function definitions in OpenAI format
- Optional flag to enable parallel tool calling

**Output:**
- Structured function calls with appropriate parameters
- Compatible with OpenAI chat completions format
- JSON-formatted tool invocations