`semantic-kernel/concepts/ai-services/chat-completion/function-calling/index.md` (32 additions, 0 deletions)
@@ -214,6 +214,7 @@ When using auto function calling in KernelFunctions, certain parameter names are

### Reserved Names

The following parameter names are reserved:

- `kernel`
- `service`
- `execution_settings`

@@ -242,6 +243,37 @@ class SimplePlugin:

```python
        return f"Received user input: {location}, the weather is nice!"
```

## Custom Reserved Parameter Names for Auto Function Calling

You can also customize this behavior. To do that, annotate the parameter you want to exclude from the function calling definition, like this:

```python
class SimplePlugin:
    @kernel_function(name="GetWeather", description="Get the weather for a location.")
    def get_weather(self, location: str) -> str:
        # The diff is truncated here; the body is assumed from the earlier example.
        return f"Received user input: {location}, the weather is nice!"
```
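The effect of reserved names — parameters the kernel supplies itself and therefore hides from the model's function-calling schema — can be sketched in plain Python. This is an illustration of the pattern only, not Semantic Kernel's actual implementation; `RESERVED` and `calling_schema` are hypothetical names:

```python
import inspect

# Hypothetical list mirroring the reserved names above; the framework
# injects these itself, so they are hidden from the model.
RESERVED = {"kernel", "service", "execution_settings"}

def calling_schema(func):
    """Build a minimal function-calling schema, skipping reserved parameters."""
    params = inspect.signature(func).parameters
    return {
        "name": func.__name__,
        "parameters": [p for p in params if p not in RESERVED and p != "self"],
    }

def get_weather(location: str, kernel=None, execution_settings=None) -> str:
    return f"Received user input: {location}, the weather is nice!"

schema = calling_schema(get_weather)  # only "location" is advertised to the model
```

The model only ever sees `location`; `kernel` and `execution_settings` are filled in by the framework at invocation time.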
`semantic-kernel/concepts/kernel.md` (123 additions, 6 deletions)
@@ -14,11 +14,13 @@ ms.service: semantic-kernel

The kernel is the central component of Semantic Kernel. At its simplest, the kernel is a Dependency Injection container that manages all of the services and plugins necessary to run your AI application. If you provide all of your services and plugins to the kernel, they will then be seamlessly used by the AI as needed.

## The kernel is at the center

Because the kernel has all of the services and plugins necessary to run both native code and AI services, it is used by nearly every component within the Semantic Kernel SDK to power your agents. This means that if you run any prompt or code in Semantic Kernel, the kernel will always be available to retrieve the necessary services and plugins.

This is extremely powerful, because it means you as a developer have a single place where you can configure and, most importantly, monitor your AI agents. Take, for example, when you invoke a prompt from the kernel. The kernel will:

1. Select the best AI service to run the prompt.
2. Build the prompt using the provided prompt template.
3. Send the prompt to the AI service.

@@ -28,12 +30,13 @@ This is extremely powerful, because it means you as a developer have a single pl

Throughout this entire process, you can create events and middleware that are triggered at each of these steps. This means you can perform actions like logging, providing status updates to users, and, most importantly, applying responsible AI practices, all from a single place.

## Build a kernel with services and plugins

Before building a kernel, you should first understand the two types of components that exist:

| Component | Description |
|---|---|
| **Services** | These consist of both AI services (e.g., chat completion) and other services (e.g., logging and HTTP clients) that are necessary to run your application. This was modelled after the Service Provider pattern in .NET so that we could support dependency injection across all languages. |
| **Plugins** | These are the components that are used by your AI services and prompt templates to perform work. AI services, for example, can use plugins to retrieve data from a database or call an external API to perform actions. |
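The two component types above can be pictured as a tiny service container: services and plugins are registered in one place and resolved later by whoever needs them. The sketch below is purely illustrative (a hypothetical `TinyKernel` stand-in, not the Semantic Kernel API):

```python
class TinyKernel:
    """Illustrative stand-in for a DI container holding services and plugins."""

    def __init__(self):
        self._services = {}  # e.g. chat completion, logging, HTTP clients
        self._plugins = {}   # components the AI can call to perform work

    def add_service(self, name, service):
        self._services[name] = service

    def add_plugin(self, name, plugin):
        self._plugins[name] = plugin

    def get_service(self, name):
        return self._services[name]

    def get_plugin(self, name):
        return self._plugins[name]


kernel = TinyKernel()
kernel.add_service("chat", lambda prompt: f"echo: {prompt}")
kernel.add_plugin("weather", lambda location: f"The weather in {location} is nice!")

# Any component can now resolve what it needs from the one central object.
reply = kernel.get_service("chat")("hello")
```

Because everything flows through one container, this is also the natural place to hang logging, middleware, and monitoring.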
::: zone pivot="programming-language-csharp"
To start creating a kernel, import the necessary packages at the top of your file:
```python
        template="Please repeat this: {{$message}} and this: {{$extra}}",
        input_variables=[
            InputVariable(
                name="message",
                description="This is the message.",
                is_required=True,
                json_schema='{ "type": "string", "description": "This is the message."}',
            ),
            InputVariable(
                name="extra",
                description="This is extra.",
                default="default",
                is_required=False,
                json_schema='{ "type": "string", "description": "This is extra."}',
            ),
        ],
    ),
)
server = kernel.as_mcp_server(server_name="sk")
```

The `server` object created above comes from the `mcp` package; you can extend it even further, for instance by adding resources or other features. You can then bring it online, for instance to be used over Stdio:

```python
from typing import Any

import anyio
from mcp.server.stdio import stdio_server


async def handle_stdin(stdin: Any | None = None, stdout: Any | None = None) -> None:
    async with stdio_server() as (read_stream, write_stream):
        # The diff is truncated here; a typical continuation hands the
        # streams to the server created above:
        await server.run(read_stream, write_stream, server.create_initialization_options())


anyio.run(handle_stdin)
```

In C#, you can use Dependency Injection to create a kernel. This is done by creating a `ServiceCollection` and adding services and plugins to it. Below is an example of how you can create a kernel using Dependency Injection.
---
description: Learn how to add plugins from an MCP Server to your agents in Semantic Kernel.
zone_pivot_groups: programming-languages
author: eavanvalkenburg
ms.topic: conceptual
ms.author: edvan
ms.date: 04/15/2025
ms.service: semantic-kernel
---

# Add plugins from an MCP Server

MCP is the Model Context Protocol, an open protocol designed to make it easy to add capabilities to AI applications; see [the documentation](https://modelcontextprotocol.io/introduction) for more info.
Semantic Kernel allows you to add plugins from an MCP Server to your agents. This is useful when you want to use plugins that are made available as an MCP Server.

Semantic Kernel supports both local MCP Servers, through Stdio, and servers that connect through SSE over HTTPS.

## Add plugins from a local MCP Server

To add a locally running MCP server, you can use the familiar MCP commands, like `npx`, `docker`, or `uvx`; if you want to run one of those, make sure it is installed.

For instance, when you look at your Claude Desktop config or the VS Code settings.json, you will see something like this:
```json
{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "..."
      }
    }
  }
}
```
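Such an entry maps directly onto a subprocess invocation: `command` plus `args` form the command line, and `env` supplies extra environment variables for the child process. A small framework-agnostic sketch of that mapping (the helper name `build_invocation` is hypothetical, not part of any SDK):

```python
import os


def build_invocation(server_config: dict) -> tuple[list[str], dict[str, str]]:
    """Turn one mcpServers entry into (argv, env) for a stdio subprocess."""
    argv = [server_config["command"], *server_config.get("args", [])]
    # Child inherits the parent environment, with config entries layered on top.
    env = {**os.environ, **server_config.get("env", {})}
    return argv, env


# The "github" entry from the JSON config above.
github = {
    "command": "docker",
    "args": ["run", "-i", "--rm", "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
             "ghcr.io/github/github-mcp-server"],
    "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "..."},
}
argv, env = build_invocation(github)
```

An MCP client would launch `argv` with `env` and then speak the protocol over the child's stdin/stdout, which is what the Stdio transport does for you.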

In order to make the same plugin available to your kernel or agent, you would do this:

::: zone pivot="programming-language-python"

```python
import os

from semantic_kernel import Kernel
from semantic_kernel.connectors.mcp import MCPStdioPlugin
```