A reverse proxy for local development environments that intercepts requests to AI services like OpenAI, Anthropic, Azure OpenAI, and xAI, allowing you to trace request and response content.
Simply put, it's like Cloudflare AI Gateway Logs specialized for local development environments. Note that it's not intended for auditing, monitoring, or team usage.
- Monitor locally without using external services
- Human-readable display of requests and responses
- Support for multiple AI services
  - OpenAI
  - Anthropic
  - Microsoft Azure OpenAI
  - xAI
  - OpenAI Compatible APIs
- In-memory storage: no data persistence; retains up to 1,000 records
Run AIApiTracer using the Docker image (ghcr.io/cysharp/aiapitracer).
```bash
docker run -p 8080:8080 ghcr.io/cysharp/aiapitracer:latest
```
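The container listens on port 8080 (as the mapping above suggests), so you can expose the proxy on any free host port. A minimal sketch using host port 8888:

```bash
# Map host port 8888 to the container's port 8080
docker run -p 8888:8080 ghcr.io/cysharp/aiapitracer:latest
```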
You can also download the pre-built application from the Releases page and run it directly.
```bash
./AIApiTracer --urls http://localhost:8080/
```
Once started, you can view request traces by opening http://localhost:8080 in your web browser.
Specify `http://localhost:8080/endpoint/openai/v1` as the endpoint instead of `https://api.openai.com/v1`.
```bash
curl http://localhost:8080/endpoint/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "Hello."
      }
    ],
    "model": "gpt-4.1-nano"
  }'
```
```csharp
var openAIClient = new OpenAIClient(new ApiKeyCredential(apiKey), new OpenAIClientOptions
{
    Endpoint = new Uri("http://localhost:8080/endpoint/openai/v1"),
});
```
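If you cannot change the code, some OpenAI SDKs (the official Python client, for example) also read the `OPENAI_BASE_URL` environment variable, so pointing an existing tool at the proxy may be as simple as:

```bash
# Point SDKs that honor OPENAI_BASE_URL at the local proxy
export OPENAI_BASE_URL=http://localhost:8080/endpoint/openai/v1
```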
Specify `http://localhost:8080/endpoint/anthropic` as the endpoint instead of `https://api.anthropic.com/`.
```bash
export ANTHROPIC_BASE_URL=http://localhost:8080/endpoint/anthropic
```
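As a minimal sketch, assuming the standard Anthropic Messages API shape (the model name here is illustrative), a request through the proxy looks like this:

```bash
curl http://localhost:8080/endpoint/anthropic/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: $API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-3-5-haiku-latest",
    "max_tokens": 1024,
    "messages": [
      { "role": "user", "content": "Hello." }
    ]
  }'
```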
Specify `http://localhost:8080/endpoint/aoai/<resource-name>` as the endpoint instead of `https://<resource-name>.openai.azure.com/`.
```csharp
var azureClient = new AzureOpenAIClient(
    new Uri("http://localhost:8080/endpoint/aoai/my-resource"),
    new AzureKeyCredential(apiKey)
);
```
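A hedged curl sketch of the same substitution, following the usual Azure OpenAI REST layout; `<deployment>` and the `api-version` value are placeholders you'd replace with your own deployment name and a supported API version:

```bash
curl "http://localhost:8080/endpoint/aoai/my-resource/openai/deployments/<deployment>/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $API_KEY" \
  -d '{
    "messages": [
      { "role": "user", "content": "Hello." }
    ]
  }'
```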
Specify `http://localhost:8080/endpoint/x/v1` as the endpoint instead of `https://api.x.ai/v1`.
```bash
curl http://localhost:8080/endpoint/x/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "Hello."
      }
    ],
    "model": "grok-3-latest"
  }'
```
```csharp
var openAIClient = new OpenAIClient(new ApiKeyCredential(apiKey), new OpenAIClientOptions
{
    Endpoint = new Uri("http://localhost:8080/endpoint/x/v1"),
});
```
Specify `http://localhost:8080/endpoint/openai-compat/<openai-compatible-endpoint>` as the endpoint.
Forwarding to arbitrary OpenAI compatible endpoints is disabled by default. To enable it, set the environment variable `AIApiTracer__EnableOpenAICompatForwarding` to `true`. When enabled, you should absolutely avoid making the proxy accessible from external networks, as it would become an open proxy.
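For example (a sketch; adapt to however you launch the application):

```bash
# Local binary
export AIApiTracer__EnableOpenAICompatForwarding=true
./AIApiTracer --urls http://localhost:8080/

# Or with Docker
docker run -p 8080:8080 -e AIApiTracer__EnableOpenAICompatForwarding=true ghcr.io/cysharp/aiapitracer:latest
```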
Specify the address of the OpenAI compatible API endpoint after `openai-compat/`, including the scheme. For example, use `http://localhost:8080/endpoint/openai-compat/http://localhost:5273/v1`.
```bash
curl http://localhost:8080/endpoint/openai-compat/http://localhost:5273/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "Hello."
      }
    ],
    "model": "Phi-4-mini-instruct-cuda-gpu"
  }'
```
```csharp
var openAIClient = new OpenAIClient(new ApiKeyCredential(apiKey), new OpenAIClientOptions
{
    // `http://localhost:5273/v1` is a local OpenAI compatible endpoint (Foundry Local)
    Endpoint = new Uri("http://localhost:8080/endpoint/openai-compat/http://localhost:5273/v1"),
});
```
- Support for more AI services (Google Vertex AI, Amazon Bedrock)
MIT License