amazon_msk/README.md (+4, -1)
@@ -6,6 +6,8 @@ Amazon Managed Streaming for Apache Kafka (MSK) is a fully managed service that
You can collect metrics from this integration in two ways: with the [Datadog Agent](#setup) or with a [Crawler][18] that collects metrics from CloudWatch.
Consider [Data Streams Monitoring][20] to enhance your MSK integration. This solution enables pipeline visualization and lag tracking, helping you identify and resolve bottlenecks.
## Setup
The Agent check monitors Amazon Managed Streaming for Apache Kafka ([Amazon MSK][1]) through the Datadog Agent.
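For the Agent path, the check is configured in `amazon_msk.d/conf.yaml`. A minimal sketch, assuming the check's standard `cluster_arn` and `region_name` instance fields; the ARN and region below are placeholders:

```yaml
init_config:

instances:
    # Placeholder values: replace with your own MSK cluster ARN and region.
  - cluster_arn: arn:aws:kafka:us-east-1:123456789012:cluster/example-cluster/abc-123
    region_name: us-east-1
```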
@@ -83,4 +85,5 @@ Additional helpful documentation, links, and articles:
Use the Anthropic integration to monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots or data extraction tools, using Anthropic's models.
If you are building LLM applications, use LLM Observability to investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
@@ -10,63 +11,106 @@ See the [LLM Observability tracing view video](https://imgix.datadoghq.com/video
## Setup
# Configuring Anthropic LLM Evaluations for Datadog
## Overview
Datadog's LLM Observability enables end-to-end monitoring of your LLM application using Anthropic models. Follow the steps below to configure your Anthropic integration for LLM Evaluations.
## Prerequisites
- An **Anthropic account** with access to model deployments.
- A **valid Anthropic API key** with **write permissions** for model capabilities.
## Setup
### 1. Generate an Anthropic API key
1. Log in to your [Anthropic dashboard][3].
2. Navigate to **API keys** under your profile.
3. Click the **Create Key** button.
   - For LLM Observability, ensure that the API key has **write** permission for **model capabilities**. This allows Datadog to invoke models in your Anthropic account.
4. Copy the created API key to your clipboard.
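The copied key is conventionally supplied through the `ANTHROPIC_API_KEY` environment variable. As a minimal sketch of how a request authenticates with it: `build_anthropic_headers` is a hypothetical helper, not part of any SDK, though `x-api-key` and `anthropic-version` are the headers Anthropic's HTTP API documents:

```python
import os

# Hypothetical helper (not part of any SDK): builds the headers that
# Anthropic's HTTP API expects, reading the key created in step 4 from
# the ANTHROPIC_API_KEY environment variable unless one is passed in.
def build_anthropic_headers(api_key=None):
    key = api_key or os.environ.get("ANTHROPIC_API_KEY", "")
    if not key:
        raise ValueError("set ANTHROPIC_API_KEY or pass api_key explicitly")
    return {
        "x-api-key": key,                   # the key from your dashboard
        "anthropic-version": "2023-06-01",  # documented API version header
        "content-type": "application/json",
    }

print(build_anthropic_headers("sk-ant-placeholder")["anthropic-version"])
```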
### 2. Configure Datadog's Anthropic integration
1. In Datadog, navigate to the Anthropic integration tile.
2. On the **Configure** tab, click **Add New**.
3. Under **Name**, enter a name for your account. Under **API key**, enter your Anthropic API key.
4. Click the check mark to save.
### Additional Notes
- This integration allows LLM Observability to track Anthropic model performance.
- No additional permissions are required beyond enabling write access for model capabilities.
## Additional Resources
- [Anthropic API Documentation][4]
### LLM Observability: Get end-to-end visibility into your LLM application using Anthropic
You can enable LLM Observability in different environments. Follow the appropriate setup based on your scenario:
#### Installation for Python
##### If you do not have the Datadog Agent:
1. Install the `ddtrace` package:
   ```shell
   pip install ddtrace
   ```
2. Start your application using the following command to enable Agentless mode:
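   A hedged sketch of the Agentless invocation, assuming the standard `DD_LLMOBS_*` environment variables from Datadog's LLM Observability setup; the angle-bracket values are placeholders for your own key, site, and application name:

   ```shell
   DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> \
   DD_LLMOBS_ENABLED=1 DD_LLMOBS_AGENTLESS_ENABLED=1 \
   DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py
   ```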
**Note**: In serverless environments, Datadog automatically flushes spans at the end of the Lambda function.
@@ -75,6 +119,7 @@ You can enable LLM Observability in different environments. Follow the appropria
The Anthropic integration allows for automatic tracing of chat message calls made by the Anthropic Python SDK, capturing latency, errors, input/output messages, and token usage during Anthropic operations.
The following methods are traced for both synchronous and asynchronous Anthropic operations:
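What the integration records for each call can be pictured with a small illustrative decorator. This is only a sketch of the captured data (latency, errors, outputs), not Datadog's actual instrumentation, and `create_message` is a stand-in for an SDK chat call:

```python
import functools
import time

SPANS = []  # collected trace records (illustrative only)

def trace_call(fn):
    """Record latency, errors, and output for a call, similar in spirit
    to what the Anthropic integration captures automatically."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"name": fn.__name__, "error": None, "output": None}
        start = time.monotonic()
        try:
            span["output"] = fn(*args, **kwargs)
            return span["output"]
        except Exception as exc:
            span["error"] = repr(exc)
            raise
        finally:
            span["latency_s"] = time.monotonic() - start
            SPANS.append(span)
    return wrapper

@trace_call
def create_message(prompt):
    # Stand-in for a chat message call; no network access here.
    return {"role": "assistant", "content": prompt.upper()}

create_message("hello")
```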
@@ -84,23 +129,23 @@ No additional setup is required for these methods.
Validate that LLM Observability is properly capturing spans by checking your application logs for successful span creation. You can also run the following command to check the status of the `dd-trace` integration:
```shell
ddtrace-run --info
```
Look for the following message to confirm the setup:
```shell
Agent error: None
```
##### Debugging
If you encounter issues during setup, enable debug logging by passing the `--debug` flag:
```shell
ddtrace-run --debug
```
This displays any errors related to data transmission or instrumentation, including issues with Anthropic traces.
@@ -124,4 +169,5 @@ Need help? Contact [Datadog support][2].
cert_manager/CHANGELOG.md (+6)
@@ -2,6 +2,12 @@
<!-- towncrier release notes start -->
## 5.3.0 / 2025-03-19
***Added***:
* Add collection of the `certmanager_certificate_renewal_timestamp_seconds` metric as `certificate.renewal_timestamp` ([#19643](https://github.com/DataDog/integrations-core/pull/19643))