Closed

Dev #268

Binary file added docs/images/about-page-license.png
Binary file added docs/images/action-tab.png
Binary file added docs/images/add-new-license-key.png
Binary file added docs/images/after-flattening.png
Binary file added docs/images/alert-details-drawer.png
Binary file added docs/images/associate-query.png
Binary file added docs/images/condition-node.png
Binary file added docs/images/configure-scheduled-pipelines.png
Binary file added docs/images/connect-nodes.png
Binary file added docs/images/create-new-function.png
Binary file added docs/images/create-new-stream.png
Binary file added docs/images/download-pipeline-json.png
Binary file added docs/images/enterprise-branding.png
Binary file added docs/images/external-destination.png
Binary file added docs/images/filter-pipeline.png
Binary file added docs/images/import-pipelines.png
Binary file added docs/images/json-file-pipeline-export.png
Binary file added docs/images/license-management.png
Binary file added docs/images/license-server.png
Binary file added docs/images/manage_pipelines.png
Binary file added docs/images/pipeline-editor.png
Binary file added docs/images/pipeline-error-view.png
Binary file added docs/images/pipeline-error.png
Binary file added docs/images/pipeline-import-json.png
Binary file added docs/images/pipeline-import.png.png
Binary file added docs/images/pipeline-list-view.png
Binary file added docs/images/pipeline-name.png
Binary file modified docs/images/pipeline-new-editor.png
Binary file modified docs/images/pipeline-new-realtime-destination.png
Binary file modified docs/images/pipeline-new-realtime-transform-condition.png
Binary file modified docs/images/pipeline-new-scheduled-condition.png
Binary file added docs/images/pipeline-permission.png
Binary file modified docs/images/pipelines-new-realtime.png
Binary file added docs/images/pipelines-tab.png
Binary file added docs/images/query-output.png
Binary file added docs/images/save-pipeline.png
Binary file added docs/images/schedule-condition-node.png
Binary file added docs/images/schedule-connect-nodes.png
Binary file added docs/images/schedule-stream-destination.png
Binary file modified docs/images/scheduled-pipeline-config-delay.png
Binary file added docs/images/scheduled-pipeline-list-view.png
Binary file added docs/images/scheduled-pipeline-name.png
Binary file added docs/images/scheduled-variables.png
Binary file added docs/images/search-pipeline.png
Binary file added docs/images/select-existing-stream.png
Binary file added docs/images/sort-columns.png
Binary file added docs/images/stream-destination.png
Binary file added docs/images/theme-config.png
Binary file added docs/images/view-result.png
Binary file added docs/images/view-scheduled-result.png
28 changes: 7 additions & 21 deletions docs/performance.md
Expand Up @@ -246,37 +246,23 @@ ZO_USE_MULTIPLE_RESULT_CACHE: "true" # Enable to use multiple result caches for qu
```

## Query partitioning
OpenObserve improves query responsiveness by processing large result sets in smaller units called **partitions**. A partition represents a segment of the overall query based on time range or data volume.

Query performance UX is not only about delivering query results faster. Imagine not having to wait for the full result set, but instead receiving results incrementally as they are processed. This is similar to (but slightly better than) knowing where your Uber driver is and how long they will take to reach you: the total time may be the same, but knowing the progress makes the wait easier.
- When you run a query on the Logs page or in a dashboard panel, OpenObserve divides the query into multiple partitions.
- Each partition is processed sequentially, and partial results are returned as soon as each partition completes. This reduces waiting time and improves the time to first result during long-range or high-volume queries.

For query results on the log search page or in a dashboard panel, OpenObserve can partition the query and return results incrementally.
Partitioning introduces a tradeoff. Smaller partitions return early results and improve responsiveness, but they also increase the number of operations the system must perform. This can extend the total time required to complete the query. OpenObserve addresses this by combining partitioning with a streaming delivery model based on **HTTP2**.

e.g. A query for 1 day may be broken into 4 queries of 6 hours each (the UI does this automatically for you), so you see the results for the first 6 hours and then incrementally receive the rest. All the requests are made incrementally by the UI. By default, the UI uses AJAX requests for each query partition.
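The splitting described above can be sketched as follows. This is a hypothetical illustration of the client-side logic, not OpenObserve's actual implementation:

```javascript
// Hypothetical sketch of how a UI client could split a time range into
// fixed-size partitions and issue one request per partition.
function partitionRange(startMs, endMs, partitionMs) {
  const partitions = [];
  for (let t = startMs; t < endMs; t += partitionMs) {
    partitions.push({ start: t, end: Math.min(t + partitionMs, endMs) });
  }
  return partitions;
}

const HOUR = 60 * 60 * 1000;
// A 1-day query split into 6-hour partitions yields 4 sequential requests.
const parts = partitionRange(0, 24 * HOUR, 6 * HOUR);
console.log(parts.length); // 4
```

Smaller partitions mean earlier first results but more requests overall, which is exactly the tradeoff discussed above.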
To learn more, visit the [Streaming Aggregation](https://openobserve.ai/docs/user-guide/management/aggregation-cache/) page.

While query partitioning can greatly improve the user experience, it can also reduce the overall speed of getting the complete result. e.g. A one-day query broken into 48 individual queries might have completed in 6 seconds without partitioning, but making 48 separate HTTP requests sequentially may take 24 seconds (HTTP requests have overhead). To tackle this, you can enable websockets. You can enable websockets using:
## Mini-partition
OpenObserve uses a mini-partition to return the first set of results faster. The mini-partition is a smaller slice of the first partition and is controlled by the environment variable `ZO_MINI_SEARCH_PARTITION_DURATION_SECS`, which defines the mini-partition duration in seconds. The default value is sixty seconds.
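For example, to shrink the mini-partition so the first results arrive even sooner (the value below is illustrative; the default is 60):

```shell
ZO_MINI_SEARCH_PARTITION_DURATION_SECS: "30"  # mini-partition duration in seconds
```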

```shell
ZO_WEBSOCKET_ENABLED: "true"
```

Enabling websockets also requires additional setup if you are using a reverse proxy such as nginx.

The official Helm chart has all of this set up for you, so you don't have to worry about it. However, if you are setting it up yourself or using another environment, make sure that these (or their equivalents) are configured:

Add nginx annotations:

```yaml
nginx.ingress.kubernetes.io/proxy-http-version: "1.1" # Enable HTTP/1.1 for WebSockets
nginx.ingress.kubernetes.io/enable-websocket: "true"
# nginx.ingress.kubernetes.io/connection-proxy-header: keep-alive # disable keep alive to use websockets
nginx.ingress.kubernetes.io/proxy-set-headers: |
Upgrade $http_upgrade;
Connection "Upgrade";
```

As of 0.14.1, websockets are an experimental feature and you must also enable them from the UI: `Settings > General settings > Enable Websocket Search`

Result caching + Query partition + Websockets = Huge performance gains and great UX.


## Large Number of Fields
Expand Down
3 changes: 1 addition & 2 deletions docs/user-guide/.pages
@@ -1,11 +1,10 @@
nav:
- Concepts: concepts.md
- Logs: logs
- Metrics: metrics
- Streams: streams
- Ingestion: ingestion
- Pipelines: pipelines
- Traces: traces
- Alerts: alerts
- Dashboards: dashboards
- Actions: actions
Expand Down
22 changes: 15 additions & 7 deletions docs/user-guide/alerts/alert-history.md
Expand Up @@ -2,13 +2,16 @@
This guide provides information about how the Alert History feature in OpenObserve works, where the data originates from, who can access it, how to interpret the Alert History table, and how to debug failed or skipped alerts.

## Overview
OpenObserve records alert evaluation events in a dedicated stream called `triggers`. Each organization has its own `triggers` stream. When an alert is evaluated, the evaluation result is written to the triggers stream inside that organization. OpenObserve also writes a copy of the same event to the `triggers` stream in the `_meta` organization for organization level monitoring.

The Alert History page brings this information into the user’s organization. It provides visibility into alert evaluations, including when each alert ran, its evaluation duration, and its final status. This design allows alert owners to monitor alert performance and troubleshoot issues without requiring access to the `_meta` organization.
> An evaluation is the system checking whether the alert’s condition is true. For scheduled alerts, this check happens at the set frequency. For real-time alerts, the check happens whenever new data arrives. The condition defines what should trigger the alert.
> A trigger happens when the evaluation finds the condition to be true. This creates a firing event and can send a notification if one is set.

!!! note "Who can access it"
Any user who has permission to view, update, or delete alerts can also access Alert History. Users do not need access to the `_meta` organization to view alert history for their own organization. Access to the `_meta` organization is only required when administrators need to review alert evaluation events across all organizations.

!!! note "Environment variable"
`ZO_USAGE_REPORT_TO_OWN_ORG`: Controls where alert evaluation events are stored. When it is enabled, OpenObserve writes each evaluation event to the organization’s own `triggers` stream and also keeps a copy in the `_meta` organization. This allows users to view their alert history within their own organization without requiring access to `_meta`, while still supporting organization level debugging from the `_meta` organization.
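A sketch of how this might look in a deployment's environment configuration (shown for illustration, following the same pattern as the other `ZO_*` variables in these docs):

```shell
ZO_USAGE_REPORT_TO_OWN_ORG: "true"  # write alert evaluation events to the org's own triggers stream
```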

## How to interpret the Alert History table
![alert-history](../../images/alert-history.png)
Expand All @@ -22,8 +25,9 @@ Each row represents one alert evaluation.
- **Start Time** and **End Time**: The time range of data evaluated.
- **Duration**: How long the alert condition remained true.
- **Status**: The result of the alert evaluation.
- **Retries**: Number of times the system retried alert delivery when the destination did not acknowledge it. The system retries up to three times. <br> **Note**: The environment variable `ZO_SCHEDULER_MAX_RETRIES` defines how many times the scheduler retries a failed execution.
- **Total Evaluations**: Shows how many times the alert rule has been evaluated over the selected time range. Each evaluation corresponds to one run of the alert’s query and condition.
- **Firing Count**: Shows how many of those evaluations resulted in a firing event, that is, how many times the alert condition was satisfied and the alert was triggered.
- **Actions**: Opens a detailed view that includes:

- **Evaluation Time**: The time taken to complete the alert’s search query.
Expand All @@ -37,6 +41,9 @@ Each row represents one alert evaluation.
- **condition_not_met**: The configured alert condition was not satisfied for that time range.
- **skipped**: The scheduled evaluation window was missed due to a delay, and the system evaluated the next aligned window.

- **Alert Details** drawer: Opens when the user clicks an alert in the Alerts list. The drawer displays the alert condition, description, and evaluation history.

![Alert details drawer](../../images/alert-details-drawer.png)
## How to debug a failed alert
This process applies only to users who have access to the `_meta` organization.
![debug-alert-history](../../images/debug-alert-history.png)
Expand All @@ -62,5 +69,6 @@ This process applies only to users who have access to the `_meta` organization.

## Why you might see a skipped status
A **skipped** status appears when a scheduled alert runs later than its expected window. <br>
For example, an alert configured with a 5-minute period and 5-minute frequency is scheduled to run at 12:00 PM. <br>It should normally evaluate data from 11:55 to 12:00.
If the alert manager experiences a delay and runs the job at 12:05 PM, it evaluates the current aligned window (12:00 to 12:05) instead of the earlier one.<br> The earlier window (11:55 to 12:00) is marked as skipped to indicate that evaluation for that range did not occur because of delay in job pickup or data availability.
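The window alignment can be illustrated with a small sketch (hypothetical, for intuition only; times are expressed as minutes since midnight):

```javascript
// Hypothetical sketch: compute the aligned evaluation window for a run time.
// A job picked up late evaluates the window aligned to the pickup time,
// which is why the earlier window is reported as skipped.
const MIN = 60 * 1000;

function alignedWindow(runTimeMs, periodMs) {
  const end = Math.floor(runTimeMs / periodMs) * periodMs;
  return { start: end - periodMs, end };
}

const onTime = alignedWindow(720 * MIN, 5 * MIN);  // run at 12:00 -> 11:55-12:00
const delayed = alignedWindow(725 * MIN, 5 * MIN); // run at 12:05 -> 12:00-12:05
// The 11:55-12:00 window was never evaluated, so it is marked skipped.
```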

3 changes: 2 additions & 1 deletion docs/user-guide/dashboards/custom-charts/.pages
Expand Up @@ -4,4 +4,5 @@ nav:
- Custom Charts with Flat Data: custom-charts-flat-data.md
- Custom Charts with Nested Data: custom-charts-nested-data.md
- Event Handlers and Custom Functions: custom-charts-event-handlers-and-custom-functions.md
- Custom Charts for Metrics Using PromQL: custom-charts-for-metrics-using-promql.md
- Custom Charts for Metrics Using Multiple PromQL Queries: custom-charts-for-metrics-using-multiple-promql-queries.md
Expand Up @@ -5,8 +5,7 @@ description: >-
---
This guide shows how to make your [custom charts](what-are-custom-charts.md) interactive using event handlers and reusable custom functions (customFn).

## What are event handlers?
Event handlers let you define what happens when a user interacts with the chart, such as clicking or hovering over a data point. Use event handlers in the custom chart logic to display messages, log actions, or apply filters based on user input.

Common event handlers:
Expand All @@ -21,7 +20,7 @@ Before you begin, note the following:
- Use the `o2_events` block to specify the event type, such as `click`.
- Associate the event with a function that will run when the event occurs.

## How to create event handlers

### Step 1: Create a basic chart with labels and values

Expand Down Expand Up @@ -92,7 +91,7 @@ o2_events: {
}
```

## What are custom functions?

Custom functions (customFn) are special sections inside the chart’s `option` object where you can define reusable functions.

Expand All @@ -102,7 +101,7 @@ These functions can be used:
- To apply logic such as filters
- To keep your event handlers simple

## How to create custom functions

**Example**: You have a chart showing bars for A, B, and C. When the user clicks on a bar, you want to format the output like:
```
Expand Down
22 changes: 11 additions & 11 deletions docs/user-guide/dashboards/custom-charts/custom-charts-flat-data.md
Expand Up @@ -5,12 +5,12 @@ description: >-
---
The following step-by-step instructions can help you build a [custom chart that expects flat data](what-are-custom-charts.md#how-to-check-the-data-structure-a-chart-expects).

## Use case

Build a custom **heatmap chart** to understand which organization and search type combinations generate the most query load.


## Before you begin

To build a custom chart, you need to bridge two things:

Expand All @@ -19,7 +19,7 @@ To build a custom chart, you need to bridge two things:

> **Note**: Understanding both is important because it helps you write the right SQL query, [prepare](what-are-custom-charts.md#build-the-chart) the data through grouping or aggregation, [reshape](what-are-custom-charts.md#build-the-chart) the results to match the chart’s structure, and map them correctly in the JavaScript code that renders the chart.

## Step 1: Understand the ingested dataset

In OpenObserve, the data ingested into a stream is typically in a flat structure.
**Example:** In the following dataset, each row represents a single event or query log with its own timestamp, organization ID, search type, and query duration.
Expand All @@ -39,7 +39,7 @@ In OpenObserve, the data ingested into a stream is typically in a flat structure

**Note**: Use the **Logs** page to view the data ingested to the stream.

## Step 2: Identify the expected data structure

Before moving ahead, [identify what structure the chart expects](what-are-custom-charts.md#how-to-check-the-data-structure-a-chart-expects). The heatmap chart expects flat data.

Expand All @@ -51,7 +51,7 @@ In this example, each row in [data[0]](what-are-custom-charts.md#the-data-object

**Note**: For charts that expect flat data, [reshaping is not needed](what-are-custom-charts.md#build-the-chart). SQL alone is enough to prepare the data in required format.

## Step 3: Prepare the data (via SQL)

In the [Add Panel](what-are-custom-charts.md#how-to-access-custom-charts) page, under **Fields**, select the desired stream type and stream name.
![custom-chart-flat-data-add-panel](../../../images/custom-chart-flat-data-add-panel.png)
Expand Down Expand Up @@ -85,7 +85,7 @@ Select a time range to fetch the relevant dataset for your chart.

![custom-chart-flat-data-time-range-selection](../../../images/custom-chart-flat-data-time-range.png)

**Expected query result**

```linenums="1"
data=[[
  ...
]]
```
Expand All @@ -99,7 +99,7 @@

**Note**: OpenObserve stores the result of the query in [the `data` object](what-are-custom-charts.md#the-data-object) as an **array of an array**.

## Step 4: Inspect the queried dataset

Inspect the queried dataset:

Expand All @@ -108,7 +108,7 @@ console.log(data);
console.log(data[0]);
```
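Once `data[0]` looks correct, the rows can be reshaped into the `[xIndex, yIndex, value]` triplets a heatmap series consumes. A minimal sketch (field names such as `organization`, `search_type`, and `total_duration` are assumptions matching the example dataset above):

```javascript
// Hypothetical sketch: convert flat query rows (as in data[0]) into the
// [xIndex, yIndex, value] triplets an ECharts heatmap series expects.
const rows = [
  { organization: "org-a", search_type: "ui", total_duration: 120 },
  { organization: "org-a", search_type: "api", total_duration: 45 },
  { organization: "org-b", search_type: "ui", total_duration: 80 },
];
const orgs = [...new Set(rows.map(r => r.organization))];   // x-axis categories
const types = [...new Set(rows.map(r => r.search_type))];   // y-axis categories
const heatmapData = rows.map(r => [
  orgs.indexOf(r.organization),
  types.indexOf(r.search_type),
  r.total_duration,
]);
console.log(heatmapData); // [[0,0,120],[0,1,45],[1,0,80]]
```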

## Step 5: JavaScript code to render the heatmap

In the JavaScript editor, you must construct an [object named `option`](what-are-custom-charts.md#the-option-object).
This `option` object defines how the chart looks and behaves. To feed data into the chart, use the query result stored in `data[0]`
Expand Down Expand Up @@ -165,13 +165,13 @@ option = {
};
```

## Step 6: View result

Click **Apply** to generate the chart.

![custom-chart-flat-data-result](../../../images/custom-chart-flat-data-result.png)

### Understand the chart

In the chart,

Expand Down Expand Up @@ -208,7 +208,7 @@ Use the following guidance to identify and fix common issues when working with c
- Open your browser's developer console to locate the error.
- Use `console.log()` to test your script step by step.

**4. Chart not rendering:**
**Cause**: The query returned data, but the chart did not render.
**Fix**:

Expand Down