Lokxy is a powerful log aggregator for Loki, designed to collect and unify log streams from multiple sources into a single, queryable endpoint. It simplifies log management and enhances visibility across distributed environments, providing seamless integration with your existing Loki infrastructure.
- Motivation & Inspiration
- Requirements
- Installation
- How to Run Locally
- How to Run as a Container
- Play with Lokxy
- Configuration File
- Usage
Lokxy addresses the increasing complexity in observability workflows, especially in large-scale, distributed environments where log management across multiple instances becomes a challenge. Inspired by the design philosophy of Promxy, Lokxy provides a similar proxy-based solution but focused on log aggregation for Loki.
With Loki being a powerful log aggregation tool, Lokxy leverages it as a backend to enable users to seamlessly aggregate and query logs from multiple Loki instances. This approach is designed to simplify querying, enhance observability, and improve scalability in environments where managing logs across several backends can become inefficient.
We draw particular inspiration from Promxy for Prometheus, which bridges multiple backends into a single queryable interface. Lokxy replicates this powerful concept for logs, ensuring users have a unified interface to query, without needing to directly interact with each individual Loki instance.
Before running lokxy, ensure the following are installed:
- Go (v1.25+)
- Docker (if running as a container)
- Make (for running build scripts)
Clone the repository to get started:
```shell
git clone https://github.com/paulojmdias/lokxy.git
cd lokxy
```

You can install dependencies and build the project using:

```shell
go mod tidy
go build -o lokxy ./cmd/
```

To run lokxy locally, use the following steps:
- Prepare your configuration file: ensure you have a `config.yaml` file in your working directory (or provide its path at startup). See Configuration File for details.
- Run the proxy:

```shell
go run cmd/main.go
```

Alternatively, after building the binary, run:

```shell
./lokxy --config config.yaml
```

The application will start serving on the port defined in your `config.yaml`.
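Once started, you can sanity-check the proxy with any of the proxied read APIs. A minimal smoke test, assuming lokxy listens on port 3100 (use whatever port your `config.yaml` specifies):

```shell
# Smoke test: the labels endpoint is one of the APIs lokxy proxies.
# The port below is illustrative; match it to your config.yaml.
LOKXY_URL="http://localhost:3100"

curl -sf "${LOKXY_URL}/loki/api/v1/labels" \
  || echo "lokxy is not reachable at ${LOKXY_URL}"
```

A successful response returns the union of label names from all configured Loki instances.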
```shell
docker run --rm -it -p 8080:8080 \
  -v $(pwd)/config.yaml:/lokxy/config.yaml \
  lokxy:latest lokxy --config /lokxy/config.yaml
```

This command binds the container to port 8080 and mounts the local `config.yaml` file for configuration. Adjust ports and file paths as needed.
We provide a docker-compose.yml file located in the mixin/play/ folder to help you quickly get Lokxy up and running using Docker.
- Navigate to the `mixin/play` directory:

```shell
cd mixin/play/
```

- Start Lokxy with Docker Compose:

```shell
docker-compose up
```

This will start two isolated Loki instances, two Promtail instances, one Lokxy, and one Grafana. Once everything is up and running, open Grafana at http://localhost:3000 and view the data in the Explore menu.
You will find three different data sources in Grafana:

- `loki1` → Loki instance number 1
- `loki2` → Loki instance number 2
- `lokxy` → Data source that aggregates the data from both Loki instances
The config.yaml file defines how lokxy behaves, including details of the Loki instances to aggregate and logging options. Below are the available configuration options:
Example config.yaml:
```yaml
server_groups:
  - name: "Loki 1"
    url: "http://localhost:3100"
    timeout: 30
    headers:
      Authorization: "Bearer <token>"
      X-Scope-OrgID: org1
  - name: "Loki 2"
    url: "http://localhost:3101"
    timeout: 60
    headers:
      Authorization: "Bearer <token>"
      X-Scope-OrgID: org2
  - name: "Loki 3"
    url: "https://localhost:3102"
    timeout: 60
    headers:
      Authorization: "Basic <token>"
      X-Scope-OrgID: org3
    http_client_config:
      tls_config:
        insecure_skip_verify: true
logging:
  level: "info"  # Available options: "debug", "info", "warn", "error"
  format: "json" # Available options: "json", "logfmt"
```

- `server_groups`:
  - `name`: A human-readable name for the Loki instance.
  - `url`: The base URL of the Loki instance.
  - `timeout`: Timeout for requests, in seconds.
  - `headers`: Custom headers to include in each request, such as authentication tokens.
  - `http_client_config`: Custom HTTP client configuration.
    - `dial_timeout`: Timeout for establishing a connection. Defaults to 200ms.
    - `tls_config`:
      - `insecure_skip_verify`: If set to `true`, the client will not verify the server's certificate chain or host name.
      - `ca_file`: Path to a custom Certificate Authority (CA) certificate file used to verify the server.
      - `cert_file`: Path to the client certificate file for mutual TLS.
      - `key_file`: Path to the client key file for mutual TLS.
- `logging`:
  - `level`: Defines the log level (`debug`, `info`, `warn`, `error`).
  - `format`: The log output format, either `json` or `logfmt`.
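The example configuration only exercises `insecure_skip_verify`. For mutual TLS, the file-based options would be combined like this — a sketch, where the file paths and the duration format for `dial_timeout` are illustrative assumptions:

```yaml
server_groups:
  - name: "Loki mTLS"
    url: "https://loki.internal:3100"
    timeout: 30
    http_client_config:
      dial_timeout: 200ms              # connection-establishment timeout (duration format assumed)
      tls_config:
        ca_file: /etc/lokxy/ca.crt     # custom CA used to verify the server
        cert_file: /etc/lokxy/client.crt  # client certificate for mutual TLS
        key_file: /etc/lokxy/client.key   # client key for mutual TLS
```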
The application includes tracing instrumentation using OpenTelemetry. To collect traces, deploy an OpenTelemetry Collector or compatible tracing backend such as Jaeger, Grafana Tempo, or Zipkin.
Configure the trace export destination using the standard OpenTelemetry environment variables: `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` for the collector endpoint, or `OTEL_EXPORTER_OTLP_ENDPOINT` as a fallback. Set `OTEL_EXPORTER_OTLP_INSECURE=true` for development environments that use insecure gRPC connections. If no endpoint is configured, the application defaults to `localhost:4317`, per the OTLP exporter documentation. An example of this can also be found in `mixin/play/`.
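For example, a development setup pointing traces at a local collector might look like this (the collector address is illustrative):

```shell
# Point the built-in OpenTelemetry exporter at a collector.
# The endpoint is illustrative; match it to your collector deployment.
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://otel-collector:4317"
export OTEL_EXPORTER_OTLP_INSECURE="true"   # plaintext gRPC, development only

# Then start the proxy as usual:
# ./lokxy --config config.yaml
```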
Once lokxy is running, you can query Loki instances by sending HTTP requests to the proxy.
The following APIs are supported:
- Querying Logs: `/loki/api/v1/query`
- Querying Range: `/loki/api/v1/query_range`
- Series API: `/loki/api/v1/series`
- Index Stats API: `/loki/api/v1/index/stats`
- Index Volume API: `/loki/api/v1/index/volume`
- Index Volume Range API: `/loki/api/v1/index/volume_range`
- Detected Labels API: `/loki/api/v1/detected_labels`
- Labels API: `/loki/api/v1/labels`
- Label Values API: `/loki/api/v1/label/{label_name}/values`
- Detected Fields API: `/loki/api/v1/detected_fields`
- Detected Field Values API: `/loki/api/v1/detected_field/{field_name}/values`
- Patterns API: `/loki/api/v1/patterns`
- Tailing Logs via WebSocket: `/loki/api/v1/tail`
```shell
curl "http://localhost:3100/loki/api/v1/query?query={job=\"myapp\"}"
```

Logs from all configured Loki instances will be aggregated and returned.
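For range queries, curl's `--data-urlencode` flag saves you from hand-escaping the LogQL braces and quotes. A sketch, assuming the proxy listens on `localhost:3100`:

```shell
# Range query against the proxy; the port is illustrative.
LOKXY_URL="http://localhost:3100"
QUERY='{job="myapp"}'

# -G sends the data as GET query parameters; --data-urlencode handles escaping.
curl -sG "${LOKXY_URL}/loki/api/v1/query_range" \
  --data-urlencode "query=${QUERY}" \
  --data-urlencode "limit=100" \
  || echo "query failed; is lokxy running at ${LOKXY_URL}?"
```

Without an explicit `start`/`end`, Loki applies its default lookback window for range queries.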