
Commit 8f01e9c

peterbarnett03 (Peter Barnett) and Peter Barnett authored
chore: Update README for InfluxDB main repo (influxdata#25101)
* chore: update README content
* chore: update README content
* fix: updating Cargo.lock and semantics
* fix: adding dark mode logo with dynamic picture instead of img tag
* fix: adding dynamic picture instead of img tag
* fix: adding updated dark mode logo
* fix: limiting logo size to 600px
* fix: limiting logo size to 600px via width tag

---------

Co-authored-by: Peter Barnett <[email protected]>
1 parent b30729e commit 8f01e9c

File tree

3 files changed: +24 -185 lines changed


README.md

+24, -185
@@ -1,201 +1,40 @@
-# InfluxDB Edge
+<div align="center">
+<picture>
+<source media="(prefers-color-scheme: light)" srcset="assets/influxdb-logo.png">
+<source media="(prefers-color-scheme: dark)" srcset="assets/influxdb-logo-dark.png">
+<img src="assets/influxdb-logo.png" alt="InfluxDB Logo" width="600">
+</picture>
+<p>
+InfluxDB is the leading open source time series database for metrics, events, and real-time analytics.</p>

-> [!NOTE]
-> On 2023-09-21 this repo changed the default branch from master to main. At the same time, we moved all InfluxDB 2.x development into the main-2.x branch. If you relied on the 2.x codebase in the former master branch, update your tooling to point to main-2.x, which is the new home for any future InfluxDB 2.x development. This branch (main) is now the default branch for this repo and is for development of InfluxDB 3.x.
->
-> For now, this means that InfluxDB 3.0 and its upstream dependencies are the focus of our open source efforts. We continue to support both the 1.x and 2.x versions of InfluxDB for our customers, but our new development efforts are now focused on 3.x. The remainder of this README has more details on 3.0 and what you can expect.
+</div>

-InfluxDB is an open source time series database written in Rust, using Apache Arrow, Apache Parquet, and Apache DataFusion as its foundational building blocks. This latest version (3.x) of InfluxDB focuses on providing a real-time buffer for observational data of all kinds (metrics, events, logs, traces, etc.) that is queryable via SQL or InfluxQL, and persisted in bulk to object storage as Parquet files, which other third-party systems can then use. It can run either with a write-ahead log or entirely off object storage if the write-ahead log is disabled (in this mode of operation there is a potential window of data loss for any buffered data that has not yet been persisted to object store).
-
-The open source project runs as a standalone system in a single process. If you're looking for a clustered, distributed time series database with a bunch of enterprise security features, we have a commercial offering available as a managed hosted service or as on-premise software designed to run inside Kubernetes. The distributed version also includes functionality to reorganize the files in object storage for optimal query performance. In the future, we intend to have a commercial version of the single-server software that adds fine-grained security, federated query capabilities, file reorganization for query optimization and deletes, and integration with other systems.

 ## Project Status
+This main branch contains InfluxDB v3, which is in pre-release and under active development. Build artifacts are not yet generally available, and official installation instructions will come later this year. For now, a Dockerfile is provided and can be adapted or used for inspiration by intrepid users.
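
Until packages are published, the quickest way to try this branch is to build it yourself. The following is only a minimal sketch, not official instructions: it assumes the provided Dockerfile sits at the repository root, that this branch builds as a standard Cargo workspace, and the `influxdb3` image tag is purely illustrative.
```
# build a container image from the provided Dockerfile (illustrative tag name)
$ docker build -t influxdb3 .

# or build the binary directly with a recent Rust toolchain (standard Cargo workflow)
$ cargo build --release
```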

-Currently, this project is under active prototype development without documentation or official builds. This README will be updated with getting-started details and links to docs when the time comes.
-
-## Roadmap
-
-The scope of this open source InfluxDB 3.0 is different from either InfluxDB 1.x or 2.x. This may change over time, but for now here are the basics of what we have planned:
-
-* InfluxDB 1.x and 2.x HTTP write API (supporting Line Protocol)
-* InfluxDB 1.x HTTP query API (InfluxQL)
-* Flight SQL (query API using SQL)
-* InfluxQL over Flight
-* Data migration tooling for InfluxDB 1.x & 2.x to 3.0
-* InfluxDB 3.0 HTTP write API (a new way to write data with a more expressive data model than 1.x or 2.x)
-* InfluxDB 3.0 HTTP query API (send InfluxQL or SQL queries as an HTTP GET and get back JSON lines, CSV, or a pretty-printed response)
-* Persist event stream (subscribe to the Parquet file persist events, useful for downstream clients to pick up files from object store)
-* Embedded VM (either Python, JavaScript, WASM, or some combination thereof)
-  * Individual queries
-  * Triggers on write
-  * On persist (run whatever is being persisted through script)
-  * On schedule
-* Bearer token authentication (all or nothing; the token is set at startup through an env variable; more fine-grained security is outside the scope of the open source effort)
-
-What this means is that InfluxDB 3.0 can be pointed at as though it were an InfluxDB 1.x server, with most of the functionality present. InfluxDB 2.x users who primarily interact with the database through InfluxQL will also be able to use this database in a similar way. Version 3.0 will not implement the rest of the 2.x API natively, although separate processes that provide that functionality could be added at some later date.
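
To illustrate that compatibility goal, a 1.x-style write might look like the sketch below. This is only an illustration: the 1.x-compatible endpoint is a roadmap item here, and the `/write` path and port 8181 are assumptions taken from the classic 1.x API and the server defaults shown in Getting Started further down.
```
# hypothetical 1.x-compatible write (endpoint and port are assumptions, not confirmed by this README)
$ curl -s -X POST "http://localhost:8181/write?db=mydb" \
  --data-binary 'mymeas,mytag1=sometag value=0.54'
```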
-
-## Flux
-
-Flux is the custom scripting and query language we developed as part of our effort on InfluxDB 2.0. While we will continue to support Flux for our customers, it is noticeably absent from the description of InfluxDB 3.0. We built Flux, written in Go, hoping it would get broad adoption and empower users to do things with the database that were previously impossible. While we delivered a powerful new way to work with time series data, many users found Flux to be an adoption blocker for the database.
-
-We spent years of developer effort on Flux, starting in 2018 with a small team of developers. However, the size of the effort, including creating a new language, VM, query planner, parser, optimizer, and execution engine, was significant. We ultimately weren't able to devote the kind of attention we would have liked to more language features, tooling, and overall usability and developer experience. We worked constantly on performance, but because we were building everything from scratch, all the effort was solely on the shoulders of our small team. We think this ultimately kept us from working on the kinds of usability improvements that would have helped Flux get broader adoption.
-
-For InfluxDB 3.0 we adopted Apache Arrow DataFusion, an existing query parser, planner, and executor, as our core engine. That was in mid-2020, and over the course of the last three years there have been significant contributions from an active and growing community. While we remain major contributors to the project, it is continuously getting feature enhancements and performance improvements from a worldwide pool of developers. Our efforts on the Flux implementation would simply not be able to keep pace with the much larger group of DataFusion developers.
-
-With InfluxDB 3.0 being a ground-up rewrite of the database in a new language (from Go to Rust), we weren't able to bring the Flux implementation along. We were able to support InfluxQL natively by writing a language parser in Rust and then converting InfluxQL queries into logical plans that our new native query engine, Apache Arrow DataFusion, can understand and process. We also had to add new capabilities to the query engine to support some of the time series queries that InfluxQL enables. This is an effort that took a little over a year and is still ongoing. This approach means that contributions to DataFusion become improvements to InfluxQL as well, given that it is the underlying engine.
-
-Initially, our plan to support Flux in 3.0 was to do so through a lower-level API that the database would provide. In our Cloud2 product, Flux processes connect to the InfluxDB 1 & 2 TSM storage engine through a gRPC API. We built support for this in InfluxDB 3.0 and started testing with mirrored production workloads. We quickly found that this interface performed poorly and had unforeseen bugs, eliminating it as a viable option for Flux users to bring their scripts over to 3.0. This is due to the API being designed around the TSM storage engine's very specific format, which the 3.0 engine is unable to serve up as quickly.
-
-We'll continue to support Flux for our users and customers. But given that Flux is a scripting language in addition to being a query language, planner, optimizer, and execution engine, a Rust-native version of it is likely out of reach. And because the surface area of the language is so large, such an effort would be unlikely to yield a version that is compatible enough to run existing Flux queries without modification or rewrites, which would defeat the purpose of the effort to begin with.
-
-For Flux to have a path forward, we believe the best plan is to update the core engine so that it can use Flight SQL to talk to InfluxDB 3.0. This would yield an architecture in which independent processes serving the InfluxDB 2.x query API (i.e. Flux) convert whatever portion of a Flux script is a query into a SQL query sent to the InfluxDB 3.0 process, with the result post-processed by the Flux engine.
-
-This is likely not a small effort, as the Flux engine is built around InfluxDB 2.0's TSM storage engine and the representation of all data as individual time series. InfluxDB 3.0 doesn't keep a concept of series, so the SQL query would either have to do a bunch of work to return individual series, or the Flux engine would have to construct the series from the query response. For the moment, we're focused on improvements to the core SQL (and, by extension, InfluxQL) query engine and experience, both in InfluxDB 3.0 and in DataFusion.
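
To make that concrete with the example schema used later in this README: a 1.x/2.x "series" is all points sharing one measurement and tag set, so reconstructing series from 3.0 output means partitioning rows by their tag columns. A hedged sketch using the `influxdb3` client shown in Getting Started below:
```
# sketch only: sort by tag so each distinct tag set (one 1.x "series") comes back contiguously
$ influxdb3 query --dbname mydb "SELECT mytag1, time, value FROM mymeas ORDER BY mytag1, time"
```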
-
-We may come back to this effort in the future, but we don't want to stop the community from self-organizing an effort to bring Flux forward. The Flux runtime and language exist as permissively licensed open source [here](https://github.com/InfluxCommunity/flux). We've also created a community fork of Flux where the community can self-organize and move development forward without requiring our code review process. There are already a few community members working on this potential path forward. If you're interested in helping with this effort, please speak up on this tracked issue.
-
-We realize that Flux still has an enthusiastic, if small, user base and we'd like to figure out the best path forward for these users. For now, with our limited resources, we think focusing our efforts on improvements to Apache Arrow DataFusion and InfluxDB 3.0's usage of it is the best way to serve our users who are willing to convert to either InfluxQL or SQL. In the meantime, we'll continue to maintain Flux with security and critical fixes for our users and customers.
-
-## Install
-
-> [!NOTE]
-> InfluxDB Edge is pre-release and in active development. Build artifacts are not yet generally available and official installation instructions will be forthcoming as InfluxDB Edge approaches release. For the time being, a Dockerfile is provided and can be adapted or used for inspiration by intrepid users.
-
-## Getting Started
-
-> [!NOTE]
-> InfluxDB Edge is pre-release and in active development. These usage instructions are designed only for a quick start and are subject to change.
-
-The following assumes that `influxdb3` exists in your current directory. Please adjust as necessary for your environment. Help is available via `influxdb3 --help`.
-
-### Starting the server
-
-`influxdb3` has many different configuration options, which may be controlled via command line arguments or environment variables. See `influxdb3 serve --help` for details.
-
-For example, to start the server with a local filesystem object store (by default it listens on port 8181 with no authentication):
-```
-$ export INFLUXDB_IOX_OBJECT_STORE=file
-$ export INFLUXDB_IOX_DB_DIR=/path/to/influxdb3
-$ influxdb3 serve
-2024-05-10T15:54:24.446764Z INFO influxdb3::commands::serve: InfluxDB3 Edge server starting git_hash=v2.5.0-14032-g1b7cd1976d65bc7121df7212cb234fca5c5fa899 version=0.1.0 uuid=3c0e0e61-ae4e-46cb-bba3-9ccd7b5feecd num_cpus=20 build_malloc_conf=
-2024-05-10T15:54:24.446947Z INFO clap_blocks::object_store: Object Store db_dir="/path/to/influxdb3" object_store_type="Directory"
-2024-05-10T15:54:24.447035Z INFO influxdb3::commands::serve: Creating shared query executor num_threads=20
-...
-```
-
-### Interacting with the server
-The `influxdb3` binary is also used as a client.
-
-#### Writing
-```
-# write help
-$ influxdb3 write --help
-...
-
-# create some points in a file for the client to write from using line protocol
-$ cat > ./file.lp <<EOM
-mymeas,mytag1=sometag value=0.54 $(date +%s%N)
-mymeas,mytag1=sometag value=0.55 $(date +%s%N)
-EOM
-
-# perform the write
-$ influxdb3 write --dbname mydb -f ./file.lp
-success
-```
-
-See the [InfluxDB documentation](https://docs.influxdata.com/influxdb/cloud-dedicated/reference/syntax/line-protocol/) for details on line protocol.
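
For readers new to line protocol, the lines in `file.lp` above break down as follows (same example data, repeated here only to label the parts):
```
# <measurement>,<tag_key>=<tag_value> <field_key>=<field_value> <optional timestamp, nanoseconds by default>
mymeas,mytag1=sometag value=0.54 $(date +%s%N)
```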
-
-`curl` can also be used with the `/api/v3/write_lp` API (subject to change):
-```
-$ export URL="http://localhost:8181"
-$ curl -s -X POST "$URL/api/v3/write_lp?db=mydb" --data-binary @file.lp
-```

-#### Querying
-```
-# query help
-$ influxdb3 query --help
-...
+## Learn InfluxDB
+[Documentation](https://docs.influxdata.com/) | [Community Forum](https://community.influxdata.com/) | [Community Slack](https://www.influxdata.com/slack/) | [Blog](https://www.influxdata.com/blog/) | [InfluxDB University](https://university.influxdata.com/) | [YouTube](https://www.youtube.com/@influxdata8893)

-# perform a query with SQL
-$ influxdb3 query --dbname mydb "SELECT * from mymeas"
-+---------+-------------------------------+-------+
-| mytag1  | time                          | value |
-+---------+-------------------------------+-------+
-| sometag | 2024-05-09T21:08:52.227359715 | 0.54  |
-| sometag | 2024-05-09T21:08:52.229494943 | 0.55  |
-+---------+-------------------------------+-------+
+Try **InfluxDB Cloud** for free and get started fast with no local setup required. Click [here](https://cloud2.influxdata.com/signup) to start building your application on InfluxDB Cloud.

-# perform a query with InfluxQL
-$ influxdb3 query --lang influxql --dbname mydb "SELECT * from mymeas"
-+------------------+-------------------------------+---------+-------+
-| iox::measurement | time                          | mytag1  | value |
-+------------------+-------------------------------+---------+-------+
-| mymeas           | 2024-05-09T21:08:52.227359715 | sometag | 0.54  |
-| mymeas           | 2024-05-09T21:08:52.229494943 | sometag | 0.55  |
-+------------------+-------------------------------+---------+-------+
-```

-`curl` can also be used with the `/api/v3/query_sql` and `/api/v3/query_influxql` APIs (subject to change):
-```
-$ export URL="http://localhost:8181"
-$ curl -s -H "Content-Type: application/json" \
-  -X POST "$URL/api/v3/query_sql" \
-  --data-binary '{"db": "mydb", "q": "SELECT * from mymeas"}' | jq
-[
-  {
-    "mytag1": "sometag",
-    "time": "2024-05-09T21:08:52.227359715",
-    "value": 0.54
-  },
-  {
-    "mytag1": "sometag",
-    "time": "2024-05-09T21:08:52.229494943",
-    "value": 0.55
-  }
-]
-```
+## Installation
+We have nightly and versioned Docker images, Debian packages, RPM packages, and tarballs of InfluxDB available on the [InfluxData downloads page](https://portal.influxdata.com/downloads/). We also provide the InfluxDB command line interface (CLI) client as a separate binary available at the same location.
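
For example, a common way to run a released build locally is via Docker. This is a minimal sketch, assuming the official `influxdb` image on Docker Hub and the InfluxDB 2.x default port; check the downloads page for the tag that matches the version you want:
```
# pull and run a released InfluxDB 2.x image (illustrative container name; v2 listens on 8086 by default)
$ docker pull influxdb:2
$ docker run -d --name influxdb -p 8086:8086 influxdb:2
```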

-By default, `application/json` is returned. This can be adjusted via the `format` key, which can be one of `json` (`application/json`; the default), `pretty` (`application/plain`), or `parquet` (`application/vnd.apache.parquet`). E.g., `influxql` with the `pretty` format:
-```
-$ curl -s -H "Content-Type: application/json" \
-  -X POST "$URL/api/v3/query_influxql" \
-  --data-binary '{"db": "mydb", "q": "SELECT * from mymeas", "format": "pretty"}'
-+------------------+-------------------------------+---------+-------+
-| iox::measurement | time                          | mytag1  | value |
-+------------------+-------------------------------+---------+-------+
-| mymeas           | 2024-05-09T21:08:52.227359715 | sometag | 0.54  |
-| mymeas           | 2024-05-09T21:08:52.229494943 | sometag | 0.55  |
-+------------------+-------------------------------+---------+-------+
-```
+- For v1 installation, use the [main 1.x branch](https://github.com/influxdata/influxdb/tree/master-1.x) or [install InfluxDB OSS directly](https://docs.influxdata.com/influxdb/v1/introduction/install/#installing-influxdb-oss).
+- For v2 installation, use the [main 2.x branch](https://github.com/influxdata/influxdb/tree/main-2.x).
+- **v3 development is on this main branch. This project is actively under development and is not considered stable.**

-### Enabling HTTP authorization
-By default, `influxdb3` allows all HTTP requests. It supports authorization via a single "all or nothing" token: when the server is started with `--bearer-token` (or `INFLUXDB3_BEARER_TOKEN=...` in its environment), the client needs to provide an `Authorization: Bearer <token>` header with each HTTP request. The server authorizes the request by calculating the SHA512 of the token and checking that it matches the value specified with `--bearer-token/INFLUXDB3_BEARER_TOKEN`. The `influxdb3 create token` command can be used to help with this. E.g.:
-```
-$ influxdb3 create token
-Token: apiv3_<token>
-Hashed Token: <sha512 of raw token>
+If you are interested in building from source, see the [building from source](https://github.com/influxdata/influxdb/blob/main-2.x/CONTRIBUTING.md#building-from-source) guide for contributors.

-Start the server with `influxdb3 serve --bearer-token "<sha512 of raw token>"`
+To begin using InfluxDB, visit our [Getting Started with InfluxDB](https://docs.influxdata.com/influxdb/v1/introduction/get-started/) documentation.

-HTTP requests require the following header: "Authorization: Bearer apiv3_<token>"
-This will grant you access to every HTTP endpoint or deny it otherwise
-```

-When the server is started in this way, all HTTP requests must provide the correct authorization header. E.g.:
-```
-# write
-$ export TOKEN="apiv3_<token>"
-$ influxdb3 w --dbname mydb --token "$TOKEN" -f ./file.lp
-success
+## License
+The open source software we build is licensed under the permissive MIT and Apache 2 licenses. We've long held the view that our open source code should be truly open and our commercial code should be separate and closed.

-# perform a query with SQL using influxdb3 client
-$ influxdb3 q --dbname mydb --token "$TOKEN" "SELECT * from mymeas"

-# perform a query with SQL using curl
-$ export URL="http://localhost:8181"
-$ curl -s -H "Content-Type: application/json" \
-  -H "Authorization: Bearer $TOKEN" \
-  -X POST "$URL/api/v3/query_sql" \
-  --data-binary '{"db": "mydb", "q": "SELECT * from mymeas", "format": "pretty"}'
-```
+## Interested in joining the team building InfluxDB?
+Check out current job openings at [www.influxdata.com/careers](https://www.influxdata.com/careers) today!

assets/influxdb-logo-dark.png (3.22 KB)

assets/influxdb-logo.png (10.1 KB)

0 commit comments
