Releases: risingwavelabs/risingwave
v1.3.0
For installation and running instructions, see Get started.
Main changes
SQL features
- SQL commands
- SQL functions & operators
  - Supports `array_min`. #12071
  - Supports `array_max`. #12100
  - Supports `array_sort`. #12189
  - Supports `array_sum`. #12162
  - The `format` function supports variable inputs. #12178
  - Regular expression functions support back references, positive and negative lookahead, and positive and negative lookbehind. #12329
  - Supports the `||` operator for concatenating JSONB data. #12502
  - Supports `bool_and` and `bool_or` in materialized views. #11956
- Query syntax:
  - Supports the `WITH ORDINALITY` clause. #12273
- System catalog
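The new array functions and the `WITH ORDINALITY` clause can be sketched together as follows; the table `t` and its values are hypothetical illustrations, not from the release notes:

```sql
-- Hypothetical table for illustration.
CREATE TABLE t (xs int[]);
INSERT INTO t VALUES (ARRAY[3, 1, 2]);

-- The array functions added in this release.
SELECT array_min(xs), array_max(xs), array_sum(xs), array_sort(xs) FROM t;

-- WITH ORDINALITY numbers the rows produced by a set-returning function.
SELECT elem, ord FROM t, unnest(t.xs) WITH ORDINALITY AS u(elem, ord);
```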
Sources & sinks
- Generated columns defined with non-deterministic functions cannot be part of the primary key. #12181
- Adds a new `properties.enable.auto.commit` parameter for the Kafka consumer, which sets the `enable.auto.commit` parameter for the Kafka client. #12223
- Adds the `privatelink.endpoint` parameter to the `WITH` clause, to support private links for the Kafka connector on GCP and AWS. #12266
- Adds the parameters `message.timeout.ms` and `max.in.flight.requests.per.connection` for Kafka sources. #12574
- Allows the Kinesis source to start ingesting data from a specific timestamp. `sequence_number` is no longer supported as a startup mode option. #12241
- Allows an optional `FORMAT DEBEZIUM ENCODE JSON` after the connector definition of CDC tables, and an optional `FORMAT NATIVE ENCODE NATIVE` after the connector definition of Nexmark sources or tables. #12306
- Allows multiple URLs when defining schema registries. #11982
- Adds support for sinking data to versions 7 and 8 of Elasticsearch. #10357, #10415
- Adds support for sinking append-only data to the NATS messaging system. #11924
- Adds support for sinking data to Doris. #12336
- Adds support for sinking data to Apache Pulsar. #12286
- Adds support for sinking data to Cassandra and ScyllaDB. #11878
- Adds support for creating upsert Iceberg sinks. #12576
- Supports specifying the `sink_decouple` session variable as `default`, `true` and `enable`, or `false` and `disable`. #12544
- A `varchar` column in RisingWave can sink into a `uuid` column in Postgres. #12704
- New syntax for specifying the data format and data encode when creating a Kafka, Kinesis, or Pulsar sink. #12556
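A minimal sketch of setting the new session variable; per the note above, `true`/`enable` and `false`/`disable` are interchangeable:

```sql
-- Decouple sink delivery from the streaming job for this session.
SET sink_decouple = 'enable';

-- Revert to the cluster-wide default behavior.
SET sink_decouple = 'default';
```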
Administration & observability
Adds `information_schema.views`, which contains information about views defined in the database.
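For example, a quick sketch of querying the new catalog view; the column names assume the PostgreSQL `information_schema.views` convention and should be checked against the catalog reference:

```sql
-- List views defined in the current database.
SELECT table_schema, table_name FROM information_schema.views;
```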
Full Changelog: v1.2.0...v1.3.0
v1.2.0
For installation and running instructions, see Get started.
Main changes
SQL features
- SQL commands:
  - Breaking change: The syntax for emit-on-window-close has changed. If your application contains integration code, please update your code accordingly. #11363
    In v1.1: `CREATE MATERIALIZED VIEW mv EMIT ON WINDOW CLOSE AS SELECT ...;`
    In v1.2 and onwards: `CREATE MATERIALIZED VIEW mv AS SELECT ... EMIT ON WINDOW CLOSE;`
  - Privileges for tables can now be granted or revoked. #11725
  - The default `DISTRIBUTED BY` columns have been changed from all of the index columns to the first index column. #11865
  - Supports `ALTER SOURCE ADD COLUMN`. #11350
  - Supports `SHOW JOBS` and `CANCEL JOBS`, with which you can show in-progress streaming jobs and cancel jobs by their IDs. #11854
  - Supports `[I]LIKE` in `SHOW` commands. #11791
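A sketch of the new job-management commands; the job IDs shown are hypothetical:

```sql
-- List in-progress streaming jobs together with their IDs.
SHOW JOBS;

-- Cancel one or more jobs by ID.
CANCEL JOBS 1010, 1012;
```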
- SQL functions & operators
  - Supports lambda functions via `array_transform`. #11888, #11937
  - Supports `to_date()`. #11241
  - The `to_char()` function now supports `timestamptz` input. #11778
  - Supports `scale`, `min_scale`, and `trim_scale`. #11663
  - Supports `regexp_replace`. #11819
  - Supports `regexp_count`. #11975
  - Supports `[NOT] ILIKE` expressions. #11743
  - Adds support for the `[!]~~[*]` operators. They'll be parsed to `[NOT] [I]LIKE` expressions. #11748
  - Supports the `IS JSON` predicate. #11831
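A hedged sketch of the new lambda support in `array_transform`; the input array is illustrative:

```sql
-- Apply a lambda (|x| ...) to every element of an array.
SELECT array_transform(ARRAY[1, 2, 3], |x| x * 2);
```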
- Query syntax:
- System catalog
- Adds support for transactions for single-table CDC data. #11453
Sources & sinks
- Adds a new parameter `schema.registry.name.strategy` to the Kafka connector, with which you can specify naming strategies for schema registries. #11384
- Breaking change: Implements a Rust-native Iceberg sink connector to improve stability and performance. The connector introduces new parameters. Applications that rely on the previous version of the feature (specifically, the version included in RisingWave v1.0.0 and v1.1) may no longer function correctly. To restore functionality, carefully review the syntax and parameters outlined in Sink data to Iceberg and revise your code as needed. #11326
- Adds support for sinking data to ClickHouse. #11240
- Experimental: An enhancement has been made to the mysql-cdc connector to improve data ingestion performance by optimizing the data backfilling logic for CDC tables. This feature is not enabled by default. To enable it, run `SET cdc_backfill="true";`. #11707
- Adds a parameter `client.id` for Kafka sources. #11911
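A hypothetical sketch of a Kafka source using the new parameter; the topic and broker values are placeholders, and the `properties.`-prefixed key form is an assumption (mirroring other Kafka client properties) that should be confirmed in the connector documentation:

```sql
CREATE SOURCE kafka_src (id int, payload varchar)
WITH (
    connector = 'kafka',
    topic = 'my_topic',                          -- placeholder
    properties.bootstrap.server = 'broker:9092', -- placeholder
    properties.client.id = 'rw-consumer-1'       -- assumed key form
) FORMAT PLAIN ENCODE JSON;
```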
Deployment
- Supports HDFS as the storage backend for deployments via Docker Compose. #11632
Administration & observability
- Adds a new system parameter `max_concurrent_creating_streaming_jobs`, with which users can specify the maximum number of streaming jobs that can be created concurrently. #11601
- Improves the calculation logic of the Mem Table Size (Max) metric in RisingWave Dashboard. #11442
- Adds new metrics to RisingWave Dashboard:
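A sketch of adjusting the new system parameter, assuming RisingWave's PostgreSQL-style `ALTER SYSTEM SET` applies to it; verify against the system parameter reference:

```sql
-- Allow up to 4 streaming jobs to be created concurrently.
ALTER SYSTEM SET max_concurrent_creating_streaming_jobs = 4;
```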
Full Changelog: v1.1.0...v1.2.0
v1.1.4
release v1.1.4
v1.1.3
release v1.1.3
v1.1.2
release v1.1.2
v1.1.1
release v1.1.1
v1.1.0
For installation and running instructions, see Get started.
Main changes
SQL features
- SQL commands:
  - `DROP` commands now support the `CASCADE` option, which drops the specified item and all its dependencies. #11250
  - `CREATE TABLE` now supports the `APPEND ONLY` clause, allowing the definition of watermark columns on the table. #11233
  - Supports the new commands `START TRANSACTION`, `BEGIN`, and `COMMIT` for read-only transactions. #10735
  - Supports `SHOW CLUSTER` to show the details of your RisingWave cluster, including the address of the cluster, its state, the parallel units it is using, and whether it's streaming data, serving data, or unschedulable. #10656, #10932
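A sketch of the new read-only transaction commands; the table name is hypothetical, and the explicit `READ ONLY` modifier is an assumption to match the read-only semantics described above:

```sql
START TRANSACTION READ ONLY;
SELECT count(*) FROM my_table;  -- reads see a consistent snapshot
COMMIT;
```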
- SQL functions:
  - Supports new window functions: `lead()` and `lag()`. #10915
  - Supports new aggregate functions: `first_value()` and `last_value()`, which retrieve the first and last values within a specific ordering from a set of rows. #10740
  - Supports the `grouping()` function to determine if a column or expression in the `GROUP BY` clause is part of the current grouping set or not. #11006
  - Supports the `set_config()` system administration function. #11147
  - Supports the `sign()` mathematical function. #10819
  - Supports `string_agg()` with `DISTINCT` and `ORDER BY`, enabling advanced string concatenation with distinct values and custom sorting. #10864
  - Supports the co-existence of `string_agg()` and other aggregations with `DISTINCT`. #10864
  - Supports the `zone_string` parameter in the `date_trunc()`, `extract()`, and `date_part()` functions, ensuring compatibility with PostgreSQL. #10480
    - Breaking change: Previously, when the input for `date_trunc` was actually a date, the function would cast it to a timestamp and record the choice in the query plan. After this release, new query plans cast the input to `timestamptz` instead. As a result, some old SQL queries, especially those saved as views, may fail to bind correctly and require type adjustments. Note that old query plans will continue to work, because the casting choice is recorded as a cast to timestamp.
      Before this release:
      ```sql
      SELECT date_trunc('month', date '2023-03-04');
          date_trunc
      ---------------------------
       2023-03-01 00:00:00
      (1 row)
      ```
      After this release:
      ```sql
      SELECT date_trunc('month', date '2023-03-04');
          date_trunc
      ---------------------------
       2023-03-01 00:00:00+00:00
      (1 row)
      ```
      Now the result of `date_trunc` includes the timezone offset (`+00:00`) in the output, making it consistent with the behavior in PostgreSQL.
  - `round()` now accepts a negative value and rounds it to the left of the decimal point. #10961
  - `to_timestamp()` now returns `timestamptz`. #11018
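A sketch of `string_agg()` with `DISTINCT` and `ORDER BY`; the table and column names are hypothetical:

```sql
-- Concatenate distinct tags in sorted order into one string.
SELECT string_agg(DISTINCT tag, ', ' ORDER BY tag) FROM post_tags;
```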
- Query clauses
  - `SELECT` now supports the `EXCEPT` clause, which excludes specific columns from the result set. #10438, #10723
  - `SELECT` now supports the `GROUPING SETS` clause, which allows users to perform aggregations on multiple levels of grouping within a single query. #10807
  - Supports index selection for temporal joins. #11019
  - Supports `CUBE` in group-by clauses to generate multiple grouping sets. #11262
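The `GROUPING SETS` and `CUBE` clauses can be sketched as follows; the table is hypothetical:

```sql
-- Aggregate at several grouping levels in a single query.
SELECT brand, size, sum(sales)
FROM items_sold
GROUP BY GROUPING SETS ((brand), (size), ());

-- CUBE (brand, size) expands to all grouping combinations:
-- (brand, size), (brand), (size), ().
SELECT brand, size, sum(sales)
FROM items_sold
GROUP BY CUBE (brand, size);
```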
- Patterns
  - Supports multiple rank function calls in TopN by group. #11149
- System catalog
  - Supports querying `created_at` and `initialized_at` from RisingWave relations such as sources, sinks, and tables in RisingWave catalogs. #11199
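A sketch of reading the new columns; `rw_catalog.rw_tables` is an assumed relation name and should be checked against the catalog reference:

```sql
-- When each table was created and initialized.
SELECT name, created_at, initialized_at FROM rw_catalog.rw_tables;
```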
Connectors
- Supports specifying Kafka parameters when creating a source or sink. #11203
- JDBC sinks used for upserts must specify the downstream primary key via the `primary_key` option. #11042
- `access_key` and its corresponding `secret_key` are now mandatory for all AWS authentication components. #11120
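A hypothetical sketch of an upsert JDBC sink showing the now-required `primary_key` option; the URL and object names are placeholders:

```sql
CREATE SINK pg_sink FROM my_mv
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://db:5432/dev?user=rw&password=rw', -- placeholder
    table.name = 'target_table',  -- placeholder
    type = 'upsert',
    primary_key = 'id'            -- downstream primary key, required for upserts
);
```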
Full Changelog: v1.0.0...v1.1.0
v1.0.0
For installation and running instructions, see Get started.
Main changes
SQL features
- SQL command:
- SQL function:
  - Adds the `current_setting()` function to get the current value of a configuration parameter. #10051
  - Adds new array functions: `array_position()`, `array_replace()`, `array_ndims()`, `array_lower()`, `array_upper()`, `array_length()`, and `array_dims()`. #10166, #10197
  - Adds new aggregate functions: `percentile_cont()`, `percentile_disc()`, and `mode()`. #10252
  - Adds new system functions: `user()`, `current_user()`, and `current_role()`. #10366
  - Adds new string functions: `left()` and `right()`. #10765
  - Adds new bytea functions: `octet_length()` and `bit_length()`. #10462
  - `array_length()` and `cardinality()` return `integer` instead of `bigint`. #10267
  - Supports the `row_number` window function that doesn't match the TopN pattern. #10869
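A sketch of the new ordered-set aggregates; the table and columns are hypothetical:

```sql
-- Median price and most frequent category.
SELECT
    percentile_cont(0.5) WITHIN GROUP (ORDER BY price) AS median_price,
    mode() WITHIN GROUP (ORDER BY category) AS top_category
FROM orders;
```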
- User-defined function:
- System catalog:
- Supports `GROUP BY` output alias or index. #10305
- Supports using scalar functions in the `FROM` clause. #10317
- Supports tagging the created VPC endpoints when creating a PrivateLink connection. #10582
Connectors
- Breaking change: When creating a source or table with a connector whose schema is auto-resolved from an external format file, the syntax for defining primary keys within column definitions is replaced with the table constraint syntax. #10195
  Old syntax:
  ```sql
  CREATE TABLE debezium_non_compact (order_id int PRIMARY KEY) WITH (
      connector = 'kafka',
      kafka.topic = 'debezium_non_compact_avro_json',
      kafka.brokers = 'message_queue:29092',
      kafka.scan.startup.mode = 'earliest'
  ) ROW FORMAT DEBEZIUM_AVRO ROW SCHEMA LOCATION CONFLUENT SCHEMA REGISTRY 'http://message_queue:8081';
  ```
  New syntax:
  ```sql
  CREATE TABLE debezium_non_compact (PRIMARY KEY (order_id)) WITH ( ...
  ```
- Breaking change: Modifies the syntax for specifying data and encoding formats for a source in `CREATE SOURCE` and `CREATE TABLE` commands. For v1.0.0, the old syntax is still accepted but will be deprecated in the next release. #10768
  Old syntax - part 1:
  ```sql
  ROW FORMAT data_format [ MESSAGE 'message' ] [ ROW SCHEMA LOCATION ['location' | CONFLUENT SCHEMA REGISTRY 'schema_registry_url' ] ];
  ```
  New syntax - part 1:
  ```sql
  FORMAT data_format ENCODE data_encode ( message = 'message', schema_location = 'location' | confluent_schema_registry = 'schema_registry_url' );
  ```
  Old syntax - part 2:
  ```sql
  ROW FORMAT csv WITHOUT HEADER DELIMITED BY ',';
  ```
  New syntax - part 2:
  ```sql
  FORMAT PLAIN ENCODE CSV ( without_header = 'true', delimiter = ',' );
  ```
- Supports sinking data to AWS Kinesis. #10437
- Supports `BYTES` as a row format. #10592
- Supports specifying a schema for the PostgreSQL sink. #10576
- Supports using a user-provided publication to create a PostgreSQL CDC table. #10804
Full Changelog: v0.19.0...v1.0.0
v0.19.3
release v0.19.3
v0.19.2
release v0.19.2