Releases: risingwavelabs/risingwave
v1.0.0
For installation and running instructions, see Get started.
Main changes
SQL features
- SQL command:
- SQL function:
  - Adds the current_setting() function to get the current value of a configuration parameter. #10051
  - Adds new array functions: array_position(), array_replace(), array_ndims(), array_lower(), array_upper(), array_length(), and array_dims(). #10166, #10197
  - Adds new aggregate functions: percentile_cont(), percentile_disc(), and mode() (see the example after this list). #10252
  - Adds new system functions: user(), current_user(), and current_role(). #10366
  - Adds new string functions: left() and right(). #10765
  - Adds new bytea functions: octet_length() and bit_length(). #10462
  - array_length() and cardinality() now return integer instead of bigint. #10267
  - Supports the row_number window function that doesn't match the TopN pattern. #10869
- User-defined function:
- System catalog:
- Supports GROUP BY output alias or index (see the example after this list). #10305
- Supports using scalar functions in the FROM clause. #10317
- Supports tagging the created VPC endpoints when creating a PrivateLink connection. #10582
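To illustrate two of the items above, the sketch below combines the new percentile_cont() aggregate with GROUP BY on an output alias. The orders table and its columns are hypothetical, and the WITHIN GROUP syntax is assumed to follow the PostgreSQL convention.

```sql
-- Hypothetical table: orders(category varchar, amount double precision).
-- Median order amount per category, grouping by the output alias "c".
SELECT
    category AS c,
    percentile_cont(0.5) WITHIN GROUP (ORDER BY amount) AS median_amount
FROM orders
GROUP BY c;
```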
Connectors
- Breaking change: When creating a source or table with a connector whose schema is auto-resolved from an external format file, the syntax for defining primary keys within column definitions is replaced with the table constraint syntax. #10195
  Old syntax:
```sql
CREATE TABLE debezium_non_compact (order_id int PRIMARY KEY) WITH (
    connector = 'kafka',
    kafka.topic = 'debezium_non_compact_avro_json',
    kafka.brokers = 'message_queue:29092',
    kafka.scan.startup.mode = 'earliest'
) ROW FORMAT DEBEZIUM_AVRO ROW SCHEMA LOCATION CONFLUENT SCHEMA REGISTRY 'http://message_queue:8081';
```
  New syntax:
```sql
CREATE TABLE debezium_non_compact (PRIMARY KEY(order_id)) WITH ( ...
```
- Breaking change: Modifies the syntax for specifying data and encoding formats for a source in CREATE SOURCE and CREATE TABLE commands. For v1.0.0, the old syntax is still accepted but will be deprecated in the next release (see the sketch after this list). #10768
  Old syntax - part 1:
```sql
ROW FORMAT data_format [ MESSAGE 'message' ] [ ROW SCHEMA LOCATION ['location' | CONFLUENT SCHEMA REGISTRY 'schema_registry_url' ] ];
```
  New syntax - part 1:
```sql
FORMAT data_format ENCODE data_encode ( message = 'message', schema_location = 'location' | confluent_schema_registry = 'schema_registry_url' );
```
  Old syntax - part 2:
```sql
ROW FORMAT csv WITHOUT HEADER DELIMITED BY ',';
```
  New syntax - part 2:
```sql
FORMAT PLAIN ENCODE CSV ( without_header = 'true', delimiter = ',' );
```
- Supports sinking data to AWS Kinesis. #10437
- Supports BYTES as a row format. #10592
- Supports specifying the schema for the PostgreSQL sink. #10576
- Supports using a user-provided publication to create a PostgreSQL CDC table. #10804
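To make the new FORMAT ... ENCODE syntax concrete, here is a minimal sketch of a Kafka source that uses it. The source name, columns, topic, and broker address are placeholders; the connector properties mirror the Kafka example earlier in this section.

```sql
-- Placeholder source, columns, topic, and broker address.
CREATE SOURCE orders_src (order_id int, amount double precision) WITH (
    connector = 'kafka',
    kafka.topic = 'orders',
    kafka.brokers = 'message_queue:29092',
    kafka.scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;
```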
Full Changelog: v0.19.0...v1.0.0
v0.19.3
release v0.19.3
v0.19.2
release v0.19.2
v0.19.1
release v0.19.1
v0.19.0
For installation and running instructions, see Get started.
Main changes
Installation
- Now, you can easily install RisingWave on your local machine with Homebrew by running brew install risingwave.
Administration
- Adds the pg_indexes and dattablespace system catalogs. #9844, #9822
- Now, the SHOW PARAMETERS command will display the mutability of each system parameter. #9526
SQL features
- Experimental features: Adds support for 256-bit integer. #9146, #9184, #9186, #9191, #9217
- Indexes can be created on expressions. #9142
- Adds support for expressions in aggregate function arguments. #9955
- Adds support for the VALUES clause. #8751
- Adds support for generated columns, which are special columns computed from other columns in a table or source (see the example after this list). #8700, #9580
- Adds support for using expressions in the ORDER BY and PARTITION BY clauses. #9827
- New SQL commands:
  - CREATE CONNECTION and SHOW CONNECTIONS: Creates an AWS PrivateLink connection and shows all available connections. #8907
  - DROP CONNECTION: Drops a connection. #9128
  - SHOW FUNCTIONS: Shows existing user-defined functions. #9398
  - DROP FUNCTION: Drops a user-defined function. #9561
  - SHOW CREATE SOURCE and SHOW CREATE SINK: Shows the SQL statement used to create a source or sink. #9083
  - SHOW INDEXES: Shows all indexes on a particular table. #9835
- SQL functions:
  - Adds support for trigonometric functions. #8838, #8918, #9064, #9203, #9259
  - Adds support for degrees and radians functions. #9108
  - Adds support for the lag() and lead() window functions and the OVER() and EMIT ON WINDOW CLOSE clauses. #9597, #9622, #9701
  - Adds support for new aggregate functions, including bool_and, bool_or, jsonb_agg, and jsonb_object_agg. #9452
  - Adds support for max(), min(), and count() for timestamptz data. #9165
  - Adds support for microseconds and milliseconds for to_char() and to_timestamp(). #9257
  - Adds support for multibyte Unicode in the overlay() and ascii() functions. #9321
  - Adds support for the string_to_array() function. #9289
  - Adds support for array_positions(). #9152
  - Adds support for cardinality(). #8867
  - Adds support for array_remove(). #9116
  - Adds support for trim_array(). #9265
  - Adds support for array range access. #9362
  - Adds support for JSONB in UDF. #9103
  - Adds support for btrim() and updates trim() to PostgreSQL standard syntax. #8985
  - Adds support for date_part(). #8830
  - Expands extract() with more fields. #8830
  - Adds support for proctime(), which returns the system time with time zone when a record is processed. #9088
  - Adds support for translate(), @(), and ceiling(). #8998
  - Adds support for encode() and decode(). #9351
  - Adds support for the intersect operator. #9573
  - Adds support for the default escape \ in a LIKE expression. #9624
  - Adds support for the IS [NOT] UNKNOWN comparison predicate. #9965
  - Adds support for the starts_with() string function and ^@. #9967
  - Adds support for the unary trunc, ln, log10 (log), exp, and cbrt (||/) mathematical functions. #9991
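As an illustration of the generated column support noted above, the sketch below computes one column from another at ingestion time. The table and column names are hypothetical.

```sql
-- Hypothetical table: v2 is a generated column derived from v1.
CREATE TABLE measurements (
    v1 int,
    v2 int AS v1 + 1
);
```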
Connectors
- Adds support for ingesting CDC data from TiDB and sinking data to TiDB with the JDBC connector. #8708
- Adds support for ingesting CDC data from Citus. #8988
- Adds support for loading Pulsar secret key file from AWS S3. #8428, #8222
- Adds support for using an established AWS PrivateLink connection in a CREATE SOURCE, CREATE TABLE, or CREATE SINK statement for a Kafka source/sink. #9119, #9128, #9728, #9263
- Deprecates the use_transaction field in the Kafka sink connector. #9207
- Adds support for the zstd compression type for the Kafka connector. #9297
- Deprecates the upsert property in the Kafka connector as it can be inferred from the row format. #9457
- Adds a new field, properties.sync.call.timeout, in the WITH clause of the Kafka source connector to control the timeout. #9005
- Adds support for a new row format, DEBEZIUM_MONGO_JSON, in the Kafka source connector. #9250
- Adds CSV format support for the Kafka source connector. #9875
Cluster configuration changes
- --data_directory and --state_store must be specified on the meta node CLI, or the cluster will fail to start. #9170
- Clusters will refuse to start if the specified object store URL, identified by state_store and data_directory, is occupied by another instance. Do not share the object store URL between multiple clusters. #9642
Full Changelog: v0.18.0...v0.19.0
v0.18.0
For installation and running instructions, see Get started.
Main changes
Starting from this version, we’ll respect semantic versioning conventions by using the middle number (y, instead of z, in x.y.z) to indicate minor versions. That is why this is v0.18.0, not v0.1.18.
Administration and troubleshooting
- Improves error messages by including the location of the statement in question. #8646
- Initial values of immutable system parameters can be specified via the meta-node command line. The initial values provided in the configuration file will be ignored. #8366
SQL features
- Adds initial support for user-defined functions. #8597 #8644 #8255 #7943
- Adds support for JSONB data type. #8256 #8181
- Adds support for NULLS { FIRST | LAST } in ORDER BY clauses. #8485
- New commands:
- New functions:
  - array_length: Returns the length of an array. #8636
  - String functions implemented with the help of ChatGPT. #8767 #8839
    - chr(integer) -> varchar
    - starts_with(varchar, varchar) -> boolean
    - initcap(varchar) -> varchar
    - lpad(varchar, integer) -> varchar
    - lpad(varchar, integer, varchar) -> varchar
    - rpad(varchar, integer) -> varchar
    - rpad(varchar, integer, varchar) -> varchar
    - reverse(varchar) -> varchar
    - strpos(varchar, varchar) -> integer
    - to_ascii(varchar) -> varchar
    - to_hex(integer) -> varchar
    - to_hex(bigint) -> varchar
- Improves the data type values of columns returned by DESCRIBE. #8819
- UPDATE commands cannot update primary key columns. #8569
- Adds support for microsecond precision for intervals. #8501
- Adds an optional parameter offset to the tumble() and hop() functions. #8490
- Data records that have null time values will be ignored by time window functions. #8146
- Improves the behavior of the exp operator when the operand is too large or too small. #8309
- Supports process-time temporal join, which enables joining an append-only stream (such as Kafka) with a temporal table (e.g., a materialized view backed by MySQL CDC). This feature ensures that any updates made to the temporal table will not affect previous results obtained from the temporal join. Supports the FOR SYSTEM_TIME AS OF NOW() syntax to express a process-time temporal join (see the sketch after this list). #8480
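A minimal sketch of such a process-time temporal join, assuming a hypothetical append-only orders stream and a products table (e.g., a MySQL CDC-backed materialized view) joined on its id column:

```sql
-- Hypothetical append-only stream `orders` and temporal table `products`.
CREATE MATERIALIZED VIEW order_prices AS
SELECT orders.order_id, orders.product_id, products.price
FROM orders
JOIN products FOR SYSTEM_TIME AS OF NOW()
ON orders.product_id = products.id;
```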
Connectors
- Adds a new field, basetime, to the load generator connector for generating timestamp data. The load generator will take this field as now and generate data accordingly. #8619
- Empty cells in CSV are now parsed as null. #8709
- Adds the Iceberg connector. #8508
- Adds support for the upsert type to the Kafka sink connector. #8168
- Removes the message name parameter for Avro data. #8124
- Adds support for AWS PrivateLink for Kafka source connector. #8247
Full Changelog: v0.1.17...v0.18.0
v0.1.17
For installation and running instructions, see Get started.
Main changes
Administration
- Adds a system catalog view, rw_catalog.rw_ddl_progress, with which users can view the progress of a CREATE INDEX, CREATE SINK, or CREATE MATERIALIZED VIEW statement. #7914
- Adds the pg_conversion and pg_enum system catalogs. #7964, #7706
SQL features
- Adds the exp() function. #7971
- Adds the pow() function. #7789
- Adds support for displaying primary keys in EXPLAIN statements. #7590
- Adds support for descending order in CREATE INDEX statements (see the example after this list). #7822
- Adds SHOW PARAMETERS and ALTER SYSTEM commands to display and update system parameters. #7882, #7913
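For instance, a descending-order index can now be declared as in the sketch below; the table and column names are hypothetical.

```sql
-- Hypothetical table `orders`; index the amount column in descending order.
CREATE INDEX idx_orders_amount_desc ON orders (amount DESC);
```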
Connectors
- Adds a new parameter, match_pattern, to the S3 connector. With the new parameter, users can specify the pattern to filter files that they want to ingest from S3 buckets. For documentation updates, see Ingest data from S3 buckets. #7565
- Adds the PostgreSQL CDC connector. Users can use this connector to ingest data and CDC events from PostgreSQL directly. For documentation updates, see Ingest data from PostgreSQL CDC. #6869, #7133
- Adds the MySQL CDC connector. Users can use this connector to ingest data and CDC events from MySQL directly. For documentation updates, see Ingest data from MySQL CDC. #6689, #6345, #6481, #7133
- Adds the JDBC sink connector, with which users can sink data to MySQL, PostgreSQL, or other databases that are compliant with JDBC. #6493
- Adds new parameters to the Kafka sink connector.
Full Changelog: v0.1.16...v0.1.17
v0.1.16
For installation and running instructions, see Get started.
Main changes
Administration
- Adds support for aborting a query in local mode with Ctrl + C. #7444
SQL features
- Adds support for the to_timestamp function. #7060
- Adds support for the RETURNING clause in DML statements (see the example after this list). #7094
- Breaking change: Deprecates CREATE MATERIALIZED SOURCE. To create a materialized source, create a table and include the newly added connector settings. #7281, #7110
- Adds support for the c and i flags in the regex_match() and regex_matches() functions. #7135
- Adds support for SHOW CREATE TABLE. You can use this statement to show the definition of a table. #7152
- Adds support for the pg_stat_activity system catalog and several system functions. #7274
- Adds the _rw_kafka_timestamp parameter to show the timestamps of Kafka messages. Users can now specify the scope of Kafka messages by timestamps. #7275, #7150
- Adds support for displaying PostgreSQL and RisingWave versions in version(). #7314
- Adds support for displaying internal tables using the SHOW INTERNAL TABLES statement. #7348
- Adds support for SET VISIBILITY_MODE. You can use this session variable to configure whether only checkpoint data is readable for batch queries. #5850
- Adds support for SET STREAMING_PARALLELISM. You can use this session variable to configure parallelism for streaming queries. #7370
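As an illustration of the RETURNING clause mentioned above, the sketch below inserts a row and returns it in the same statement; the table and columns are hypothetical.

```sql
-- Hypothetical table `orders`; return the inserted row without a follow-up query.
INSERT INTO orders VALUES (1, 9.99) RETURNING order_id, amount;
```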
Connectors
- Adds support for generating array and struct data using the datagen connector. #7099
- Adds the S3 source connector, with which users can ingest data in CSV format from S3 locations. For data ingestion from files, CSV is the only supported format and the files must be placed on S3. #6846
Full Changelog: v0.1.15...v0.1.16
v0.1.15
For installation and running instructions, see Get started.
Main changes
Installation and deployment
- Parallelism and available memory of compute nodes are now command-line arguments and removed from the configuration file. #6767
- The default barrier interval is set to 1 second. #6553
- Adds support for meta store backup and recovery. #6737
SQL features
- Adds support for SHOW CREATE MATERIALIZED VIEW and SHOW CREATE VIEW to show how materialized and non-materialized views are defined. #6921
- Adds support for CREATE TABLE IF NOT EXISTS. #6643
- A sink can be created from a SELECT query. #6648
- Adds support for struct casting and comparison. #6552
- Adds pg_catalog views and system functions. #6982
- Adds support for CREATE TABLE AS (see the example after this list). #6798
- Adds initial support for batch queries on Kafka sources. #6474
- Adds support for SET QUERY_EPOCH to query historical data based on a meta backup. #6840
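A minimal sketch of CREATE TABLE AS, snapshotting the current contents of a hypothetical orders table into a new table:

```sql
-- Hypothetical source table `orders`; materialize its current contents into a new table.
CREATE TABLE orders_snapshot AS SELECT * FROM orders;
```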
Connectors
- Improves the handling of schema errors for Avro and Protobuf data. #6821
- Adds two options to the datagen connector to make it possible to generate increasing timestamp values. #6591
Observability
- Adds metrics for the backup manager in Grafana. #6898
- RisingWave Dashboard can now fetch data from Prometheus and visualize it in charts. #6602
Full Changelog: v0.1.14...v0.1.15
v0.1.14
For installation and running instructions, see Get started.
Main changes
SQL features
- PRIMARY KEY constraint checks can be performed on materialized sources and tables but not on non-materialized sources. For tables or materialized sources with PRIMARY KEY constraints enabled, if you insert data to an existing key, the new data will overwrite the old data. #6320 #6435
- Adds support for the timestamp with time zone data type. You can use this data type in time window functions and convert between it and timestamp (without time zone). #5855 #5910 #5968
- Adds support for the UNION and UNION ALL operators (see the example after this list). #6363 #6397
- Implements the rank() function to support a different mode of Top-N queries. #6383
- Adds support for logical views (CREATE VIEW). #6023
- Adds the date_trunc() function. #6365
- Adds the system catalog schema. #6227
- Displays error messages when users enter conflicting or redundant command options. #5933
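A small illustration of UNION ALL combining two hypothetical tables with identical schemas:

```sql
-- Hypothetical tables with identical schemas.
SELECT order_id, amount FROM orders_2021
UNION ALL
SELECT order_id, amount FROM orders_2022;
```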
Connectors
- Adds support for the Maxwell Change Data Capture (CDC) format. #6057
- Protobuf schema files can be loaded from web locations in s3://, http://, or https:// formats. #6114 #5964
- Adds support for Confluent Schema Registry for Kafka data in Avro and Protobuf formats. #6289
- Adds two options to the Kinesis connector. Users can specify the startup mode and optionally the sequence number to start with. #6317
Full Changelog: v0.1.13...v0.1.14