Version: Next

Redis

Redis sink connector

Description

Used to write data to Redis.

Key features

Options

| name | type | required | default value |
|------|------|----------|---------------|
| host | string | yes when mode=single | - |
| port | int | no | 6379 |
| key | string | yes | - |
| data_type | string | yes | - |
| batch_size | int | no | 10 |
| user | string | no | - |
| auth | string | no | - |
| db_num | int | no | 0 |
| mode | string | no | single |
| nodes | list | yes when mode=cluster | - |
| format | string | no | json |
| expire | long | no | -1 |
| support_custom_key | boolean | no | false |
| value_field | string | no | - |
| hash_key_field | string | no | - |
| hash_value_field | string | no | - |
| common-options | | no | - |

host [string]

Redis host

port [int]

Redis port

key [string]

The Redis key to write to.

For example, if you want to use the value of an upstream field as the key, set `key` to that field's name.

Upstream data is the following:

| code | data | success |
|------|------|---------|
| 200 | get success | true |
| 500 | internal error | false |

If you set `key` to `code` and `data_type` to `key`, two entries will be written to Redis:

  1. 200 -> {code: 200, data: get success, success: true}
  2. 500 -> {code: 500, data: internal error, success: false}

If you set `key` to `value` and `data_type` to `key`, only one entry will be written to Redis, because `value` does not exist among the upstream fields:

  1. value -> {code: 500, data: internal error, success: false}

See the data_type section for the specific writing rules.

The examples here use the json format for illustration; the actual output follows the user-configured `format`.

data_type [string]

Redis data type; supported values are `key`, `hash`, `list`, `set` and `zset`.

  • key

Each upstream record is written to the configured key, so later records overwrite earlier ones; only the last record is stored in the key.

  • hash

Each upstream record is split by field and written to the configured hash key; later records likewise overwrite earlier ones.

  • list

Each upstream record is appended to the configured list key.

  • set

Each upstream record is added to the configured set key.

  • zset

Each upstream record is added to the configured zset key with a score of 1. Note that Redis orders members with equal scores lexicographically, so the zset order does not necessarily follow the order of consumption.

batch_size [int]

The number of records written to Redis per batch. The batch size is only guaranteed in single mode; there is no guarantee in cluster mode.

user [string]

Redis authentication user. Required when connecting to a secured instance or cluster.

auth [string]

Redis authentication password. Required when connecting to a secured instance or cluster.

db_num [int]

Redis database index. Connects to db 0 by default.

mode [string]

Redis mode, `single` or `cluster`; default is `single`.

nodes [list]

Redis node information, used in cluster mode. Must be in the following format:

["host1:port1", "host2:port2"]

format [string]

The format of upstream data. Currently only `json` is supported (`text` may be supported later); the default is `json`.

When `format` is `json`, for example:

Upstream data is the following:

| code | data | success |
|------|------|---------|
| 200 | get success | true |

Connector will generate data as the following and write it to Redis:

```json
{"code": 200, "data": "get success", "success": "true"}
```

expire [long]

Redis expiration time in seconds. The default is -1, meaning keys do not expire automatically.
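As a sketch, a sink whose keys expire after one hour might look like this (the key name here is an illustrative placeholder):

```hocon
Redis {
  host = localhost
  port = 6379
  key = "session"
  data_type = key
  # keys written by this sink are deleted by Redis after 3600 seconds
  expire = 3600
}
```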

support_custom_key [boolean]

If true, the key can be customized with field values from the upstream data.

Upstream data is the following:

| code | data | success |
|------|------|---------|
| 200 | get success | true |
| 500 | internal error | false |

You can customize the Redis key using '{' and '}'; the field name inside '{}' is replaced by that field's value from the upstream data. For example, if you set `key` to `{code}` and `data_type` to `key`, two entries will be written to Redis:

  1. 200 -> {code: 200, data: get success, success: true}
  2. 500 -> {code: 500, data: internal error, success: false}

A Redis key can combine fixed and variable parts, joined by ':'. For example, if you set `key` to `code:{code}` and `data_type` to `key`, two entries will be written to Redis:

  1. code:200 -> {code: 200, data: get success, success: true}
  2. code:500 -> {code: 500, data: internal error, success: false}

value_field [string]

The upstream field whose value is written to Redis; supported when `data_type` is `key`, `list`, `set` or `zset`.

For example, when `key` is `value`, `value_field` is `data`, and `data_type` is `key`:

Upstream data is the following:

| code | data | success |
|------|------|---------|
| 200 | get success | true |

The following data will be written to redis:

  1. value -> get success

hash_key_field [string]

The upstream field used as the hash key; supported only when `data_type` is `hash`.

hash_value_field [string]

The upstream field used as the hash value; supported only when `data_type` is `hash`.

For example, when `key` is `value`, `hash_key_field` is `data`, `hash_value_field` is `success`, and `data_type` is `hash`:

Upstream data is the following:

| code | data | success |
|------|------|---------|
| 200 | get success | true |

The following data will be written to Redis:

  1. value -> get success | true

common options

Sink plugin common parameters; please refer to Sink Common Options for details.

Example

simple:

```hocon
Redis {
  host = localhost
  port = 6379
  key = age
  data_type = list
}
```

custom key:

```hocon
Redis {
  host = localhost
  port = 6379
  key = "name:{name}"
  support_custom_key = true
  data_type = key
}
```

custom value:

```hocon
Redis {
  host = localhost
  port = 6379
  key = person
  value_field = "name"
  data_type = key
}
```

custom HashKey and HashValue:

```hocon
Redis {
  host = localhost
  port = 6379
  key = person
  hash_key_field = "name"
  hash_value_field = "age"
  data_type = hash
}
```

Changelog

| Change | Commit | Version |
|--------|--------|---------|
| [Improve][Redis] Optimized Redis connection params (#8841) | https://github.com/apache/seatunnel/commit/e56f06cdf0 | dev |
| [Improve] restruct connector common options (#8634) | https://github.com/apache/seatunnel/commit/f3499a6eeb | dev |
| [improve] update Redis connector config option (#8631) | https://github.com/apache/seatunnel/commit/f1c313eea6 | dev |
| [Feature][Redis] Flush data when the time reaches checkpoint.interval and update test case (#8308) | https://github.com/apache/seatunnel/commit/e15757bcd7 | 2.3.9 |
| Revert "[Feature][Redis] Flush data when the time reaches checkpoint interval" and "[Feature][CDC] Add 'schema-changes.enabled' options" (#8278) | https://github.com/apache/seatunnel/commit/fcb2938286 | 2.3.9 |
| [Feature][Redis] Flush data when the time reaches checkpoint.interval (#8198) | https://github.com/apache/seatunnel/commit/2e24941e6a | 2.3.9 |
| [Hotfix] Fix redis sink NPE (#8171) | https://github.com/apache/seatunnel/commit/6b9074e769 | 2.3.9 |
| [Improve][dist]add shade check rule (#8136) | https://github.com/apache/seatunnel/commit/51ef800016 | 2.3.9 |
| [Feature][Connector-Redis] Redis connector support delete data (#7994) | https://github.com/apache/seatunnel/commit/02a35c3979 | 2.3.9 |
| [Improve][Connector-V2] Redis support custom key and value (#7888) | https://github.com/apache/seatunnel/commit/ef2c3c7283 | 2.3.9 |
| [Feature][Restapi] Allow metrics information to be associated to logical plan nodes (#7786) | https://github.com/apache/seatunnel/commit/6b7c53d03c | 2.3.9 |
| [improve][Redis]Redis scan command supports versions 5, 6, 7 (#7666) | https://github.com/apache/seatunnel/commit/6e70cbe334 | 2.3.8 |
| [Improve][Connector] Add multi-table sink option check (#7360) | https://github.com/apache/seatunnel/commit/2489f6446b | 2.3.7 |
| [Feature][Core] Support using upstream table placeholders in sink options and auto replacement (#7131) | https://github.com/apache/seatunnel/commit/c4ca74122c | 2.3.6 |
| [Improve][Redis] Redis reader use scan cammnd instead of keys, single mode reader/writer support batch (#7087) | https://github.com/apache/seatunnel/commit/be37f05c07 | 2.3.6 |
| [Feature][Kafka] Support multi-table source read (#5992) | https://github.com/apache/seatunnel/commit/60104602d1 | 2.3.6 |
| [Improve][Connector-V2]Support multi-table sink feature for redis (#6314) | https://github.com/apache/seatunnel/commit/fed89ae3fc | 2.3.5 |
| [Feature][Core] Upgrade flink source translation (#5100) | https://github.com/apache/seatunnel/commit/5aabb14a94 | 2.3.4 |
| [Feature][Connector-V2] Support TableSourceFactory/TableSinkFactory on redis (#5901) | https://github.com/apache/seatunnel/commit/e84dcb8c10 | 2.3.4 |
| [Improve][Common] Introduce new error define rule (#5793) | https://github.com/apache/seatunnel/commit/9d1b2582b2 | 2.3.4 |
| [Improve] Remove use SeaTunnelSink::getConsumedType method and mark it as deprecated (#5755) | https://github.com/apache/seatunnel/commit/8de7408100 | 2.3.4 |
| [Improve][Connector-v2][Redis] Redis support select db (#5570) | https://github.com/apache/seatunnel/commit/77fbbbd0ee | 2.3.4 |
| Support config column/primaryKey/constraintKey in schema (#5564) | https://github.com/apache/seatunnel/commit/eac76b4e50 | 2.3.4 |
| [Feature][Connector-v2][RedisSink]Support redis to set expiration time. (#4975) | https://github.com/apache/seatunnel/commit/b5321ff1d2 | 2.3.3 |
| Merge branch 'dev' into merge/cdc | https://github.com/apache/seatunnel/commit/4324ee1912 | 2.3.1 |
| [Improve][Project] Code format with spotless plugin. | https://github.com/apache/seatunnel/commit/423b583038 | 2.3.1 |
| [improve][api] Refactoring schema parse (#4157) | https://github.com/apache/seatunnel/commit/b2f573a13e | 2.3.1 |
| [Improve][build] Give the maven module a human readable name (#4114) | https://github.com/apache/seatunnel/commit/d7cd601051 | 2.3.1 |
| [Improve][Project] Code format with spotless plugin. (#4101) | https://github.com/apache/seatunnel/commit/a2ab166561 | 2.3.1 |
| [Feature][Connector] add get source method to all source connector (#3846) | https://github.com/apache/seatunnel/commit/417178fb84 | 2.3.1 |
| [Hotfix][OptionRule] Fix option rule about all connectors (#3592) | https://github.com/apache/seatunnel/commit/226dc6a119 | 2.3.0 |
| [Improve][Connector-V2][Redis] Unified exception for redis source & sink exception (#3517) | https://github.com/apache/seatunnel/commit/205f782585 | 2.3.0 |
| options in conditional need add to required or optional options (#3501) | https://github.com/apache/seatunnel/commit/51d5bcba10 | 2.3.0 |
| [feature][api] add option validation for the ReadonlyConfig (#3417) | https://github.com/apache/seatunnel/commit/4f824fea36 | 2.3.0 |
| [Feature][Redis Connector V2] Add Redis Connector Option Rules & Improve Redis Connector doc (#3320) | https://github.com/apache/seatunnel/commit/1c10aacb30 | 2.3.0 |
| [Connector-V2][ElasticSearch] Add ElasticSearch Source/Sink Factory (#3325) | https://github.com/apache/seatunnel/commit/38254e3f26 | 2.3.0 |
| [Improve][Connector-V2][Redis] Support redis cluster connection & user authentication (#3188) | https://github.com/apache/seatunnel/commit/c7275a49cc | 2.3.0 |
| [DEV][Api] Replace SeaTunnelContext with JobContext and remove singleton pattern (#2706) | https://github.com/apache/seatunnel/commit/cbf82f755c | 2.2.0-beta |
| [Feature][Connector-V2] Add redis sink connector (#2647) | https://github.com/apache/seatunnel/commit/71a9e4b019 | 2.2.0-beta |
| [#2606]Dependency management split (#2630) | https://github.com/apache/seatunnel/commit/fc047be69b | 2.2.0-beta |
| [Feature][Connector-V2] Add redis source connector (#2569) | https://github.com/apache/seatunnel/commit/405f7d6f99 | 2.2.0-beta |