
MySQL CDC

MySQL CDC source connector

Support Those Engines

SeaTunnel Zeta
Flink

Description

The MySQL CDC connector allows for reading snapshot data and incremental data from a MySQL database. This document describes how to set up the MySQL CDC connector to run SQL queries against MySQL databases.

Key features

Supported DataSource Info

Datasource: MySQL
Supported versions:
  • MySQL: 5.5, 5.6, 5.7, 8.0.x
  • RDS MySQL: 5.6, 5.7, 8.0.x
Driver: com.mysql.cj.jdbc.Driver
Url: jdbc:mysql://localhost:3306/test
Maven: https://mvnrepository.com/artifact/mysql/mysql-connector-java/8.0.28

    Using Dependency

    Install Jdbc Driver

    For Flink Engine

    1. You need to ensure that the JDBC driver jar package has been placed in the ${SEATUNNEL_HOME}/plugins/ directory.

    For SeaTunnel Zeta Engine

    1. You need to ensure that the JDBC driver jar package has been placed in the ${SEATUNNEL_HOME}/lib/ directory.
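
    For example, assuming you have downloaded mysql-connector-java-8.0.28.jar (the version from the Maven link above), placing it could look like this:

    # For the Flink engine: put the MySQL JDBC driver under plugins/.
    cp mysql-connector-java-8.0.28.jar ${SEATUNNEL_HOME}/plugins/

    # For the SeaTunnel Zeta engine: put it under lib/ instead.
    cp mysql-connector-java-8.0.28.jar ${SEATUNNEL_HOME}/lib/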

    Creating MySQL user

    You have to define a MySQL user with appropriate permissions on all databases that the Debezium MySQL connector monitors.

    1. Create the MySQL user:
    mysql> CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
    2. Grant the required permissions to the user:
    mysql> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user' IDENTIFIED BY 'password';
    3. Finalize the user’s permissions:
    mysql> FLUSH PRIVILEGES;

    Enabling the MySQL Binlog

    You must enable binary logging for MySQL replication. The binary logs record transaction updates for replication tools to propagate changes.

    1. Check whether the log-bin option is already on:
    mysql> show variables where variable_name in ('log_bin', 'binlog_format', 'binlog_row_image', 'gtid_mode', 'enforce_gtid_consistency');
    +--------------------------+-------+
    | Variable_name            | Value |
    +--------------------------+-------+
    | binlog_format            | ROW   |
    | binlog_row_image         | FULL  |
    | enforce_gtid_consistency | ON    |
    | gtid_mode                | ON    |
    | log_bin                  | ON    |
    +--------------------------+-------+
    5 rows in set (0.00 sec)
    2. If your values differ from the results above, configure your MySQL server configuration file ($MYSQL_HOME/mysql.cnf) with the following properties, which are described below:
    # Enable binary replication log and set the prefix, expiration, and log format.
    # The prefix is arbitrary, expiration can be short for integration tests but would
    # be longer on a production system. Row-level info is required for ingest to work.
    # Server ID is required, but this will vary on production systems
    server-id = 223344
    log_bin = mysql-bin
    expire_logs_days = 10
    binlog_format = row
    # mysql 5.6+ requires binlog_row_image to be set to FULL
    binlog_row_image = FULL

    # enable gtid mode
    # mysql 5.6+ requires gtid_mode to be set to ON
    gtid_mode = on
    enforce_gtid_consistency = on
    3. Restart the MySQL server:
    /etc/init.d/mysqld restart
    4. Confirm your changes by checking the binlog status once more:

    MySQL 5.5:

    mysql> show variables where variable_name in ('log_bin', 'binlog_format', 'binlog_row_image', 'gtid_mode', 'enforce_gtid_consistency');
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | binlog_format | ROW   |
    | log_bin       | ON    |
    +---------------+-------+
    2 rows in set (0.00 sec)

    MySQL 5.6+:

    mysql> show variables where variable_name in ('log_bin', 'binlog_format', 'binlog_row_image', 'gtid_mode', 'enforce_gtid_consistency');
    +--------------------------+-------+
    | Variable_name            | Value |
    +--------------------------+-------+
    | binlog_format            | ROW   |
    | binlog_row_image         | FULL  |
    | enforce_gtid_consistency | ON    |
    | gtid_mode                | ON    |
    | log_bin                  | ON    |
    +--------------------------+-------+
    5 rows in set (0.00 sec)

    Notes

    Setting up MySQL session timeouts

    When an initial consistent snapshot is made for large databases, your established connection could time out while the tables are being read. You can prevent this behavior by configuring interactive_timeout and wait_timeout in your MySQL configuration file, as sketched after the list below.

    • interactive_timeout: The number of seconds the server waits for activity on an interactive connection before closing it. See MySQL’s documentation for more details.
    • wait_timeout: The number of seconds the server waits for activity on a non-interactive connection before closing it. See MySQL’s documentation for more details.
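
    As a sketch, raising both timeouts in the MySQL configuration file could look like this (86400 seconds is only an illustrative value; choose one that comfortably exceeds your snapshot duration):

    # Keep connections alive while a long initial snapshot is running (illustrative values).
    interactive_timeout = 86400
    wait_timeout = 86400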

    For more database settings, see the Debezium MySQL Connector documentation.

    Data Type Mapping

    Mysql Data Type | SeaTunnel Data Type
    BIT(1), TINYINT(1) | BOOLEAN
    TINYINT | TINYINT
    TINYINT UNSIGNED, SMALLINT | SMALLINT
    SMALLINT UNSIGNED, MEDIUMINT, MEDIUMINT UNSIGNED, INT, INTEGER, YEAR | INT
    INT UNSIGNED, INTEGER UNSIGNED, BIGINT | BIGINT
    BIGINT UNSIGNED | DECIMAL(20,0)
    DECIMAL(p, s), DECIMAL(p, s) UNSIGNED, NUMERIC(p, s), NUMERIC(p, s) UNSIGNED | DECIMAL(p,s)
    FLOAT, FLOAT UNSIGNED | FLOAT
    DOUBLE, DOUBLE UNSIGNED, REAL, REAL UNSIGNED | DOUBLE
    CHAR, VARCHAR, TINYTEXT, MEDIUMTEXT, TEXT, LONGTEXT, ENUM, JSON | STRING
    DATE | DATE
    TIME(s) | TIME(s)
    DATETIME, TIMESTAMP(s) | TIMESTAMP(s)
    BINARY, VARBINARY, BIT(p), TINYBLOB, MEDIUMBLOB, BLOB, LONGBLOB, GEOMETRY | BYTES
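
    As a quick illustration of the mapping above, a table defined like this (the table and column names are made up) would surface in SeaTunnel with the types noted in the comments:

    CREATE TABLE shop.products_example (
      id         BIGINT UNSIGNED,   -- mapped to DECIMAL(20,0)
      name       VARCHAR(255),      -- mapped to STRING
      price      DECIMAL(10, 2),    -- mapped to DECIMAL(10,2)
      in_stock   TINYINT(1),        -- mapped to BOOLEAN
      attributes JSON,              -- mapped to STRING
      updated_at DATETIME           -- mapped to TIMESTAMP
    );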

    Source Options

    Name | Type | Required | Default | Description
    base-url | String | Yes | - | The URL of the JDBC connection, for example: jdbc:mysql://localhost:3306/test.
    username | String | Yes | - | Username to use when connecting to the database server.
    password | String | Yes | - | Password to use when connecting to the database server.
    database-names | List | No | - | Database name(s) of the database to monitor.
    table-names | List | Yes | - | Table name(s) of the database to monitor. The table name needs to include the database name, for example: database_name.table_name.
    table-names-config | List | No | - | Table config list, for example: [{"table": "db1.schema1.table1","primaryKeys": ["key1"],"snapshotSplitColumn": "key2"}].
    startup.mode | Enum | No | INITIAL | Optional startup mode for the MySQL CDC consumer; valid enumerations are initial, earliest, latest and specific. initial: synchronize historical data at startup, then synchronize incremental data. earliest: start from the earliest offset possible. latest: start from the latest offset. specific: start from user-supplied specific offsets (see the sketch after this table).
    startup.specific-offset.file | String | No | - | Start from the specified binlog file name. Note: this option is required when startup.mode is set to specific.
    startup.specific-offset.pos | Long | No | - | Start from the specified binlog file position. Note: this option is required when startup.mode is set to specific.
    stop.mode | Enum | No | NEVER | Optional stop mode for the MySQL CDC consumer; valid enumerations are never, latest and specific. never: the real-time job does not stop the source. latest: stop at the latest offset. specific: stop at a user-supplied specific offset.
    stop.specific-offset.file | String | No | - | Stop at the specified binlog file name. Note: this option is required when stop.mode is set to specific.
    stop.specific-offset.pos | Long | No | - | Stop at the specified binlog file position. Note: this option is required when stop.mode is set to specific.
    snapshot.split.size | Integer | No | 8096 | The split size (number of rows) of the table snapshot; captured tables are split into multiple splits when reading the table snapshot.
    snapshot.fetch.size | Integer | No | 1024 | The maximum fetch size per poll when reading the table snapshot.
    server-id | String | No | - | A numeric ID or numeric ID range for this database client. The numeric ID syntax is like 5400; the numeric ID range syntax is like '5400-5408'. Every ID must be unique across all currently running database processes in the MySQL cluster, because this connector joins the MySQL cluster as another server (with this unique ID) so it can read the binlog. By default, a random number between 6500 and 2,148,492,146 is generated, though we recommend setting an explicit value.
    server-time-zone | String | No | UTC | The session time zone of the database server. If not set, ZoneId.systemDefault() is used to determine the server time zone.
    connect.timeout.ms | Duration | No | 30000 | The maximum time the connector should wait after trying to connect to the database server before timing out.
    connect.max-retries | Integer | No | 3 | The maximum number of times the connector retries building a database server connection.
    connection.pool.size | Integer | No | 20 | The JDBC connection pool size.
    chunk-key.even-distribution.factor.upper-bound | Double | No | 100 | The upper bound of the chunk key distribution factor. This factor is used to determine whether the table data is evenly distributed. If the distribution factor (i.e., (MAX(id) - MIN(id) + 1) / row count) is calculated to be less than or equal to this upper bound, the table chunks are optimized for even distribution. Otherwise, if the distribution factor is greater, the table is considered unevenly distributed and the sampling-based sharding strategy is used if the estimated shard count exceeds the value specified by sample-sharding.threshold. The default value is 100.0.
    chunk-key.even-distribution.factor.lower-bound | Double | No | 0.05 | The lower bound of the chunk key distribution factor. This factor is used to determine whether the table data is evenly distributed. If the distribution factor (i.e., (MAX(id) - MIN(id) + 1) / row count) is calculated to be greater than or equal to this lower bound, the table chunks are optimized for even distribution. Otherwise, if the distribution factor is less, the table is considered unevenly distributed and the sampling-based sharding strategy is used if the estimated shard count exceeds the value specified by sample-sharding.threshold. The default value is 0.05.
    sample-sharding.threshold | Integer | No | 1000 | The threshold of the estimated shard count that triggers the sample sharding strategy. When the distribution factor is outside the bounds specified by chunk-key.even-distribution.factor.upper-bound and chunk-key.even-distribution.factor.lower-bound, and the estimated shard count (calculated as approximate row count / chunk size) exceeds this threshold, the sample sharding strategy is used. This can help handle large datasets more efficiently. The default value is 1000 shards.
    inverse-sampling.rate | Integer | No | 1000 | The inverse of the sampling rate used in the sample sharding strategy. For example, if this value is set to 1000, a 1/1000 sampling rate is applied during the sampling process. This option provides flexibility in controlling the granularity of the sampling and thus the final number of shards. It is especially useful when dealing with very large datasets, where a lower sampling rate is preferred. The default value is 1000.
    exactly_once | Boolean | No | false | Enable exactly-once semantics.
    format | Enum | No | DEFAULT | Optional output format for MySQL CDC; valid enumerations are DEFAULT and COMPATIBLE_DEBEZIUM_JSON.
    debezium | Config | No | - | Pass-through Debezium properties to the Debezium Embedded Engine, which is used to capture data changes from the MySQL server. Schema evolution is disabled by default; you need to configure debezium.include.schema.changes = true to enable it. Currently only add column, drop column, rename column and modify column are supported.
    common-options | | No | - | Source plugin common parameters; please refer to Source Common Options for details.
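
    For instance, a minimal sketch of a source that resumes from a known binlog offset and pins an explicit server-id range; the offset file name, position, credentials and table names below are placeholders:

    source {
      MySQL-CDC {
        base-url = "jdbc:mysql://localhost:3306/testdb"
        username = "root"
        password = "root@123"
        table-names = ["testdb.table1"]

        # Pin an explicit server-id range instead of relying on the random default.
        server-id = "5400-5408"

        # Resume from a user-supplied binlog offset (placeholder values).
        startup.mode = "specific"
        startup.specific-offset.file = "mysql-bin.000003"
        startup.specific-offset.pos = 4
      }
    }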

    Task Example

    Simple

    Support multi-table reading

    env {
      parallelism = 1
      job.mode = "STREAMING"
      checkpoint.interval = 10000
    }

    source {
      MySQL-CDC {
        base-url = "jdbc:mysql://localhost:3306/testdb"
        username = "root"
        password = "root@123"
        table-names = ["testdb.table1", "testdb.table2"]

        startup.mode = "initial"
      }
    }

    sink {
      Console {
      }
    }

    Support debezium-compatible format sent to Kafka

    Must be used with the Kafka connector sink; see the compatible debezium format documentation for details.
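
    A minimal sketch of such a job is shown below. The MySQL-CDC format option comes from the options table above; the Kafka sink options shown here (bootstrap.servers, topic, format), the broker address and the topic name are assumptions, so check the Kafka connector documentation for the exact sink configuration:

    env {
      parallelism = 1
      job.mode = "STREAMING"
      checkpoint.interval = 10000
    }

    source {
      MySQL-CDC {
        base-url = "jdbc:mysql://localhost:3306/testdb"
        username = "root"
        password = "root@123"
        table-names = ["testdb.table1"]

        # Emit change records in the Debezium-compatible JSON format.
        format = "COMPATIBLE_DEBEZIUM_JSON"
      }
    }

    sink {
      Kafka {
        # Illustrative broker address and topic name.
        bootstrap.servers = "localhost:9092"
        topic = "testdb.table1"
        format = "COMPATIBLE_DEBEZIUM_JSON"
      }
    }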

    Support custom primary key for table

    env {
      parallelism = 1
      job.mode = "STREAMING"
      checkpoint.interval = 10000
    }

    source {
      MySQL-CDC {
        base-url = "jdbc:mysql://localhost:3306/testdb"
        username = "root"
        password = "root@123"

        table-names = ["testdb.table1", "testdb.table2"]
        table-names-config = [
          {
            table = "testdb.table2"
            primaryKeys = ["id"]
          }
        ]
      }
    }

    sink {
      Console {
      }
    }

    Support schema evolution

    env {
      # You can set engine configuration here
      parallelism = 5
      job.mode = "STREAMING"
      checkpoint.interval = 5000
      read_limit.bytes_per_second = 7000000
      read_limit.rows_per_second = 400
    }

    source {
      MySQL-CDC {
        server-id = 5652-5657
        username = "st_user_source"
        password = "mysqlpw"
        table-names = ["shop.products"]
        base-url = "jdbc:mysql://mysql_cdc_e2e:3306/shop"
        debezium = {
          include.schema.changes = true
        }
      }
    }

    sink {
      jdbc {
        url = "jdbc:mysql://mysql_cdc_e2e:3306/shop"
        driver = "com.mysql.cj.jdbc.Driver"
        user = "st_user_sink"
        password = "mysqlpw"
        generate_sink_sql = true
        database = shop
        table = mysql_cdc_e2e_sink_table_with_schema_change_exactly_once
        primary_keys = ["id"]
        is_exactly_once = true
        xa_data_source_class_name = "com.mysql.cj.jdbc.MysqlXADataSource"
      }
    }

    Changelog

    next version

    • Add MySQL CDC Source Connector