# Snowflake

JDBC Snowflake Source Connector
## Support Those Engines

- Spark
- Flink
- SeaTunnel Zeta

## Key Features
- Supports SQL queries, which can be used to achieve a projection effect (reading only selected columns).
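For instance, projection can be expressed directly in the query by listing only the columns you need. A minimal sketch, assuming a hypothetical table `my_table` with columns `id`, `name`, and `age`:

```
# Only the listed columns are read and emitted downstream;
# the table and column names here are hypothetical.
query = "select id, name, age from my_table"
```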
## Description

Read external data source data through JDBC.
Supported DataSource listâ
datasource | supported versions | driver | url | maven |
---|---|---|---|---|
snowflake | Different dependency version has different driver class. | net.snowflake.client.jdbc.SnowflakeDriver | jdbc:snowflake://<account_name>.snowflakecomputing.com | Download |
Database dependencyâ
Please download the support list corresponding to 'Maven' and copy it to the '$SEATNUNNEL_HOME/plugins/jdbc/lib/' working directory
For example Snowflake datasource: cp snowflake-connector-java-xxx.jar $SEATNUNNEL_HOME/plugins/jdbc/lib/Data Type Mappingâ
| Snowflake Data Type | SeaTunnel Data Type |
|---------------------|---------------------|
| BOOLEAN | BOOLEAN |
| TINYINT, SMALLINT, BYTEINT | SHORT_TYPE |
| INT, INTEGER | INT |
| BIGINT | LONG |
| DECIMAL, NUMERIC, NUMBER | DECIMAL(x,y) |
| DECIMAL(x,y) (where the designated column's precision > 38) | DECIMAL(38,18) |
| REAL, FLOAT4 | FLOAT |
| DOUBLE, DOUBLE PRECISION, FLOAT8, FLOAT | DOUBLE |
| CHAR, CHARACTER, VARCHAR, STRING, TEXT, VARIANT, OBJECT | STRING |
| DATE | DATE |
| TIME | TIME |
| DATETIME, TIMESTAMP, TIMESTAMP_LTZ, TIMESTAMP_NTZ, TIMESTAMP_TZ | TIMESTAMP |
| BINARY, VARBINARY | BYTES |
| GEOGRAPHY (WKB or EWKB), GEOMETRY (WKB or EWKB) | BYTES |
| GEOGRAPHY (GeoJSON, WKT or EWKT), GEOMETRY (GeoJSON, WKB or EWKB) | STRING |
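As the last two rows show, how a geospatial column maps depends on its output format. One way to control this per query is to cast it in the SQL itself. A minimal sketch using Snowflake's ST_ASEWKT function; the table and column names are hypothetical:

```
# Casting the GEOGRAPHY column to EWKT text makes it arrive as STRING
# rather than BYTES; geo_col and my_geo_table are placeholder names.
query = "select id, ST_ASEWKT(geo_col) as geo_ewkt from my_geo_table"
```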
## Options

| name | type | required | default | description |
|------|------|----------|---------|-------------|
| url | String | Yes | - | The URL of the JDBC connection, e.g. jdbc:snowflake://<account_name>.snowflakecomputing.com |
| driver | String | Yes | - | The JDBC class name used to connect to the remote data source; for Snowflake the value is net.snowflake.client.jdbc.SnowflakeDriver. |
| user | String | No | - | Connection instance user name |
| password | String | No | - | Connection instance password |
| query | String | Yes | - | Query statement |
| connection_check_timeout_sec | Int | No | 30 | The time in seconds to wait for the database operation used to validate the connection to complete |
| partition_column | String | No | - | The column name used to partition the scan for parallel reads. Only one column can be configured, and it must be of a numeric type. |
| partition_lower_bound | BigDecimal | No | - | The minimum value of partition_column for the scan; if not set, SeaTunnel will query the database for the minimum value. |
| partition_upper_bound | BigDecimal | No | - | The maximum value of partition_column for the scan; if not set, SeaTunnel will query the database for the maximum value. |
| partition_num | Int | No | job parallelism | The number of partitions; only positive integers are supported. Defaults to the job parallelism. |
| fetch_size | Int | No | 0 | For queries that return a large number of rows, you can configure the row fetch size used in the query to improve performance by reducing the number of database hits required to satisfy the selection criteria. Zero means use the JDBC default value. |
| properties | Map | No | - | Additional connection configuration parameters. When properties and the URL contain the same parameter, the priority is determined by the driver's implementation; for example, in MySQL, properties take precedence over the URL. |
| common-options | | No | - | Source plugin common parameters; please refer to Source Common Options for details |
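For example, additional connection parameters such as the warehouse, database, and schema can be supplied through `properties`. A minimal sketch, assuming the standard Snowflake JDBC connection properties `warehouse`, `db`, and `schema`; the values are placeholders:

```
Jdbc {
  url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
  driver = "net.snowflake.client.jdbc.SnowflakeDriver"
  user = "root"
  password = "123456"
  query = "select * from type_bin"
  # warehouse/db/schema are standard Snowflake JDBC connection properties;
  # the values below are placeholders for your environment.
  properties {
    warehouse = "COMPUTE_WH"
    db = "TEST_DB"
    schema = "PUBLIC"
  }
}
```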
## Tips

If partition_column is not set, the read will run with a single concurrency; if partition_column is set, it will be executed in parallel according to the task parallelism.

JDBC driver connection parameters are supported in the JDBC connection string. For example, you can append `?GEOGRAPHY_OUTPUT_FORMAT='EWKT'` to specify the output format for geospatial data types. For more information about configurable parameters and geospatial data types, please visit the Snowflake official documentation.
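A minimal sketch of passing such a parameter through the connection string; `<account_name>` is a placeholder for your Snowflake account:

```
Jdbc {
  # The driver parameter is appended directly to the JDBC URL.
  url = "jdbc:snowflake://<account_name>.snowflakecomputing.com/?GEOGRAPHY_OUTPUT_FORMAT='EWKT'"
  driver = "net.snowflake.client.jdbc.SnowflakeDriver"
  user = "root"
  password = "123456"
  query = "select * from type_bin"
}
```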
## Task Example

### Simple

This example queries 16 rows from the table type_bin in your test database and reads all of its fields; since no partition_column is configured, the read runs in a single split. You can also specify which fields to query for final output to the console.
```
# Defining the runtime environment
env {
  parallelism = 2
  job.mode = "BATCH"
}

source {
  Jdbc {
    url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
    driver = "net.snowflake.client.jdbc.SnowflakeDriver"
    connection_check_timeout_sec = 100
    user = "root"
    password = "123456"
    query = "select * from type_bin limit 16"
  }
}

transform {
  # If you would like to get more information about how to configure SeaTunnel and see the full list of transform plugins,
  # please go to https://seatunnel.apache.org/docs/transform-v2/sql
}

sink {
  Console {}
}
```
### Parallel

Read your query table in parallel using the shard field and shard count you configure. You can do this if you want to read the whole table.
```
Jdbc {
  url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
  driver = "net.snowflake.client.jdbc.SnowflakeDriver"
  connection_check_timeout_sec = 100
  user = "root"
  password = "123456"
  # Define query logic as required
  query = "select * from type_bin"
  # Field used for parallel sharded reads
  partition_column = "id"
  # Number of shards
  partition_num = 10
}
```
### Parallel Boundary

It is more efficient to read your data source within the upper and lower bounds you configure for the query, rather than scanning the full range of the partition column.
```
Jdbc {
  url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
  driver = "net.snowflake.client.jdbc.SnowflakeDriver"
  connection_check_timeout_sec = 100
  user = "root"
  password = "123456"
  # Define query logic as required
  query = "select * from type_bin"
  partition_column = "id"
  # Read start boundary
  partition_lower_bound = 1
  # Read end boundary
  partition_upper_bound = 500
  partition_num = 10
}
```
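As a rough worked example: with partition_lower_bound = 1, partition_upper_bound = 500, and partition_num = 10, each split covers approximately (500 - 1) / 10 ≈ 50 consecutive values of id, and the ten splits can then be read concurrently.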