SftpFile
Sftp file source connector
Support Those Engines
Spark
Flink
SeaTunnel Zeta
Key Features
- batch
- stream
- exactly-once
- column projection
- parallelism
- support user-defined split
- file format type
  - text
  - csv
  - json
  - excel
 
Description
Read data from an SFTP file server.
Supported DataSource Info
In order to use the SftpFile connector, the following dependencies are required. They can be downloaded via install-plugin.sh or from the Maven central repository.
| Datasource | Supported Versions | Dependency | 
|---|---|---|
| SftpFile | universal | Download | 
If you use Spark/Flink, you must ensure that your Spark/Flink cluster already has Hadoop integrated. The tested Hadoop version is 2.x.
If you use SeaTunnel Engine, the Hadoop jar is automatically integrated when you download and install SeaTunnel Engine; you can check the jar packages under ${SEATUNNEL_HOME}/lib to confirm this.
We made some trade-offs in order to support more file types, so the connector uses the HDFS protocol internally to access SFTP and therefore needs some Hadoop dependencies. It only supports Hadoop version 2.9.x and above.
Data Type Mapping
The file does not have a specific type list; you can indicate which SeaTunnel data type the corresponding data should be converted to by specifying the schema in the config (see the example after the table below).
| SeaTunnel Data type | 
|---|
| STRING | 
| SHORT | 
| INT | 
| BIGINT | 
| BOOLEAN | 
| DOUBLE | 
| DECIMAL | 
| FLOAT | 
| DATE | 
| TIME | 
| TIMESTAMP | 
| BYTES | 
| ARRAY | 
| MAP | 
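For instance, a minimal schema block in the source config maps each column of the file to one of the SeaTunnel data types above. This is only an illustrative sketch; the field names are placeholders, not required by the connector:
schema {
    fields {
        # illustrative field names; adapt them to your file's columns
        name = string
        age = int
        score = double
        birthday = date
        tags = "array<string>"
    }
}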
Source Options
| Name | Type | Required | Default Value | Description | 
|---|---|---|---|---|
| host | String | Yes | - | The target sftp host is required | 
| port | Int | Yes | - | The target sftp port is required | 
| user | String | Yes | - | The target sftp username is required | 
| password | String | Yes | - | The target sftp password is required | 
| path | String | Yes | - | The source file path. | 
| file_format_type | String | Yes | - | Please check #file_format_type below | 
| file_filter_pattern | String | No | - | Filter pattern, used for filtering files. | 
| delimiter/field_delimiter | String | No | \001 | The delimiter parameter will be deprecated after version 2.3.5; please use field_delimiter instead. Field delimiter, used to tell the connector how to slice fields when reading text files. Default \001, the same as Hive's default delimiter. | 
| parse_partition_from_path | Boolean | No | true | Control whether to parse the partition keys and values from the file path. For example, if you read a file from path oss://hadoop-cluster/tmp/seatunnel/parquet/name=tyrantlucifer/age=26, every record read from the file will have the two fields name=tyrantlucifer and age=26 added. Tips: Do not define partition fields in the schema option. | 
| date_format | String | No | yyyy-MM-dd | Date type format, used to tell the connector how to convert a string to a date. Supported formats: yyyy-MM-dd, yyyy.MM.dd, yyyy/MM/dd. Default: yyyy-MM-dd. | 
| datetime_format | String | No | yyyy-MM-dd HH:mm:ss | Datetime type format, used to tell the connector how to convert a string to a datetime. Supported formats: yyyy-MM-dd HH:mm:ss, yyyy.MM.dd HH:mm:ss, yyyy/MM/dd HH:mm:ss, yyyyMMddHHmmss. Default: yyyy-MM-dd HH:mm:ss. | 
| time_format | String | No | HH:mm:ss | Time type format, used to tell the connector how to convert a string to a time. Supported formats: HH:mm:ss, HH:mm:ss.SSS. Default: HH:mm:ss. | 
| skip_header_row_number | Long | No | 0 | Skip the first few lines, but only for txt and csv files. For example, if you set skip_header_row_number = 2, SeaTunnel will skip the first 2 lines of the source files. | 
| read_columns | list | No | - | The read column list of the data source; users can use it to implement field projection. | 
| sheet_name | String | No | - | Read the sheet of the workbook. Only used when file_format_type is excel. | 
| schema | Config | No | - | Please check #schema below | 
| compress_codec | String | No | None | The compress codec of files. Supported values: txt: lzo, none; json: lzo, none; csv: lzo, none; orc: lzo, snappy, lz4, zlib, none; parquet: lzo, snappy, lz4, gzip, brotli, zstd, none. Tips: the excel type does not support any compression format. | 
| common-options | | No | - | Source plugin common parameters, please refer to Source Common Options for details. | 
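To make several of the options above concrete, here is a hedged sketch of a source block for reading #-delimited text files with a header line. The host, credentials, path, and field names are placeholders, not defaults:
source {
  SftpFile {
    host = "sftp.example.com"          # placeholder host
    port = 22
    user = "example_user"              # placeholder credentials
    password = "example_password"
    path = "/data/input"               # placeholder path
    file_format_type = "text"
    field_delimiter = "#"              # fields are separated by '#'
    skip_header_row_number = 1         # skip the header line of each file
    date_format = "yyyy/MM/dd"         # dates in the files look like 2024/01/31
    read_columns = ["name", "age"]     # field projection: only read these columns
    schema {
      fields {
        name = string
        age = int
        birthday = date
      }
    }
  }
}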
file_format_type [string]
File type. The following file types are supported:
text, csv, parquet, orc, json, excel
If you assign the file type to json, you should also assign the schema option to tell the connector how to parse the data into the rows you want.
For example, if the upstream data is the following:
{"code":  200, "data":  "get success", "success":  true}
You can also save multiple pieces of data in one file and split them by newline:
{"code":  200, "data":  "get success", "success":  true}
{"code":  300, "data":  "get failed", "success":  false}
You should assign the schema as the following:
schema {
    fields {
        code = int
        data = string
        success = boolean
    }
}
The connector will generate data as the following:
| code |    data     | success |
|------|-------------|---------|
| 200  | get success | true    |
If you assign the file type to parquet or orc, the schema option is not required; the connector can find the schema of the upstream data automatically, as in the sketch below.
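For example, an orc source block could be as small as the following sketch (the connection details are placeholders); note that no schema option is set:
source {
  SftpFile {
    host = "sftp.example.com"     # placeholder host
    port = 22
    user = "example_user"
    password = "example_password"
    path = "/data/orc"            # placeholder path
    file_format_type = "orc"
    # no schema option needed: the connector reads it from the orc files
  }
}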
If you assign the file type to text or csv, you can choose whether or not to specify the schema information.
For example, upstream data is the following:
tyrantlucifer#26#male
If you do not assign a data schema, the connector will treat the upstream data as the following:
|        content        |
|-----------------------|
| tyrantlucifer#26#male |
If you assign a data schema, you should also assign the option field_delimiter (except for the csv file type).
You should assign the schema and delimiter as the following:
field_delimiter = "#"
schema {
    fields {
        name = string
        age = int
        gender = string 
    }
}
The connector will generate data as the following:

| name          | age | gender |
|---------------|-----|--------|
| tyrantlucifer | 26  | male   |
compress_codec [string]
The compress codec of files. The supported details are as follows:
- txt: lzo, none
- json: lzo, none
- csv: lzo, none
- orc/parquet: automatically recognizes the compression type, no additional settings required.
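As an illustration, reading lzo-compressed json files only needs the codec added to the source block. The connection details and field names below are placeholders:
source {
  SftpFile {
    host = "sftp.example.com"     # placeholder host
    port = 22
    user = "example_user"
    password = "example_password"
    path = "/data/json_lzo"       # placeholder path
    file_format_type = "json"
    compress_codec = "lzo"        # the files under path are lzo-compressed
    schema {
      fields {
        code = int
        data = string
        success = boolean
      }
    }
  }
}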
schema [config]
fields [Config]
The schema of upstream data.
How to Create an Sftp Data Synchronization Job
The following example demonstrates how to create a data synchronization job that reads data from an SFTP server and prints it on the local client:
# Set the basic configuration of the task to be performed
env {
  parallelism = 1
  job.mode = "BATCH"
}
# Create a source to connect to sftp
source {
  SftpFile {
    host = "sftp"
    port = 22
    user = seatunnel
    password = pass
    path = "tmp/seatunnel/read/json"
    file_format_type = "json"
    result_table_name = "sftp"
    schema = {
      fields {
        c_map = "map<string, string>"
        c_array = "array<int>"
        c_string = string
        c_boolean = boolean
        c_tinyint = tinyint
        c_smallint = smallint
        c_int = int
        c_bigint = bigint
        c_float = float
        c_double = double
        c_bytes = bytes
        c_date = date
        c_decimal = "decimal(38, 18)"
        c_timestamp = timestamp
        c_row = {
          C_MAP = "map<string, string>"
          C_ARRAY = "array<int>"
          C_STRING = string
          C_BOOLEAN = boolean
          C_TINYINT = tinyint
          C_SMALLINT = smallint
          C_INT = int
          C_BIGINT = bigint
          C_FLOAT = float
          C_DOUBLE = double
          C_BYTES = bytes
          C_DATE = date
          C_DECIMAL = "decimal(38, 18)"
          C_TIMESTAMP = timestamp
        }
      }
    }
  }
}
# Console printing of the read sftp data
sink {
  Console {
    parallelism = 1
  }
}
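Once this configuration is saved to a file (the name is up to you, for example sftp_to_console.conf), it can be submitted with the client of your engine, for example ./bin/seatunnel.sh --config <path-to-the-config-file> for SeaTunnel Zeta; the exact startup script and flags depend on your engine and deployment mode.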