# Sls

> Sls source connector
## Support Those Engines

- Spark
- Flink
- SeaTunnel Zeta
## Key Features
## Description
Source connector for Aliyun Sls.
## Supported DataSource Info
To use the Sls connector, the following dependencies are required. They can be downloaded via install-plugin.sh or from the Maven central repository.
| Datasource | Supported Versions | Maven    |
|------------|--------------------|----------|
| Sls        | Universal          | Download |
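If you manage dependencies yourself rather than using install-plugin.sh, the coordinates below are a sketch of what the Maven dependency might look like. The artifactId `connector-sls` and the version property are assumptions; verify the exact coordinates and the release matching your SeaTunnel version on Maven Central.

```xml
<!-- Assumed coordinates: verify artifactId and version on Maven Central -->
<dependency>
    <groupId>org.apache.seatunnel</groupId>
    <artifactId>connector-sls</artifactId>
    <version>${seatunnel.version}</version>
</dependency>
```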
## Source Options
| Name                                | Type                                             | Required | Default                  | Description                                                              |
|-------------------------------------|--------------------------------------------------|----------|--------------------------|--------------------------------------------------------------------------|
| project                             | String                                           | Yes      | -                        | Aliyun Sls Project                                                       |
| logstore                            | String                                           | Yes      | -                        | Aliyun Sls Logstore                                                      |
| endpoint                            | String                                           | Yes      | -                        | Aliyun Access Endpoint                                                   |
| access_key_id                       | String                                           | Yes      | -                        | Aliyun AccessKey ID                                                      |
| access_key_secret                   | String                                           | Yes      | -                        | Aliyun AccessKey Secret                                                  |
| start_mode                          | StartMode[earliest],[group_cursor],[latest]      | No       | group_cursor             | The initial consumption pattern of consumers.                            |
| consumer_group                      | String                                           | No       | SeaTunnel-Consumer-Group | Sls consumer group id, used to distinguish different consumer groups.    |
| auto_cursor_reset                   | CursorMode[begin],[end]                          | No       | end                      | Where to initialize the cursor when the consumer group has none.         |
| batch_size                          | Int                                              | No       | 1000                     | The amount of data pulled from SLS each time.                            |
| partition-discovery.interval-millis | Long                                             | No       | -1                       | The interval for dynamically discovering topics and partitions.          |
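For instance, a job that ignores any saved group cursor and reads from the beginning of the logstore might set the cursor-related options as below. This is a sketch; the consumer group name and batch size are illustrative values, not recommendations.

```hocon
# Cursor and batching options (illustrative values)
start_mode = "earliest"              # read from the beginning instead of the saved group cursor
consumer_group = "my-consumer-group" # hypothetical group name
auto_cursor_reset = "begin"          # where to start if the group has no cursor yet
batch_size = 500                     # logs pulled from SLS per request
```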
## Task Example

### Simple
This example reads the data of sls's logstore1 and prints it to the client. If you have not yet installed and deployed SeaTunnel, follow the instructions in Install SeaTunnel to install and deploy it, then follow the instructions in Quick Start With SeaTunnel Engine to run this job.

Create a RAM user and grant it authorization. Please ensure the RAM user has sufficient permissions; see RAM Custom Authorization Example for reference.
```hocon
# Defining the runtime environment
env {
  parallelism = 2
  job.mode = "STREAMING"
  checkpoint.interval = 30000
}

source {
  Sls {
    endpoint = "cn-hangzhou-intranet.log.aliyuncs.com"
    project = "project1"
    logstore = "logstore1"
    access_key_id = "xxxxxxxxxxxxxxxxxxxxxxxx"
    access_key_secret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    schema = {
      fields = {
        id = "int"
        name = "string"
        description = "string"
        weight = "string"
      }
    }
  }
}

sink {
  Console {
  }
}
```
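Assuming the job file above is saved as `config/sls_to_console.conf` (a hypothetical path), it can be submitted to the SeaTunnel Zeta engine in local mode like this; see Quick Start With SeaTunnel Engine for the full set of submit options.

```shell
# Submit the streaming job to SeaTunnel Zeta in local mode
./bin/seatunnel.sh --config ./config/sls_to_console.conf -m local
```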