# Sink Common Options

> Common parameters of sink connectors
:::warn

The old configuration name `source_table_name` is deprecated, please migrate to the new name `plugin_input` as soon as possible.

:::
| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| plugin_input | String | No | - | When `plugin_input` is not specified, the current plugin processes the data set output by the previous plugin in the configuration file. When `plugin_input` is specified, the current plugin processes the data set corresponding to this parameter. |
| datasource_id | String | No | - | The data source ID for retrieving connection configuration from DataSource Center. When specified, the connector will fetch connection details (e.g., URL, username, password) from the external metadata service instead of using direct configuration. See DataSource SPI for more information. |
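As a sketch of how `datasource_id` might be used, the sink below references a data source registered in the DataSource Center instead of configuring the connection inline. The connector name `Jdbc` and all values here are illustrative assumptions, not part of this document:

```
sink {
  Jdbc {
    # Hypothetical example: connection details (URL, username, password)
    # are fetched from the DataSource Center entry with this ID instead
    # of being configured directly in the job file.
    datasource_id = "my_mysql_datasource"
  }
}
```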
## Important note

When the job configuration contains `plugin_input`, you must also set the `plugin_output` parameter.
## Task Example

### Simple

This example passes one data source through two transforms and delivers two different pipelines to different sinks:
```hocon
source {
    FakeSourceStream {
        parallelism = 2
        plugin_output = "fake"
        field_name = "name,age"
    }
}

transform {
    Filter {
        plugin_input = "fake"
        fields = [name]
        plugin_output = "fake_name"
    }

    Filter {
        plugin_input = "fake"
        fields = [age]
        plugin_output = "fake_age"
    }
}

sink {
    Console {
        plugin_input = "fake_name"
    }

    Console {
        plugin_input = "fake_age"
    }
}
```
If the job has only one source, one (or zero) transform, and one sink, you do not need to specify `plugin_input` and `plugin_output` for the connectors. If any of source, transform, or sink contains more than one operator, you must specify `plugin_input` and `plugin_output` for each connector in the job.
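The single-pipeline case can be sketched as follows. Because each stage has exactly one operator, no routing parameters are needed; the connector names reuse those from the example above:

```
source {
    FakeSourceStream {
        # No plugin_output needed: the single sink implicitly
        # consumes this source's output.
        parallelism = 2
        field_name = "name,age"
    }
}

sink {
    Console {
        # No plugin_input needed in a single-pipeline job.
    }
}
```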