@ThreadSafe
public class RecordsStreamProducer extends RecordsProducer

A RecordsProducer which creates records from a Postgres streaming replication connection and messages.

| Modifier and Type | Field and Description |
|---|---|
| private static String | CONTEXT_NAME |
| private ExecutorService | executorService |
| private ReplicationConnection | replicationConnection |
| private AtomicReference<ReplicationStream> | replicationStream |
Fields inherited from class RecordsProducer: logger, sourceInfo, taskContext

| Constructor and Description |
|---|
| RecordsStreamProducer(PostgresTaskContext taskContext, SourceInfo sourceInfo)<br>Creates a new producer instance for the given task context |
| Modifier and Type | Method and Description |
|---|---|
| private Object[] | columnValues(List<PgProto.DatumMessage> messageList, TableId tableId, boolean refreshSchemaIfChanged) |
| protected void | commit()<br>Notification that offsets have been committed to Kafka. |
| protected Object | extractValueFromMessage(PgProto.DatumMessage datumMessage)<br>Converts the Protobuf value for a plugin message to a Java value based on the type of the column from the message. |
| protected void | generateCreateRecord(TableId tableId, Object[] rowData, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) |
| protected void | generateDeleteRecord(TableId tableId, Object[] oldRowData, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) |
| protected void | generateUpdateRecord(TableId tableId, Object[] oldRowData, Object[] newRowData, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer) |
| private void | process(PgProto.RowMessage message, Long lsn, Consumer<org.apache.kafka.connect.source.SourceRecord> consumer) |
| private boolean | schemaChanged(List<PgProto.DatumMessage> messageList, Table table) |
| protected void | start(Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer)<br>Starts up this producer. |
| protected void | stop()<br>Requests that this producer be stopped. |
| private void | streamChanges(Consumer<org.apache.kafka.connect.source.SourceRecord> consumer) |
| private TableSchema | tableSchemaFor(TableId tableId) |
Methods inherited from class RecordsProducer: clock, createEnvelope, schema, topicSelector

private static final String CONTEXT_NAME

private final ExecutorService executorService

private final ReplicationConnection replicationConnection

private final AtomicReference<ReplicationStream> replicationStream
public RecordsStreamProducer(PostgresTaskContext taskContext, SourceInfo sourceInfo)

Creates a new producer instance for the given task context.

Parameters:
taskContext - a PostgresTaskContext, never null
sourceInfo - a SourceInfo instance to track stored offsets

protected void start(Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer)

Description copied from class: RecordsProducer
Starts up this producer. This is normally done by a PostgresConnectorTask instance. Subclasses should start enqueuing records via a separate thread at the end of this method.

Specified by: start in class RecordsProducer
Parameters:
recordConsumer - a consumer of SourceRecord instances, may not be null

private void streamChanges(Consumer<org.apache.kafka.connect.source.SourceRecord> consumer)
protected void commit()

Description copied from class: RecordsProducer
Notification that offsets have been committed to Kafka.

Specified by: commit in class RecordsProducer

protected void stop()

Description copied from class: RecordsProducer
Requests that this producer be stopped. This is normally done by a PostgresConnectorTask instance.

Specified by: stop in class RecordsProducer

private void process(PgProto.RowMessage message, Long lsn, Consumer<org.apache.kafka.connect.source.SourceRecord> consumer) throws SQLException

Throws: SQLException

protected void generateCreateRecord(TableId tableId, Object[] rowData, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer)

protected void generateUpdateRecord(TableId tableId, Object[] oldRowData, Object[] newRowData, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer)

protected void generateDeleteRecord(TableId tableId, Object[] oldRowData, Consumer<org.apache.kafka.connect.source.SourceRecord> recordConsumer)
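The three generate*Record methods differ mainly in which sides of the change envelope they populate: a create carries only new row data, a delete only old row data, and an update both. A minimal sketch using plain Maps — the field names "op", "before", and "after" echo the usual Debezium envelope shape, but the types here are illustrative stand-ins, not the Connect SourceRecord API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

class EnvelopeSketch {
    // Build a change event with an operation code plus before/after row images.
    static Map<String, Object> envelope(String op, Object[] before, Object[] after) {
        Map<String, Object> event = new HashMap<>();
        event.put("op", op);
        event.put("before", before);
        event.put("after", after);
        return event;
    }

    // Create: no prior row image, only the new row data.
    static void generateCreateRecord(Object[] rowData, Consumer<Map<String, Object>> consumer) {
        consumer.accept(envelope("c", null, rowData));
    }

    // Update: both the old and the new row images.
    static void generateUpdateRecord(Object[] oldRow, Object[] newRow, Consumer<Map<String, Object>> consumer) {
        consumer.accept(envelope("u", oldRow, newRow));
    }

    // Delete: only the old row image, no new state.
    static void generateDeleteRecord(Object[] oldRow, Consumer<Map<String, Object>> consumer) {
        consumer.accept(envelope("d", oldRow, null));
    }
}
```

Pushing each event through a Consumer, as the real methods do, keeps record construction decoupled from whatever queue or task loop ultimately forwards the records to Kafka.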
private Object[] columnValues(List<PgProto.DatumMessage> messageList, TableId tableId, boolean refreshSchemaIfChanged) throws SQLException

Throws: SQLException

private boolean schemaChanged(List<PgProto.DatumMessage> messageList, Table table)

private TableSchema tableSchemaFor(TableId tableId) throws SQLException

Throws: SQLException

protected Object extractValueFromMessage(PgProto.DatumMessage datumMessage)
Converts the Protobuf value for a plugin message to a Java value based on the type of the column from the message. This value will be converted later on, if necessary, by the PostgresValueConverter.converter(Column, Field) instance to match whatever the Connect schema type expects.
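In other words, the method must dispatch on the column's type to decide how to decode the datum's payload, which is exactly why it depends on the plugin that wrote the message. A sketch with a hypothetical DatumSketch stand-in for PgProto.DatumMessage (the OIDs 16, 23, and 25 are PostgreSQL's bool, int4, and text type OIDs; the payload handling is illustrative):

```java
// Hypothetical stand-in for PgProto.DatumMessage: a Protobuf datum carries
// the column's type OID plus a payload whose shape depends on that type.
class DatumSketch {
    final int columnType;   // PostgreSQL type OID
    final Object rawValue;  // the payload as the plugin wrote it

    DatumSketch(int columnType, Object rawValue) {
        this.columnType = columnType;
        this.rawValue = rawValue;
    }
}

class DatumConverterSketch {
    // A few real PostgreSQL type OIDs, for illustration.
    static final int BOOL = 16, INT4 = 23, TEXT = 25;

    // Dispatch on the column type: the decoder must agree with the plugin
    // about which payload each type uses, hence the tight coupling.
    static Object extractValue(DatumSketch datum) {
        switch (datum.columnType) {
            case BOOL:
                return Boolean.valueOf(datum.rawValue.toString());
            case INT4:
                return Integer.valueOf(datum.rawValue.toString());
            case TEXT:
                return datum.rawValue.toString();
            default:
                return null; // unknown types left for a later converter stage
        }
    }
}
```

The returned Java value is deliberately loose: a later converter stage (PostgresValueConverter in the real code) reshapes it to the Connect schema type.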
Note that the logic here is tightly coupled to (i.e. dependent on) the Postgres plugin logic which writes the actual Protobuf messages.

Parameters:
datumMessage - a PgProto.DatumMessage instance; never null

Copyright © 2017 JBoss by Red Hat. All rights reserved.