
Using Kafka Connect Connectors with Conduit


The Conduit Kafka Connect Wrapper is a special connector that allows you to use Kafka Connect connectors with Conduit. Conduit doesn't come bundled with Kafka Connect connectors, but the wrapper lets you bring any Kafka Connect connector into a Conduit pipeline.

This connector gives you the ability to:

  • Easily migrate from Kafka Connect to Conduit.
  • Remove Kafka as a dependency for moving data between systems.
  • Leverage a datastore even if Conduit doesn't have a native connector for it.

Since the Conduit Kafka Connect Wrapper is written in Java while most of Conduit's connectors are written in Go, it also serves as a good example of the flexibility of the Conduit Plugin SDK.

Let's begin.

How it works

To use the Kafka Connect wrapper connector, you'll need to:

  1. Clone the conduit-kafka-connect-wrapper repository.
  2. Build the Connector JAR.
  3. Download Kafka Connect JARs and any dependencies you would like to add.
  4. Create a Conduit pipeline.
  5. Add the connector to the pipeline.

Setup

To begin, clone the connector:

git clone git@github.com:ConduitIO/conduit-kafka-connect-wrapper.git

Then, we need to build the connector. The Kafka Connect wrapper connector is written in Java, so it needs to be compiled first.

cd conduit-kafka-connect-wrapper

./scripts/dist.sh

This script will:

  1. Create a directory called dist.
  2. Compile the connector and place the JAR in dist.
  3. Create a dist/libs directory.
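
After the script finishes, you can sanity-check the dist directory. The listing below is only illustrative: the conduit-kafka-connect-wrapper entry is the plugin path we'll reference later when creating the connector, and the exact JAR name depends on the project version.

ls dist
# illustrative output, actual file names may differ:
# conduit-kafka-connect-wrapper
# conduit-kafka-connect-wrapper-<version>.jar
# libs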

Now that we have everything set up, we can add the Kafka Connect connectors.

Adding a Kafka Connect Connector

The dist/libs directory is where you put the Kafka Connect connector JARs and their dependencies (if any). The wrapper plugin automatically loads connectors and all other dependencies from that directory.

For this example, we will use a Kafka Connect JDBC connector (the class used below comes from Aiven's JDBC connector for Apache Kafka) to read from PostgreSQL. This connector allows you to connect to any JDBC-compatible database. To install it, add the connector JAR and its dependencies to dist/libs.
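
For example, assuming you have already downloaded the Aiven JDBC connector JAR (and, if your setup needs it, the PostgreSQL JDBC driver), you can copy them into dist/libs. The file names below are placeholders:

# placeholder file names; use the JARs you actually downloaded
cp /path/to/jdbc-connector-for-apache-kafka-<version>.jar dist/libs/
cp /path/to/postgresql-<version>.jar dist/libs/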

Adding the Connector to a Pipeline

Now that the Kafka Connect connector JARs are in dist/libs, we can add the connector to a pipeline.

  1. Start Conduit.
  2. Create a pipeline:
curl -X 'POST' \
  'http://localhost:8080/v1/pipelines' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "config": {
      "name": "my-pipeline",
      "description": "This pipeline is for testing"
    }
  }'
  3. Add the connector to your pipeline:
curl -X 'POST' \
  'http://localhost:8080/v1/connectors' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "TYPE_SOURCE",
    "plugin": "/path/to/conduit-kafka-connect-wrapper/dist/conduit-kafka-connect-wrapper",
    "pipeline_id": "f24c4a70-8664-4f80-9c27-204825442943",
    "config": {
      "name": "my-pg-source",
      "settings": {
        "wrapper.connector.class": "io.aiven.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://localhost/conduit-test-db",
        "connection.user": "username",
        "connection.password": "password",
        "incrementing.column.name": "id",
        "mode": "incrementing",
        "tables": "customers",
        "topic.prefix": "my_topic_prefix"
      }
    }
  }'
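
The pipeline_id above comes from the response to the pipeline-creation request. As a convenience, and assuming the response returns the new pipeline's id at the top level (adjust the jq filter to the actual response shape if it doesn't), you could capture it like this:

# create the pipeline and store its id for the next request
PIPELINE_ID=$(curl -s -X 'POST' \
  'http://localhost:8080/v1/pipelines' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"config": {"name": "my-pipeline", "description": "This pipeline is for testing"}}' \
  | jq -r '.id')
echo "$PIPELINE_ID"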

When creating the Conduit connector, the plugin value is the path to where the conduit-kafka-connect-wrapper plugin lives on your system (the dist directory built earlier).

Note that the wrapper.connector.class should be a class which is present on the classpath, i.e. in one of the JARs in the libs directory.
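
If you want to confirm the class is actually there, one option (assuming a JDK's jar tool is on your PATH, and using a placeholder JAR name) is to list the JAR's contents:

# placeholder JAR name; point this at the connector JAR in dist/libs
jar tf dist/libs/jdbc-connector-for-apache-kafka-<version>.jar | grep JdbcSourceConnector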

For more information, check the Wrapper Configuration section.

What's next?

Using the Kafka Connect Wrapper, you can now migrate to Conduit with less friction.

Here are some resources to help you learn more:

We can't wait to see what you build 🚀. If you have any questions or feedback: