Kafka Connect can't find connector
I'm trying to use the Kafka Connect Elasticsearch connector, without success. It crashes with the following error:
[2018-11-21 14:48:29,096] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:108)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.confluent.connect.elasticsearch.ElasticsearchSinkConnector , available connectors are: PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSinkConnector, name='org.apache.kafka.connect.file.FileStreamSinkConnector', version='1.0.1', encodedVersion=1.0.1, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSourceConnector, name='org.apache.kafka.connect.file.FileStreamSourceConnector', version='1.0.1', encodedVersion=1.0.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockConnector, name='org.apache.kafka.connect.tools.MockConnector', version='1.0.1', encodedVersion=1.0.1, type=connector, typeName='connector', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSinkConnector, name='org.apache.kafka.connect.tools.MockSinkConnector', version='1.0.1', encodedVersion=1.0.1, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSourceConnector, name='org.apache.kafka.connect.tools.MockSourceConnector', version='1.0.1', encodedVersion=1.0.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.SchemaSourceConnector, name='org.apache.kafka.connect.tools.SchemaSourceConnector', version='1.0.1', encodedVersion=1.0.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSinkConnector, name='org.apache.kafka.connect.tools.VerifiableSinkConnector', version='1.0.1', encodedVersion=1.0.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSourceConnector, 
name='org.apache.kafka.connect.tools.VerifiableSourceConnector', version='1.0.1', encodedVersion=1.0.1, type=source, typeName='source', location='classpath'}
I've got a build for the plugin unzipped in a kafka subfolder, and have the following line in connect-standalone.properties:
plugin.path=/opt/kafka/plugins/kafka-connect-elasticsearch-5.0.1/src/main/java/io/confluent/connect/elasticsearch
I can see the various connectors inside that folder, but Kafka Connect does not load them. It does, however, load the standard connectors, like this:
[2018-11-21 14:56:28,258] INFO Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:136)
[2018-11-21 14:56:28,259] INFO Added aliases 'FileStreamSinkConnector' and 'FileStreamSink' to plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:335)
[2018-11-21 14:56:28,260] INFO Added aliases 'FileStreamSourceConnector' and 'FileStreamSource' to plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:335)
How can I properly register the connectors?
apache-kafka apache-kafka-connect confluent
edited Nov 21 '18 at 14:37 by cricket_007; asked Nov 21 '18 at 13:01 by Boris K
3 Answers
The compiled JAR needs to be available to Kafka Connect. You have a few options here:
Use Confluent Platform, which includes the Elasticsearch connector (and others) pre-built: https://www.confluent.io/download/. There are zip, rpm/deb, Docker images etc. available.
Build the JAR yourself. This typically involves:
cd kafka-connect-elasticsearch-5.0.1
mvn clean package
Then take the resulting kafka-connect-elasticsearch-5.0.1.jar JAR and put it in a path configured in Kafka Connect with plugin.path.
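For reference, plugin.path should point at a directory that contains the connector's own directory of JARs, not at a source tree. A sketch of the relevant line in connect-standalone.properties (the path here is illustrative, not from the original post):

```properties
# Kafka Connect scans each subdirectory of this path for plugin JARs
plugin.path=/opt/kafka/plugins
```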
You can find more info on using Kafka Connect here:
- https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-1/
- https://www.confluent.io/blog/blogthe-simplest-useful-kafka-connect-data-pipeline-in-the-world-or-thereabouts-part-2/
- https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-3/
Disclaimer: I work for Confluent, and wrote the above blog posts.
Building it myself results in this: [ERROR] [ERROR] Some problems were encountered while processing the POMs: [FATAL] Non-resolvable parent POM for io.confluent:kafka-connect-elasticsearch:[unknown-version]: Could not transfer artifact io.confluent:common:pom:5.0.1 from/to confluent (${confluent.maven.repo}):
– Boris K
Nov 21 '18 at 13:26
¯\_(ツ)_/¯ that's a good reason to use the pre-built version in Confluent Platform ;) It's open source, it's free to use. If you really don't want to, you can download it and simply extract the JAR and deploy it in your existing installation.
– Robin Moffatt
Nov 21 '18 at 13:29
OK, I've saved the pre-built version in /var/confluentinc-kafka-connect-elasticsearch-5.0.0/. In my config, I have this line: plugin.path=/var/silverbolt/confluentinc-kafka-connect-elasticsearch-5.0.0/. I'm still getting the same error about no matching connector class.
– Boris K
Nov 21 '18 at 15:26
I ran the JDBC connector manually yesterday, on Kafka in Docker without the Confluent Platform etc., just to learn how these things work underneath. I did not have to build a JAR on my side or anything like that. Hopefully it will be relevant for you. What I did (I will skip the Docker parts, such as how to mount the dir with the connector):
- download the connector from https://www.confluent.io/connector/kafka-connect-jdbc/ and unpack the zip
- put the contents of the zip into the directory configured in the properties file (shown below), i.e. plugin.path=/plugins
so the tree looks something like this:
/plugins/
└── jdbcconnector
└──assets
└──doc
└──etc
└──lib
Note the lib dir, which is where the dependencies are; one of them is kafka-connect-jdbc-5.0.0.jar
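To make that layout concrete, here is a small sketch (using throwaway paths under /tmp, not the actual setup) that recreates the structure with a placeholder JAR and lists what Kafka Connect's plugin scanner would see:

```shell
# Recreate the plugin.path layout described above with a placeholder file.
# /tmp/plugins stands in for the real plugin.path=/plugins directory.
mkdir -p /tmp/plugins/jdbcconnector/lib
touch /tmp/plugins/jdbcconnector/lib/kafka-connect-jdbc-5.0.0.jar

# Kafka Connect scans each immediate subdirectory of plugin.path for JARs:
find /tmp/plugins -name '*.jar'
```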
Now you can try to run the connector:
./connect-standalone.sh connect-standalone.properties jdbc-connector-config.properties
connect-standalone.properties holds the common properties needed for Kafka Connect; in my case:
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/plugins
rest.port=8086
rest.host.name=127.0.0.1
jdbc-connector-config.properties is more involved, as it's the configuration for this particular connector, so you need to dig into the connector's docs - for the JDBC source it is https://docs.confluent.io/current/connect/kafka-connect-jdbc/source-connector/source_config_options.html
For the JDBC connector, I've noticed drivers can even be placed in a subfolder of that, e.g. lib/drivers
– cricket_007
Nov 22 '18 at 16:31
The plugin path must load JAR files containing compiled code, not the raw Java classes of the source code (src/main/java). It also needs to be the parent directory of the directories containing those plugins.
plugin.path=/opt/kafka-connect/plugins/
Where
$ ls -lR /opt/kafka-connect/plugins/
kafka-connect-elasticsearch-x.y.z/
file1.jar
file2.jar
etc
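The parent-directory rule above can be sketched with placeholder files (the names, like file1.jar, are illustrative, as in the listing):

```shell
# plugin.path points at the PARENT directory; each connector gets its own
# subdirectory, and Kafka Connect loads that subdirectory's JARs.
mkdir -p /tmp/kafka-connect/plugins/kafka-connect-elasticsearch-demo
touch /tmp/kafka-connect/plugins/kafka-connect-elasticsearch-demo/file1.jar
touch /tmp/kafka-connect/plugins/kafka-connect-elasticsearch-demo/file2.jar
ls -R /tmp/kafka-connect/plugins/
```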
Ref - Manually installing Community Connectors
The Kafka Connect startup scripts in the Confluent Platform automatically (used to?) read all folders matching share/java/kafka-connect-*, too, so that's one way to go. At least, it will continue doing so if you also include the path to the share/java folder of the Confluent package installation in the plugin path.
If you are not very familiar with Maven, or even if you are, you actually cannot just clone the Elasticsearch connector repo and build the master branch; it has prerequisites of first building Kafka, then the common Confluent repo. Otherwise, you must check out a Git tag like 5.0.1-post that matches a Confluent release.
An even simpler option would be to grab the package using the Confluent Hub CLI (something like confluent-hub install confluentinc/kafka-connect-elasticsearch:5.0.1).
And if none of that works, just downloading the Confluent Platform and using its Kafka Connect scripts would be easiest. This does not mean you need to use the Kafka or ZooKeeper configurations from it.
I built the JAR successfully, moved it into a folder under /plugins/, added the path to the config, and am still getting the same "failed to find any class" error.
– Boris K
Nov 21 '18 at 14:50
I believe the Elastic connector actually makes a tar.gz file that you need to extract. It doesn't create just one JAR with all the needed classes.
– cricket_007
Nov 21 '18 at 14:53
I'm looking at the target folder, and it created a kafka-connect-elasticsearch-5.0.1.jar file. No tar.gz that I can see.
– Boris K
Nov 21 '18 at 14:57
answered Nov 21 '18 at 13:23 by Robin Moffatt
Building it myself results in this:[ERROR] [ERROR] Some problems were encountered while processing the POMs: [FATAL] Non-resolvable parent POM for io.confluent:kafka-connect-elasticsearch:[unknown-version]: Could not transfer artifact io.confluent:common:pom:5.0.1 from/to confluent (${confluent.maven.repo}):
– Boris K
Nov 21 '18 at 13:26
1
¯_(ツ)_/¯ that's a good reason to use the pre-built version in Confluent Platform ;) It's open source, it's free to use. If you really don't want to, you can d/l it and simply extract the JAR and deploy it in your existing installation.
– Robin Moffatt
Nov 21 '18 at 13:29
OK, I've saved the pre-built version in/var/confluentinc-kafka-connect-elasticsearch-5.0.0/
. In my config, I have this line:plugin.path=/var/silverbolt/confluentinc-kafka-connect-elasticsearch-5.0.0/
. I'm still getting the same error about no matching connector class.
– Boris K
Nov 21 '18 at 15:26
add a comment |
Building it myself results in this:[ERROR] [ERROR] Some problems were encountered while processing the POMs: [FATAL] Non-resolvable parent POM for io.confluent:kafka-connect-elasticsearch:[unknown-version]: Could not transfer artifact io.confluent:common:pom:5.0.1 from/to confluent (${confluent.maven.repo}):
– Boris K
Nov 21 '18 at 13:26
1
¯_(ツ)_/¯ that's a good reason to use the pre-built version in Confluent Platform ;) It's open source, it's free to use. If you really don't want to, you can d/l it and simply extract the JAR and deploy it in your existing installation.
– Robin Moffatt
Nov 21 '18 at 13:29
OK, I've saved the pre-built version in/var/confluentinc-kafka-connect-elasticsearch-5.0.0/
. In my config, I have this line:plugin.path=/var/silverbolt/confluentinc-kafka-connect-elasticsearch-5.0.0/
. I'm still getting the same error about no matching connector class.
– Boris K
Nov 21 '18 at 15:26
Building it myself results in this:
[ERROR] [ERROR] Some problems were encountered while processing the POMs: [FATAL] Non-resolvable parent POM for io.confluent:kafka-connect-elasticsearch:[unknown-version]: Could not transfer artifact io.confluent:common:pom:5.0.1 from/to confluent (${confluent.maven.repo}):
– Boris K
Nov 21 '18 at 13:26
Building it myself results in this:
[ERROR] [ERROR] Some problems were encountered while processing the POMs: [FATAL] Non-resolvable parent POM for io.confluent:kafka-connect-elasticsearch:[unknown-version]: Could not transfer artifact io.confluent:common:pom:5.0.1 from/to confluent (${confluent.maven.repo}):
– Boris K
Nov 21 '18 at 13:26
1
1
¯_(ツ)_/¯ that's a good reason to use the pre-built version in Confluent Platform ;) It's open source, it's free to use. If you really don't want to, you can d/l it and simply extract the JAR and deploy it in your existing installation.
– Robin Moffatt
Nov 21 '18 at 13:29
¯_(ツ)_/¯ that's a good reason to use the pre-built version in Confluent Platform ;) It's open source, it's free to use. If you really don't want to, you can d/l it and simply extract the JAR and deploy it in your existing installation.
– Robin Moffatt
Nov 21 '18 at 13:29
OK, I've saved the pre-built version in
/var/confluentinc-kafka-connect-elasticsearch-5.0.0/
. In my config, I have this line: plugin.path=/var/silverbolt/confluentinc-kafka-connect-elasticsearch-5.0.0/
. I'm still getting the same error about no matching connector class.– Boris K
Nov 21 '18 at 15:26
OK, I've saved the pre-built version in
/var/confluentinc-kafka-connect-elasticsearch-5.0.0/
. In my config, I have this line: plugin.path=/var/silverbolt/confluentinc-kafka-connect-elasticsearch-5.0.0/
. I'm still getting the same error about no matching connector class.– Boris K
Nov 21 '18 at 15:26
add a comment |
I ran jdbc connector yesterday manually on kafka in docker without confluent platform etc just to learn how those things works underneath. I did not have to build jar on my side or anyhing like this. Hopefully it will be relevant for you - what I did is ( I will skip docker parts howto mount dir with connector etc ):
- download connector from https://www.confluent.io/connector/kafka-connect-jdbc/, unpack zip
put contents of zip to directory in path configured in properties file ( shown below in 3rd point ) -
plugin.path=/plugins
so tree looks something like this:
/plugins/
└── jdbcconnector
└──assets
└──doc
└──etc
└──lib
Note the lib dir where are the dependencies are, one of them is kafka-connect-jdbc-5.0.0.jar
Now you can try to run connector
./connect-standalone.sh connect-standalone.properties jdbc-connector-config.properties
connect-standalone.properties are common properties needed for kafka-connect, in my case:
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/plugins
rest.port=8086
rest.host.name=127.0.0.1
jdbc-connector-config.properties is more involved, as it's the configuration for this particular connector; you need to dig into the connector docs. For the JDBC source, that is https://docs.confluent.io/current/connect/kafka-connect-jdbc/source-connector/source_config_options.html
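For reference, a minimal sketch of what such a connector config might look like (the name, connection details, column, and topic prefix here are placeholders, not values from the question; see the linked docs for the full option list):

```properties
name=demo-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
# connection details below are hypothetical placeholders
connection.url=jdbc:postgresql://localhost:5432/demo
connection.user=demo
connection.password=demo
mode=incrementing
incrementing.column.name=id
topic.prefix=jdbc-
```

The connector.class value is what Connect matches against the plugins it discovered on plugin.path, which is exactly the lookup that fails in the question's error message.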
For JDBC Connect, I've noticed drivers can even be placed in a subfolder of that, e.g. lib/drivers
– cricket_007
Nov 22 '18 at 16:31
answered Nov 22 '18 at 12:06 by freakman, edited Nov 22 '18 at 16:32 by cricket_007
The plugin path must load JAR files containing compiled code, not the raw Java classes of the source code (src/main/java).
It also needs to be the parent directory of other directories that contain those plugins.
plugin.path=/opt/kafka-connect/plugins/
Where
$ ls -lR /opt/kafka-connect/plugins/
kafka-connect-elasticsearch-x.y.z/
    file1.jar
    file2.jar
    etc
Ref - Manually installing Community Connectors
The Kafka Connect startup scripts in the Confluent Platform automatically read (or at least used to read) all folders matching share/java/kafka-connect-*, too, so that's one way to go. At least, it will continue doing so if you also include the path to the share/java folder of the Confluent package installation in the plugin path.
If you are not very familiar with Maven, or even if you are, you actually cannot just clone the Elasticsearch connector repo and build the master branch; it has prerequisites of first building Kafka, then the common Confluent repo. Otherwise, you must check out a Git tag like 5.0.1-post that matches a Confluent release.
An even simpler option would be to grab the package using the Confluent Hub CLI.
And if none of that works, just downloading the Confluent Platform and using its Kafka Connect scripts would be the easiest. This does not mean you need to use the Kafka or ZooKeeper configurations from it.
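Whichever route you take, once the worker starts you can confirm what it actually loaded through the Connect REST API (GET /connector-plugins on the worker's rest.port). A sketch of that check; since it assumes a live worker, the response here is inlined as a sample payload rather than fetched:

```shell
# In a live setup you would fetch the list from the worker, e.g.:
#   response=$(curl -s http://localhost:8083/connector-plugins)
# (use whatever port rest.port is set to in your worker config)
# Sample payload, shape only; the version shown is illustrative:
response='[{"class":"io.confluent.connect.elasticsearch.ElasticsearchSinkConnector","type":"sink","version":"5.0.1"}]'

if echo "$response" | grep -q 'io.confluent.connect.elasticsearch.ElasticsearchSinkConnector'; then
  echo "connector class is loaded"
else
  echo "connector class missing - revisit plugin.path"
fi
```

If the class doesn't appear in that list, the worker never found it, and the "Failed to find any class that implements Connector" error from the question is guaranteed.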
I built the JAR successfully, moved it into a folder under /plugins/, added the path to the config, and am still getting the same "failed to find any class" error.
– Boris K
Nov 21 '18 at 14:50
I believe the Elastic connector actually makes a tar.gz file that you need to extract. It doesn't create just one JAR with all the needed classes.
– cricket_007
Nov 21 '18 at 14:53
I'm looking at the target folder, and it created a kafka-connect-elasticsearch-5.0.1.jar file. No tar.gz that I can see.
– Boris K
Nov 21 '18 at 14:57
answered Nov 21 '18 at 14:32 by cricket_007, edited Nov 22 '18 at 16:38