How to import documents into an Elasticsearch index?
I have a bunch of JSON docs in my_docs.json. I want to dump all of these into my Elasticsearch index, http://127.0.0.1:9200/{index}.
I'm looking at this library https://github.com/taskrabbit/elasticsearch-dump but I can't figure out the right commands.
In order to dump, I tried:

elasticdump --input=my_docs.json --output=http://127.0.0.1:9200/{index} --type=data

and then, to check that the index was updated:

elasticdump --output=new_docs.json --input=http://127.0.0.1:9200/{index} --type=data

But new_docs.json is empty; I was expecting it to contain all the JSON in my_docs.json.
How do I fix this?
node.js json elasticsearch
asked Nov 25 '18 at 15:14
nz_21
Do you have access to the server to execute a curl request?
– Nishant Saini
Nov 25 '18 at 15:22
@NishantSaini yup.
– nz_21
Nov 25 '18 at 15:29
1 Answer
You can use curl and the bulk index API of Elasticsearch to dump the JSON data into the index:
curl -H "Content-Type: application/json" -XPOST "localhost:9200/{index}/{type}/_bulk?pretty&refresh" --data-binary "@my_docs.json"
Note: the JSON file should have newline-delimited content in the following format for the above to work:
{"index":{"_index":"my_index","_type":"_doc","_id":"1"}}
{"field1":"field 1 data 1","field2":11}
{"index":{"_index":"my_index","_type":"_doc","_id":"2"}}
{"field1":"field 1 data 2","field2":21}
Each document above is represented by two lines. The first line indicates where to index the document and what its id is; the next line is the actual data of the doc.
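If my_docs.json is a plain JSON array of documents rather than already being in that bulk layout, a small script can rewrite it into the action/source line pairs shown above. This is only a minimal sketch, assuming the file holds a single array of objects; the output file name my_docs_bulk.json and the index name my_index are placeholders, not anything provided by Elasticsearch or elasticsearch-dump:

// convert-to-bulk.js — minimal sketch; assumes my_docs.json is one JSON array of documents
const fs = require('fs');

const docs = JSON.parse(fs.readFileSync('my_docs.json', 'utf8'));

const lines = [];
docs.forEach((doc, i) => {
  // Action line: which index/type/id the document should be written to
  lines.push(JSON.stringify({ index: { _index: 'my_index', _type: '_doc', _id: String(i + 1) } }));
  // Source line: the document itself
  lines.push(JSON.stringify(doc));
});

// The _bulk API requires the payload to end with a newline
fs.writeFileSync('my_docs_bulk.json', lines.join('\n') + '\n');

Run it with node convert-to-bulk.js, then point the curl command above at @my_docs_bulk.json instead of @my_docs.json.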
Thanks! So, I know how to insert documents into an index with the Java API. When I retrieve those docs with elasticdump --output=new_docs.json --input=http://127.0.0.1:9200/{index} --type=data, the output in new_docs.json doesn't conform to the format you specified: each document takes up one line on its own (as opposed to two). Is there any way to retrieve the docs in the right format?
– nz_21
Nov 25 '18 at 15:45
To my knowledge there might be plugins or third-party tools that can do the job, but I don't think Elastic provides anything like that. You can, however, use the scroll API provided by Elasticsearch to fetch docs at a faster rate when the dataset is large, and use the result to write JSON in the required format.
– Nishant Saini
Nov 26 '18 at 0:48
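A rough sketch of that scroll-based approach, calling the scroll API directly over HTTP (requires Node 18+ for the built-in fetch; the index name my_index, the 1m scroll window, and the output file scroll_dump_bulk.json are illustrative choices, not something Elasticsearch or elasticsearch-dump provides):

// scroll-to-bulk.js — illustrative sketch of dumping an index via the scroll API
const fs = require('fs');

const ES = 'http://127.0.0.1:9200';
const INDEX = 'my_index'; // replace with your index name

async function dumpAsBulk() {
  const out = fs.createWriteStream('scroll_dump_bulk.json');

  // Open a scroll context and fetch the first page of results
  let res = await fetch(`${ES}/${INDEX}/_search?scroll=1m`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ size: 1000, query: { match_all: {} } }),
  });
  let page = await res.json();

  while (page.hits.hits.length > 0) {
    for (const hit of page.hits.hits) {
      // Write the action line followed by the source line, as required by _bulk
      out.write(JSON.stringify({ index: { _index: hit._index, _type: hit._type, _id: hit._id } }) + '\n');
      out.write(JSON.stringify(hit._source) + '\n');
    }
    // Fetch the next page using the scroll id returned by the previous request
    res = await fetch(`${ES}/_search/scroll`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ scroll: '1m', scroll_id: page._scroll_id }),
    });
    page = await res.json();
  }
  out.end();
}

dumpAsBulk().catch(console.error);

The resulting file is already in the two-lines-per-document layout, so it can be fed straight back to the _bulk endpoint shown in the answer.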