ELASTICDUMP(1)              General Commands Manual             ELASTICDUMP(1)



NAME

       elasticdump - Import and export tools for elasticsearch


SYNOPSIS

       elasticdump --input SOURCE --output DESTINATION [OPTIONS]


DESCRIPTION

       --input
              Source location (required)

       --output
              Destination location (required)

       --limit
              How many objects to move in bulk per operation (default: 100)

       --debug
              Display the elasticsearch commands being used (default: false)

       --type What are we exporting?  (default: data, options: [data,
              mapping])

       --delete
              Delete documents one-by-one from the input as they are moved.
              Will not delete the source index.  (default: false)

       --searchBody
              Perform a partial extract based on search results (when ES is
              the input, default: '{"query": { "match_all": {} } }')

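       The search body is ordinary Elasticsearch query DSL passed as a JSON
       string, and shell-quoting mistakes are easy to make.  A quick sanity
       check before handing a body to elasticdump (a sketch, not part of the
       tool, assuming python3 is available):

       ```shell
       # The documented default --searchBody, plus a custom body like the
       # one in the EXAMPLES section below.
       default_body='{"query": { "match_all": {} } }'
       custom_body='{"query":{"term":{"username": "admin"}}}'

       # python3 -m json.tool exits non-zero on malformed JSON, so this
       # catches quoting mistakes before elasticdump sends the body anywhere.
       for body in "$default_body" "$custom_body"; do
           printf '%s' "$body" | python3 -m json.tool > /dev/null \
               || echo "invalid JSON: $body"
       done
       ```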
       --all  Load/store documents from ALL indexes (default: false)

       --bulk Leverage elasticsearch Bulk API when writing documents
              (default: false)

       --ignore-errors
              Will continue the read/write loop on write error (default:
              false)

       --scrollTime
              Time the nodes will hold the requested search context open.
              (default: 10m)

       --maxSockets
              How many simultaneous HTTP requests can the process make?
              (default: 5 [node <= v0.10.x] / Infinity [node >= v0.11.x])

       --bulk-use-output-index-name
              Force use of the destination index name (the actual output
              URL) as the destination while bulk writing to ES.  Allows
              leveraging the Bulk API when copying data inside the same
              elasticsearch instance.  (default: false)

       --timeout
              Integer containing the number of milliseconds to wait for a
              request to respond before aborting it.  Passed directly to the
              request library.  If used in bulk writing, a timeout causes
              the entire batch not to be written.  Mostly useful when you
              care more about import speed than about losing some data.

       --skip Integer containing the number of rows you wish to skip ahead
              from the input transport.  When importing a large index,
              things can go wrong, be it connectivity, crashes, someone
              forgetting to `screen`, etc.  This allows you to start the
              dump again from the last known line written (as logged by the
              `offset` in the output).  Please be advised that since no
              sorting is specified when the dump is initially created, there
              is no real way to guarantee that the skipped rows have already
              been written/parsed.  This is more of an option for when you
              want to get as much data as possible into the index without
              concern for losing some rows in the process, similar to the
              `timeout` option.

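       Picking a value for --skip means reading back the last `offset`
       reported by the interrupted run.  A small shell sketch of that
       bookkeeping (the log line shown here is a hypothetical format, not
       elasticdump's exact output):

       ```shell
       # Hypothetical last line captured from an interrupted run's output;
       # the real wording may differ, but it reports an `offset`.
       last_log='sent objects to destination, offset: 12300'

       # Pull the numeric offset out of the log line.
       offset=$(printf '%s\n' "$last_log" | sed 's/.*offset: *//')

       # Build (but do not run) the resume command using --skip.
       echo "elasticdump --input=http://production.es.com:9200/my_index \
         --output=/data/my_index.json --skip=$offset"
       ```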
       --inputTransport
              Provide a custom js file to use as the input transport

       --outputTransport
              Provide a custom js file to use as the output transport

       --help This page

EXAMPLES

       Copy an index from production to staging with mappings:

       elasticdump \
              --input=http://production.es.com:9200/my_index \
              --output=http://staging.es.com:9200/my_index \
              --type=mapping

       elasticdump \
              --input=http://production.es.com:9200/my_index \
              --output=http://staging.es.com:9200/my_index \
              --type=data

       Backup index data to a file:

       elasticdump \
              --input=http://production.es.com:9200/my_index \
              --output=/data/my_index_mapping.json \
              --type=mapping

       elasticdump \
              --input=http://production.es.com:9200/my_index \
              --output=/data/my_index.json \
              --type=data

       Backup an index to a gzip file using stdout:

       elasticdump \
              --input=http://production.es.com:9200/my_index \
              --output=$ \
              | gzip > /data/my_index.json.gz

       Backup ALL indices, then use Bulk API to populate another ES cluster:

       elasticdump \
              --all=true \
              --input=http://production-a.es.com:9200/ \
              --output=/data/production.json

       elasticdump \
              --bulk=true \
              --input=/data/production.json \
              --output=http://production-b.es.com:9200/

       Backup the results of a query to a file:

       elasticdump \
              --input=http://production.es.com:9200/my_index \
              --output=query.json \
              --searchBody '{"query":{"term":{"username": "admin"}}}'


SEE ALSO

       https://github.com/taskrabbit/elasticsearch-dump



                                                                ELASTICDUMP(1)