Project author: miku

Project description: Bulk indexing command line tool for elasticsearch
Language: Go
Repository: git://github.com/miku/esbulk.git
Created: 2014-08-26T20:50:22Z
Project community: https://github.com/miku/esbulk

License: GNU General Public License v3.0


esbulk

Fast parallel command line bulk loading utility for elasticsearch. Data is read from a
newline delimited JSON file or stdin and indexed into elasticsearch in bulk
and in parallel. The shortest command would be:

    $ esbulk -index my-index-name < file.ldj

Caveat: If indexing pressure on the bulk API is too high (dozens or hundreds of
parallel workers, large batch sizes, depending on your setup), esbulk will halt
and report an error:

    $ esbulk -index my-index-name -w 100 file.ldj
    2017/01/02 16:25:25 error during bulk operation, try less workers (lower -w value) or
    increase thread_pool.bulk.queue_size in your nodes

Please note that, in such a case, some documents are indexed and some are not.
Your index will be in an inconsistent state, since there is no transactional
bracket around the indexing process.

However, using defaults (parallelism: number of cores) on a single node setup
will just work. For larger clusters, increase the number of workers until you
see full CPU utilization. After that, more workers won’t buy any more speed.
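As a rough sketch, a load that trips the queue limit at -w 100 can usually be
retried with fewer workers and smaller batches, while a larger, otherwise idle
cluster may absorb more parallelism (the flag values below are illustrative,
not tuned recommendations):

    # reduce pressure on the bulk queue: fewer workers, smaller batches
    $ esbulk -index my-index-name -w 4 -size 500 file.ldj

    # larger cluster: raise workers stepwise until CPU is saturated
    $ esbulk -index my-index-name -w 32 file.ldj

Alternatively, the bulk queue itself can be enlarged on the nodes via
thread_pool.bulk.queue_size (on elasticsearch 7 and later, the pool is named
thread_pool.write).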

Currently, esbulk is tested against elasticsearch versions 2, 5, 6, 7 and 8
using testcontainers. Originally written for Leipzig University Library,
project finc.

Project Status: Active – The project has reached a stable, usable state and is being actively developed.

Installation

    $ go install github.com/miku/esbulk/cmd/esbulk@latest

For deb or rpm packages, see: https://github.com/miku/esbulk/releases
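After installation, a quick way to verify that the binary is on your PATH is
to print the version:

    $ esbulk -v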

Usage

    $ esbulk -h
    Usage of esbulk:
      -0    set the number of replicas to 0 during indexing
      -c string
            create index mappings, settings, aliases, https://is.gd/3zszeu
      -cpuprofile string
            write cpu profile to file
      -id string
            name of field to use as id field, by default ids are autogenerated
      -index string
            index name
      -mapping string
            mapping string or filename to apply before indexing
      -memprofile string
            write heap profile to file
      -optype string
            optype (index - will replace existing data,
            create - will only create a new doc,
            update - create new or update existing data)
            (default "index")
      -p string
            pipeline to use to preprocess documents
      -purge
            purge any existing index before indexing
      -purge-pause duration
            pause after purge (default 1s)
      -r string
            Refresh interval after import (default "1s")
      -server value
            elasticsearch server, this works with https as well
      -size int
            bulk batch size (default 1000)
      -skipbroken
            skip broken json
      -type string
            elasticsearch doc type (deprecated since ES7)
      -u string
            http basic auth username:password, like curl -u
      -v    prints current program version
      -verbose
            output basic progress
      -w int
            number of workers to use (default 8)
      -z    unzip gz'd file on the fly
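To illustrate how these flags combine, a run that purges any existing index,
applies a custom mapping, and keeps replicas at 0 during indexing might look
like this (mapping.json and the index name are placeholders):

    $ esbulk -purge -0 -mapping mapping.json -index example -verbose file.ldj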

To index a JSON file that contains one document
per line, just run:

    $ esbulk -index example file.ldj

Where file.ldj is line delimited JSON, like:

  1. {"name": "esbulk", "version": "0.2.4"}
  2. {"name": "estab", "version": "0.1.3"}
  3. ...

By default esbulk will use as many parallel
workers as there are cores. To tweak the indexing
process, adjust the -size and -w parameters.
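For example, to trade more documents per batch against fewer concurrent
workers (the values are illustrative and depend on document size and cluster
capacity):

    $ esbulk -index example -size 5000 -w 4 file.ldj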

You can index from gzipped files as well, using
the -z flag:

    $ esbulk -z -index example file.ldj.gz

Starting with 0.3.7, the preferred way to set a
non-default server host and port is via -server, e.g.

    $ esbulk -server https://0.0.0.0:9201

This way, you can use https as well, which was not
possible before. Options -host and -port are
gone as of esbulk 0.5.0.
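A complete run against a TLS-enabled node might then look like this (the URL
is illustrative):

    $ esbulk -server https://localhost:9201 -index example file.ldj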

Reusing IDs

Since version 0.3.8: If you want to reuse IDs from your documents in elasticsearch, you
can specify the ID field via the -id flag:

    $ cat file.json
    {"x": "doc-1", "db": "mysql"}
    {"x": "doc-2", "db": "mongo"}

Here, we would like to reuse the ID from field x.

    $ esbulk -id x -index throwaway -verbose file.json
    ...
    $ curl -s http://localhost:9200/throwaway/_search | jq
    {
      "took": 2,
      "timed_out": false,
      "_shards": {
        "total": 5,
        "successful": 5,
        "failed": 0
      },
      "hits": {
        "total": 2,
        "max_score": 1,
        "hits": [
          {
            "_index": "throwaway",
            "_type": "default",
            "_id": "doc-2",
            "_score": 1,
            "_source": {
              "x": "doc-2",
              "db": "mongo"
            }
          },
          {
            "_index": "throwaway",
            "_type": "default",
            "_id": "doc-1",
            "_score": 1,
            "_source": {
              "x": "doc-1",
              "db": "mysql"
            }
          }
        ]
      }
    }
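Since the document IDs are now stable, re-running the same command simply
overwrites the existing documents. The -optype flag controls this behaviour:
with create, elasticsearch rejects IDs that already exist; with update,
existing documents are updated in place. A sketch:

    $ esbulk -id x -optype update -index throwaway file.json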

Nested ID fields

Version 0.4.3 adds support for nested ID fields:

    $ cat fixtures/pr-8-1.json
    {"a": {"b": 1}}
    {"a": {"b": 2}}
    {"a": {"b": 3}}
    $ esbulk -index throwaway -id a.b < fixtures/pr-8-1.json
    ...

Concatenated ID

Version 0.4.3 adds support for IDs that are the concatenation of multiple fields:

    $ cat fixtures/pr-8-2.json
    {"a": {"b": 1}, "c": "a"}
    {"a": {"b": 2}, "c": "b"}
    {"a": {"b": 3}, "c": "c"}
    $ esbulk -index throwaway -id a.b,c < fixtures/pr-8-2.json
    ...
    {
      "_index": "xxx",
      "_type": "default",
      "_id": "1a",
      "_score": 1,
      "_source": {
        "a": {
          "b": 1
        },
        "c": "a"
      }
    },

Using X-Pack

Since 0.4.2: support for secured elasticsearch nodes:

    $ esbulk -u elastic:changeme -index myindex file.ldj

A similar project has been started for solr, called solrbulk.


Measurements

    $ csvlook -I measurements.csv
    | es    | esbulk | docs      | avg_b | nodes | cores | total_heap_gb | t_s   | docs_per_s | repl |
    |-------|--------|-----------|-------|-------|-------|---------------|-------|------------|------|
    | 6.1.2 | 0.4.8  | 138000000 | 2000  | 1     | 32    | 64            | 6420  | 22100      | 1    |
    | 6.1.2 | 0.4.8  | 138000000 | 2000  | 1     | 8     | 30            | 27360 | 5100       | 1    |
    | 6.1.2 | 0.4.8  | 1000000   | 2000  | 1     | 4     | 1             | 300   | 3300       | 1    |
    | 6.1.2 | 0.4.8  | 10000000  | 26    | 1     | 4     | 8             | 122   | 81000      | 1    |
    | 6.1.2 | 0.4.8  | 10000000  | 26    | 1     | 32    | 64            | 32    | 307000     | 1    |
    | 6.2.3 | 0.4.10 | 142944530 | 2000  | 2     | 64    | 128           | 26253 | 5444       | 1    |
    | 6.2.3 | 0.4.10 | 142944530 | 2000  | 2     | 64    | 128           | 11113 | 12831      | 0    |
    | 6.2.3 | 0.4.13 | 15000000  | 6000  | 2     | 64    | 128           | 2460  | 6400       | 0    |
Why not add a row?