So, I did find a solution, thanks to the owner of the ELK image repo.
I followed the instructions on that page. Namely, I entered the container's bash by running

docker exec -it <container-name> bash

and then (inside the container terminal) I ran

/opt/logstash/bin/logstash --path.data /tmp/logstash/data -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
The issue was that, although the Logstash service had been started, it had no interactive terminal attached. The command above works around that.
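If you don't need an interactive prompt, the same one-off test can likely be driven non-interactively by piping a line into that command from inside the container (a sketch under the same image layout; the message text is just an example):

echo 'test message from stdin' | /opt/logstash/bin/logstash --path.data /tmp/logstash/data -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'

With the stdin input, Logstash should process the piped line and shut down once stdin is closed.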
The container terminal then showed the following logs:
Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-08-12T06:28:28,941][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/tmp/logstash/data/queue"}
[2018-08-12T06:28:28,948][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/tmp/logstash/data/dead_letter_queue"}
[2018-08-12T06:28:29,592][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-12T06:28:29,656][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"29cb946b-2bed-4390-b0cb-9aad6ef5a2a2", :path=>"/tmp/logstash/data/uuid"}
[2018-08-12T06:28:30,634][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-12T06:28:32,911][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-12T06:28:33,646][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-12T06:28:33,663][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-12T06:28:34,107][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-12T06:28:34,205][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-12T06:28:34,212][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-12T06:28:34,268][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2018-08-12T06:28:34,364][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-12T06:28:34,442][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-12T06:28:34,496][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5dcf75c7 run>"}
[2018-08-12T06:28:34,602][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
The stdin plugin is now waiting for input:
[2018-08-12T06:28:34,727][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-12T06:28:35,607][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9601}
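The last line reports the Logstash API endpoint listening on port 9601, so, from inside the container, you can confirm the instance is up with a quick request (a sketch; it assumes curl is available in the image):

curl -s 'http://localhost:9601/?pretty'

This should return a small JSON document with the node's id, host, and version.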
and the following appeared in my server's terminal:
elk_1 | ==> /var/log/elasticsearch/elasticsearch.log <==
elk_1 | [2018-08-12T06:28:34,777][INFO ][o.e.c.m.MetaDataIndexTemplateService] [jqTz2zS] adding template [logstash] for index patterns [logstash-*]
elk_1 | [2018-08-12T06:28:35,214][INFO ][o.e.c.m.MetaDataCreateIndexService] [jqTz2zS] [logstash-2018.08.12] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [_default_]
elk_1 | [2018-08-12T06:28:36,207][INFO ][o.e.c.m.MetaDataMappingService] [jqTz2zS] [logstash-2018.08.12/hiLssj14TMKd5lzBq6tvrw] create_mapping [doc]
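Those Elasticsearch log lines show a logstash-2018.08.12 index being created. You can double-check that it exists and is receiving documents with Elasticsearch's cat API, run from inside the container (a sketch; again it assumes curl is available):

curl -s 'http://localhost:9200/_cat/indices/logstash-*?v'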
By doing this, an index pattern was indeed created inside Kibana, and I started receiving messages in the Discover tab.
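If you want to see the same documents outside Kibana, a plain search against the new index returns them as well (a sketch; the index name is the one from the Elasticsearch log above):

curl -s 'http://localhost:9200/logstash-2018.08.12/_search?q=*&pretty'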