不见你
2024-04-20
Flume-ng + HBase Integration

Environment
Hadoop + HBase + ZooKeeper + Flume-ng

Configuration overview
The master machine acts as the Flume data source and sends its data to the Flume agent on node1; the agent on node1 then inserts the data into HBase.

<1> example.conf on the master machine
In $FLUME_HOME/conf/ on master, create the following file (the filename is arbitrary) with the configuration below. This is the sending side:
agent.sources = baksrc
agent.channels = memoryChannel
agent.sinks = remotesink

agent.sources.baksrc.type = exec
agent.sources.baksrc.command = tail -F /home/test/data/data.txt
agent.sources.baksrc.checkperiodic = 1000
agent.sources.baksrc.channels = memoryChannel

agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.keep-alive = 30
agent.channels.memoryChannel.capacity = 10000
agent.channels.memoryChannel.transactionCapacity = 10000

agent.sinks.remotesink.type = avro
agent.sinks.remotesink.hostname = node1
agent.sinks.remotesink.port = 8888
agent.sinks.remotesink.channel = memoryChannel
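Before wiring up node1, the sending side can be smoke-tested by appending lines to the file the exec source tails. A minimal generator sketch (the helper name and the temp-file demo path are illustrative, not from the original; on master the real path is /home/test/data/data.txt from the config above):

```python
import os
import tempfile
import time

def generate(path, n=3, interval=0.0):
    """Append n test lines to the file tailed by the exec source."""
    with open(path, "a") as f:
        for i in range(n):
            f.write(f"test-event-{i}\n")
            f.flush()  # flush so tail -F sees each line immediately
            time.sleep(interval)

# A temp file stands in for /home/test/data/data.txt here.
demo_path = os.path.join(tempfile.mkdtemp(), "data.txt")
generate(demo_path)
print(open(demo_path).read().splitlines())
# → ['test-event-0', 'test-event-1', 'test-event-2']
```

With data flowing into the file, start each agent with `flume-ng agent --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/example.conf --name agent -Dflume.root.logger=INFO,console`; start node1 first so master's Avro sink has something to connect to.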
<2> example.conf on the node1 machine
This is the receiving side:
agent.sources = avrosrc
agent.channels = memoryChannel
agent.sinks = fileSink
agent.sources.avrosrc.type = avro
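The excerpt breaks off after the Avro source's type, so the rest of the node1 configuration is missing. Below is one plausible completion, consistent with the master config above (the Avro sink sends to node1:8888) and with the article's goal of writing into HBase via Flume's built-in HBase sink. The sink keeps the excerpt's name fileSink, and the table name test_table and column family cf are placeholders, not from the original:

```properties
agent.sources.avrosrc.bind = 0.0.0.0
agent.sources.avrosrc.port = 8888
agent.sources.avrosrc.channels = memoryChannel

agent.channels.memoryChannel.type = memory

# HBase sink (sink name kept from the excerpt; table/column family are placeholders)
agent.sinks.fileSink.type = hbase
agent.sinks.fileSink.table = test_table
agent.sinks.fileSink.columnFamily = cf
agent.sinks.fileSink.serializer = org.apache.flume.sink.hbase.SimpleHbaseEventSerializer
agent.sinks.fileSink.channel = memoryChannel
```

The target table must exist before the agent starts, e.g. `create 'test_table', 'cf'` in the hbase shell.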

