Integrating Flume with Kafka: Fan-In and Fan-Out, Where Fan-Out Includes Replicating and Multiplexing Flows
I. Concepts
1. Flume: a distributed log collection system developed by Cloudera. It is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data, with a simple, flexible architecture based on streaming data flows. It is robust and fault tolerant, with tunable reliability mechanisms and many failover and recovery mechanisms, and it uses a simple extensible data model that allows for online analytic applications. Flume comes in OG and NG lines; the last OG release was 0.94.0, after which development continued as NG.
2. Kafka: runs as a cluster on one or more servers, which can span multiple data centers. Communication between clients and servers uses a simple, high-performance, language-agnostic TCP protocol; the protocol is versioned and maintains backward compatibility with older versions. Kafka ships a Java client, but clients are available in many languages.
3. Kafka is generally used for two broad classes of applications:
A. Building real-time streaming data pipelines that reliably get data between systems or applications
B. Building real-time streaming applications that transform or react to streams of data
In Kafka, each record consists of a key, a value, and a timestamp.
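For illustration (my addition, not part of the original text), the key/value structure can be seen directly with the console producer once the cluster built in section 2.2 below is running; the broker assigns each record's timestamp:
# Send keyed records; key and value are separated by ":"
kafka-console-producer.sh \
--broker-list 192.168.137.132:9082 \
--topic test \
--property parse.key=true \
--property key.separator=:
# then type, for example:  user1:hello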
II. Background
Driven by the need in big data work to collect log data in real time and pass it downstream, this article integrates Flume with Kafka for fan-in and fan-out, where fan-out covers functional tests of replicating and multiplexing flows. Integrating Kafka with Spark Streaming will be written up later as actual needs dictate.
Note: this document is limited to functional testing; add performance tuning according to your own situation.
III. Deployment and Installation
1. Test environment:
OS: CentOS 7
Flume: flume-ng-1.6.0-cdh5.7.0
Kafka: kafka_2.11-0.10.0.1
JDK: 1.8.0
Scala: 2.11.8
2. Test steps:
2.1. Flume deployment
2.1.1. Download and unpack the installation media:
cd /software
wget http://archive.cloudera.com/cdh5/cdh/5/flume-ng-1.6.0-cdh5.7.0.tar.gz
tar -zxvf flume-ng-1.6.0-cdh5.7.0.tar.gz -C /app/
2.1.2. Configure Flume
cd /app/apache-flume-1.6.0-cdh5.7.0-bin/conf
cp flume-env.sh.template flume-env.sh
vi flume-env.sh  # add the JAVA_HOME path
export JAVA_HOME=/usr/java/jdk1.8.0_151
Save and exit (:wq!)
vi ~/.bash_profile
export FLUME_HOME=/app/apache-flume-1.6.0-cdh5.7.0-bin
export PATH=$FLUME_HOME/bin:$PATH
Save and exit (:wq!), then apply: source ~/.bash_profile
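Optionally (my addition, not an original step), confirm that the binary is on the PATH before writing any configuration:
# Prints the Flume version banner for this build
flume-ng version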
2.1.3. Verify the Flume installation by writing a configuration file
Create a new example.conf file. Note: this configuration uses a netcat source, a memory channel, and a logger sink.
vi example.conf
# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 192.168.137.130
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger
a1.sinks.k1.maxBytesToLog = 10

# Use a channel which buffers events in memory
a1.channels.c1.type = memory

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Save and exit (:wq!)
2.1.4. Test and verify
Start the Flume agent:
flume-ng agent --name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/example.conf -Dflume.root.logger=INFO,console
A normal startup looks like this: http://blog.itpub.net/attachment/201803/31/31511218_1522466015443G.png
Test with telnet: telnet 192.168.137.130 44444, then type anything at the prompt. See:
http://blog.itpub.net/attachment/201803/31/31511218_1522466056PZHZ.jpg
Then check what the Flume console window prints. See:
http://blog.itpub.net/attachment/201803/31/31511218_1522466080lHaA.png
This completes the Flume deployment and test.
2.2. Kafka deployment
2.2.1. Download and unpack the installation media:
cd /software
wget https://archive.apache.org/dist/ ... a_2.11-0.10.0.1.tgz
tar -zxvf kafka_2.11-0.10.0.1.tgz -C /app/
2.2.2. Configure Kafka
Note: 1. Kafka stores its metadata in ZooKeeper (ZK), so ZK is a hard requirement: Kafka cannot run without it. In old versions (before 0.8.1) the consumer also depended on ZK; newer versions removed the client-side ZK dependency, but the broker still depends on ZK. ZooKeeper must therefore be fully deployed before configuring Kafka. This article does not cover ZK; refer to a separate ZK deployment document.
2. When building the Kafka cluster, deploy the nodes symmetrically (peer configuration), or problems will arise.
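Assuming the ZK ensemble used throughout this article (192.168.137.132-134, port 2181) is already up, a quick sanity check with ZooKeeper's four-letter commands might look like this (my sketch; adjust hosts to your environment):
# A healthy ZooKeeper node answers "imok"
echo ruok | nc 192.168.137.132 2181
echo ruok | nc 192.168.137.133 2181
echo ruok | nc 192.168.137.134 2181
# "stat" additionally reports each node's mode (leader/follower)
echo stat | nc 192.168.137.132 2181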
cd /app/kafka_2.11-0.10.0.1/config
vi server.properties
# Broker ID; must be unique within the cluster
broker.id=1

# Socket server port
port=9082

# Socket server IP address
host.name=192.168.137.132

# Kafka log (data) file storage
log.dirs=/app/kafka_2.11-0.10.0.1/kafka-logs

# ZK connection string; the path under which Kafka stores its metadata
zookeeper.connect=192.168.137.132:2181,192.168.137.133:2181,192.168.137.134:2181/kafka
Save and exit (:wq!)
# Create the log storage directory
mkdir -p /app/kafka_2.11-0.10.0.1/kafka-logs
# Add Kafka environment variables
vi ~/.bash_profile
export KAFKA_HOME=/app/kafka_2.11-0.10.0.1
export PATH=$KAFKA_HOME/bin:$PATH
Save and exit (:wq!)
Apply the change: source ~/.bash_profile
2.2.3. Copy Kafka to the other two servers
cd /app
scp -r kafka_2.11-0.10.0.1 192.168.137.133:/app/
scp -r kafka_2.11-0.10.0.1 192.168.137.134:/app/
Note: log in to 192.168.137.133 and 192.168.137.134 and change broker.id, port, and host.name to the values appropriate for each server in the cluster.
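The original steps do not show starting the brokers, which must happen before verification. A minimal sketch, run on each of the three nodes and assuming the layout above:
# Start the broker in the background
kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
# Optionally, if the ZooKeeper CLI is available, confirm all three brokers
# registered under the /kafka chroot
zkCli.sh -server 192.168.137.132:2181 ls /kafka/brokers/ids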
2.2.4. Verify the Kafka installation
## Create a topic
kafka-topics.sh --create \
--zookeeper 192.168.137.132:2181,192.168.137.133:2181,192.168.137.134:2181/kafka \
--replication-factor 3 --partitions 3 --topic test
## Start a producer
kafka-console-producer.sh \
--broker-list 192.168.137.132:9082,192.168.137.133:9082,192.168.137.134:9082 --topic test
## Start a consumer
kafka-console-consumer.sh \
--zookeeper 192.168.137.132:2181,192.168.137.133:2181,192.168.137.134:2181/kafka \
--from-beginning --topic test
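As an extra check (not in the original steps), --describe shows how the partitions and replicas were placed across the three brokers:
kafka-topics.sh --describe \
--zookeeper 192.168.137.132:2181,192.168.137.133:2181,192.168.137.134:2181/kafka \
--topic test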
Verification screenshots:
http://blog.itpub.net/attachment/201803/31/31511218_1522466489x274.png
http://blog.itpub.net/attachment/201803/31/31511218_1522502586sGyt.png
This completes the Kafka deployment and test.
2.3. Flume + Kafka fan-in test (fan-in sources: netcat + Kafka; output via Flume's logger sink)
Architecture diagram:
http://blog.itpub.net/attachment/201803/31/31511218_1522502735ZdoO.png
2.3.1. Configure the Flume agent (on server 192.168.137.130):
cd /app/apache-flume-1.6.0-cdh5.7.0-bin/conf
vi netcatOrKafka-memory-logger.conf
a1.sources = r1 r2
a1.channels = c1
a1.sinks = k1

a1.sources.r2.type = netcat
a1.sources.r2.bind = 0.0.0.0
a1.sources.r2.port = 44444

a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.zookeeperConnect = 192.168.137.132:2181,192.168.137.133:2181,192.168.137.134:2181/kafka
a1.sources.r1.groupId = testGroup
a1.sources.r1.topic = test
a1.sources.r1.kafka.consumer.timeout.ms = 100
#a1.sources.r1.zookeeper.session.timeout.ms=400
#a1.sources.r1.zookeeper.sync.time.ms=200
#a1.sources.r1.auto.commit.interval.ms=1000
#a1.sources.r1.custom.topic.name=test
#a1.sources.r1.custom.thread.per.consumer=4

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sinks.k1.type = logger

a1.sources.r1.channels = c1
a1.sources.r2.channels = c1
a1.sinks.k1.channel = c1
2.3.2. Start the test commands:
A. Start the Flume agent (on 192.168.137.130):
flume-ng agent --name a1 --conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/netcatOrKafka-memory-logger.conf \
-Dflume.root.logger=INFO,console
B. Start the Kafka producer (on 192.168.137.132):
kafka-console-producer.sh \
--broker-list 192.168.137.132:9082,192.168.137.133:9082,192.168.137.134:9082 \
--topic test
C. Send test messages (on 192.168.137.130 and 192.168.137.132)
telnet send result:
http://blog.itpub.net/attachment/201803/31/31511218_1522466666KgSj.png
Kafka send result:
http://blog.itpub.net/attachment/201803/31/31511218_1522466683fZAy.png
Final logger output:
http://blog.itpub.net/attachment/201803/31/31511218_1522466710g0t2.png
This completes the Flume + Kafka fan-in test (fan-in sources: netcat + Kafka; output via Flume's logger sink).
2.4. Flume + Kafka fan-out, replicating-flow test (fan-in source: netcat; outputs: Kafka + Flume logger)
Architecture diagram:
http://blog.itpub.net/attachment/201803/31/31511218_1522502771JojH.png
2.4.1. Configure the Flume agent (on server 192.168.137.130):
cd /app/apache-flume-1.6.0-cdh5.7.0-bin/conf
vi netcat-memory-loggerToKafka.conf
netcatagent.sources = netcat_sources
netcatagent.channels = c1 c2
netcatagent.sinks = logger_sinks kafka_sinks

netcatagent.sources.netcat_sources.type = netcat
netcatagent.sources.netcat_sources.bind = 0.0.0.0
netcatagent.sources.netcat_sources.port = 44444

netcatagent.channels.c1.type = memory
netcatagent.channels.c1.capacity = 1000
netcatagent.channels.c1.transactionCapacity = 100

netcatagent.channels.c2.type = memory
netcatagent.channels.c2.capacity = 1000
netcatagent.channels.c2.transactionCapacity = 100

netcatagent.sinks.logger_sinks.type = logger

netcatagent.sinks.kafka_sinks.type = org.apache.flume.sink.kafka.KafkaSink
netcatagent.sinks.kafka_sinks.topic = test
netcatagent.sinks.kafka_sinks.brokerList = 192.168.137.132:9082,192.168.137.133:9082,192.168.137.134:9082
netcatagent.sinks.kafka_sinks.requiredAcks = 0
##netcatagent.sinks.kafka_sinks.batchSize = 20
netcatagent.sinks.kafka_sinks.producer.type=sync
netcatagent.sinks.kafka_sinks.custom.encoding=UTF-8
netcatagent.sinks.kafka_sinks.partition.key=0
netcatagent.sinks.kafka_sinks.serializer.class=kafka.serializer.StringEncoder
netcatagent.sinks.kafka_sinks.partitioner.class=org.apache.flume.plugins.SinglePartition
netcatagent.sinks.kafka_sinks.max.message.size=1000000

netcatagent.sources.netcat_sources.selector.type = replicating

netcatagent.sources.netcat_sources.channels = c1 c2
netcatagent.sinks.logger_sinks.channel = c1
netcatagent.sinks.kafka_sinks.channel = c2
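A side note, not in the original config: Flume's replicating selector also supports marking a channel optional, so a failed write to it does not fail the whole transaction. A hedged one-line addition if, say, the Kafka leg were allowed to be best-effort:
# Treat c2 as best-effort (illustrative; not part of the original config)
netcatagent.sources.netcat_sources.selector.optional = c2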
2.4.2. Start the test commands:
A. Start the Flume agent (on 192.168.137.130):
flume-ng agent --name netcatagent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/netcat-memory-loggerToKafka.conf \
-Dflume.root.logger=INFO,console
B. Start the Kafka consumer (on 192.168.137.132):
kafka-console-consumer.sh \
--zookeeper 192.168.137.132:2181,192.168.137.133:2181,192.168.137.134:2181/kafka \
--from-beginning --topic test
C. Send test messages (on 192.168.137.130 and 192.168.137.132)
telnet send result:
http://blog.itpub.net/attachment/201803/31/31511218_152246686968zF.png
Kafka consume result:
http://blog.itpub.net/attachment/201803/31/31511218_1522466886ZIBr.png
Final logger output:
http://blog.itpub.net/attachment/201803/31/31511218_1522466901e6eI.png
This completes the Flume + Kafka fan-out replicating-flow test (fan-in source: netcat; outputs: Kafka + Flume logger).
2.5. Flume + Kafka fan-out, multiplexing-flow test (fan-in source: netcat; outputs: Kafka + Flume logger)
Not done yet; to be added later. A sketch of the idea follows below.
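Until that test is written up, here is a minimal, untested sketch of what the multiplexing variant could look like, reusing the agent from 2.4. Unlike replicating, a multiplexing selector routes each event to a channel based on a header value. The header name and values below (state, CZ, US) are illustrative assumptions, and since a plain netcat source sets no headers, a static interceptor is used here purely to demonstrate the routing:
# Illustrative only: set a routing header on every event
netcatagent.sources.netcat_sources.interceptors = i1
netcatagent.sources.netcat_sources.interceptors.i1.type = static
netcatagent.sources.netcat_sources.interceptors.i1.key = state
netcatagent.sources.netcat_sources.interceptors.i1.value = CZ

# Route by the "state" header instead of replicating
netcatagent.sources.netcat_sources.selector.type = multiplexing
netcatagent.sources.netcat_sources.selector.header = state
netcatagent.sources.netcat_sources.selector.mapping.CZ = c1
netcatagent.sources.netcat_sources.selector.mapping.US = c2
netcatagent.sources.netcat_sources.selector.default = c1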
IV. Problems Encountered During Deployment and Verification
1. During the fan-in test (fan-in sources: netcat + Kafka; output via Flume's logger sink), no Kafka data was ever received.
The root cause was that the Kafka configuration file (server.properties) contained:
zookeeper.connect=192.168.137.132:2181,192.168.137.133:2181,192.168.137.134:2181
while the topic was created with:
kafka-topics.sh --create \
--zookeeper 192.168.137.132:2181,192.168.137.133:2181,192.168.137.134:2181/kafka \
--replication-factor 3 --partitions 3 --topic test
That is, zookeeper.connect in the Kafka configuration file lacked the /kafka chroot, while topic creation added /kafka.
The problem was only discovered when
kafka-console-producer.sh \
--broker-list 192.168.137.132:9082,192.168.137.133:9082,192.168.137.134:9082 \
--topic test
reported no topic information.
Fix: make the two settings consistent.
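A quick way to catch this kind of chroot mismatch (my suggestion, not a step from the original) is to list topics under each candidate ZK path; the topic only shows up under the path it was created with:
# Visible only under the chroot that was used at creation time
kafka-topics.sh --list --zookeeper 192.168.137.132:2181/kafka
kafka-topics.sh --list --zookeeper 192.168.137.132:2181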
2. During the fan-in test (fan-in sources: netcat + Kafka; output via Flume's logger sink), starting the Flume agent failed with:
2018-03-31 10:43:31,241 (conf-file-poller-0) Failed to load configuration data. Exception follows.
org.apache.flume.FlumeException: Unable to load source type: org.apache.flume.source.kafka,KafkaSource, class: org.apache.flume.source.kafka,KafkaSource
at org.apache.flume.source.DefaultSourceFactory.getClass(DefaultSourceFactory.java:69)
at org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:42)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:322)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:97)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.flume.source.kafka,KafkaSource
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flume.source.DefaultSourceFactory.getClass(DefaultSourceFactory.java:67)
... 11 more
Fix: the official documentation contains an error: org.apache.flume.source.kafka,KafkaSource should not contain a comma. Change it to org.apache.flume.source.kafka.KafkaSource. See the official docs screenshot:
http://blog.itpub.net/attachment/201803/31/31511218_1522467050WWSR.png