Seeking help: monitoring different types of logs with Flume 1.4.0

shadowhtx posted on 2014-7-10 18:35:27
shadowhtx posted on 2014-7-14 09:20:05
Quoting hyj (2014-7-11 23:56):
Hmm, take another look. If it really isn't supported, your only option is to do some custom development.

Here's some reference material for you:

Thanks, and sorry for the late reply; I was offline over the weekend.

小小布衣 posted on 2014-8-21 12:42:38
9RRR to /data/flume/event_log/win_notice_201408211019RRR.COMPLETED
2014-08-21 12:31:02,408 (pool-5-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:308)] Preparing to move file /data/flume/event_log/win_notice_201408211036RRR to /data/flume/event_log/win_notice_201408211036RRR.COMPLETED
2014-08-21 12:31:32,490 (Thread-5) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1175)] Exception in createBlockOutputStream
java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1305)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1128)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
2014-08-21 12:31:32,493 (Thread-5) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1091)] Abandoning BP-1598070838-10.7.3.83-1408589886538:blk_1073742530_1706
2014-08-21 12:31:32,838 (Thread-5) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1094)] Excluding datanode 10.7.7.75:50010
2014-08-21 12:32:36,182 (Thread-5) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1175)] Exception in createBlockOutputStream
java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1305)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1128)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
2014-08-21 12:32:36,183 (Thread-5) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1091)] Abandoning BP-1598070838-10.7.3.83-1408589886538:blk_1073742531_1707
2014-08-21 12:32:36,443 (Thread-5) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1094)] Excluding datanode 10.7.3.83:50010
2014-08-21 12:32:36,709 (Thread-5) [WARN - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:613)] DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /flume/140821/test/events-.1408595426796.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)

        at org.apache.hadoop.ipc.Client.call(Client.java:1347)
        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
2014-08-21 12:32:36,709 (hdfs-k1-call-runner-1) [WARN - org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:1735)] Error while syncing
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /flume/140821/test/events-.1408595426796.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.

小小布衣 posted on 2014-8-21 12:44:29
OP, the error log I posted is rather long. Could you help me figure out what's causing it: is it a problem with my Flume configuration, or with HDFS?

howtodown posted on 2014-8-21 13:32:42
Quoting 小小布衣 (2014-8-21 12:44):
OP, the error log I posted is rather long. Could you help me figure out what's causing it: is it my Flume configuration, or ...

Troubleshoot one thing at a time.
Two of your datanodes are down.

小小布衣 posted on 2014-8-22 09:12:58
Quoting howtodown (2014-8-21 13:32):
Troubleshoot one thing at a time.
Two of your datanodes are down.

I only have two datanodes. The collector machine is overseas, so it keeps timing out and failing to connect. The overseas machine only has two ports open: 9000 (NameNode) and 50010 (DataNode). But when I deploy on a collector machine inside China, the data comes through fine with no errors, since that machine is on the internal network with all ports open. I still don't know whether this is a port-restriction problem or something else; I'll keep working on it today.
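A point worth checking here: an HDFS client writes blocks directly to the DataNodes on their data-transfer port (50010 by default), at whatever addresses the NameNode hands back, which are the DataNodes' registered (often internal) IPs. If the overseas collector can reach the NameNode on 9000 but not every DataNode on 50010, each write attempt times out, that node is excluded, and once both nodes are excluded you get exactly the "could only be replicated to 0 nodes" error in the log above. A minimal connectivity check, sketched in Python; the IPs below are just the ones that appear in the log, used as placeholders, and should be replaced with the addresses the NameNode actually returns:

```python
import socket

def check_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable networks.
        return False

# Placeholder endpoints taken from the log above; run this FROM the
# collector machine against your own cluster's addresses.
endpoints = [
    ("10.7.3.83", 9000),    # NameNode RPC
    ("10.7.3.83", 50010),   # DataNode data-transfer port
    ("10.7.7.75", 50010),
]

if __name__ == "__main__":
    for host, port in endpoints:
        status = "open" if check_port(host, port, timeout=3.0) else "unreachable"
        print(f"{host}:{port} -> {status}")
```

If a port shows unreachable from the collector but open from inside the network, a firewall or port restriction is the likely culprit; running `hdfs dfsadmin -report` on the cluster side can separately confirm whether the DataNodes themselves are alive.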



小小布衣 posted on 2014-8-25 17:19:16
Quoting my own post (2014-8-22 09:12):
I only have two datanodes. The collector machine is overseas, so it keeps timing out and failing to conn ...

If we rule out the DataNodes being broken, what else could cause this problem? (Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /flume/14082517/test-.1408957300185.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation)

My problem still isn't solved.


shadowhtx posted on 2014-9-3 18:19:28
Quoting 小小布衣 (2014-8-25 17:19):
If we rule out the DataNodes being broken, what else could cause this problem? (Caused by: org.apache.hado ...

Sorry, I couldn't log in to the forum for the past few days. So the collector server is overseas and the cluster is in China, right? First check the network connectivity between the two sides in both directions. My feeling is you'll need a relay server in between.
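The relay idea above is a standard Flume pattern (a tiered, multi-hop topology): the overseas agent ships events over a single Avro port to a second agent inside the cluster network, and only that inner agent needs to reach the NameNode and DataNodes. A rough sketch of the two agent configs; the agent names, relay hostname, and port below are hypothetical placeholders:

```properties
# Agent a1: overseas collector, spooling dir -> Avro sink to the relay
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /data/flume/event_log
a1.sources.r1.channels = c1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = relay.example.com   # hypothetical relay inside the cluster network
a1.sinks.k1.port = 4141
a1.sinks.k1.channel = c1
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000

# Agent a2: relay inside the cluster network, Avro source -> HDFS sink
a2.sources = r2
a2.channels = c2
a2.sinks = k2
a2.sources.r2.type = avro
a2.sources.r2.bind = 0.0.0.0
a2.sources.r2.port = 4141
a2.sources.r2.channels = c2
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://10.7.3.83:9000/flume/%y%m%d/test
a2.sinks.k2.hdfs.useLocalTimeStamp = true
a2.sinks.k2.channel = c2
a2.channels.c2.type = memory
a2.channels.c2.capacity = 10000
```

With this layout only the Avro port (4141 here) has to be opened between the two sites, instead of every NameNode and DataNode port.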
