
RegionServer OOM error

jttsai posted on 2014-12-25 12:44:44
The log is as follows:

2014-12-25 02:55:50,897 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2014-12-25 02:55:51,111 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
2014-12-25 02:55:51,400 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2014-12-25 02:55:51,400 INFO org.apache.hadoop.hdfs.DFSClient$LocalBlockReader: Local skip: 0
2014-12-25 02:55:51,404 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
2014-12-25 02:56:03,198 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom added to HFile (hdfs://10.70.235.107:9001/hbasedata/dr_query20141224/bff41828a26432eddc8a3fde11afdc1b/.tmp/1847035955550320869): 306.7k, 156513/262123 (60%)
2014-12-25 02:56:03,200 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
2014-12-25 02:56:03,238 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
2014-12-25 02:56:03,359 INFO org.apache.hadoop.hbase.regionserver.HRegion: aborted compaction on region dr_query20141224,f02,1419447225999.bff41828a26432eddc8a3fde11afdc1b. after 2mins, 5sec
2014-12-25 02:56:03,433 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server serverName=pc-zjddxd05,60020,1415807037278, load=(requests=0, regions=2493, usedHeap=8134, maxHeap=8154): Uncaught exception in service thread regionserver60020.compactor
java.lang.OutOfMemoryError: Java heap space
        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1093)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:1036)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.next(HFile.java:1276)
        at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.next(HalfStoreFileReader.java:119)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:87)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:82)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:262)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:980)
        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:779)
        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:776)
        at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:721)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:81)
2014-12-25 02:56:03,434 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics: requests=0, regions=2493, stores=2493, storefiles=2264, storefileIndexSize=4153, memstoreSize=0, compactionQueueSize=1, flushQueueSize=0, usedHeap=8143, maxHeap=8154, blockCacheSize=3512722888, blockCacheFree=762616376, blockCacheCount=46831, blockCacheHitCount=8744679, blockCacheMissCount=32317750, blockCacheEvictedCount=865409, blockCacheHitRatio=21, blockCacheHitCachingRatio=90
2014-12-25 02:56:03,434 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Uncaught exception in service thread regionserver60020.compactor
2014-12-25 02:56:05,127 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
2014-12-25 02:56:05,134 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020: exiting
2014-12-25 02:56:05,134 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020: exiting
2014-12-25 02:56:05,134 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020: exiting
2014-12-25 02:56:05,134 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020: exiting
2014-12-25 02:56:05,134 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020: exiting


What exactly is causing this error (why would a compaction lead to an OOM), and how should it be fixed?

3 replies

bioger_hit posted on 2014-12-25 15:27:37

Try increasing the heap size:

Edit /etc/hbase/conf/hbase-env.sh:

  # The maximum amount of heap to use, in MB. Default is 1000.
  export HBASE_HEAPSIZE=8000
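
For what it's worth, a slightly fuller hbase-env.sh sketch is below. The heap value and the extra HBASE_OPTS flags are only illustrative assumptions, not part of the original suggestion: the log above already shows maxHeap=8154, so the heap would have to go beyond 8 GB, and only if the machine actually has the memory to back it.

  # hbase-env.sh -- illustrative values, adjust to the memory actually available
  # Raise the RegionServer heap above the ~8 GB that is already being exhausted.
  export HBASE_HEAPSIZE=12000

  # Optionally dump the heap when an OutOfMemoryError fires, so the allocation
  # that blew up (here, the compaction read path) can be examined afterwards.
  export HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/hbase"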





jttsai posted on 2014-12-26 10:28:55
Quoting bioger_hit (2014-12-25 15:27): "Try increasing the heap size:"

I had already thought of changing that parameter. What is the actual cause of the OOM?

kanwei163 posted on 2014-12-26 12:12:38
Could the OOM be a memstore problem? If writes come in too fast, the data already in the memstore has not yet been flushed to disk when new writes arrive, and the memstore gets blown up. You could try setting a threshold so that once the memstore reaches a certain ratio it is flushed to disk. Just a guess...
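
If you want to experiment with that idea, the memstore flush thresholds are configured in hbase-site.xml. The snippet below is only a sketch of the relevant properties with assumed values to be tuned per cluster; it is not a fix confirmed in this thread.

  <!-- hbase-site.xml: illustrative values only -->
  <property>
    <!-- Flush a region's memstore once it reaches this many bytes -->
    <name>hbase.hregion.memstore.flush.size</name>
    <value>67108864</value>
  </property>
  <property>
    <!-- Block new writes when all memstores together exceed this fraction of the heap -->
    <name>hbase.regionserver.global.memstore.upperLimit</name>
    <value>0.4</value>
  </property>
  <property>
    <!-- Keep flushing until the combined memstore size falls below this fraction -->
    <name>hbase.regionserver.global.memstore.lowerLimit</name>
    <value>0.35</value>
  </property>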
