
Two-node HBase: RegionServer exits on its own, repeatedly reporting OutOfMemoryError

carol posted on 2015-7-6 17:44:09
There are only two nodes, running HDFS with HA enabled; ZooKeeper and JournalNode are installed on one of them. I want to install HBase 0.98 on these two machines, and each machine has 3 GB of RAM.
[cdh4@master conf]$ free -m
             total       used       free     shared    buffers     cached
Mem:          2879       1149       1730          0         22        123
-/+ buffers/cache:       1004       1875
Swap:         3967        146       382
[cdh4@master2 conf]$ free -m
             total       used       free     shared    buffers     cached
Mem:          2879       1132       1747          0         20        143
-/+ buffers/cache:        968       1911
Swap:         3999          0       3999
hbase-env.sh:
# The maximum amount of heap to use, in MB. Default is 1000.
export HBASE_HEAPSIZE=256
export HBASE_MASTER_OPTS="$HBASE_JMX_BASE -Xms256m -Xmx256m -Dcom.sun.management.jmxremote.port=10121"
export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_BASE -Xms768m -Xmx768m -Xmn128m -Dcom.sun.management.jmxremote.port=10122"

After starting the RegionServer it exits on its own after a short while, and the log keeps reporting Caused by: java.lang.OutOfMemoryError: Direct buffer memory:
2015-07-05 23:56:03,477 INFO  [main] util.ServerCommandLine: env:HADOOP_MAPRED_HOME=/home/cdh4/App/hadoop-2.5.0-cdh5.2.1-och4.0.0
2015-07-05 23:56:03,477 INFO  [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=23.21-b01
2015-07-05 23:56:03,477 INFO  [main] util.ServerCommandLine: vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p, -Xmx128m, -verbose:gc, -XX:+PrintGCDetails, -Xloggc:/home/ocnosql/app/hbase-0.98.1-hadoop2/logs/gc-hbase.log, -XX:+PrintGCDateStamps, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=70, -XX:SurvivorRatio=65535, -Dcom.sun.management.jmxremote.ssl=false, -Dcom.sun.management.jmxremote.authenticate=false, -Xms256m, -Xmx512m, -Xmn128m, -XX:MaxDirectMemorySize=128m, -Dcom.sun.management.jmxremote.port=10122, -Xdebug, -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071, -Dhbase.log.dir=/home/cdh4/App/hbase-0.98.6.1-och4.0.1/logs, -Dhbase.log.file=hbase-cdh4-regionserver-master2.log, -Dhbase.home.dir=/home/cdh4/App/hbase-0.98.6.1-och4.0.1, -Dhbase.id.str=cdh4, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=/home/cdh4/App/hadoop-2.5.0-cdh5.2.1-och4.0.0/lib/native:/home/cdh4/App/hbase-0.98.6.1-och4.0.1/lib/native/Linux-amd64-64, -Dhbase.security.logger=INFO,RFAS]
2015-07-05 23:56:03,760 DEBUG [main] regionserver.HRegionServer: regionserver/master2/192.168.25.129:60020 HConnection server-to-server retries=350
2015-07-05 23:56:04,005 INFO  [main] ipc.SimpleRpcScheduler: Using default user call queue, count=15
2015-07-05 23:56:04,032 INFO  [main] ipc.RpcServer: regionserver/master2/192.168.25.129:60020: started 10 reader(s).
2015-07-05 23:56:04,130 INFO  [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2015-07-05 23:56:04,183 INFO  [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-07-05 23:56:04,183 INFO  [main] impl.MetricsSystemImpl: HBase metrics system started
2015-07-05 23:56:04,391 INFO  [main] util.ByteBufferArray: Allocating buffers total=13.50 GB , sizePerBuffer=4 MB, count=3456
2015-07-05 23:56:04,642 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2474)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:61)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2489)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2472)
        ... 5 more
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Bits.java:658)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
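For context: the "Allocating buffers total=13.50 GB" line in the log is direct (off-heap) memory being reserved, typically by the off-heap BucketCache, while the JVM was started with -XX:MaxDirectMemorySize=128m, so the reservation cannot succeed. A minimal hbase-env.sh sketch of the knobs involved; the sizes below are illustrative assumptions, not the poster's actual configuration:
[mw_shl_code=bash,true]# hbase-env.sh -- illustrative sketch, values are assumptions.
# The off-heap cache has to fit under the JVM direct-memory ceiling, so either
# raise -XX:MaxDirectMemorySize here or shrink hbase.bucketcache.size in
# hbase-site.xml until the two agree; on a 3 GB machine only a small cache fits.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms768m -Xmx768m -XX:MaxDirectMemorySize=512m"[/mw_shl_code]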

Replies (1)

NEOGX posted on 2015-7-6 18:15:36

Try the following:
1. Edit /etc/security/limits.conf and add (nproc is the max-user-processes limit that ulimit -u reports):
[mw_shl_code=bash,true]hadoop hard nproc 32000
hadoop soft nproc 32000[/mw_shl_code]

2. Edit /etc/pam.d/login and add:
[mw_shl_code=bash,true]session    required    pam_limits.so[/mw_shl_code]

3. Log back in and verify the new limit with the ulimit -u command.
4. Restart HDFS and HBase (a quick command sketch follows below).
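A short sketch of steps 3 and 4, assuming the stock Hadoop/HBase start scripts from a standard install are on the PATH:
[mw_shl_code=bash,true]# Step 3: after logging back in, confirm the new max-user-processes limit
ulimit -u          # should now report 32000 for the hadoop user

# Step 4: restart HDFS and HBase with the bundled scripts (assumed to be on PATH)
stop-hbase.sh
stop-dfs.sh
start-dfs.sh
start-hbase.sh[/mw_shl_code]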

Source:

java.lang.OutOfMemoryError when inserting data into HBase

